| text_prompt | code_prompt |
|---|---|
| string, lengths 168-30.3k | string, lengths 67-124k |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 15 Note on numpy
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
A = np.array([[56.0, 0.0, 4.4, 68.0],
              [1.2, 104.0, 52.0, 8.0],
              [1.8, 135.0, 99.0, 0.9]])
print(A)
cal = A.sum(axis=0)
print(cal)
percentage = 100*A/cal.reshape(1,4)
print(percentage)
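Because `cal` has shape `(4,)`, NumPy's broadcasting rules would stretch it across the rows of `A` even without the explicit `reshape(1, 4)`. A standalone sketch of that equivalence, reusing the same matrix:

```python
import numpy as np

A = np.array([[56.0, 0.0, 4.4, 68.0],
              [1.2, 104.0, 52.0, 8.0],
              [1.8, 135.0, 99.0, 0.9]])
cal = A.sum(axis=0)                      # column sums, shape (4,)

pct_explicit = 100 * A / cal.reshape(1, 4)
pct_implicit = 100 * A / cal             # broadcasting handles (3,4) / (4,)
assert np.allclose(pct_explicit, pct_implicit)
assert np.allclose(pct_explicit.sum(axis=0), 100.0)  # each column sums to 100
```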
import numpy as np
a = np.random.randn(5)
print(a)
print(a.shape)
print(a.T)
print(np.dot(a,a.T))
a = np.random.randn(5,1)
print(a)
print(a.T)
print(np.dot(a,a.T))
A = np.random.randn(4,3)
B = np.sum(A, axis = 1, keepdims = True)
print(B.shape)
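The cells above contrast `randn(5)` (a rank-1 array) with `randn(5, 1)` (a column vector). A small defensive-pattern sketch, not part of the original notebook, that makes the difference explicit with shape assertions:

```python
import numpy as np

a = np.random.randn(5)                 # rank-1 array
assert a.shape == (5,)
assert a.T.shape == (5,)               # .T is a no-op on rank-1 arrays
assert np.ndim(np.dot(a, a.T)) == 0    # inner product -> scalar

a = a.reshape(5, 1)                    # explicit column vector
assert a.T.shape == (1, 5)
assert np.dot(a, a.T).shape == (5, 5)  # outer product -> matrix
```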
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposphere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we read in the data.
Step2: Let's see what it looks like.
Step3: Let's get a full list of columns.
Step4: Let's regress 'value' on 'egdp', 'igdp' and 'dist', all in logs. To do this we make a formula object.
Step5: Replication using Projection
Step6: Now reproduce the coefficients by computing $\hat \beta$, using the matrix expression given in the lectures.
Step7: Next replicate the value for $R^2$ produced in the table above using the formula given in the lecture slides.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from numpy import log
import statsmodels.formula.api as smf
data = pd.read_csv("trade_data.csv")
data.head()
data.columns
formula = "log(value) ~ log(egdp) + log(igdp) + log(dist)"
model = smf.ols(formula, data)
result = model.fit(cov_type='HC1')
print(result.summary())
data2 = data[['value', 'egdp', 'igdp', 'dist']]
data2 = data2.dropna()
y = np.asarray(np.log(data2.value))
X = np.ones((len(y), 4))
X[:, 1] = log(data2.egdp)
X[:, 2] = log(data2.igdp)
X[:, 3] = log(data2.dist)
# Derive betahat using the normal-equations expression from the lectures:
# betahat = (X'X)^{-1} X'y
betahat = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
print(betahat)
# Derive R^2 using y, Py, etc. as defined in the lecture
Py = np.dot(X, betahat)
Rsq = 1 - np.sum((y - Py)**2) / np.sum((y - np.mean(y))**2)
print(Rsq)
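As a self-contained check of the projection approach, here is a sketch on synthetic data (so it runs without the trade dataset); the normal-equations estimate should match NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
X = np.column_stack([np.ones(n), rng.randn(n), rng.randn(n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = np.dot(X, beta_true) + rng.randn(n)

# Normal equations: betahat = (X'X)^{-1} X'y
betahat = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

# Projection Py of y onto col(X), then R^2 = 1 - SSR/SST
Py = np.dot(X, betahat)
Rsq = 1 - np.sum((y - Py)**2) / np.sum((y - np.mean(y))**2)

assert np.allclose(betahat, np.linalg.lstsq(X, y, rcond=None)[0])
```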
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the MNIST database
Step2: Each digit is represented by a one hot vector where the index of the 1 represents the number
Step3: Load the saved weight matrices that were created by trainging the model
Step4: The network where the mental imagery occurs
Step5: The following is not part of the brain model, it is used to view the output for the ensemble
Step6: Pickle the probe's output if it takes a long time to run
Step7: Testing
|
<ASSISTANT_TASK:>
Python Code:
import nengo
import numpy as np
import cPickle
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
from matplotlib import pylab
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import random
import scipy.ndimage
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
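The `2 * X - 1` lines assume `load_mnist` returns pixel intensities in [0, 1] and map them affinely to [-1, 1]. A quick standalone check with stand-in values:

```python
import numpy as np

X = np.array([0.0, 0.25, 0.5, 1.0])   # stand-in pixel intensities in [0, 1]
X_norm = 2 * X - 1                     # affine map [0, 1] -> [-1, 1]
assert X_norm.min() == -1.0 and X_norm.max() == 1.0
assert np.allclose(X_norm, [-1.0, -0.5, 0.0, 1.0])
```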
temp = np.diag([1] * 10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE = temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN = temp[7]
EIGHT = temp[8]
NINE = temp[9]
labels = [ZERO, ONE, TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT, NINE]
dim = 28
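The one-hot rows built above are just rows of a 10x10 identity matrix; a compact standalone equivalent:

```python
import numpy as np

temp = np.diag([1] * 10)                 # same construction as above
assert np.array_equal(temp, np.eye(10, dtype=int))

THREE = temp[3]                          # one-hot vector for the digit 3
assert THREE.argmax() == 3 and THREE.sum() == 1
```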
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
activity_to_img_weights = cPickle.load(open("activity_to_img_weights1000.p", "rb"))
identity_after_encoder_weights = cPickle.load(open("identity_after_encoder_weights1000.p", "rb"))
#rotated_after_encoder_weights_5000 = cPickle.load(open("rotated_after_encoder_weights_5000.p", "r"))
#rotation_weights = cPickle.load(open("rotation_weights_clockwise5000.p","rb"))
#label_weights = cPickle.load(open("label_weights_rot_enc5000.p", "rb"))
#activity_to_img_weights = cPickle.load(open("activity_to_img_weights_rot_enc5000.p", "r"))
#rotated_after_encoder_weights = cPickle.load(open("rotated_counter_after_encoder_weights_rot_enc5000.p", "r"))
rng = np.random.RandomState(9)
n_hid = 1000
model = nengo.Network(seed=3)
with model:
    # Stimulus only shows for brief period of time
    stim = nengo.Node(lambda t: THREE if t < 0.1 else 0)  # alternative: nengo.processes.PresentInput(labels, 1)
    ens_params = dict(
        eval_points=X_train,
        neuron_type=nengo.LIF(),
        intercepts=nengo.dists.Choice([-0.5]),
        max_rates=nengo.dists.Choice([100]),
    )
    # linear filter used for edge detection as encoders, more plausible for human visual system
    encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
    encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
    ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)
    # Recurrent connection on the neurons of the ensemble to maintain the image
    nengo.Connection(ens.neurons, ens.neurons, transform=identity_after_encoder_weights.T, synapse=0.2)
    # Can't just connect neurons to neurons
    # nengo.Connection(ens.neurons, ens.neurons, synapse=0.2)
    # Connect stimulus to ensemble, transform using learned weight matrices
    nengo.Connection(stim, ens, transform=np.dot(label_weights, activity_to_img_weights).T, synapse=0.1)
    # Collect output, use synapse for smoothing
    probe = nengo.Probe(ens.neurons, synapse=0.1)
sim = nengo.Simulator(model)
sim.run(5)
'''Animation for Probe output'''
fig = plt.figure()
output_acts = []
for act in sim.data[probe]:
    output_acts.append(np.dot(act, activity_to_img_weights))
def updatefig(i):
    im = pylab.imshow(np.reshape(output_acts[i], (dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'), animated=True)
    return im,
ani = animation.FuncAnimation(fig, updatefig, interval=0.1, blit=True)
plt.show()
print(len(sim.data[probe]))
plt.subplot(161)
plt.title("100")
pylab.imshow(np.reshape(output_acts[100],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(162)
plt.title("500")
pylab.imshow(np.reshape(output_acts[500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(163)
plt.title("1000")
pylab.imshow(np.reshape(output_acts[1000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(164)
plt.title("1500")
pylab.imshow(np.reshape(output_acts[1500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(165)
plt.title("2000")
pylab.imshow(np.reshape(output_acts[2000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.subplot(166)
plt.title("2500")
pylab.imshow(np.reshape(output_acts[2500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
#The filename includes the number of neurons and which digit is being rotated
filename = "mental_rotation_output_ONE_" + str(n_hid) + ".p"
cPickle.dump(sim.data[probe], open( filename , "wb" ) )
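A matching load step for the dumped probe data, as a sketch: `cPickle` is the Python 2 name (under Python 3 use `pickle`), and binary mode (`"wb"`/`"rb"`) is required for pickle files. The round-trip below uses a small stand-in array in place of `sim.data[probe]` and an in-memory buffer instead of a file:

```python
import io
import pickle  # cPickle under Python 2
import numpy as np

# Round-trip through an in-memory buffer; the same dump/load pair
# works with a real file opened in binary mode.
buf = io.BytesIO()
saved = np.arange(6).reshape(2, 3)     # stand-in for sim.data[probe]
pickle.dump(saved, buf)
buf.seek(0)
loaded = pickle.load(buf)
assert np.array_equal(saved, loaded)
```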
testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))
plt.subplot(121)
pylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
#Get image
testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))
#Get activity of image
_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing)
#Get rotated encoder outputs
testing_rotate = np.dot(testing_act,rotated_after_encoder_weights)
#Get activities
testing_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)
for i in range(5):
    testing_rotate = np.dot(testing_rotate, rotated_after_encoder_weights)
    testing_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)
#testing_rotate = np.dot(testing_rotate,rotation_weights)
testing_rotate = np.dot(testing_rotate,activity_to_img_weights)
plt.subplot(122)
pylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
plt.subplot(121)
pylab.imshow(np.reshape(X_train[0],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
#Get activity of image
_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=X_train[0])
testing_rotate = np.dot(testing_act,activity_to_img_weights)
plt.subplot(122)
pylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Data Frame
Step2: Drop Missing Values Using NumPy
Step3: Drop Missing Values Using pandas
|
<ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
import pandas as pd
# Create feature matrix
X = np.array([[1, 2],
[6, 3],
[8, 4],
[9, 5],
[np.nan, 4]])
# Remove observations with missing values
X[~np.isnan(X).any(axis=1)]
# Load data as a data frame
df = pd.DataFrame(X, columns=['feature_1', 'feature_2'])
# Remove observations with missing values
df.dropna()
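Beyond the default row-wise drop shown above, `dropna` takes `subset` and `thresh` arguments that are often useful. A small sketch with a hypothetical three-column frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'feature_1': [1.0, np.nan, 3.0],
                   'feature_2': [np.nan, np.nan, 4.0],
                   'feature_3': [7.0, 8.0, 9.0]})

# Default: drop any row containing a NaN
assert len(df.dropna()) == 1

# Only consider NaNs in feature_1
assert len(df.dropna(subset=['feature_1'])) == 2

# Keep rows with at least 2 non-NaN values
assert len(df.dropna(thresh=2)) == 2
```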
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Line with Gaussian noise
Step5: Write a function named plot_random_line that takes the same arguments as random_line, creates a random line using random_line, and then plots the x and y points using Matplotlib's scatter function.
Step6: Use interact to explore the plot_random_line function using
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]

    Parameters
    ----------
    m : float
        The slope of the line.
    b : float
        The y-intercept of the line.
    sigma : float
        The standard deviation of the y direction normal distribution noise.
    size : int
        The number of points to create for the line.

    Returns
    -------
    x : array of floats
        The array of x values for the line with `size` points.
    y : array of floats
        The array of y values for the line with `size` points,
        y = m*x + b + N(0,sigma**2).
    """
    x = np.linspace(-1.0, 1.0, size)
    y = m * x + b
    if sigma > 0:
        y = y + np.random.normal(0.0, sigma, size)
    return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
def ticks_out(ax):
    """Move the ticks to the outside of the box."""
    ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
    ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
    # YOUR CODE HERE
    raise NotImplementedError()
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
# YOUR CODE HERE
raise NotImplementedError()
#### assert True # use this cell to grade the plot_random_line interact
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: These features are
Step2: Derived Features
Step3: Here is a broad description of the keys and what they mean
Step4: We clearly want to discard the "boat" and "body" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use "pclass", "sibsp", "parch", "fare" and "embarked" as our features
Step5: The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings "male" and "female" into binary variables that indicate the gender, and similarly for "embarked".
Step6: This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the columns parameter, and include pclass
Step7: With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. Setting up the simplest possible model, we want to see what the simplest score can be with DummyClassifier.
Step8: Exercise
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)
measurements = [
{'city': 'Dubai', 'temperature': 33.},
{'city': 'London', 'temperature': 12.},
{'city': 'San Francisco', 'temperature': 18.},
]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
vec
vec.fit_transform(measurements).toarray()
vec.get_feature_names()
import os
import pandas as pd
titanic = pd.read_csv(os.path.join('datasets', 'titanic3.csv'))
print(titanic.columns)
titanic.head()
labels = titanic.survived.values
features = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]
features.head()
pd.get_dummies(features).head()
features_dummies = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])
features_dummies.head(n=16)
data = features_dummies.values
import numpy as np
np.isnan(data).any()
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer
train_data, test_data, train_labels, test_labels = train_test_split(data, labels, random_state=0)
imp = Imputer()
imp.fit(train_data)
train_data_finite = imp.transform(train_data)
test_data_finite = imp.transform(test_data)
from sklearn.dummy import DummyClassifier
clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data_finite, train_labels)
print("Prediction accuracy: %f" % clf.score(test_data_finite, test_labels))
# %load solutions/10_titanic.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We're going to test on some train images, so loading the training set labels.
Step2: Using the DataLoader to set up the parameters, you could replace it with something much simpler.
Step3: The next function is going to iterate over a test generator to get the outputs.
Step4: We get the default "no transformation" parameters for the model.
Step5: And set up the test generator on the first 256 patients of the training set (512 images).
Step6: Then we can get some predictions.
Step7: Legend
|
<ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../')
import cPickle as pickle
import re
import glob
import os
from generators import DataLoader
import time
import holoviews as hv
import theano
import theano.tensor as T
import numpy as np
import pandas as p
import lasagne as nn
from utils import hms, architecture_string, get_img_ids_from_iter
%pylab inline
rcParams['figure.figsize'] = 16, 6
# rcParams['text.color'] = 'red'
# rcParams['xtick.color'] = 'red'
# rcParams['ytick.color'] = 'red'
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
dump_path = '../dumps/2015_07_17_123003.pkl'
model_data = pickle.load(open(dump_path, 'rb'))
# Let's set the in and output layers to some local vars.
l_out = model_data['l_out']
l_ins = model_data['l_ins']
chunk_size = model_data['chunk_size'] * 2
batch_size = model_data['batch_size']
#print "Batch size: %i." % batch_size
#print "Chunk size: %i." % chunk_size
output = nn.layers.get_output(l_out, deterministic=True)
input_ndims = [len(nn.layers.get_output_shape(l_in))
for l_in in l_ins]
xs_shared = [nn.utils.shared_empty(dim=ndim)
for ndim in input_ndims]
idx = T.lscalar('idx')
givens = {}
for l_in, x_shared in zip(l_ins, xs_shared):
givens[l_in.input_var] = x_shared[idx * batch_size:(idx + 1) * batch_size]
compute_output = theano.function(
[idx],
output,
givens=givens,
on_unused_input='ignore'
)
# Do transformations per patient instead?
if 'paired_transfos' in model_data:
paired_transfos = model_data['paired_transfos']
else:
paired_transfos = False
#print paired_transfos
train_labels = p.read_csv('../data/new_trainLabels.csv')
print train_labels.head(20)
# Get all patient ids.
patient_ids = sorted(set(get_img_ids_from_iter(train_labels.image)))
num_chunks = int(np.ceil((2 * len(patient_ids)) / float(chunk_size)))
# Where all the images are located:
# it looks for [img_dir]/[patient_id]_[left or right].jpeg
img_dir = '../test_resized/'
data_loader = DataLoader()
new_dataloader_params = model_data['data_loader_params']
new_dataloader_params.update({'images_test': patient_ids})
new_dataloader_params.update({'labels_test': train_labels.level.values})
new_dataloader_params.update({'prefix_train': img_dir})
data_loader.set_params(new_dataloader_params)
def do_pred(test_gen):
outputs = []
for e, (xs_chunk, chunk_shape, chunk_length) in enumerate(test_gen()):
num_batches_chunk = int(np.ceil(chunk_length / float(batch_size)))
print "Chunk %i/%i" % (e + 1, num_chunks)
print " load data onto GPU"
for x_shared, x_chunk in zip(xs_shared, xs_chunk):
x_shared.set_value(x_chunk)
print " compute output in batches"
outputs_chunk = []
for b in xrange(num_batches_chunk):
out = compute_output(b)
outputs_chunk.append(out)
outputs_chunk = np.vstack(outputs_chunk)
outputs_chunk = outputs_chunk[:chunk_length]
outputs.append(outputs_chunk)
return np.vstack(outputs), xs_chunk
no_transfo_params = model_data['data_loader_params']['no_transfo_params']
#print no_transfo_params
# The default gen with "no transfos".
test_gen = lambda: data_loader.create_fixed_gen(
data_loader.images_test[:128*2],
chunk_size=chunk_size,
prefix_train=img_dir,
prefix_test=img_dir,
transfo_params=no_transfo_params,
paired_transfos=paired_transfos,
)
%%time
outputs_orig, chunk_orig = do_pred(test_gen)
d={}
for i,patient in zip(range(0,outputs_orig.shape[0],2),patient_ids):
a=hv.RGB.load_image('../test_resized//'+str(patient)+'_left.jpeg')
b=hv.RGB.load_image('../test_resized//'+str(patient)+'_right.jpeg')
a=a + hv.Bars(outputs_orig[i])
b=b+hv.Bars(outputs_orig[i+1])
d[patient] = (a+b).cols(2)
hv.notebook_extension()
result=hv.HoloMap(d)
result
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load observations
Step2: Model simulation
Step3: Structural options
Step4: Run the model
Step5: Focus on a subperiod to plot
Step6: Plot the model output and observations to evaluate the fit
Step7: Plot modelled and filtered subflows in function of time
Step8: Plot fractions in time overview
Step9: Soil moisture plot
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from matplotlib.ticker import LinearLocator
sns.set_style('whitegrid')
mpl.rcParams['font.size'] = 16
mpl.rcParams['axes.labelsize'] = 16
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
from VHM import VHM_flexible
import brewer2mpl
setblue = brewer2mpl.get_map('Greys', 'Sequential', 6,
reverse = True).mpl_colors
data = pd.read_csv("/media/DATA/Githubs/project_breach_pdm_application/data/data_brach_case_nete.csv",
parse_dates=True, index_col=0)
data.head()
# Parameters
umax =280.0
uevap = 150.0
c1s = 1.8
c2s = 0.4
c3s = 1.0
c1o = -3.9
c2o = 1.59
c3o = 0.0
c4o = 0.0
c1i = -2.7
c2i = 1.
c3i = 0.0
c4i = 0.0
nso = 50
nsi = 50
Kg = 2400.0
Ki =120.0
Ko =10.0
# Define the constants
area = 361.
timestep = 1.
# Define the initial conditions
u = 170.0
qg =1.0
cg =0.0
qo =0.0
co =0.0
qi =1.0
ci =0.0
pars = [umax,uevap,c1s,c2s,c3s,c1o,c2o,c3o,c4o,c1i,c2i,c3i,c4i,nso,nsi,Kg,Ki,Ko]
constants = [area,timestep]
init_conditions = [u, qg, cg, qo, co, qi, ci]
structure_options=['relative', 'nonlinear', True, True, '211']
rain = data['rain'].values
pet = data['evapotranspiration'].values
vhm_output = VHM_flexible(pars, constants, init_conditions,
structure_options, rain, pet)
outflows, fractions, moisture = vhm_output
# create dataframe with
data['modtot'] = outflows[:, 0]
data['modover'] = outflows[:, 1]
data['modinter'] = outflows[:, 2]
data['modbase'] = outflows[:, 3]
data['fracover'] = fractions[:, 0]
data['fracinter'] = fractions[:, 1]
data['fracbase'] = fractions[:, 2]
data['fracsoil'] = fractions[:, 3]
data['fractotal'] = data['fracover'] + data['fracinter'] + data['fracbase'] + data['fracsoil']
data['soil'] = moisture
data2plot = data['2003':'2005']
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['modtot'], label='modelled')
axs.plot(data2plot.index, data2plot['meas'], label='observed')
axs.set_ylabel("flow ($m^3s^{-1}$)")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
axs.lines[1].set_color(setblue[2])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
axs.legend(loc='upper right', fontsize=16, ncol=2, bbox_to_anchor=(1., 1.1))
#plt.savefig('vhm_flow_example.pdf', dpi=300)
#plt.savefig('vhm_flow_example.png', dpi=300)
overf = pd.read_csv("Filter_Overlandflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
overf.columns = ['overland flow']
interf = pd.read_csv("Filter_Interflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
interf.columns = ['interflow']
basef = pd.read_csv("Filter_Baseflow3.txt", index_col=0, sep='\t', parse_dates=True, dayfirst=True)
basef.columns = ['baseflow']
subflow_data = overf.join(interf).join(basef)
subflow2plot = subflow_data['2003':'2005']
fig, axs = plt.subplots(3, 1, figsize=(14, 6), sharex=True)
fig.subplots_adjust(hspace = 0.2)
#first plot
axs[0].plot(data2plot.index, data2plot['modover'], label='subflow modelled')
axs[0].plot(subflow2plot.index, subflow2plot['overland flow'].values, label='subflow seperation')
axs[0].set_ylabel("overland flow \n ($m^3s^{-1}$)")
axs[0].yaxis.labelpad = 15
#second plot
axs[1].plot(data2plot.index, data2plot['modinter'])
axs[1].plot(subflow2plot.index, subflow2plot['interflow'].values)
axs[1].yaxis.tick_right()
axs[1].yaxis.set_label_position("right")
axs[1].set_ylabel("interflow \n ($m^3s^{-1}$)")
axs[1].yaxis.labelpad = 15
# third plot
axs[2].plot(data2plot.index, data2plot['modbase'])
axs[2].plot(subflow2plot.index, subflow2plot['baseflow'].values)
axs[2].xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs[2].xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs[2].tick_params(axis = 'x', pad = 15, direction='out')
axs[2].set_ylabel("baseflow \n($m^3s^{-1}$)")
axs[2].yaxis.labelpad = 10
#editing of the style:
for ax in axs:
# y-axis
ax.tick_params(axis = 'y', pad = 5, direction='out')
ax.yaxis.set_major_locator(LinearLocator(3))
#remove spines
ax.spines['bottom'].set_visible(False)
ax.spines['top'].set_visible(False)
# set grid
ax.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
ax.lines[0].set_color(setblue[0])
ax.lines[1].set_color(setblue[2])
# line widths
for line in ax.lines:
line.set_linewidth(1.2)
# remove ticklabels if redundant
if not ax.is_last_row():
ax.set_xlabel('')
plt.setp(axs[1].get_xminorticklabels(), visible=False)
plt.setp(axs[1].get_xmajorticklabels(), visible=False)
plt.setp(axs[1].get_xminorticklabels(), visible=False)
temp = axs[0]
temp.legend(loc='upper right', fontsize=16, ncol=2, bbox_to_anchor=(1., 1.4))
fig.savefig('vhm_subflow_example.pdf')
fig.savefig('vhm_subflow_example.png')
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['fracover'],'-', label='fraction overland flow')
axs.plot(data2plot.index, data2plot['fracinter'],'-.', label='fraction interflow')
axs.plot(data2plot.index, data2plot['fracbase'],':', label='fraction base flow')
axs.plot(data2plot.index, data2plot['fracsoil'],'-', label='fraction infiltration')
axs.plot(data2plot.index, data2plot['fractotal'],'-', label='total fractions')
axs.set_ylabel("fractions")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
axs.yaxis.set_ticks([0,0.5,1.])
axs.set_ylim([0., 1.05])
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
axs.lines[1].set_color(setblue[0])
axs.lines[2].set_color(setblue[1])
axs.lines[3].set_color(setblue[1])
axs.lines[4].set_color(setblue[3])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
axs.legend(loc='upper right', fontsize=16, ncol=3, bbox_to_anchor=(1., 0.95))
#plt.savefig('vhm_fractions_example_noante.pdf', dpi=300)
#plt.savefig('vhm_fractions_example_noante.png', dpi=300)
fig, axs = plt.subplots(1, 1, figsize=(14, 6), sharex=True)
axs.plot(data2plot.index, data2plot['soil'],'-')
axs.set_ylabel(r"soil moisture ($mm$)")
axs.yaxis.labelpad = 15
axs.xaxis.set_major_locator(
mpl.dates.MonthLocator(interval = 12))
axs.xaxis.set_major_formatter(
mpl.dates.DateFormatter('%d %b \n %Y'))
axs.tick_params(axis = 'x', pad = 15, direction='out')
# y-axis
axs.tick_params(axis = 'y', pad = 5, direction='out')
#remove spines
axs.spines['bottom'].set_visible(False)
axs.spines['top'].set_visible(False)
# set grid
axs.grid(which='both', axis='both', color='0.7',
linestyle='--', linewidth=0.8)
# line colors of the plots
axs.lines[0].set_color(setblue[0])
# line widths
for line in axs.lines:
line.set_linewidth(1.2)
#plt.savefig('vhm_moisture_example.pdf', dpi=300)
#plt.savefig('vhm_moisture_example.png', dpi=300)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: b) Replace the ??? so that it prints the position of Manchester in the list
Step2: c) Replace the ??? so that it prints Belfast
Step3: d) Use a negative index to print Belfast
Step4: e) Force Python to generate a list index out of range error. NB
Step5: f) Think about what the next line creates, then run the code.
Step6: g) What would you change ??? to, to return [16.5, 13.4, 14.0]?
Step7: h) What are two different ways of getting [15.2, 14.8] from the temperatures list?
Step8: i) Notice that the list of temperatures is the same length as the list of cities, that's because these are (roughly) average temperatures for each city! Given this, how do you print
Step9: Now copy+paste your code and change only one thing in order to print out
Step10: 1.2 Manipulating Multiple Lists
Step11: j) How do you get Python to print
Step12: k) How to you get Python to print
Step13: Let's re-set the lists (run the next code block)
Step14: l) How would you print out
Step15: m) How would you print out
Step16: n) How would you print out [3, 2, 6, 5] with a permanent change to the list (not slicing)? NB
Step17: Dictionaries
Step18: a) Add a record to the dictionary for Chennai (data here)
Step19: b) In one line of code, print out the airport code for Chennai
Step20: c) Check you understand the difference between the following two blocks of code by running them, checking the output and editing them (e.g. try the code again, but replacing Berlin with London)
Step21: d) Adapting the code below, print out the city name and airport code for every city in our Atlas.
Step22: Loops
Step23: b) Print out every city on a separate line using a for loop
Step24: c) Now print using a loop this new data structure
|
<ASSISTANT_TASK:>
Python Code:
cities = ["Bristol", "London", "Manchester", "Edinburgh", "Belfast", "York"]
print("The position of Manchester in the list is: " + str(cities.???('Manchester')))
print("The position of Manchester in the list is: " + str(cities.index('Manchester')))
print(cities[2 + ???])
print(cities[2 + 2])
print(cities[???])
print(cities[-2])
print(cities[???])
print(cities[6]) #anything above five would do it
temperatures = [15.6, 16.5, 13.4, 14.0, 15.2, 14.8]
print(temperatures[???])
print(temperatures[1:4])
print(temperatures[???])
print(temperatures[???])
print(temperatures[4:6])
print(temperatures[-2:])
city="Manchester" # Use this to start the solution...
#your code here
city="Manchester" # Use this to get the solution...
index = cities.index(city)
print("The average temperature in " + cities[index] + " is " + str(temperatures[index]) + " degrees.")
???
city="Belfast"
index = cities.index(city)
print("The average temperature in " + cities[index] + " is " + str(temperatures[index]) + " degrees.")
list1 = [1, 2, 3]
list2 = [4, 5, 6]
print( ??? )
print ( list1 + list2 )
print( ??? )
print( list1+[list2])
list1 = [1, 2, 3]
list2 = [4, 5, 6]
list3 = ???
list3.???
print(list3)
list3 = list1+list2
list3.reverse()
print(list3)
list1.???
list2.???
print( list1+list2 )
list1.reverse()
list2.reverse()
print( list1+list2 )
list1.???
list2.???
print( list1+list2 )
list1.remove(1)
list2.remove(4)
print( list1+list2 )
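As a point of comparison (not part of the exercise answer), the same output can be produced with slicing alone, leaving both lists unchanged — a step of -1 walks each list backwards, stopping before index 0:

```python
list1 = [1, 2, 3]
list2 = [4, 5, 6]
# [-1:0:-1] slices from the last element back to (but excluding) index 0
print(list1[-1:0:-1] + list2[-1:0:-1])  # [3, 2, 6, 5]
print(list1, list2)                     # unchanged: [1, 2, 3] [4, 5, 6]
```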
cities = {
'San Francisco': [37.77, -122.43, 'SFO'],
'London': [51.51, -0.08, 'LDN'],
'Paris': [48.86,2.29, 'PAR'],
'Beijing': [39.92,116.40 ,'BEI'],
}
cities = ???
cities = {
'San Francisco': [37.77, -122.43, 'SFO'],
'London': [51.51, -0.08, 'LDN'],
'Paris': [48.86,2.29, 'PAR'],
'Beijing': [39.92,116.40 ,'BEI'],
'Chennai': [13.08, 80.27,'MAA']
}
print(???)
print("The airport code for Chennai is " + cities["Chennai"][2])
print(cities['Berlin'])
print(cities.get('Berlin'))
#Berlin is not in the dict.
#The first code block above returns an error because Berlin is missing
#The second does not because it uses the .get method which handles the error for us (returning a None value)
#This second method is 'safer' because of how it handles this error
#compare to use using the two different methods for London
print(cities['London'])
print(cities.get('London'))
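`.get` also accepts a second argument used as a fallback when the key is missing, which is often more convenient than checking for `None` (shown here with a trimmed-down copy of the atlas):

```python
cities = {
    'San Francisco': [37.77, -122.43, 'SFO'],
    'London': [51.51, -0.08, 'LDN'],
}
print(cities.get('Berlin', 'not in our atlas'))     # not in our atlas
print(cities.get('London', 'not in our atlas')[2])  # LDN
```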
for k, v in cities.items():
print(k)
for k, v in cities.items():
print("The city of " + str(k) + " has an airport code of " + str(v[2]) )
for ??? in cities.???:
print(??? + " is at latitude " + str(???))
for city, latitude in cities.items():
print(city + " is at latitude " + str(latitude[0]))
for c in ???:
print(???)
for c in cities.items():
print(c)
citiesB = [
{'name': 'San Francisco',
'position': [37.77, -122.43],
'airport': 'SFO'},
{'name': 'London',
'position': [51.51, -0.08],
'airport': 'LDN'},
{'name': 'Paris',
'position': [48.86, 2.29],
'airport': 'PAR'},
{'name': 'Beijing',
'position': [39.92, 116.40],
'airport': 'BEI'}
]
for ??? in citiesB.???:
print(??? + " is at latitude " + str(???))
citiesB = [
{'name': 'San Francisco',
'position': [37.77, -122.43],
'airport': 'SFO'},
{'name': 'London',
'position': [51.51, -0.08],
'airport': 'LDN'},
{'name': 'Paris',
'position': [48.86, 2.29],
'airport': 'PAR'},
{'name': 'Beijing',
'position': [39.92, 116.40],
'airport': 'BEI'}
]
for city in citiesB:
print(city['name'] + " is at latitude " + str(city['position'][0]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, we can build our features
Step2: Finally, we register the feature
Step3: and test it
|
<ASSISTANT_TASK:>
Python Code:
from impact.core.features import BaseAnalyteFeature, BaseAnalyteFeatureFactory
class ODNormalizedData(BaseAnalyteFeature):
# The constructor should accept all required analytes as parameters
def __init__(self, biomass, reporter):
self.biomass = biomass
self.reporter = reporter
self.normalized_data = None
# This data property assures that the data is returned, or calculated as needed
@property
def data(self):
if self.normalized_data is None: self.calculate()
return self.normalized_data
# This is where the property is actually calculated and set
def calculate(self):
self.normalized_data = self.reporter.data_vector/self.biomass.data_vector
# The feature factory watches for those analytes
class ODNormalizedDataFactory(BaseAnalyteFeatureFactory):
# define which analyte types the feature requires
requires = ['biomass','reporter']
name = 'od_normalized_data'
# constructor should initialize variables until all required analytes are present,
# this will ensure that despite the order analytes are added, feature will be calculated appropriately
def __init__(self):
self.biomass = None
self.reporter = None
# define how to handle new analytes
def add_analyte_data(self, analyte_data):
if analyte_data.trial_identifier.analyte_type == 'reporter':
self.reporter = analyte_data
elif analyte_data.trial_identifier.analyte_type == 'biomass':
self.biomass = analyte_data
if self.reporter is not None and self.biomass is not None:
setattr(analyte_data, self.name, ODNormalizedData(self.biomass, self.reporter))
from impact.core.SingleTrial import SingleTrial
SingleTrial.register_feature(ODNormalizedDataFactory)
from impact.core.AnalyteData import Biomass, Reporter
from impact.core.TrialIdentifier import ReplicateTrialIdentifier as TI
t = [0,1,2,3,4]
biomass_data = [0.1,0.2,0.4,0.8,0.8]
reporter_data = [1000,2000,3000,4000,5000]
biomass = Biomass()
biomass.time_vector = t
biomass.data_vector = biomass_data
ti = TI()
ti.analyte_name = 'OD'
ti.analyte_type = 'biomass'
biomass.trial_identifier = ti
reporter = Reporter()
reporter.time_vector = t
reporter.data_vector = reporter_data
ti = TI()
ti.analyte_name = 'gfp'
ti.analyte_type = 'reporter'
reporter.trial_identifier = ti
trial = SingleTrial()
trial.add_analyte_data(biomass)
trial.add_analyte_data(reporter)
import numpy as np
calculated_data = trial.analyte_dict['gfp'].od_normalized_data.data
expected_data = np.array(reporter_data)/biomass_data
print(calculated_data)
print(expected_data)
print(all(calculated_data==expected_data))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: <a href="https
Step3: Examples
|
<ASSISTANT_TASK:>
Python Code:
# run this once per session to bring in a required library
!pip --quiet install sparqlwrapper | grep -v 'already satisfied'
from SPARQLWrapper import SPARQLWrapper, JSON
import pandas as pd
import io
import requests
# This shows how to use SPARQLWrapper to query a REMOTE sparql endpoint
q1 = """SELECT distinct ?prop ?type1 ?type2 WHERE {
  ?type1 rdfs:subClassOf* <https://schema.org/Organization> .
  ?type2 rdfs:subClassOf* <https://schema.org/Person> .
  ?prop <https://schema.org/domainIncludes> ?type1 .
  ?prop <https://schema.org/domainIncludes> ?type2 .
}"""
pd.set_option('display.max_colwidth', None)
# data
wd_endpoint = 'https://query.wikidata.org/sparql'
sdo_endpoint = "https://dydra.com/danbri/schema-org-v11/sparql"
# utility function
def df_from_query(querystring=q1, endpoint=sdo_endpoint):
sparql = SPARQLWrapper(endpoint)
sparql.setQuery(querystring)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
return( pd.json_normalize(results['results']['bindings']) )
# This shows how to use rdflib to query a LOCAL sparql dataset
# TODO: Need a function that loads https://webschemas.org/version/latest/schemaorg-current-https.nt into a named graph SPARQL store
import rdflib
import json
from collections import Counter
from rdflib import Graph, plugin, ConjunctiveGraph
from rdflib.serializer import Serializer
def toDF(result):
return pd.DataFrame(result, columns=result.vars)
# Fetch Schema.org definitions
sdo_current_https_url = "https://webschemas.org/version/latest/schemaorg-current-https.nq"
sdo_all_https_url = "https://webschemas.org/version/latest/schemaorg-all-https.nq"
# TODO - is this the only way to figure out what is in the attic? except both files use same NG URL
g = ConjunctiveGraph(store="IOMemory")
g.parse( sdo_all_https_url, format="nquads", publicID="https://schema.org/")
#g.parse( sdo_current_https_url, format="nquads", publicID="https://schema.org/")
result = toDF( g.query("select * where { GRAPH ?g { ?article_type rdfs:subClassOf <https://schema.org/NewsArticle> ; rdfs:label ?label }}") )
result
toDF( g.query("select * where { ?attic_term <https://schema.org/isPartOf> <https://attic.schema.org> ; rdfs:label ?label }") )
grandchild_count_query = """SELECT ?child (count(?grandchild) as ?nGrandchildren) where { ?child rdfs:subClassOf <https://schema.org/Thing> . OPTIONAL { ?grandchild rdfs:subClassOf ?child } } GROUP BY ?child order by desc(count(?grandchild))"""
res = g.query (grandchild_count_query)
mydf = toDF( res )
#mydf.plot(kind='bar')
mydf.columns
# https://www.shanelynn.ie/bar-plots-in-python-using-pandas-dataframes/
mydf['nGrandchildren']
print(mydf)
mydf['nGrandchildren'].plot(kind='bar')
result
x = df_from_query(q1)
x
# First we clone the entire schema.org repo, then we collect up the examples from .txt files:
!git clone https://github.com/schemaorg/schemaorg
!find . -name \*example\*.txt -exec ls {} \;
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
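The same elementwise behaviour holds for other operations — for example, $1/x$ and elementwise multiplication on a vector:

```python
import numpy as np

x = np.array([1, 2, 4])
print(1 / x)   # elementwise reciprocal: [1.   0.5  0.25]
print(x * x)   # elementwise product:    [ 1  4 16]
```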
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, ord = 2, axis=1, keepdims=True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
# GRADED FUNCTION: softmax
def softmax(x):
    """
    Calculates the softmax for each row of the input x.
    Your code should work for a row vector and also for matrices of shape (n, m).

    Argument:
    x -- A numpy matrix of shape (n,m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """
    ### START CODE HERE ### (≈ 3 lines of code)
    # Apply exp() element-wise to x. Use np.exp(...).
    x_exp = np.exp(x)
    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum = np.sum(x_exp, axis = 1, keepdims = True)
    # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
    s = x_exp / x_sum
    ### END CODE HERE ###
    return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
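Note that the straightforward `np.exp(x)` overflows for large entries. A common numerically stable variant (a sketch, not part of the exercise) shifts each row by its maximum before exponentiating; the shift cancels in the ratio, so the result is mathematically identical:

```python
import numpy as np

def stable_softmax(x):
    # Shift each row by its max; exp(x - max) never exceeds 1, so no overflow.
    shifted = x - np.max(x, axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)

big = np.array([[1000.0, 1000.0, 999.0]])  # naive np.exp(1000) would overflow
print(stable_softmax(big))
```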
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
# GRADED FUNCTION: L1
def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    diff = np.abs(np.subtract(yhat, y))
    loss = np.sum(diff)
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
# GRADED FUNCTION: L2
def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    diff = np.subtract(yhat, y)
    loss = np.sum(np.dot(diff, diff))
    ### END CODE HERE ###
    return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When you process text, you have a nice long series of steps, but let's say you're interested in three things
Step2: The scikit-learn package does a ton of stuff, some of which includes the above. We're going to start by playing with the CountVectorizer.
Step3: Let's take a look at what it found out!
Step4: Okay, that looks like trash and garbage. What's a "sparse array"??????
Step5: If we put on our Computer Goggles we see that the first sentence has the first word 3 times, the second word 1 time, the third word 1 time, etc... But we can't read it, really. It would look nicer as a dataframe.
Step6: What do all of those numbers mean????
Step7: So sentence #4 has "at" once, and the first sentence has "bought" once, and the last sentence has "the" three times. But hey, those are garbage words! They're cluttering up our dataframe! We need to add stopwords!
Step8: I still see meowed and meowing and fish and fishes - they seem the same, so let's lemmatize/stem them.
Step9: We're going to use one that features a STEMMER - something that strips the endings off of words (or tries to, at least). This one is from nltk.
Step10: Now lets look at the new version of that dataframe.
Step11: TF-IDF
Step12: Probably the one where fish appears three times.
Step13: Now our numbers have shifted a little bit. Instead of just being a count, it's the percentage of the words.
Step14: What's the highest combined? for 'fish' and 'meow'?
Step15: Indices 4 and 6 (numbers 5 and 7) are tied - but meow never even appears in one of them!
Step16: Let's take a look at our OLD values, then our NEW values, just for meow and fish.
Step17: Notice how 'meow' increased in value because it's an infrequent term, and fish dropped in value because it's so frequent.
Step18: LOOK AT HOW IMPORTANT MEOW IS. Meowing is out of this world important, because no one ever meows.
Step19: 4 categories of documents
|
<ASSISTANT_TASK:>
Python Code:
texts = [
"Penny bought bright blue fishes.",
"Penny bought bright blue and orange fish.",
"The cat ate a fish at the store.",
"Penny went to the store. Penny ate a bug. Penny saw a fish.",
"It meowed once at the bug, it is still meowing at the bug and the fish",
"The cat is at the store. The cat is orange. The cat is meowing at the fish.",
"Penny is a fish"
]
# "Penny bought bright blue fishes"
# tokenized - penny bought bright blue fishes
"Penny bought bright blue fishes".split()
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer()
# .fit_transfer TOKENIZES and COUNTS
X = count_vectorizer.fit_transform(texts)
X
X.toarray()
pd.DataFrame(X.toarray())
# A fish is Penny
count_vectorizer.get_feature_names()
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
# We'll make a new vectorizer
count_vectorizer = CountVectorizer(stop_words='english')
# .fit_transfer TOKENIZES and COUNTS
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
# This is what our normal tokenizer looks like
import re

def boring_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
    return words
count_vectorizer = CountVectorizer(stop_words='english', tokenizer=boring_tokenizer)
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
from nltk.stem.porter import PorterStemmer #it doesn't know what words are, it just chop off the ends from the words
porter_stemmer = PorterStemmer()
#
print(porter_stemmer.stem('fishes'))
print(porter_stemmer.stem('meowed'))
print(porter_stemmer.stem('oranges'))
print(porter_stemmer.stem('meowing'))
print(porter_stemmer.stem('organge'))
porter_stemmer = PorterStemmer()
def stemming_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z0-9\-]", " ", str_input).lower().split()
    words = [porter_stemmer.stem(word) for word in words]
    return words
count_vectorizer = CountVectorizer(stop_words='english', tokenizer=stemming_tokenizer)
X = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names())
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1')
X = tfidf_vectorizer.fit_transform(texts)
pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1')
X = tfidf_vectorizer.fit_transform(texts)
df = pd.DataFrame(X.toarray(), columns=tfidf_vectorizer.get_feature_names())
df
# Just add the columns together
pd.DataFrame([df['fish'], df['meow'], df['fish'] + df['meow']], index=["fish", "meow", "fish + meow"]).T
# use_idf=True is default, but I'll leave it in,idf inverse document frequency
idf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True, norm='l1')
X = idf_vectorizer.fit_transform(texts)
idf_df = pd.DataFrame(X.toarray(), columns=idf_vectorizer.get_feature_names())
idf_df
# OLD dataframe
pd.DataFrame([df['fish'], df['meow'], df['fish'] + df['meow']], index=["fish", "meow", "fish + meow"]).T
# NEW dataframe
pd.DataFrame([idf_df['fish'], idf_df['meow'], idf_df['fish'] + idf_df['meow']], index=["fish", "meow", "fish + meow"]).T
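The reweighting can be reproduced by hand with scikit-learn's default smoothed IDF, `idf(t) = ln((1 + n) / (1 + df(t))) + 1`. The document frequencies below are read off the seven example texts after stemming ('fish' appears in all seven, 'meow' in two); this is just an illustrative sketch:

```python
import numpy as np

# Document frequencies from the seven example texts after stemming
n_docs, df_fish, df_meow = 7, 7, 2

def smoothed_idf(df):
    # scikit-learn's smooth_idf=True formula
    return np.log((1 + n_docs) / (1 + df)) + 1

print(smoothed_idf(df_fish))  # 1.0 -- a ubiquitous term gets no boost
print(smoothed_idf(df_meow))  # ~1.98 -- a rare term is weighted up
```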
# use_idf=True is default, but I'll leave it in
l2_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True)
X = l2_vectorizer.fit_transform(texts)
l2_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names())
l2_df
# normal TF-IDF dataframe
pd.DataFrame([idf_df['fish'], idf_df['meow'], idf_df['fish'] + idf_df['meow']], index=["fish", "meow", "fish + meow"]).T
# L2 norm TF-IDF dataframe
pd.DataFrame([l2_df['fish'], l2_df['meow'], l2_df['fish'] + l2_df['meow']], index=["fish", "meow", "fish + meow"]).T
# Initialize a vectorizer
vectorizer = TfidfVectorizer(use_idf=True, tokenizer=boring_tokenizer, stop_words='english')
X = vectorizer.fit_transform(texts) #fit_transform
# KMeans clustering is a method of clustering.
from sklearn.cluster import KMeans
number_of_clusters = 2
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
results = pd.DataFrame()
results['text'] = texts
results['category'] = km.labels_
results
from sklearn.cluster import KMeans
number_of_clusters = 4
km = KMeans(n_clusters=number_of_clusters)
km.fit(X)
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_ten_words = [terms[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_ten_words)))
results = pd.DataFrame()
results['text'] = texts
results['category'] = km.labels_
results
# max_features: caps the vocabulary at the N most frequent tokens
ax = df.plot(kind='scatter', x='fish', y='penni', alpha=0.25)
ax.set_xlabel("Fish")
ax.set_ylabel("Penny")

import matplotlib.pyplot as plt
color_list = ['r', 'b', 'g', 'y']
colors = [color_list[i] for i in results['category']]
ax.scatter(df['fish'], df['penni'], c=colors, alpha=0.25)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting Asset Data
Step2: Historical Data Requests
Step3: Historical Options Data Requests
Step4: Get Fundamental Data
Step5: Indicators
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# Imports
from clr import AddReference
AddReference("System")
AddReference("QuantConnect.Common")
AddReference("QuantConnect.Jupyter")
AddReference("QuantConnect.Indicators")
from System import *
from QuantConnect import *
from QuantConnect.Data.Market import TradeBar, QuoteBar
from QuantConnect.Jupyter import *
from QuantConnect.Indicators import *
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import pandas as pd
# Create an instance
qb = QuantBook()
spy = qb.AddEquity("SPY")
eur = qb.AddForex("EURUSD")
# Gets historical data from the subscribed assets, the last 360 datapoints with daily resolution
h1 = qb.History(360, Resolution.Daily)
# Plot closing prices from "SPY"
h1.loc["SPY"]["close"].plot()
# Gets historical data from the subscribed assets, from the last 30 days with daily resolution
h2 = qb.History(timedelta(30), Resolution.Daily)
# Plot high prices from "EURUSD"
h2.loc["EURUSD"]["high"].plot()
# Gets historical data from the subscribed assets, between two dates with daily resolution
h3 = qb.History(spy.Symbol, datetime(2014,1,1), datetime.now(), Resolution.Daily)
# Only fetchs historical data from a desired symbol
h4 = qb.History(spy.Symbol, 360, Resolution.Daily)
# or qb.History("SPY", 360, Resolution.Daily)
# Only fetchs historical data from a desired symbol
# When we are not dealing with equity, we must use the generic method
h5 = qb.History[QuoteBar](eur.Symbol, timedelta(30), Resolution.Daily)
# or qb.History[QuoteBar]("EURUSD", timedelta(30), Resolution.Daily)
goog = qb.AddOption("GOOG")
goog.SetFilter(-2, 2, timedelta(0), timedelta(180))
option_history = qb.GetOptionHistory(goog.Symbol, datetime(2017, 1, 4))
print(option_history.GetStrikes())
print(option_history.GetExpiryDates())
h6 = option_history.GetAllData()
data = qb.GetFundamental(["AAPL","AIG","BAC","GOOG","IBM"], "ValuationRatios.PERatio")
data
# Example with BB, it is a datapoint indicator
# Define the indicator
bb = BollingerBands(30, 2)
# Gets historical data of indicator
bbdf = qb.Indicator(bb, "SPY", 360, Resolution.Daily)
# drop undesired fields
bbdf = bbdf.drop('standarddeviation', 1)
# Plot
bbdf.plot()
# For EURUSD
bbdf = qb.Indicator(bb, "EURUSD", 360, Resolution.Daily)
bbdf = bbdf.drop('standarddeviation', 1)
bbdf.plot()
# Example with ADX, it is a bar indicator
adx = AverageDirectionalIndex("adx", 14)
adxdf = qb.Indicator(adx, "SPY", 360, Resolution.Daily)
adxdf.plot()
# For EURUSD
adxdf = qb.Indicator(adx, "EURUSD", 360, Resolution.Daily)
adxdf.plot()
# Example with ADO, it is a tradebar indicator (requires volume in its calculation)
ado = AccumulationDistributionOscillator("ado", 5, 30)
adodf = qb.Indicator(ado, "SPY", 360, Resolution.Daily)
adodf.plot()
# For EURUSD.
# Uncomment to check that this SHOULD fail, since Forex is data type is not TradeBar.
# adodf = qb.Indicator(ado, "EURUSD", 360, Resolution.Daily)
# adodf.plot()
# SMA cross:
symbol = "EURUSD"
# Get History
hist = qb.History[QuoteBar](symbol, 500, Resolution.Daily)
# Get the fast moving average
fast = qb.Indicator(SimpleMovingAverage(50), symbol, 500, Resolution.Daily)
# Get the fast moving average
slow = qb.Indicator(SimpleMovingAverage(200), symbol, 500, Resolution.Daily)
# Remove undesired columns and rename others
fast = fast.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'fast'})
slow = slow.drop('rollingsum', 1).rename(columns={'simplemovingaverage': 'slow'})
# Concatenate the information and plot
df = pd.concat([hist.loc[symbol]["close"], fast, slow], axis=1).dropna(axis=0)
df.plot()
# Get indicator defining a lookback period in terms of timedelta
ema1 = qb.Indicator(ExponentialMovingAverage(50), "SPY", timedelta(100), Resolution.Daily)
# Get indicator defining a start and end date
ema2 = qb.Indicator(ExponentialMovingAverage(50), "SPY", datetime(2016,1,1), datetime(2016,10,1), Resolution.Daily)
ema = pd.concat([ema1, ema2], axis=1)
ema.plot()
rsi = RelativeStrengthIndex(14)
# Selects which field we want to use in our indicator (default is Field.Close)
rsihi = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.High)
rsilo = qb.Indicator(rsi, "SPY", 360, Resolution.Daily, Field.Low)
rsihi = rsihi.rename(columns={'relativestrengthindex': 'high'})
rsilo = rsilo.rename(columns={'relativestrengthindex': 'low'})
rsi = pd.concat([rsihi['high'], rsilo['low']], axis=1)
rsi.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Some Mock Data
Step2: Now, let's create some mock data with some Gaussian noise
Step3: Define a likelihood
Step4: Define an emcee-compatible likelihood function
Step6: First, we define a small utility function that will take a dictionary in which keys may be dot-paths, and converts it to a nested dictionary
Step7: So, this will do the following
Step8: This will enable us to pass a list of parameter names that we want updated, which could be parameters of nested models. This means our posterior function is fairly general, and can accept any model parameters to be updated
Step9: We can test that the log_prob function works
Step10: Notice the derived parameters
Step11: On the advice of the emcee documentation, we set up some initial positions of the walkers around the solution.
Step12: Now we can plot the posterior in a corner plot, along with the derived parameters, and the true input values
Step13: And we're done! The posterior contains the truth to within 1-sigma.
|
<ASSISTANT_TASK:>
Python Code:
import emcee
import halomod
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from multiprocess import Pool
import corner
%matplotlib inline
emcee.__version__
halomod.__version__
model = halomod.TracerHaloModel(
z=0.2,
transfer_model='EH',
rnum=30,
rmin=0.1,
rmax=30,
hod_model='Zehavi05',
hod_params={
"M_min": 12.0,
"M_1": 12.8,
'alpha': 1.05,
'central': True
},
dr_table=0.1,
dlnk=0.1,
dlog10m=0.05
)
np.random.seed(1234)
mock_data = model.corr_auto_tracer + np.random.normal(scale = 0.1 * np.abs(model.corr_auto_tracer))
mock_ngal = model.mean_tracer_den
plt.plot(model.r, model.corr_auto_tracer)
plt.scatter(model.r, mock_data)
plt.xscale('log')
plt.yscale('log')
def chi_square(model, data, sigma):
    return np.sum(norm.logpdf(data, loc=model, scale=sigma))
fiducial_model = model.clone()
def flat_to_nested_dict(dct: dict) -> dict:
    """
    Convert a dct of key: value pairs into a nested dict.

    Keys that have dots in them indicate nested structure.
    """
    def key_to_dct(key, val, dct):
        if '.' in key:
            key, parts = key.split('.', maxsplit=1)
            if key not in dct:
                dct[key] = {}
            key_to_dct(parts, val, dct[key])
        else:
            dct[key] = val
    out = {}
    for k, v in dct.items():
        key_to_dct(k, v, out)
    return out
flat_to_nested_dict(
{
'nested.key': 1,
'nested.key2': 2,
'non_nested': 3
}
)
def log_prob(param_values, param_names, data, model, bounds=None, derived=()):
    # Pack parameters into a dict
    params = dict(zip(param_names, param_values))
    # Allow for simple bounded flat priors.
    bounds = bounds or {}
    for key, val in params.items():
        bound = bounds.get(key, (-np.inf, np.inf))
        if not bound[0] < val < bound[1]:
            return (-np.inf,) + (None,)*len(derived)
    # Update the base model with all the parameters that are being constrained.
    params = flat_to_nested_dict(params)
    model.update(**params)
    ll = chi_square(model.corr_auto_tracer, data[0], 0.1 * np.abs(model.corr_auto_tracer))
    ll += chi_square(model.mean_tracer_den, data[1], 1e-4)
    if not np.isfinite(ll):
        return (-np.inf, ) + (None,)*len(derived)
    derived = tuple(getattr(model, d) for d in derived)
    out = (ll,) + derived
    return out
log_prob(
[12.0, 12.8, 1.05],
['hod_params.M_min', 'hod_params.M_1', 'hod_params.alpha'],
(mock_data, mock_ngal),
model,
derived=['satellite_fraction', 'mean_tracer_den']
)
backend = emcee.backends.HDFBackend("backend.h5")
backend.reset(100, 3)
blobs_dtype = [("sat_frac", float), ("tracer_den", float), ("bias_effective_tracer", float), ("corr_auto_tracer", (float, len(mock_data)))]
sampler = emcee.EnsembleSampler(
nwalkers = 100,
ndim = 3,
log_prob_fn = log_prob,
kwargs = {
'param_names': ['hod_params.M_min', 'hod_params.M_1', 'hod_params.alpha'],
'data': (mock_data, mock_ngal),
'model': model,
'derived': ['satellite_fraction', 'mean_tracer_den', 'bias_effective_tracer', 'corr_auto_tracer'],
},
pool = Pool(32),
blobs_dtype=blobs_dtype,
backend=backend
)
initialpos = np.array([
fiducial_model.hod.params['M_min'],
fiducial_model.hod.params['M_1'],
fiducial_model.hod.params['alpha']
]) + 1e-4 * np.random.normal(size=(sampler.nwalkers, sampler.ndim))
sampler.run_mcmc(initialpos, nsteps=10000, progress=True);
flatchain = sampler.get_chain(discard=500, thin=5, flat=True)
blobs = sampler.get_blobs(discard=500, thin=5, flat=True)
flatchain = np.hstack((
flatchain,
np.atleast_2d(blobs['sat_frac']).T,
np.atleast_2d(np.log10(blobs['tracer_den'])).T,
np.atleast_2d(blobs['bias_effective_tracer']).T
))
np.save('flatchain', flatchain)
corner.corner(
flatchain,
labels=[r'$M_{\rm min}$', '$M_1$', r'$\alpha$', r'$f_{\rm sat}$', r'$\log_{10}\bar{n}_g$',
r'$b_{\rm eff}$'],
quantiles=(0.16, 0.84),
show_titles=True,
#range=lim,
levels=(1-np.exp(-0.5),1-np.exp(-2),1-np.exp(-4)),
plot_datapoints=False,
plot_density=False,
fill_contours=True,
color="blue",
hist_kwargs={"color":"black"},
smooth=0.5,
smooth1d=0.5,
truths=[12., 12.8, 1.05, None, None, None],
truth_color='darkgray'
);
plt.savefig("default_corner.pdf")
xi_out = sampler.get_blobs(discard=500, thin=5, flat=True)['corr_auto_tracer']
quantiles = np.quantile(xi_out, [0.16, 0.50, 0.84], axis=0)
plt.scatter(model.r, mock_data / quantiles[1])
plt.errorbar(model.r, mock_data / quantiles[1], yerr = 0.2*np.abs(fiducial_model.corr_auto_tracer) / quantiles[1], fmt='none')
plt.fill_between(model.r, quantiles[0] / quantiles[1], quantiles[2]/ quantiles[1], alpha=0.3)
plt.xscale('log')
plt.xlabel("r [Mpc/$h$]", fontsize=14)
plt.ylabel(r"$\xi(r) / \hat{\xi}(r)$", fontsize=14);
plt.savefig("residuals.pdf")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Second step
Step2: Third step
Step3: ANOTHER EXAMPLE of f_item
Step4: SLIDING WINDOWS OF STREAMS
Step5: Synchronous Join
Step6: Asynchronous Join
Step7: Time-based Join
|
<ASSISTANT_TASK:>
Python Code:
from IoTPy.agent_types.simple import f_item
def f(item, M, multiples_stream, non_multiples_stream):
    if item % M == 0:
        multiples_stream.append(item)
    else:
        non_multiples_stream.append(item)
x = Stream(name='input stream')
y = Stream(name='even numbers in stream x')
z = Stream(name='odd numbers in stream x')
f_item(func=f, in_stream=x, M=2, multiples_stream=y, non_multiples_stream=z)
from IoTPy.agent_types.sink import sink_element
from IoTPy.agent_types.simple import f_item
# STEP 1: Define function
def f(item, M, multiples_stream, non_multiples_stream):
    if item % M == 0:
        multiples_stream.append(item)
    else:
        non_multiples_stream.append(item)
# STEP 2: Declare streams and call f_item
x = Stream(name='input stream')
y = Stream(name='even numbers in stream x')
z = Stream(name='odd numbers in stream x')
f_item(func=f, in_stream=x, M=2, multiples_stream=y, non_multiples_stream=z)
# STEP 3: test f_item()
# Put test values in the input streams.
x.extend(list(range(10)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
max_and_min = [0, 0]
# STEP 1: Define function
def g(item, max_and_min, max_stream, min_stream):
    if item > max_and_min[0]:
        max_and_min[0] = item
        max_stream.append(item)
    if item < max_and_min[1]:
        max_and_min[1] = item
        min_stream.append(item)
# STEP 2: Declare streams and call f_item
x = Stream(name='input stream')
y = Stream(name='new maxima')
z = Stream(name='new minima')
f_item(func=g, in_stream=x, max_and_min=max_and_min, max_stream=y, min_stream=z)
# STEP 3: test f_item()
# Put test values in the input streams.
x.extend([5, 4, 8, -3, -1, 10, -5, 6, 20, -12])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
from IoTPy.agent_types.simple import f_window
x = Stream('x')
y = Stream('y')
def g(window, out_stream):
out_stream.append(sum(window))
f_window(func=g, in_stream=x, window_size=2, step_size=1, out_stream=y)
# Put test values in the input streams.
x.extend(list(range(10)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
from IoTPy.agent_types.simple import join_synch
w = Stream(name='w')
x = Stream(name='x')
y = Stream(name='y')
z = Stream(name='z')
def h(alist, out_stream): out_stream.append(sum(alist))
join_synch(func=h, in_streams=[w, x, y], out_stream=z)
# Put test values in the input streams.
w.extend(list(range(100, 110)))
x.extend(list(range(0, 20, 2)))
y.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream w are')
print (recent_values(w))
print ('recent values of stream x are')
print (recent_values(x))
print ('recent values of stream y are')
print (recent_values(y))
print ('recent values of stream z are')
print (recent_values(z))
from IoTPy.agent_types.simple import join_asynch
def h(index_item, out_stream):
index, item = index_item
print (index,': ', item)
out_stream.append(item)
x = Stream('x')
y = Stream('y')
z = Stream('z')
join_asynch(func=h, in_streams=[x, y], out_stream=z)
# Put test values in the input streams.
x.extend([10, 11])
y.extend([0, 1, 2])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream z are')
print (recent_values(z))
# Put test values in the input streams.
x.extend([12, 13, 14])
y.extend([3])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream z are')
print (recent_values(z))
from IoTPy.agent_types.simple import join_timed
def f(timestamped_list):
print (timestamped_list)
x = Stream(name='x')
y = Stream(name='y')
join_timed(func=f, in_streams=[x, y])
# Put test values in the input streams.
x.extend([[1, 'x[0]'], [3, 'x[1]']])
y.extend([[3, 'y[0]'], [5, 'y[2]'], [5, 'y[3]']])
# Execute a step
run()
# Put test values in the input streams.
x.append([4, 'x[2]'])
run()
x.append([5, 'x[3]'])
run()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In Python 3.4, the simplest way to declare an abstract base class is to subclass abc.ABC, which is what our example does. From Python 3.0 to Python 3.3 you had to use the metaclass= keyword in the class statement
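As a side-by-side illustration (our own sketch, not the book's listing), the two declaration styles look like this:

```python
from abc import ABC, ABCMeta, abstractmethod

class PromotionA(ABC):  # Python >= 3.4 style: subclass abc.ABC
    @abstractmethod
    def discount(self, order):
        """Return the discount as a positive amount"""

class PromotionB(metaclass=ABCMeta):  # Python 3.0-3.3 style: metaclass= keyword
    @abstractmethod
    def discount(self, order):
        """Return the discount as a positive amount"""

# Neither form can be instantiated until discount is implemented
for cls in (PromotionA, PromotionB):
    try:
        cls()
    except TypeError:
        print(cls.__name__, 'is abstract')
```

Both classes end up with ABCMeta as their metaclass; subclassing ABC is just the more readable spelling.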
Step2: Implementing the Strategy pattern with functions
Step3: The new Order class is simpler to use, as shown below:
Step4: The test results are the same, and the code is a little simpler
Step5: Although the example above works and is easy to read, the repetition can lead to subtle bugs: to add a new promotion strategy you must not only define the function but also remember to add it to the promos list. We improve this next
Step6: Another workable approach is to keep all the strategy functions except best_promo in a separate module; here we put the 3 strategy functions in promotions.py
Step7: The inspect.getmembers function is used to get the attributes of an object (here, the promotions module); the second argument is an optional predicate (a boolean-valued function). We use inspect.isfunction to get only the functions in the module
|
<ASSISTANT_TASK:>
Python Code:
from abc import ABC, abstractmethod
from collections import namedtuple
Customer = namedtuple('Customer', 'name fidelity')
class LineItem:
def __init__(self, product, quantity, price):
self.product = product
self.quantity = quantity
self.price = price
def total(self):
return self.price * self.quantity
class Order:  # the context
def __init__(self, customer, cart, promotion = None):
self.customer = customer
self.cart = cart
self.promotion = promotion
def total(self):
if not hasattr(self, '__total'):
self.__total = sum(item.total() for item in self.cart)
return self.__total
def due(self):
if self.promotion is None:
discount = 0
else:
discount = self.promotion.discount(self)
return self.total() - discount
def __repr__(self):
fmt = '<Order total: {:.2f} due {:.2f}>'
return fmt.format(self.total(), self.due())
class Promotion(ABC):
@abstractmethod
def discount(self, order):
        '''Return the discount as a positive amount'''
class FidelityPromo(Promotion):
    '''5% discount for customers with 1000 or more fidelity points'''
def discount(self, order):
return order.total() * .05 if order.customer.fidelity >= 1000 else 0
class BulkItemPromo(Promotion):
    '''10% discount for each LineItem with 20 or more units'''
def discount(self, order):
discount = 0
for item in order.cart:
if item.quantity >= 20:
discount += item.total() * .1
return discount
class LargeOrderPromo(Promotion):
    '''7% discount for orders with 10 or more distinct items'''
def discount(self, order):
distinct_items = {item.product for item in order.cart}
if len(distinct_items) >= 10:
return order.total() * 0.07
return 0
joe = Customer('John Doe', 0)
ann = Customer('Ann Smith', 1100)  # joe has 0 fidelity points, ann has 1100
cart = [LineItem('banana', 4, .5),  # shopping cart with 3 items
LineItem('apple', 10, 1.5),
LineItem('watermellon', 5, 5.0)]
Order(joe, cart, FidelityPromo())  # FidelityPromo gives joe no discount
Order(ann, cart, FidelityPromo())  # ann gets a 5% discount because she has more than 1000 points
banana_cart = [LineItem('banana', 30, .5),  # 30 bananas and 10 apples
LineItem('apple', 10, 1.5)]
Order(joe, banana_cart, BulkItemPromo())  # BulkItemPromo gives joe a 1.50 discount on the bananas
long_order = [LineItem(str(item_code), 1, 1.0)
              for item_code in range(10)]  # build 10 distinct items
Order(joe, long_order, LargeOrderPromo())  # LargeOrderPromo gives joe's whole order a 7% discount
Order(joe, cart, LargeOrderPromo()) #
from abc import ABC, abstractmethod
from collections import namedtuple
Customer = namedtuple('Customer', 'name fidelity')
class LineItem:
def __init__(self, product, quantity, price):
self.product = product
self.quantity = quantity
self.price = price
def total(self):
return self.price * self.quantity
class Order:  # the context
def __init__(self, customer, cart, promotion = None):
self.customer = customer
self.cart = cart
self.promotion = promotion
def total(self):
if not hasattr(self, '__total'):
self.__total = sum(item.total() for item in self.cart)
return self.__total
def due(self):
if self.promotion is None:
discount = 0
else:
            discount = self.promotion(self)  # changed: the promotion is a plain function, called directly
return self.total() - discount
def __repr__(self):
fmt = '<Order total: {:.2f} due {:.2f}>'
return fmt.format(self.total(), self.due())
def fidelity_promo(order):
    '''5% discount for customers with 1000 or more fidelity points'''
return order.total() * .05 if order.customer.fidelity >= 1000 else 0
def bulk_item_promo(order):
    '''10% discount for each LineItem with 20 or more units'''
discount = 0
for item in order.cart:
if item.quantity >= 20:
discount += item.total() * .1
return discount
def large_order_promo(order):
    '''7% discount for orders with 10 or more distinct items'''
distinct_items = {item.product for item in order.cart}
if len(distinct_items) >= 10:
return order.total() * 0.07
return 0
joe = Customer('John Doe', 0)
ann = Customer('Ann Smith', 1100)  # joe has 0 fidelity points, ann has 1100
cart = [LineItem('banana', 4, .5),  # shopping cart with 3 items
LineItem('apple', 10, 1.5),
LineItem('watermellon', 5, 5.0)]
Order(joe, cart, fidelity_promo)  # fidelity_promo gives joe no discount
Order(ann, cart, fidelity_promo)  # ann gets a 5% discount because she has more than 1000 points
banana_cart = [LineItem('banana', 30, .5),  # 30 bananas and 10 apples
LineItem('apple', 10, 1.5)]
Order(joe, banana_cart, bulk_item_promo)  # bulk_item_promo gives joe a 1.50 discount on the bananas
long_order = [LineItem(str(item_code), 1, 1.0)
              for item_code in range(10)]  # build 10 distinct items
Order(joe, long_order, large_order_promo)  # large_order_promo gives joe's whole order a 7% discount
Order(joe, cart, large_order_promo) #
promos = [fidelity_promo, bulk_item_promo, large_order_promo]
def best_promo(order):
return max(promo(order) for promo in promos)
Order(joe, long_order, best_promo)
Order(joe, banana_cart, best_promo)
Order(ann, cart, best_promo)
promos = [globals()[name] for name in globals()
if name.endswith('_promo')
and name != 'best_promo']
def best_promo(order):
    '''Select the best available discount'''
return max(promo(order) for promo in promos)
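An alternative to scanning globals(), sketched here with simplified stand-in promotion bodies (not the listing's exact code), is a registration decorator that fills promos as each strategy is defined:

```python
promos = []

def promotion(func):
    """Register a strategy function in promos and return it unchanged."""
    promos.append(func)
    return func

@promotion
def fidelity_promo(order):
    # stand-in body: order is a plain dict in this sketch
    return order['total'] * 0.05 if order['fidelity'] >= 1000 else 0

@promotion
def bulk_item_promo(order):
    return 0  # stand-in body

def best_promo(order):
    """Select the best available discount"""
    return max(promo(order) for promo in promos)

print([f.__name__ for f in promos])
print(best_promo({'total': 100, 'fidelity': 1200}))
```

Because the decorator runs when each function is defined, no list needs hand-maintenance and best_promo can never accidentally include itself.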
import inspect
import promotions
promos = [func for name, func in inspect.getmembers(promotions, inspect.isfunction)]
print(promos)
def best_promo(order):
    '''Select the best discount'''
return max(promo(order) for promo in promos)
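To see inspect.getmembers and its predicate in isolation, here is a self-contained sketch that inspects the current module instead of promotions.py (the _promo names below are invented for illustration):

```python
import inspect
import sys

def alpha_promo(order):
    return 0

def beta_promo(order):
    return 0

NOT_A_FUNCTION = 'filtered out by inspect.isfunction'

this_module = sys.modules[__name__]
# getmembers returns (name, value) pairs sorted by name;
# the isfunction predicate keeps only plain functions
promos = [func for name, func in inspect.getmembers(this_module, inspect.isfunction)
          if name.endswith('_promo')]
print([f.__name__ for f in promos])
```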
class MacroCommand:
    '''A command that executes a group of commands'''
def __init__(self, commands):
self.commands = list(commands) # 1
def __call__(self):
for command in self.commands: # 2
command()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Loading OTU table (filter to just bulk samples)
Step3: Which gradient(s) to simulate?
Step4: Notes
Step5: Total richness of starting (bulk-soil) community
Step6: Number of taxa in all fractions corresponding to each bulk soil sample
Step7: Distribution of total sequences per fraction
Step8: Distribution fitting
Step9: Notes
Step10: Loading metadata
Step11: Determining association
Step12: Number of taxa along the gradient
Step13: Notes
Step14: For each sample, writing a table of OTU_ID and count
Step15: Making directories for simulations
Step16: Rank-abundance distribution for each sample
Step17: Taxon abundance range for each sample-fraction
Step18: Total abundance of each target taxon
Step19: For each sample, writing a table of OTU_ID and count
|
<ASSISTANT_TASK:>
Python Code:
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'
#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)
if not os.path.isdir(workDir):
os.makedirs(workDir)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_count = sum(count)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme(
text = element_text(size=16)
)
%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
"X12C.700.14.05.NA",
"X12C.700.28.03.NA",
"X12C.700.45.01.NA",
"X13C.700.14.08.NA",
"X13C.700.28.06.NA",
"X13C.700.45.01.NA"
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 800
tbl.s = tbl.h %>%
filter(count > 0) %>%
group_by(sample, isotope, treatment, day, rep, fraction) %>%
summarize(n_taxa = n())
ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 800 -h 350
# filter to just target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)
ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
geom_bar(stat='identity') +
facet_grid(. ~ isotope) +
labs(y = 'Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
otu.long.f = otu.long %>%
filter(sample %in% samples,
count > 0)
n.OTUs = otu.long.f$OTUId %>% unique %>% length
return(n.OTUs)
}
num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs
%%R
tbl.s.f %>% as.data.frame
%%R
# joining with bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
mutate(isotope = gsub('X|\\..+', '', sample),
sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j
%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
geom_bar(stat='identity', position='dodge') +
facet_grid(. ~ isotope, scales='free_x') +
labs(y = 'Number of OTUs') +
theme(
text = element_text(size=16)
# axis.text.x = element_text(angle=90)
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_seqs = sum(count))
p = ggplot(tbl.h.s, aes(total_seqs)) +
theme_bw() +
theme(
text = element_text(size=16)
)
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1,p2,ncol=1)
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)
%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)
%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')
par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)
%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)
%%R
summary(f.ln)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
# summarize
tbl.s = tbl %>%
group_by(sample) %>%
summarize(total_count = sum(count))
tbl.s %>% head(n=3)
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
ggplot(tbl.j, aes(Density, total_count, color=rep)) +
geom_point() +
facet_grid(Treatment ~ Day)
%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
geom_point(aes(color=Treatment)) +
geom_smooth(method='lm') +
labs(x='Buoyant density', y='Total sequences') +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
tbl.s = tbl %>%
filter(count > 0) %>%
group_by(sample) %>%
summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)
%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
#geom_area(stat='identity', alpha=0.5, position='dodge') +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(Treatment ~ Day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)
%%R
tbl.hs = tbl.h %>%
group_by(OTUId) %>%
summarize(
total_count = sum(count),
mean_count = mean(count),
median_count = median(count),
sd_count = sd(count)
) %>%
filter(total_count > 0)
tbl.hs %>% head
%%R -i workDir
setwd(workDir)
samps = tbl.h$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'OTU.txt'), collapse='_')
tbl.p = tbl.h %>%
filter(sample == samp, count > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
message('Table written: ', outFile)
message(' Number of OTUs: ', tbl.p %>% nrow)
}
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]
newDirs = [os.path.split(x)[1].rstrip('.NA_OTU.txt') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]
for newDir, f in zip(newDirs, files):
    if not os.path.isdir(newDir):
        print('Making new directory: {}'.format(newDir))
        os.makedirs(newDir)
    else:
        print('Directory exists: {}'.format(newDir))
# symlinking file
linkPath = os.path.join(newDir, os.path.split(f)[1])
if not os.path.islink(linkPath):
os.symlink(f, linkPath)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
group_by(sample) %>%
mutate(perc_rel_abund = count / sum(count) * 100,
rank = row_number(-perc_rel_abund)) %>%
unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)
%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
geom_point() +
# labs(x='Buoyant density', y='Number of taxa') +
facet_wrap(~ day_rep) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
tbl.ar = tbl %>%
#mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
#mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
group_by(sample) %>%
mutate(rel_abund = count / sum(count)) %>%
summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
ungroup() %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)
%%R -w 800
tbl.ar = tbl.ar %>%
mutate(fraction = as.numeric(fraction))
ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='relative abundanc range') +
facet_grid(treatment ~ day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting mean OTU abundance from all fractions
OTU.abund = function(samples, otu.long){
otu.rel.abund = otu.long %>%
filter(sample %in% samples,
count > 0) %>%
ungroup() %>%
group_by(sample) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(perc_abund = count / total_count * 100) %>%
group_by(OTUId) %>%
summarize(mean_perc_abund = mean(perc_abund),
median_perc_abund = median(perc_abund),
max_perc_abund = max(perc_abund))
return(otu.rel.abund)
}
## calling function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head
%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)
ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
geom_point(shape='O', alpha=0.7) +
scale_y_log10() +
facet_grid(abund_stat ~ sample) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank(),
legend.position = 'none'
)
%%R -i workDir
setwd(workDir)
# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
tbl.p = otu.rel.abund %>%
filter(sample == samp, mean_perc_abund > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
cat('Table written: ', outFile, '\n')
cat(' Number of OTUs: ', tbl.p %>% nrow, '\n')
}
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting the data
Step2: Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.
Step3: This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
Step4: TODO
Step5: Splitting the data into Training and Testing
Step6: Splitting the data into features and targets (labels)
Step7: Training the 2-layer Neural Network
Step8: TODO
Step9: Calculating the Accuracy on the Test Data
|
<ASSISTANT_TASK:>
Python Code:
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
# TODO: Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# TODO: Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns (GRE scores top out at 800, GPA at 4.0)
processed_data['gre'] = processed_data['gre'] / 800
processed_data['gpa'] = processed_data['gpa'] / 4.0
# Printing the first 10 rows of our procesed data
processed_data[:10]
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
# TODO: Write the error term formula
def error_term_formula(y, output):
    # For a sigmoid output with log-loss, the error term reduces to y - output
    return y - output
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
# Use to same seed to make debugging easier
np.random.seed(42)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
# The error, the target minus the network output
error = error_formula(y, output)
# The error term
# Notice we calulate f'(h) here instead of defining a separate
# sigmoid_prime function. This just makes it faster because we
# can re-use the result of the sigmoid function stored in
# the output variable
error_term = error_term_formula(y, output)
# The gradient descent step, the error times the gradient times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
# Calculate accuracy on test data
tes_out = sigmoid(np.dot(features_test, weights))
predictions = tes_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generative Network Model
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.stats import powerlaw
import scipy as sp
import seaborn as sns
sns.set()
%matplotlib inline
edges =[]
for line in open('facebook_combined.txt'):
if line[0] != '#':
# print(line.replace('\n','').split(' ')) # \n es salto de linea, .split(' ') -> separar por espacio, .split('\t') -> si esta separado por tabulación
d = line.replace('\n','').split(' ')
edges.append((d[0],d[1]))
#print(edges)
import networkx as nx
G = nx.Graph() # G = nx.Graph(edges), otro modo de crear el grafo
G.add_edges_from(edges)
degrees = [degree for _, degree in dict(G.degree()).items()]  # works with networkx 1.x and 2.x
# print(degrees)
sns.distplot(degrees)
e, l, s = sp.stats.powerlaw.fit(degrees)
e, l, s
ax = sns.distplot(sp.stats.powerlaw.rvs(e, l, s, size=100000))
import matplotlib.pyplot as plt
log = np.log(degrees)
plt.hist(log, log=True)
# Fit a 4-parameter (beta) distribution to the log-degrees;
# the original `sp.stats.fit(r)` call does not exist in scipy
a, b, l, s = sp.stats.beta.fit(log)
(a, b, l, s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code::
import numpy as np
import gpt_2_simple as gpt2
from datetime import datetime
from google.colab import files
from tensorflow.python.framework import ops
gpt2.download_gpt2(model_name="124M")
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset= file_name, model_name=model_size, steps=200, restore_from='fresh',run_name = run_name,print_every=10,sample_every=50,save_every=50,learning_rate=0.0007)
ops.reset_default_graph()
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name=run_name)
gpt2.generate(sess, run_name=run_name, temperature=.7, length=100, prefix=None, top_k=40, nsamples=10)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perform the simulation
Step2: Plot pretty pictures of what we just did
|
<ASSISTANT_TASK:>
Python Code:
from tvb.simulator.lab import *
LOG.info("Configuring...")
#Initialize a Model, Coupling, and Connectivity.
oscillator = models.Generic2dOscillator()
white_matter = connectivity.Connectivity.from_file("connectivity_96.zip")
white_matter.speed = numpy.array([4.0])
white_matter_coupling = coupling.Linear(a=0.0126)
#Initialise an Integrator
heunint = integrators.HeunDeterministic(dt=2 ** -4)
#Initialise some Monitors with period in physical time
momo = monitors.Raw()
mama = monitors.TemporalAverage(period=2 ** -2)
#Bundle them
what_to_watch = (momo, mama)
#Define the stimulus
#Specify a weighting for regions to receive stimuli...
white_matter.configure()
nodes = [0, 7, 13, 33, 42]
weighting = numpy.zeros((white_matter.number_of_regions,))
weighting[nodes] = numpy.array([2.0 ** -2, 2.0 ** -3, 2.0 ** -4, 2.0 ** -5, 2.0 ** -6])
eqn_t = equations.Gaussian()
eqn_t.parameters["midpoint"] = 16.0
stimulus = patterns.StimuliRegion(temporal=eqn_t,
connectivity=white_matter,
weight=weighting)
#Initialise Simulator -- Model, Connectivity, Integrator, Monitors, and stimulus.
sim = simulator.Simulator(model=oscillator,
connectivity=white_matter,
coupling=white_matter_coupling,
integrator=heunint,
monitors=what_to_watch,
stimulus=stimulus)
sim.configure()
#Clear the initial transient, so that the effect of the stimulus is clearer.
#NOTE: this is ignored, stimuli are defined relative to each simulation call.
LOG.info("Initial integration to clear transient...")
for _, _ in sim(simulation_length=128):
pass
LOG.info("Starting simulation...")
#Perform the simulation
raw_data = []
raw_time = []
tavg_data = []
tavg_time = []
for raw, tavg in sim(simulation_length=64):
if not raw is None:
raw_time.append(raw[0])
raw_data.append(raw[1])
if not tavg is None:
tavg_time.append(tavg[0])
tavg_data.append(tavg[1])
LOG.info("Finished simulation.")
#Plot defaults in a few combinations
#Plot the stimulus
plot_pattern(sim.stimulus)
#Make the lists numpy.arrays for easier use.
RAW = numpy.array(raw_data)
TAVG = numpy.array(tavg_data)
#Plot raw time series
figure(1)
plot(raw_time, RAW[:, 0, :, 0])
title("Raw -- State variable 0")
figure(2)
plot(raw_time, RAW[:, 1, :, 0])
title("Raw -- State variable 1")
#Plot temporally averaged time series
figure(3)
plot(tavg_time, TAVG[:, 0, :, 0])
title("Temporal average")
#Show them
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter DV360 Report Emailed To BigQuery Recipe Parameters
Step3: 4. Execute DV360 Report Emailed To BigQuery
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'email':'', # Email address report was sent to.
'subject':'.*', # Regular expression to match subject. Double escape backslashes.
'dataset':'', # Existing dataset in BigQuery.
'table':'', # Name of table to be written to.
'dbm_schema':'[]', # Schema provided in JSON list format or empty list.
'is_incremental_load':False, # Append report data to table based on date column, de-duplicates.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'email':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'read':{
'from':'noreply-dv360@google.com',
'to':{'field':{'name':'email','kind':'string','order':1,'default':'','description':'Email address report was sent to.'}},
'subject':{'field':{'name':'subject','kind':'string','order':2,'default':'.*','description':'Regular expression to match subject. Double escape backslashes.'}},
'link':'https://storage.googleapis.com/.*',
'attachment':'.*'
},
'write':{
'bigquery':{
'dataset':{'field':{'name':'dataset','kind':'string','order':3,'default':'','description':'Existing dataset in BigQuery.'}},
'table':{'field':{'name':'table','kind':'string','order':4,'default':'','description':'Name of table to be written to.'}},
'schema':{'field':{'name':'dbm_schema','kind':'json','order':5,'default':'[]','description':'Schema provided in JSON list format or empty list.'}},
'header':True,
'is_incremental_load':{'field':{'name':'is_incremental_load','kind':'boolean','order':6,'default':False,'description':'Append report data to table based on date column, de-duplicates.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Print all the numbers from 0 to 4
Step2: 3. Print the numbers 3, 4, 5
Step3: 4. Build a for loop that prints all even numbers lower than 237.
Step4: 5. Add up all the numbers in the list
Step5: 6. Add up only the numbers that are even
Step6: 7. Use a for loop to print Hello World five times in a row
Step7: 8. Write a program that finds all numbers between 2000 and 3200 that are divisible by 7 but not by 5. The result should be printed on a single line. Hint
Step8: 9. Write a for loop that converts the numbers in the following list from int to str.
Step9: 10. Now write a program that replaces every digit 4 with the letter A and every digit 5 with the letter B.
|
<ASSISTANT_TASK:>
Python Code:
primeNumbers = [2, 3, 5, 7]
for prime in primeNumbers:
print(prime)
for x in range(5):
print(x)
for x in range(3, 6):
print(x)
numbers = [
951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,
615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,
386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,
399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,
815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,
958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,
743, 527
]
# Your code here:
for number in numbers:
if number%2 == 0 and number < 237:
print(number)
#Solution:
added = 0
for number in numbers:
added += number
print(added)
#Solution:
new_list = []
for elem in numbers:
if elem%2 == 0:
new_list.append(elem)
sum(new_list)
for i in range(5):
print("Hello World")
#Solution
l=[]
for i in range(2000, 3200):
if (i%7==0) and (i%5!=0):
l.append(str(i))
print(','.join(l))
lst = range(45,99)
lst = list(lst)
index = 0
print(lst)
for number in lst:
lst[index] = str(number)
index += 1
new_list = []
for elem in lst:
if '4' in elem:
elem = elem.replace('4', 'A')
if '5' in elem:
elem = elem.replace('5', 'B')
new_list.append(elem)
print(new_list)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore the Data
Step3: Preprocessing
Step5: Tokenize Punctuation
Step6: Preprocess all the data and save it
Step7: Check Point
Step8: Build the Neural Network
Step10: Input
Step12: Word Embedding
Step14: Build RNN Cell and Initialize
Step16: Build RNN
Step18: Build the Neural Network
Step20: Batches
Step21: Neural Network Training
Step22: Build the Graph
Step23: Train
Step24: Save Parameters
Step25: Checkpoint
Step27: Implement Generate Functions
Step29: Choose Word
Step30: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
view_sentence_range = (20, 30)
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import common
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab_to_int, int_to_vocab = common.create_lookup_tables(text)
return vocab_to_int, int_to_vocab
tests.test_create_lookup_tables(create_lookup_tables)
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
x = {
'.' : '||period||',
',' : '||comma||',
'"' : '||quote||',
'(' : '||left_bracket||',
'?' : '||question_mark||',
'!' : '||exclamation||',
'\n' : '||new_line||',
')' : '||right_bracket||',
';' : '||semi-colon||' ,
'--' : '||dash_dash||'
}
return x
tests.test_tokenize(token_lookup)
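As a quick illustration of why these tokens help (a minimal sketch using a hypothetical two-entry subset of the dictionary above): padding each token with spaces lets a later `split()` treat punctuation as its own "word".

```python
# Illustrative only: a small subset of the punctuation dictionary above.
sample = "Hello, world!"
tokens = {',': '||comma||', '!': '||exclamation||'}
for punct, tok in tokens.items():
    # Surround each token with spaces so split() separates it from words.
    sample = sample.replace(punct, ' ' + tok + ' ')
print(sample.split())  # ['Hello', '||comma||', 'world', '||exclamation||']
```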
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name ='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input_, targets, learning_rate
tests.test_get_inputs(get_inputs)
def get_embed(input_, n_vocab, n_embedding):
Create embedding for <input_data>.
:param input_: TF placeholder for text input.
:param n_vocab: Number of words in vocabulary.
:param n_embedding: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable( tf.random_uniform( (n_vocab, n_embedding)))
embed = tf.nn.embedding_lookup( embedding, input_)
return embed
tests.test_get_embed(get_embed)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstms = [tf.contrib.rnn.BasicLSTMCell(rnn_size)]
cell = tf.contrib.rnn.MultiRNNCell( lstms )
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
tests.test_get_init_cell(get_init_cell)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
state = tf.identity(state, name="final_state")
return outputs, state
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(
mean = 0.0, stddev = .1),
biases_initializer=tf.zeros_initializer() )
return logits, state
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
int_text = int_text[0:(len(int_text) - len(int_text) % (batch_size * seq_length))]
targets = np.zeros(len(int_text)).astype(int)
targets[:-1] = int_text[1:]
targets[-1] = int_text[0]
elements_per_batch = batch_size * seq_length
num_batches = int(len(int_text) / elements_per_batch)
# Build the batched data
batches = np.zeros((num_batches, 2, batch_size, seq_length)).astype(int)
for sequence in range(batch_size):
for batch in range(num_batches):
start_at = (batch * seq_length) + ( sequence * seq_length * num_batches )
# Append to inputs
batches[batch, 0, sequence, :] = int_text[start_at:(start_at + seq_length)]
# Append to targets
batches[batch, 1, sequence, :] = targets[start_at:(start_at + seq_length)]
return batches
tests.test_get_batches(get_batches)
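To see the batching layout concretely, here is a pure-Python sketch of the same arithmetic on a toy sequence (an illustration assuming the layout used by `get_batches` above: rows within a batch are spaced `num_batches * seq_length` apart, so each row reads as one contiguous stream across batches).

```python
ids = list(range(12))                # toy "int_text"
batch_size, seq_length = 2, 3
num_batches = len(ids) // (batch_size * seq_length)   # 2 batches
targets = ids[1:] + ids[:1]          # next-word targets, wrapped around
batches = []
for batch in range(num_batches):
    xs, ys = [], []
    for row in range(batch_size):
        # Same start-index formula as get_batches above.
        start = batch * seq_length + row * seq_length * num_batches
        xs.append(ids[start:start + seq_length])
        ys.append(targets[start:start + seq_length])
    batches.append((xs, ys))
print(batches[0])  # ([[0, 1, 2], [6, 7, 8]], [[1, 2, 3], [7, 8, 9]])
```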
num_epochs = 300
batch_size = 128
rnn_size = 256
embed_dim = 200
seq_length = 15
learning_rate = .001
show_every_n_batches = 100
save_dir = './save'
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
tensors = ["input:0", "initial_state:0", "final_state:0", "probs:0"]
output = []
for t in tensors:
output.append( loaded_graph.get_tensor_by_name(t) )
return output[0], output[1], output[2], output[3]
tests.test_get_tensors(get_tensors)
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return int_to_vocab[np.argmax(probabilities)]
tests.test_pick_word(pick_word)
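The greedy argmax pick can be sketched without numpy on toy data (the probabilities and vocabulary names below are made up for illustration):

```python
probs_toy = [0.1, 0.7, 0.2]
int_to_vocab_toy = {0: 'homer', 1: 'moe', 2: 'barney'}
# Index of the largest probability, i.e. a pure-Python argmax.
best = max(range(len(probs_toy)), key=probs_toy.__getitem__)
print(int_to_vocab_toy[best])  # moe
```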
gen_length = 300
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'homer_simpson'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Scenario
Step2: Here are some quick hints at interpreting the field names
Step3: Now use the following cell to create the materials for the doctors.
Step4: For our idea of what to show, run the cell below.
Step5: If you'd like to discuss your approach or see what others have done, we have a discussion forum here.
Step 2
Step6: For our solution, run the line below.
Step7: Step 3
Step8: Uncomment the relevant line below to see one solution.
Step9: Step 4
Step10: For a hint, uncomment the line below.
Step11: Step 5
Step12: For a hint, uncomment the line below.
|
<ASSISTANT_TASK:>
Python Code:
from learntools.ml_explainability.ex4 import *
print("Setup Complete")
import pandas as pd
data = pd.read_csv('../input/hospital-readmissions/train.csv')
data.columns
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data = pd.read_csv('../input/hospital-readmissions/train.csv')
y = data.readmitted
base_features = [c for c in data.columns if c != "readmitted"]
X = data[base_features]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
my_model = RandomForestClassifier(n_estimators=30, random_state=1).fit(train_X, train_y)
# Your code here
____
# Run this code cell to receive credit!
q_1.solution()
# Your Code Here
____
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
# Your Code Here
____
# Check your answer (Run this code cell to receive credit!)
q_3.solution()
# Your Code Here
____
# q_4.hint()
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
# Your Code Here
____
# q_5.hint()
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Color space conversion
Step2: Download a sample image
Step3: Usage
Step4: The image can be displayed as follows.
Step5: Convert RGB to grayscale
Step6: Convert RGB to BGR
Step7: Convert RGB to CIE XYZ
Step8: Convert RGB to YCbCr
Step9: Even more interestingly, YCbCr can be decomposed into perceptually meaningful components: Y' (luma), Cb (blue-difference chroma), and Cr (red-difference chroma).
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tensorflow-io
!curl -o sample.jpg -L https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg
!ls -ls sample.jpg
import tensorflow as tf
import tensorflow_io as tfio
image = tf.image.decode_jpeg(tf.io.read_file('sample.jpg'))
print(image.shape, image.dtype)
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(image)
plt.axis('off')
plt.show()
grayscale = tfio.experimental.color.rgb_to_grayscale(image)
print(grayscale.shape, grayscale.dtype)
# use tf.squeeze to remove last channel for plt.imshow to display:
plt.figure()
plt.imshow(tf.squeeze(grayscale, axis=-1), cmap='gray')
plt.axis('off')
plt.show()
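For intuition, a grayscale value is just a weighted sum of the RGB channels. A minimal sketch, assuming the common ITU-R BT.601 luma weights (the exact weights used by the library may differ):

```python
# Y = 0.299 R + 0.587 G + 0.114 B (BT.601 weights, assumed for illustration)
r, g, b = 255, 0, 0                    # pure red
y = 0.299 * r + 0.587 * g + 0.114 * b
print(y)
```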
bgr = tfio.experimental.color.rgb_to_bgr(image)
print(bgr.shape, bgr.dtype)
plt.figure()
plt.imshow(bgr)
plt.axis('off')
plt.show()
# convert to float32
image_float32 = tf.cast(image, tf.float32) / 255.0
xyz_float32 = tfio.experimental.color.rgb_to_xyz(image_float32)
# convert back uint8
xyz = tf.cast(xyz_float32 * 255.0, tf.uint8)
print(xyz.shape, xyz.dtype)
plt.figure()
plt.imshow(xyz)
plt.axis('off')
plt.show()
ycbcr = tfio.experimental.color.rgb_to_ycbcr(image)
print(ycbcr.shape, ycbcr.dtype)
plt.figure()
plt.imshow(ycbcr, cmap='gray')
plt.axis('off')
plt.show()
y, cb, cr = ycbcr[:,:,0], ycbcr[:,:,1], ycbcr[:,:,2]
# Y' component
plt.figure()
plt.imshow(y, cmap='gray')
plt.axis('off')
plt.show()
# Cb component
plt.figure()
plt.imshow(cb, cmap='gray')
plt.axis('off')
plt.show()
# Cr component
plt.figure()
plt.imshow(cr, cmap='gray')
plt.axis('off')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With this we have expanded our dataset by a factor of 8. Let's look at the distribution of datapoints over all classes
Step2: All preprocessing steps can be found in [ xyz .py ].
Step3: To actually see something you need to add some lines while running your session
|
<ASSISTANT_TASK:>
Python Code:
## Augumenting functions based on Naoki Shibuya's work! Thank you!
## https://github.com/naokishibuya/car-traffic-sign-classification
import cv2
import numpy as np
def resizeImage(image):
    return cv2.resize(image, (48,48))
def random_brightness(image, ratio):
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
brightness = np.float64(hsv[:, :, 2])
brightness = brightness * (1.0 + np.random.uniform(-ratio, ratio))
brightness[brightness>255] = 255
brightness[brightness<0] = 0
hsv[:, :, 2] = brightness
return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
def random_rotation(image, angle):
if angle == 0:
return image
angle = np.random.uniform(-angle, angle)
rows, cols = image.shape[:2]
size = cols, rows
center = cols/2, rows/2
scale = 1.0
rotation = cv2.getRotationMatrix2D(center, angle, scale)
return cv2.warpAffine(image, rotation, size)
def random_translation(image, translation):
if translation == 0:
        return image
rows, cols = image.shape[:2]
size = cols, rows
x = np.random.uniform(-translation, translation)
y = np.random.uniform(-translation, translation)
trans = np.float32([[1,0,x],[0,1,y]])
return cv2.warpAffine(image, trans, size)
def random_shear(image, shear):
if shear == 0:
return image
rows, cols = image.shape[:2]
size = cols, rows
left, right, top, bottom = shear, cols - shear, shear, rows - shear
dx = np.random.uniform(-shear, shear)
dy = np.random.uniform(-shear, shear)
p1 = np.float32([[left , top],[right , top ],[left, bottom]])
p2 = np.float32([[left+dx, top],[right+dx, top+dy],[left, bottom+dy]])
move = cv2.getAffineTransform(p1,p2)
return cv2.warpAffine(image, move, size)
def augment_image(image, brightness, angle, translation, shear):
image = resizeImage(image)
aug_images = []
aug_images.append(image)
aug_images.append(random_brightness(image, brightness))
aug_images.append(random_rotation(image, angle))
aug_images.append(random_translation(image, translation))
aug_images.append(random_shear(image, shear))
aug_images.append(random_rotation(random_brightness(image, brightness), angle))
aug_images.append(random_translation(random_brightness(image, brightness), translation))
aug_images.append(random_shear(random_brightness(image, brightness), shear))
return aug_images
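The affine rotation above is easiest to understand as plain 2D geometry. A self-contained sketch of rotating a point about the image centre (this illustrates the math only; OpenCV's `getRotationMatrix2D` uses a y-down image coordinate convention, so the sign of the angle may differ):

```python
import math

rows, cols, angle = 4, 4, 90
cx, cy = cols / 2, rows / 2            # rotation centre
rad = math.radians(angle)
x, y = cols, rows                      # a corner point
# Standard counter-clockwise rotation about (cx, cy), scale 1.
xr = cx + (x - cx) * math.cos(rad) - (y - cy) * math.sin(rad)
yr = cy + (x - cx) * math.sin(rad) + (y - cy) * math.cos(rad)
print(round(xr), round(yr))  # 0 4  -- the corner maps onto another corner
```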
import random
## our expanded dataset is stored in
enh_images, enh_labels = (...)
#since we need to know the start and end of the current class
lower_bound = 0
upper_bound = len(enh_images)
eq_images = []
eq_labels = []
#assumtion: class indizes are sorted and increasing
for classindex in range(43):
for position in range(lower_bound, upper_bound):
if enh_labels[position] != classindex:
upper_bound = position
break
sample_index = random.sample(range(lower_bound, upper_bound), 200)
for i in sample_index:
eq_images.append(enh_images[i][:])
for i in sample_index:
eq_labels.append(enh_labels[i])
cnt = 0
for i in sample_index:
enh_images.pop(i - cnt)
enh_labels.pop(i - cnt)
cnt +=1
lower_bound = upper_bound
upper_bound = len(enh_images)
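The idea of the loop above — drawing a fixed-size `random.sample` per class to obtain a class-balanced subset — can be checked on toy data (the labels and sizes below are made up for illustration):

```python
import random

random.seed(0)
labels = [0] * 10 + [1] * 50           # imbalanced toy labels
per_class = 5
balanced = []
for cls in (0, 1):
    idx = [i for i, l in enumerate(labels) if l == cls]
    # Sample without replacement within each class.
    balanced += random.sample(idx, per_class)
counts = [sum(1 for i in balanced if labels[i] == c) for c in (0, 1)]
print(counts)  # [5, 5]
```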
import skimage
def variable_summaries(var, summary_name):
with tf.name_scope("Summary_" + str(summary_name)):
tf.summary.scalar("raw", var)
tf.summary.histogram('histogram', var)
def convertToUINT8(x):
return skimage.img_as_ubyte(x / np.amax(x))
## Based on panmari's post. Thank you!
## https://gist.github.com/panmari/4622b78ce21e44e2d69c
def VisualizeConvolutions(myTensor, sizeInfo, name):
V = tf.slice(myTensor, (0, 0, 0, 0), (1, -1, -1, -1), name='slice_' + name)
V = tf.reshape(V, (sizeInfo[0], sizeInfo[1], sizeInfo[2]))
V = tf.transpose(V, (2, 0, 1))
V = tf.reshape(V, (-1, sizeInfo[0], sizeInfo[1], 1))
tf.summary.image(name, V, max_outputs=3)
## Add this in your graph definition to use TensorBoard's "IMAGE" Tab
## This will print the first 32 feature maps of each layer after the convolution
## as well as the original input image
with tf.name_scope('Visualiz_convolutions'):
img_raw = tf.slice(x_image, (0, 0, 0, 0), (1, -1, -1, -1), name='slice_raw_image')
tf.summary.image("raw_image", img_raw, max_outputs=32)
## note: (width, height, output kernels)
## 48px = orig. image // 24px = feature map after maxpool
VisualizeConvolutions(h_conv1, [48, 48, 96], "Conv1")
VisualizeConvolutions(h_conv2, [24, 24, 96], "Conv2")
VisualizeConvolutions(h_conv3, [12, 12, 48], "Conv3")
logs_path = r'C:/....'
cross_entropy = (...)
accuracy = (...)
variable_summaries(cross_entropy, 'crossentropy')
variable_summaries(accuracy, 'TrainingAcc')
summary_op = tf.summary.merge_all()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
summary_writer = tf.summary.FileWriter(logs_path)
summary_writer.add_graph(sess.graph)
#Start training and logging
for i in range(100):
(...)
_, summary = sess.run([train_step, summary_op], feed_dict={(...)})
summary_writer.add_summary(summary, i)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Practical Problem 8.9
Step2: Example 8.10
Step3: Practical Problem 8.10
|
<ASSISTANT_TASK:>
Python Code:
print("Exemplo 8.9\n")
from sympy import *
t = symbols('t')
V = 12
C = 1/2
L = 1
#Para t < 0
i0 = 0
v0 = V
print("i(0):",i0,"A")
print("v(0):",v0,"V")
#Para t = oo
i_f = V/(4 + 2)
vf = V*2/(4 + 2)
print("i(oo):",i_f,"A")
print("v(oo):",vf,"V")
#Para t > 0
#desativar fontes independentes
#i = v/2 + C*dv/dt
#4i + L*di/dt + v = 0
#4*(v/2 + 1/2*dv/dt) + d(v/2 + 1/2*dv/dt)/dt + v = 0
#2v + 2dv/dt + 1/2*dv/dt + 1/2*d^2v/t^2 + v = 0
#d^2v/dt^2 + 5dv/dt + 6v = 0
#s^2 + 5s + 6 = 0
s1 = -2
s2 = -3
#Raizes reais e negativas: Superamortecido
#vn(t) = A1*exp(-2t) + A2*exp(-3t)
#vss(t) = v(oo) = 4
#v(t) = 4 + A1*exp(-2t) + A2*exp(-3t)
#dv(0)/dt = -2A1 -3A2 = ic(0)/C
#ic(0) = -6
#C = 1/2
#-2A1 - 3A2 = -12
#2A1 + 3A2 = 12
#v(0) = 4 + A1 + A2 = 12
#A1 + A2 = 8
#2(8 - A2) + 3A2 = 12
A2 = -4
A1 = 12
v = A1*exp(s1*t) + A2*exp(s2*t) + vf
print("Resposta completa v(t):",v,"V")
#i = v/2 + C*dv/dt
i = v/2 + C*diff(v,t)
print("i(t):",i,"A")
print("Problema Prático 8.9")
s = symbols('s')
C = 1/20
L = 2
Is = 3
#Para t < 0
v0 = 0
i0 = 0
print("v(0):",v0,"V")
print("i(0):",i0,"A")
#Para t = oo
i_f = Is
vf = 4*Is
print("i(oo):",i_f,"A")
print("v(oo):",vf,"V")
#Para t > 0
dv0 = Is/C
di0 = 10*Is/L
print("dv(0)/dt:",dv0,"V/s")
print("di(0)/dt:",di0,"A/s")
#desativar fontes indep.
#4i + L*di/dt - v + 10i = 0
#i = -C*dv/dt
#14(-1/20*dv/dt) + 2(-1/20*d^2v/dt^2) - v = 0
#-1/10*d^2v/dt^2 - 7/10*dv/dt - v = 0
#d^2v/dt^2 + 7*dv/dt + 10v = 0
#s^2 + 7s + 10 = 0
r = solve(s**2 + 7*s + 10,s)
s1,s2 = r[0],r[1]
print("Raízes s1 e s2: {0} , {1}".format(s1,s2))
#Raizes reais e negativas: Superamortecido
#v(t) = vf + A1*exp(-5t) + A2*exp(-2t)
#v0 = A1 + A2 = -12
#A2 = -12 - A1
#dv0/dt = -5A1 -2A2 = 60
#-5A1 - 2(-12 - A1) = 60
A1 = (60-24)/(-3)
A2 = -12 - A1
print("Constantes A1 e A2: {0} , {1}".format(A1,A2))
v = A1*exp(s1*t) + A2*exp(s2*t) + vf
print("Resposta completa v(t):",v,"V")
#3 = C*dv/dt + i
i = 3 - C*diff(v,t)
print("Resposta i(t):",i,"A")
print("Exemplo 8.10\n")
V = 7
L1 = 1/2
L2 = 1/5
#Para t < 0
i1_0 = 0
i2_0 = 0
print("i1(0):",i1_0,"A")
print("i2(0):",i2_0,"A")
#Para t = oo
i_f = V/3
print("i(oo):",i_f,"A")
#Para t > 0
#di1(0)/dt = vl/L1
di1 = V/L1
#di2(0)/dt = vl/L2
di2 = 0/L2
print("di1(0)/dt:",di1,"A/s")
print("di2(0)/dt:",di2,"A/s")
#desligar fontes indep.
#3i1 + 1/2*di1/dt + (i1 - i2) = 0
#4i1 + 1/2*di1/dt - i2 = 0
#1/5*di2/t + i2 - i1 = 0
#4/5*di1/dt + 1/10*d^2i1/dt^2 + 4i1 + 1/2*di1/dt - i1 = 0
#d^2i1/dt^2 + 13di1/dt + 30i1 = 0
#s^2 + 13s + 30 = 0
r = solve(s**2 + 13*s + 30,s)
s1,s2 = r[0],r[1]
print("Raizes s1 e s2: {0} , {1}".format(s1,s2))
#raizes reais e negativas: Superamortecido
#i1(t) = 7/3 + A1*exp(-10t) + A2*exp(-3t)
#i1(0) = 7/3 + A1 + A2 = 0
#A1 = -7/3 - A2
#di1(0)/dt = -10A1 -3A2 = 14
#-10(-7/3 - A2) - 3A2 = 14
A2 = (14 - 70/3)/7
A1 = -7/3 - A2
print("Constantes A1 e A2: {0} , {1}".format(A1,A2))
i1 = i_f + A1*exp(s1*t) + A2*exp(s2*t)
print("i1(t):",i1,"A")
#V = 3i1 + L1*di1/dt + (i1 - i2)
i2 = 3*i1 + L1*diff(i1,t) + i1 - V
print("i2(t):",i2,"A")
vo = i1 - i2
print("V0(t):",vo,"V")
print("Problema Prático 8.10")
V = 20
C1 = 1/2
C2 = 1/3
#Para t < 0
v1_0 = 0
v2_0 = 0
print("v1(0) e v2(0):",v1_0,"V")
#Para t = oo
v1_f = V
v2_f = V
print("v1(oo) e v2(oo):",v1_f,"V")
#Para t > 0
#dv1(0)/dt = i1(0)/C1 = (V/1)/(1/2)
dv1 = V/C1
#dv2(0)/dt = i2(0)/C2 = 0/C2
dv2 = 0
print("dv1(0)/dt:",dv1,"V/s")
print("dv2(0)/dt:",dv2,"V/s")
#desligar fontes indep.
#v1/1 + C1*dv1/dt + vo/1 = 0
#vo = v1-v2
#v1 + 1/2*dv1/dt + v1-v2 = 0
#dv1/dt + 4v1 - 2v2 = 0
#v1 = 1*C2*dv2/dt + v2
#1/3*d^2v2/dt^2 + dv2/dt + 4/3*dv2/dt + 4v2 - 2v2 = 0
#d^2v2/dt^2 + 7dv2/dt + 6v2 = 0
#s^2 + 7s + 6 = 0
r = solve(s**2 + 7*s + 6,s)
s1,s2 = r[0],r[1]
print("Raizes para v2:",s1,s2)
#raizes reais e negativas: Superamortecido
#v2(t) = 20 + A1*exp(-6t) + A2*exp(-t)
#v2(0) = 20 + A1 + A2 = 0
#A2 = -20 - A1
#dv2(0)/dt = -6A1 - A2 = 0
#-6A1 - (-20 - A1) = 0
A1 = 20/5
A2 = -20 - A1
print("Constantes A1 e A2:",A1,A2)
v2 = v2_f + A1*exp(s1*t) + A2*exp(s2*t)
print("v2(t):",v2,"V")
v1 = C2*diff(v2,t) + v2
print("v1(t):",v1,"V")
vo = v1 - v2
print("Resposta vo(t):",vo,"V")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading data
Step2: By default, ~mne.io.read_raw_fif displays some information about the file
Step3: ~mne.io.Raw objects also have several built-in plotting methods; here we
Step4: Preprocessing
Step5: Once we're confident about which component(s) we want to remove, we pass them
Step6: Detecting experimental events
Step7: The resulting events array is an ordinary 3-column
Step8: Event dictionaries like this one are used when extracting epochs from
Step9: For paradigms that are not event-related (e.g., analysis of resting-state
Step10: We'll also pass the event dictionary as the event_id parameter (so we can
Step11: Next we'll pool across left/right stimulus presentations so we can compare
Step12: Like ~mne.io.Raw objects, ~mne.Epochs objects also have a number of
Step13: <div class="alert alert-info"><h4>Note</h4><p>Both `~mne.io.Raw` and `~mne.Epochs` objects have `~mne.Epochs.get_data`
Step14: Estimating evoked responses
Step15: We can also get a more detailed view of each ~mne.Evoked object using other
Step16: Evoked objects can also be combined to show contrasts between conditions,
Step17: Inverse modeling
Step18: Finally, in order to plot the source estimate on the subject's cortical
|
<ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
print(raw)
print(raw.info)
raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30)
# set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude)
orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',
'EEG 007', 'EEG 008']
chan_idxs = [raw.ch_names.index(ch) for ch in chs]
orig_raw.plot(order=chan_idxs, start=12, duration=4)
raw.plot(order=chan_idxs, start=12, duration=4)
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32}
fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],
first_samp=raw.first_samp)
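Each row of the events array is `[sample, previous_value, event_id]`, so counting the last column gives the number of trials per condition. A toy illustration with made-up event rows (not the actual sample-data events):

```python
from collections import Counter

events_toy = [[27977, 0, 2], [28345, 0, 3], [28771, 0, 1],
              [29219, 0, 4], [29652, 0, 2]]
counts = Counter(row[2] for row in events_toy)   # tally event_id column
print(counts[2])  # 2
```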
reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 µV
eog=250e-6) # 250 µV
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True)
conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory
aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])
frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332'])
aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
legend='upper left', show_sensors='upper right')
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')
evoked_diff = mne.combine_evoked([aud_evoked, vis_evoked], weights=[1, -1])
evoked_diff.pick_types(meg='mag').plot_topo(color='r', legend=False)
# load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)
snr = 3.
lambda2 = 1. / snr ** 2
# generate the source time course (STC)
stc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,
lambda2=lambda2,
method='MNE') # or dSPM, sLORETA, eLORETA
# path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot the STC
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Show event-related fields images
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# Create epochs, here for gradiometers + EOG only for simplicity
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.manifold import spectral_embedding  # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
order=order_func, vmin=-250, vmax=250,
overlay_times=plt_times, show=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Statistically Encode Variables
Step2: Example - Census_OEMModelIdentifier
Step3: Predict Test and Submit to Kaggle
|
<ASSISTANT_TASK:>
Python Code:
# IMPORT LIBRARIES
import pandas as pd, numpy as np, os, gc
# LOAD AND FREQUENCY-ENCODE
FE = ['EngineVersion','AppVersion','AvSigVersion','Census_OSVersion']
# LOAD AND ONE-HOT-ENCODE
OHE = [ 'RtpStateBitfield','IsSxsPassiveMode','DefaultBrowsersIdentifier',
'AVProductStatesIdentifier','AVProductsInstalled', 'AVProductsEnabled',
'CountryIdentifier', 'CityIdentifier',
'GeoNameIdentifier', 'LocaleEnglishNameIdentifier',
'Processor', 'OsBuild', 'OsSuite',
'SmartScreen','Census_MDC2FormFactor',
'Census_OEMNameIdentifier',
'Census_ProcessorCoreCount',
'Census_ProcessorModelIdentifier',
'Census_PrimaryDiskTotalCapacity', 'Census_PrimaryDiskTypeName',
'Census_HasOpticalDiskDrive',
'Census_TotalPhysicalRAM', 'Census_ChassisTypeName',
'Census_InternalPrimaryDiagonalDisplaySizeInInches',
'Census_InternalPrimaryDisplayResolutionHorizontal',
'Census_InternalPrimaryDisplayResolutionVertical',
'Census_PowerPlatformRoleName', 'Census_InternalBatteryType',
'Census_InternalBatteryNumberOfCharges',
'Census_OSEdition', 'Census_OSInstallLanguageIdentifier',
'Census_GenuineStateName','Census_ActivationChannel',
'Census_FirmwareManufacturerIdentifier',
'Census_IsTouchEnabled', 'Census_IsPenCapable',
'Census_IsAlwaysOnAlwaysConnectedCapable', 'Wdft_IsGamer',
'Wdft_RegionIdentifier']
# LOAD ALL AS CATEGORIES
dtypes = {}
for x in FE+OHE: dtypes[x] = 'category'
dtypes['MachineIdentifier'] = 'str'
dtypes['HasDetections'] = 'int8'
# LOAD CSV FILE
df_train = pd.read_csv('../input/train.csv', usecols=dtypes.keys(), dtype=dtypes)
print ('Loaded',len(df_train),'rows of TRAIN.CSV!')
# DOWNSAMPLE
sm = 2000000
df_train = df_train.sample(sm)
print ('Only using',sm,'rows to train and validate')
x=gc.collect()
import math
# CHECK FOR NAN
def nan_check(x):
if isinstance(x,float):
if math.isnan(x):
return True
return False
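The nan_check helper above can be condensed into a single boolean expression; an equivalent sketch for reference (our refactoring, not part of the original kernel):

```python
import math

def nan_check(x):
    """Condensed version of nan_check: True only for float NaN;
    strings and ints are never NaN."""
    return isinstance(x, float) and math.isnan(x)

print(nan_check(float("nan")), nan_check(1.0), nan_check("a"))  # True False False
```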
# FREQUENCY ENCODING
def encode_FE(df,col,verbose=1):
d = df[col].value_counts(dropna=False)
n = col+"_FE"
df[n] = df[col].map(d)/d.max()
if verbose==1:
print('FE encoded',col)
return [n]
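To see what encode_FE computes without needing pandas, here is an equivalent pure-Python sketch on a toy list (each value is mapped to its frequency relative to the most common value; `frequency_encode` is our name, not part of the notebook):

```python
from collections import Counter

def frequency_encode(values):
    """Map each value to its count divided by the modal value's count,
    mirroring the encode_FE logic above."""
    counts = Counter(values)
    top = max(counts.values())
    return {v: counts[v] / top for v in counts}

codes = frequency_encode(["a", "a", "a", "b", "b", "c"])
# "a" is the most frequent value, so it maps to 1.0
```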
# ONE-HOT-ENCODE ALL CATEGORY VALUES THAT COMPRISE MORE THAN
# "FILTER" PERCENT OF TOTAL DATA AND HAS SIGNIFICANCE GREATER THAN "ZSCORE"
def encode_OHE(df, col, filter, zscore, tar='HasDetections', m=0.5, verbose=1):
cv = df[col].value_counts(dropna=False)
cvd = cv.to_dict()
vals = len(cv)
th = filter * len(df)
sd = zscore * 0.5/ math.sqrt(th)
#print(sd)
n = []; ct = 0; d = {}
for x in cv.index:
try:
if cv[x]<th: break
sd = zscore * 0.5/ math.sqrt(cv[x])
except:
if cvd[x]<th: break
sd = zscore * 0.5/ math.sqrt(cvd[x])
if nan_check(x): r = df[df[col].isna()][tar].mean()
else: r = df[df[col]==x][tar].mean()
if abs(r-m)>sd:
nm = col+'_BE_'+str(x)
if nan_check(x): df[nm] = (df[col].isna()).astype('int8')
else: df[nm] = (df[col]==x).astype('int8')
n.append(nm)
d[x] = 1
ct += 1
if (ct+1)>=vals: break
if verbose==1:
print('OHE encoded',col,'- Created',len(d),'booleans')
return [n,d]
# ONE-HOT-ENCODING from dictionary
def encode_OHE_test(df,col,dt):
n = []
for x in dt:
n += encode_BE(df,col,x)
return n
# BOOLEAN ENCODING
def encode_BE(df,col,val):
n = col+"_BE_"+str(val)
if nan_check(val):
df[n] = df[col].isna()
else:
df[n] = df[col]==val
df[n] = df[n].astype('int8')
return [n]
cols = []; dd = []
# ENCODE NEW
for x in FE:
cols += encode_FE(df_train,x)
for x in OHE:
tmp = encode_OHE(df_train,x,0.005,5)
cols += tmp[0]; dd.append(tmp[1])
print('Encoded',len(cols),'new variables')
# REMOVE OLD
for x in FE+OHE:
del df_train[x]
print('Removed original',len(FE+OHE),'variables')
x = gc.collect()
from keras import callbacks
from sklearn.metrics import roc_auc_score
class printAUC(callbacks.Callback):
def __init__(self, X_train, y_train):
super(printAUC, self).__init__()
self.bestAUC = 0
self.X_train = X_train
self.y_train = y_train
def on_epoch_end(self, epoch, logs={}):
pred = self.model.predict(np.array(self.X_train))
auc = roc_auc_score(self.y_train, pred)
print("Train AUC: " + str(auc))
pred = self.model.predict(self.validation_data[0])
auc = roc_auc_score(self.validation_data[1], pred)
print ("Validation AUC: " + str(auc))
if (self.bestAUC < auc) :
self.bestAUC = auc
self.model.save("bestNet.h5", overwrite=True)
return
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
from keras.callbacks import LearningRateScheduler
from keras.optimizers import Adam
#SPLIT TRAIN AND VALIDATION SET
X_train, X_val, Y_train, Y_val = train_test_split(
df_train[cols], df_train['HasDetections'], test_size = 0.5)
# BUILD MODEL
model = Sequential()
model.add(Dense(100,input_dim=len(cols)))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(100))
model.add(Dropout(0.4))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=Adam(lr=0.01), loss="binary_crossentropy", metrics=["accuracy"])
annealer = LearningRateScheduler(lambda x: 1e-2 * 0.95 ** x)
# TRAIN MODEL
model.fit(X_train,Y_train, batch_size=32, epochs = 20, callbacks=[annealer,
printAUC(X_train, Y_train)], validation_data = (X_val,Y_val), verbose=2)
del df_train
del X_train, X_val, Y_train, Y_val
x = gc.collect()
# LOAD BEST SAVED NET
from keras.models import load_model
model = load_model('bestNet.h5')
pred = np.zeros((7853253,1))
id = 1
chunksize = 2000000
for df_test in pd.read_csv('../input/test.csv',
chunksize = chunksize, usecols=list(dtypes.keys())[0:-1], dtype=dtypes):
print ('Loaded',len(df_test),'rows of TEST.CSV!')
# ENCODE TEST
cols = []
for x in FE:
cols += encode_FE(df_test,x,verbose=0)
for x in range(len(OHE)):
cols += encode_OHE_test(df_test,OHE[x],dd[x])
# PREDICT TEST
end = (id)*chunksize
if end>7853253: end = 7853253
pred[(id-1)*chunksize:end] = model.predict_proba(df_test[cols])
print(' encoded and predicted part',id)
id += 1
# SUBMIT TO KAGGLE
df_test = pd.read_csv('../input/test.csv', usecols=['MachineIdentifier'])
df_test['HasDetections'] = pred
df_test.to_csv('submission.csv', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Filter
Step2: Examples
Step3:
Step4:
Step5:
Step6:
Step7:
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
!pip install --quiet -U apache-beam
import apache_beam as beam
def is_perennial(plant):
return plant['duration'] == 'perennial'
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(is_perennial)
| beam.Map(print))
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >>
beam.Filter(lambda plant: plant['duration'] == 'perennial')
| beam.Map(print))
import apache_beam as beam
def has_duration(plant, duration):
return plant['duration'] == duration
with beam.Pipeline() as pipeline:
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(has_duration, 'perennial')
| beam.Map(print))
import apache_beam as beam
with beam.Pipeline() as pipeline:
perennial = pipeline | 'Perennial' >> beam.Create(['perennial'])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter perennials' >> beam.Filter(
lambda plant,
duration: plant['duration'] == duration,
duration=beam.pvalue.AsSingleton(perennial),
)
| beam.Map(print))
import apache_beam as beam
with beam.Pipeline() as pipeline:
valid_durations = pipeline | 'Valid durations' >> beam.Create([
'annual',
'biennial',
'perennial',
])
valid_plants = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'PERENNIAL'
},
])
| 'Filter valid plants' >> beam.Filter(
lambda plant,
valid_durations: plant['duration'] in valid_durations,
valid_durations=beam.pvalue.AsIter(valid_durations),
)
| beam.Map(print))
import apache_beam as beam
with beam.Pipeline() as pipeline:
keep_duration = pipeline | 'Duration filters' >> beam.Create([
('annual', False),
('biennial', False),
('perennial', True),
])
perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{
'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'
},
{
'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'
},
{
'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'
},
{
'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'
},
{
'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'
},
])
| 'Filter plants by duration' >> beam.Filter(
lambda plant,
keep_duration: keep_duration[plant['duration']],
keep_duration=beam.pvalue.AsDict(keep_duration),
)
| beam.Map(print))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Take the set of pings, make sure we have actual clientIds and remove duplicate pings.
Step2: We're going to dump each event from the pings. Do a little empty data sanitization so we don't get NoneType errors during the dump. We create a JSON array of active experiments as part of the dump.
Step3: Create a set of events from "saved-session" UI telemetry. Output the data to CSV or Parquet.
|
<ASSISTANT_TASK:>
Python Code:
def safe_str(obj):
    """Return the unicode string representation of obj."""
if obj is None:
return unicode("")
return unicode(obj)
def dedupe_pings(rdd):
return rdd.filter(lambda p: p["meta/clientId"] is not None)\
.map(lambda p: (p["meta/documentId"], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
def dedupe_addons(rdd):
return rdd.map(lambda p: (p[0] + safe_str(p[2]) + safe_str(p[3]), p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
def clean(s):
try:
s = s.decode("ascii").strip()
return s if len(s) > 0 else None
except:
return None
def transform(ping):
output = []
# These should not be None since we filter those out & ingestion process adds the data
clientId = ping["meta/clientId"]
submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d")
addonset = {}
addons = ping["environment/addons/activeAddons"]
if addons is not None:
for addon, desc in addons.iteritems():
name = clean(desc.get("name", None))
if name is not None:
addonset[name] = 1
persona = ping["environment/addons/persona"]
if len(addonset) > 0 or persona is not None:
addonarray = None
if len(addonset) > 0:
addonarray = json.dumps(addonset.keys())
output.append([clientId, submissionDate, addonarray, persona])
return output
channels = ["nightly", "aurora", "beta", "release"]
batch_date = os.environ.get('date')
if batch_date:
start = end = dt.datetime.strptime(batch_date, '%Y%m%d')
else:
    start = end = dt.datetime.now() - dt.timedelta(1)
day = start
while day <= end:
for channel in channels:
print "\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d")
pings = get_pings(sc, app="Fennec", channel=channel,
submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")),
build_id=("20100101000000", "99999999999999"),
fraction=1)
subset = get_pings_properties(pings, ["meta/clientId",
"meta/documentId",
"meta/submissionDate",
"environment/addons/activeAddons",
"environment/addons/persona"])
subset = dedupe_pings(subset)
print subset.first()
rawAddons = subset.flatMap(transform)
print "\nrawAddons count: " + str(rawAddons.count())
print rawAddons.first()
uniqueAddons = dedupe_addons(rawAddons)
print "\nuniqueAddons count: " + str(uniqueAddons.count())
print uniqueAddons.first()
s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/android_addons"
s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d")
schema = StructType([
StructField("clientid", StringType(), False),
StructField("submissiondate", TimestampType(), False),
StructField("addons", StringType(), True),
StructField("lwt", StringType(), True)
])
grouped = sqlContext.createDataFrame(uniqueAddons, schema)
grouped.coalesce(1).write.parquet(s3_output, mode="overwrite")
day += dt.timedelta(1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Animations
Step2: So animation plotting is based off creating a function. So in this case, we are animating a line plot.
Step3: This works! Animating a 2D image is similar. Except in the animate function, you will set both x and y data.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
points_in = 0
for i in range(100):
x = np.random.rand()
y = np.random.rand()
r = np.sqrt(x**2 + y**2)
if r <= 1:
points_in += 1
pi_4 = points_in/(i+1)
print("pi = {}".format(pi_4 * 4.))
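The same Monte Carlo estimator can be packaged as a seeded function, which makes it easy to sanity-check with more samples (a minimal stdlib sketch; `estimate_pi` is our name, not from the notebook):

```python
import random

def estimate_pi(n_points, seed=0):
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

print(estimate_pi(100_000))
```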
%matplotlib nbagg
import matplotlib.pyplot as plt
import matplotlib.animation as ani
import numpy as np
fig, ax = plt.subplots()
x = np.linspace(0, 6*np.pi, 1000)
y = np.sin(x)
line, = ax.plot(x, y)
def update(i):
shift = np.pi/50
x = np.linspace(0, 6*np.pi, 1000)
y = np.sin(x + i*shift)
return line.set_data(x, y)
anim = ani.FuncAnimation(fig, update, frames=100)
plt.show()
fig, ax = plt.subplots()
points_in = 0
points_all = 0
def calculate_point(i):
global points_in, points_all, ax
x = np.random.rand()
y = np.random.rand()
r = np.sqrt(x**2 + y**2)
if r <= 1:
points_in += 1
ax.plot(x, y, 'bo')
else:
ax.plot(x, y, 'ro')
points_all += 1
pi_4 = points_in/points_all
pi = pi_4 * 4.
plt.title("pi = {}".format(pi))
x = np.linspace(0, 1, 1000)
y = np.sqrt(1 - x**2)
plt.plot(x, y)
anim = ani.FuncAnimation(fig, calculate_point, 100, interval=10)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So what’s going on here? Well, we have list which contains several different data-types. For every 'item' we try to multiply item by itself. If 'item' is a number this makes sense and so we print item * item. However, if we try to multiply a string by a string we get a TypeError, which the except statement catches. So if we receive a TypeError we try something else, in this particular case we add item to item, and thus "aa", "bb", etc get printed.
Step2: In this case Python didn't receive a TypeError and thus the except block of code failed to execute. Now, we can fix this code in one of two ways
Step4: The bad fix just leaves a blank except statement, which catches ALL errors. The good fix, meanwhile, specifically states which errors it should catch, but the code will still fail if the error is something other than a TypeError or a ZeroDivisionError.
|
<ASSISTANT_TASK:>
Python Code:
a_list = [10, 32.4, -14.2, "a", "b", [], [1,2]]
for item in a_list:
try:
print(item * item)
except TypeError:
print(item + item)
item = 0
try:
item / item
except TypeError:
print(item + item)
x = 0
# The bad fix first...
try:
x / x
except:
print("Bad ", x + x)
# The Good fix...
try:
    x / x
except (TypeError, ZeroDivisionError):  # note the CamelCase exception names
print("Good", x + x)
def character_movement(x, y):
where (x,y) is the position on a 2-d plane
return [("start", (x, y)),
("left", (x -1, y)),("right", (x + 1, y)),
("up", (x, y - 1)), ("down", (x, y + 1))]
the_map = [ [0, 0, 0],
[0, 0, 0],
[0, 0, 1]] # 1 denotes our character
moves = character_movement(2, 2)
print("Starting square = (2,2)")
for (direction, position) in moves[1:]:
print("Trying to move '{}' to square {}:".format(direction, position))
try:
the_map[position[1]][position[0]] = 2
print(*the_map, sep="\n")
print("\n")
except IndexError:
print("Square {}, is out of bounds. IndexError sucessfully caught.\n".format(position))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas - Cleaning data
Step2: We notice that the columns describe features of the Titanic passengers, such as age, sex, and class. Of particular interest is the column Survived, which describes whether or not the passenger survived. When training our model, what we are essentially doing is assessing how each feature impacts whether or not the passenger survived (or if the feature makes an impact at all).
Step3: Next, we review the type of data in the columns, and their respective counts.
Step4: We notice that the columns Age and Embarked have NaNs or missing values. As previously discussed, we take the approach of simply removing the rows with missing values.
Step5: Question
Step6: Similarly for Embarked, we review the range of values and create a new column called Port that represents, as a numerical value, where each passenger embarks from.
Step7: Question
Step8: We review the columns our final, processed data set.
Step9: For convenience, we move the column Survived to the left-most column. We note that the left-most column is indexed as 0.
Step10: In our final review of our training data, we check that (1) the column Survived is the left-most column (2) there are no NaN values, and (3) all the values are in numerical form.
Step11: Finally, we convert the processed training data from a Pandas dataframe into a numerical (Numpy) array.
Step12: Scikit-learn - Training the model
Step13: We use the processed training data to 'train' (or 'fit') our model. The column Survived will be our second input, and the set of other features (with the column PassengerId omitted) as the first.
Step14: Scikit-learn - Making predictions
Step15: We then review a selection of the data.
Step16: We notice that test data has columns similar to our training data, but not the column Survived. We'll use our trained model to predict values for the column Survived.
Step17: We now apply the trained model to the test data (omitting the column PassengerId) to produce an output of predictions.
Step18: Pandas - Preparing for submission
Step19: We briefly review our predictions.
Step20: Finally, we output our results to a .csv file.
Step21: However, it appears that we have a problem. The Kaggle submission website expects "the solution file to have 418 predictions."
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('../data/train.csv')
df.head(10)
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df.info()
df = df.dropna()
df['Sex'].unique()
df['Gender'] = df['Sex'].map({'female': 0, 'male':1}).astype(int)
df['Embarked'].unique()
df['Port'] = df['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df = df.drop(['Sex', 'Embarked'], axis=1)
cols = df.columns.tolist()
print(cols)
cols = [cols[1]] + cols[0:1] + cols[2:]
df = df[cols]
df.head(10)
df.info()
train_data = df.values
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators = 100)
model = model.fit(train_data[0:,2:], train_data[0:,0])
df_test = pd.read_csv('../data/test.csv')
df_test.head(10)
df_test = df_test.drop(['Name', 'Ticket', 'Cabin'], axis=1)
df_test = df_test.dropna()
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male':1})
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3})
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
output = model.predict(test_data[:,1:])
result = np.c_[test_data[:,0].astype(int), output.astype(int)]
df_result = pd.DataFrame(result[:,0:2], columns=['PassengerId', 'Survived'])
df_result.head(10)
df_result.to_csv('../results/titanic_1-0.csv', index=False)
df_result.shape
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We need the number of passengers for each port of embarkment and class
Step2: and the total number of passengers for each port
Step3: Now we can calculate the percentage of each class per port
Step4: We can use a bar chart to visualize the results
Step5: The graph shows that more than 90% of the passengers that embarked from Queenstown in Ireland where third class passengers, in contrast with the other ports, where third class made up for 40-55% of the total number of passengers.
Step6: We do not have the age for every passenger, so we will have to take this into account when analysing the data. The rest of the data is complete.
Step7: Because the Survived column can only contain 1 for yes and 0 for no, we can count all the survivors of each sex by grouping by sex and adding the Survived column.
Step8: It looks like the survival rate is very biased towards the female sex
Step9: We can know calculate the number of passengers in every age bin and how many of them survived
Step10: The graph shows a survival rate of 60% for children below 10 years old. For the rest of the age groups, the survival rate oscillates between 30% and 45%.
Step11: No females in the age group 70-80. Maybe that is one of the causes nobody survived in this bin. Now we plot female and male survival rates by age group
Step12: The graph shows that for children below 10 both female and male survival rates are very similar. However, for the rest of age groups, the female survival rates are high, with the minimum around 70%, and male survival rates are low, with the maximum around 20%.
Step13: And now we continue
Step14: At first sight, it looks like having paid for a cabin affected the probability of survival. The problem is most cabins where occupied by fist class passengers
Step15: and it looks like the passenger class also influenced survivability
Step16: So, a fairer comparison could be between the 40 first class passengers that did not pay for a cabin and the rest of the same class
|
<ASSISTANT_TASK:>
Python Code:
class_port = titanic[['PassengerId', 'Pclass', 'Embarked']]
print class_port.isnull().any()
print
class_port = class_port.dropna()
print class_port.isnull().any()
passengers_by_port_class = class_port.groupby(['Embarked', 'Pclass'], as_index=False).count()
print passengers_by_port_class
passengers_by_port_class['Total'] = passengers_by_port_class.groupby('Embarked')['PassengerId'].transform(sum)
print passengers_by_port_class
passengers_by_port_class['Percent'] = passengers_by_port_class.PassengerId * 100 / passengers_by_port_class.Total
print passengers_by_port_class
%pylab inline
c_percents = passengers_by_port_class.loc[passengers_by_port_class['Embarked'] == 'C']['Percent']
q_percents = passengers_by_port_class.loc[passengers_by_port_class['Embarked'] == 'Q']['Percent']
s_percents = passengers_by_port_class.loc[passengers_by_port_class['Embarked'] == 'S']['Percent']
class_port_percents = pd.DataFrame.from_dict({'Cherbourg':c_percents.values, 'Queenstown':q_percents.values, 'Southampton':s_percents.values}, orient='index')
class_port_percents.columns = ['1st', '2nd', '3rd']
ax = class_port_percents.plot(kind='bar', rot=0, title='Percentage of passengers in each class by port of embarkment')
ax.set_xlabel('Port')
sex_age_surv = titanic[['PassengerId', 'Sex', 'Age', 'Survived']]
sex_age_surv_count = sex_age_surv.count()
print sex_age_surv_count
sex_surv = sex_age_surv[['PassengerId', 'Sex', 'Survived']]
sex_surv_count = sex_surv[['PassengerId', 'Sex']].groupby('Sex').count()
print sex_surv_count
sex_surv_count['Survived'] = sex_surv[['Survived', 'Sex']].groupby('Sex').sum()
print sex_surv_count
totals = pd.Series(sex_surv_count.sum(), name='total')
sex_surv_count = sex_surv_count.append(totals)
sex_surv_count['SurvPerc'] = sex_surv_count.Survived * 100 / sex_surv_count.PassengerId
print sex_surv_count
sex_surv_count['SurvPerc'].plot(kind='bar', rot=0, title='Percentage of survivors by sex')
age_surv = sex_age_surv[['PassengerId', 'Age', 'Survived']].dropna()
print age_surv.Age.max()
age_surv['bin'] = pd.cut(age_surv['Age'],np.arange(0,90,10), right=False)
print age_surv.head()
age_surv_count = age_surv[['PassengerId', 'bin']].groupby('bin').count()
age_surv_count['Survived'] = age_surv[['Survived', 'bin']].groupby('bin').sum()
age_surv_count['SurvPerc'] = age_surv_count.Survived * 100 / age_surv_count.PassengerId
print age_surv_count
ax = age_surv_count['SurvPerc'].plot(kind='bar', rot=0, title='Percentage of survivors by age')
ax.set_xlabel('Age')
sex_age_surv_copy = sex_age_surv.copy()
sex_age_surv_copy.dropna()
sex_age_surv_copy['bin'] = pd.cut(sex_age_surv['Age'],np.arange(0,90,10), right=False)
print sex_age_surv_copy.head()
sex_age_surv_count = sex_age_surv_copy[['PassengerId', 'Sex', 'bin']].groupby(['Sex', 'bin']).count()
sex_age_surv_count['Survived'] = sex_age_surv_copy[['Survived', 'Sex', 'bin']].groupby(['Sex', 'bin']).sum()
sex_age_surv_count['SurvPerc'] = sex_age_surv_count.Survived * 100 / sex_age_surv_count.PassengerId
print sex_age_surv_count
ax = sex_age_surv_count.loc['female']['SurvPerc'].plot()
ax = sex_age_surv_count.loc['male']['SurvPerc'].plot(ax=ax, title='Percentage of female and male survivors by age')
ax.legend(['Females', 'Males'], loc='best')
ax.set_xlabel('Age')
titanic['Has_Cabin'] = titanic.Cabin.notnull()
cabin_surv = titanic[['PassengerId', 'Has_Cabin', 'Survived']].copy()
cabin_surv_count = cabin_surv[['PassengerId', 'Has_Cabin']].groupby('Has_Cabin').count()
print cabin_surv_count
cabin_surv_count['Survived'] = cabin_surv[['Survived', 'Has_Cabin']].groupby('Has_Cabin').sum()
print cabin_surv_count
print titanic[['PassengerId', 'Has_Cabin', 'Pclass']].groupby(['Has_Cabin', 'Pclass']).count()
class_surv = titanic[['PassengerId', 'Pclass']].copy()
class_surv_count = class_surv.groupby('Pclass').count()
class_surv_count['Survived'] = titanic[['Survived', 'Pclass']].groupby('Pclass').sum()
print class_surv_count
cabin_surv = titanic[['PassengerId', 'Has_Cabin', 'Survived']][titanic['Pclass'] == 1].copy()
cabin_surv_count = cabin_surv[['PassengerId', 'Has_Cabin']].groupby('Has_Cabin').count()
cabin_surv_count['Survived'] = cabin_surv[['Survived', 'Has_Cabin']].groupby('Has_Cabin').sum()
cabin_surv_count['SurvPerc'] = cabin_surv_count['Survived'] * 100 / cabin_surv_count['PassengerId']
print cabin_surv_count
ax = cabin_surv_count.SurvPerc.plot(kind='bar', rot=0, title='Percentage of first class survivors by cabin rental')
ax.set_xlabel('Had a cabin')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DPLL with Branching Heuristics
Step2: Each of these branching heuristics was applied only after the pure literal and the unit clause heuristic failed in selecting a splitting variable.
Step3: Over the years, many types of MOMs heuristics have been proposed.
Step4: Freeman’s POSIT <a name="ref-1"/>[1] version counts both the number of positive $x$ and negative $\lnot{x}$ occurrences of a given variable $x$.
Step5: Zabih and McAllester’s <a name="ref-2"/>[2] version of the heuristic counts the negative occurrences $\lnot{x}$ of each given variable $x$.
Step6: DLIS & DLCS
Step7: Dynamic Largest Combined Sum considers the values $C_P$ and $C_N$ combined
Step8: JW & JW2
Step9: The two-sided Jeroslow and Wang’s heuristic identifies the variable $x$ with the largest sum $J(x) + J(\lnot{x})$, and assigns to $x$ value true, if $J(x) \geq J(\lnot{x})$, and value false otherwise.
Step10: CDCL with 1UIP Learning Scheme, 2WL Lazy Data Structure, VSIDS Branching Heuristic & Restarts
Step11: Clause Learning with 1UIP Scheme
Step12: 2WL Lazy Data Structure
Step13: VSIDS Branching Heuristic
Step14: Restarts
Step15: Glucose
Step16: Experimental Results
Step18: Australia
Step20: SAT
Step21: DPLL
Step22: CDCL
Step24: France
Step26: SAT
Step27: DPLL
Step28: CDCL
Step30: USA
Step32: SAT
Step33: DPLL
Step34: CDCL
Step35: Zebra Puzzle
Step36: SAT
Step37: DPLL
Step38: CDCL
|
<ASSISTANT_TASK:>
Python Code:
from logic import *
%psource dpll_satisfiable
%psource dpll
%psource min_clauses
%psource moms
%psource momsf
%psource posit
%psource zm
%psource dlis
%psource dlcs
%psource jw
%psource jw2
%psource cdcl_satisfiable
%psource conflict_analysis
%psource pl_binary_resolution
%psource backjump
%psource unit_propagation
%psource TwoWLClauseDatabase
%psource assign_decision_literal
%psource luby
%psource glucose
from csp import *
australia_csp = MapColoringCSP(list('RGB'), 'SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: ')
%time _, checks = AC3b(australia_csp, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP needs {checks} consistency-checks'
%time backtracking_search(australia_csp, select_unassigned_variable=mrv, inference=forward_checking)
australia_sat = MapColoringSAT(list('RGB'), 'SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: ')
%time model = dpll_satisfiable(australia_sat, branching_heuristic=no_branching_heuristic)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=moms)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=momsf)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=posit)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=zm)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=dlis)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=dlcs)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=jw)
%time model = dpll_satisfiable(australia_sat, branching_heuristic=jw2)
%time model = cdcl_satisfiable(australia_sat)
{var for var, val in model.items() if val}
france_csp = MapColoringCSP(list('RGBY'),
                            """AL: LO FC; AQ: MP LI PC; AU: LI CE BO RA LR MP; BO: CE IF CA FC RA
                            AU; BR: NB PL; CA: IF PI LO FC BO; CE: PL NB NH IF BO AU LI PC; FC: BO
                            CA LO AL RA; IF: NH PI CA BO CE; LI: PC CE AU MP AQ; LO: CA AL FC; LR:
                            MP AU RA PA; MP: AQ LI AU LR; NB: NH CE PL BR; NH: PI IF CE NB; NO:
                            PI; PA: LR RA; PC: PL CE LI AQ; PI: NH NO CA IF; PL: BR NB CE PC; RA:
                            AU BO FC PA LR""")
%time _, checks = AC3b(france_csp, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP needs {checks} consistency-checks'
%time backtracking_search(france_csp, select_unassigned_variable=mrv, inference=forward_checking)
france_sat = MapColoringSAT(list('RGBY'),
                            """AL: LO FC; AQ: MP LI PC; AU: LI CE BO RA LR MP; BO: CE IF CA FC RA
                            AU; BR: NB PL; CA: IF PI LO FC BO; CE: PL NB NH IF BO AU LI PC; FC: BO
                            CA LO AL RA; IF: NH PI CA BO CE; LI: PC CE AU MP AQ; LO: CA AL FC; LR:
                            MP AU RA PA; MP: AQ LI AU LR; NB: NH CE PL BR; NH: PI IF CE NB; NO:
                            PI; PA: LR RA; PC: PL CE LI AQ; PI: NH NO CA IF; PL: BR NB CE PC; RA:
                            AU BO FC PA LR""")
%time model = dpll_satisfiable(france_sat, branching_heuristic=no_branching_heuristic)
%time model = dpll_satisfiable(france_sat, branching_heuristic=moms)
%time model = dpll_satisfiable(france_sat, branching_heuristic=momsf)
%time model = dpll_satisfiable(france_sat, branching_heuristic=posit)
%time model = dpll_satisfiable(france_sat, branching_heuristic=zm)
%time model = dpll_satisfiable(france_sat, branching_heuristic=dlis)
%time model = dpll_satisfiable(france_sat, branching_heuristic=dlcs)
%time model = dpll_satisfiable(france_sat, branching_heuristic=jw)
%time model = dpll_satisfiable(france_sat, branching_heuristic=jw2)
%time model = cdcl_satisfiable(france_sat)
{var for var, val in model.items() if val}
usa_csp = MapColoringCSP(list('RGBY'),
                         """WA: OR ID; OR: ID NV CA; CA: NV AZ; NV: ID UT AZ; ID: MT WY UT;
                         UT: WY CO AZ; MT: ND SD WY; WY: SD NE CO; CO: NE KA OK NM; NM: OK TX AZ;
                         ND: MN SD; SD: MN IA NE; NE: IA MO KA; KA: MO OK; OK: MO AR TX;
                         TX: AR LA; MN: WI IA; IA: WI IL MO; MO: IL KY TN AR; AR: MS TN LA;
                         LA: MS; WI: MI IL; IL: IN KY; IN: OH KY; MS: TN AL; AL: TN GA FL;
                         MI: OH IN; OH: PA WV KY; KY: WV VA TN; TN: VA NC GA; GA: NC SC FL;
                         PA: NY NJ DE MD WV; WV: MD VA; VA: MD DC NC; NC: SC; NY: VT MA CT NJ;
                         NJ: DE; DE: MD; MD: DC; VT: NH MA; MA: NH RI CT; CT: RI; ME: NH;
                         HI: ; AK: """)
%time _, checks = AC3b(usa_csp, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP needs {checks} consistency-checks'
%time backtracking_search(usa_csp, select_unassigned_variable=mrv, inference=forward_checking)
usa_sat = MapColoringSAT(list('RGBY'),
                         """WA: OR ID; OR: ID NV CA; CA: NV AZ; NV: ID UT AZ; ID: MT WY UT;
                         UT: WY CO AZ; MT: ND SD WY; WY: SD NE CO; CO: NE KA OK NM; NM: OK TX AZ;
                         ND: MN SD; SD: MN IA NE; NE: IA MO KA; KA: MO OK; OK: MO AR TX;
                         TX: AR LA; MN: WI IA; IA: WI IL MO; MO: IL KY TN AR; AR: MS TN LA;
                         LA: MS; WI: MI IL; IL: IN KY; IN: OH KY; MS: TN AL; AL: TN GA FL;
                         MI: OH IN; OH: PA WV KY; KY: WV VA TN; TN: VA NC GA; GA: NC SC FL;
                         PA: NY NJ DE MD WV; WV: MD VA; VA: MD DC NC; NC: SC; NY: VT MA CT NJ;
                         NJ: DE; DE: MD; MD: DC; VT: NH MA; MA: NH RI CT; CT: RI; ME: NH;
                         HI: ; AK: """)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=no_branching_heuristic)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=moms)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=momsf)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=posit)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=zm)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=dlis)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=dlcs)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=jw)
%time model = dpll_satisfiable(usa_sat, branching_heuristic=jw2)
%time model = cdcl_satisfiable(usa_sat)
{var for var, val in model.items() if val}
zebra_csp = Zebra()
zebra_csp.display(zebra_csp.infer_assignment())
%time _, checks = AC3b(zebra_csp, arc_heuristic=dom_j_up)
f'AC3b with DOM J UP needs {checks} consistency-checks'
zebra_csp.display(zebra_csp.infer_assignment())
%time backtracking_search(zebra_csp, select_unassigned_variable=mrv, inference=forward_checking)
zebra_sat = associate('&', map(to_cnf, map(expr, filter(lambda line: line[0] not in ('c', 'p'), open('aima-data/zebra.cnf').read().splitlines()))))
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=no_branching_heuristic)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=moms)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=momsf)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=posit)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=zm)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=dlis)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=dlcs)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=jw)
%time model = dpll_satisfiable(zebra_sat, branching_heuristic=jw2)
%time model = cdcl_satisfiable(zebra_sat)
{var for var, val in model.items() if val and var.op.startswith(('Englishman', 'Japanese', 'Norwegian', 'Spaniard', 'Ukrainian'))}
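The `dpll` implementation sourced above adds unit propagation, pure-literal elimination, and pluggable branching heuristics. As a reference point, here is a bare-bones DPLL sketch (naive first-variable branching, no heuristics — an illustration, not the aima-python code). Clauses are lists of nonzero ints in DIMACS style, where `-v` means variable `v` negated:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL: returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify every clause under the current partial assignment.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified -> conflict
        simplified.append(rest)
    if not simplified:
        return assignment  # all clauses satisfied
    var = abs(simplified[0][0])  # naive branching: first unassigned variable
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # prints a satisfying assignment
```

The branching heuristics compared in this notebook (MOMs, DLIS, Jeroslow-Wang, ...) all plug into the `var = ...` choice above.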
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: format float as percentage
Step2: truncate to at most 2 decimal places
Step3: left padding with zeros
Step4: right padding with zeros
Step5: comma separators
|
<ASSISTANT_TASK:>
Python Code:
'{:.2f}'.format(8.499)
'{:.2f}%'.format(10.12345)
import re
def truncate(num,decimal_places):
dp = str(decimal_places)
return re.sub(r'^(\d+\.\d{,'+re.escape(dp)+r'})\d*$',r'\1',str(num))
truncate(8.499,decimal_places=2)
truncate(8.49,decimal_places=2)
truncate(8.4,decimal_places=2)
truncate(8,decimal_places=2)
# make the total string size AT LEAST 9 (including digits and points), fill with zeros to the left
'{:0>9}'.format(3.499)
# make the total string size AT LEAST 2 (all included), fill with zeros to the left
'{:0>2}'.format(3)
# make the total string size AT LEAST 11 (including digits and points), fill with zeros to the RIGHT
'{:<011}'.format(3.499)
"{:,}".format(100000)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plotting a line chart from a Pandas object
Step2: Creating bar charts
Step3: Creating a pie chart
Step4: Defining elements of a plot
Step5: Plot Formatting
Step6: Note
Step7: Create labels and annotations
Step8: Time series visualizations
Step9: Histograms, box plots, and scatter plots
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt

x = range(1, 10)
y=[1,2,3,4,0,4,3,2,1]
plt.plot(x,y)
# address = some data set
# cars = pd.read_csv(address)
# cars.columns = ['car_names','mpg','cyl','disp','hp','drat','wt','qsec','vs','am',gear',carb']
#mpg = cars['mpg']
#mpg.plot()
plt.bar(x,y)
#Creating bar chart from pandas object
#mpg.plot(kind='bar')
x=[1,2,3,4,0.5]
plt.pie(x)
plt.show()
#plt.savefig('pie_chart.jpeg') Saves plot as jpeg in wd
#plt.show()
x=range(1,10)
y=[1,2,3,4,0,4,3,2,1]
fig=plt.figure()
ax = fig.add_axes([.1, .1,1,1])
ax.plot(x,y)
fig = plt.figure()
ax = fig.add_axes([.1, .1,1,1])
ax.set_xlim([1,9])
ax.set_ylim([0,5])
ax.set_xticks([0,1,2,4,5,6,8,9,10])
ax.set_yticks([0,1,2,3,4,5])
ax.plot(x,y)
fig = plt.figure()
ax = fig.add_axes([.1, .1,1,1])
ax.set_xlim([1,9])
ax.set_ylim([0,5])
ax.grid()  # with no arguments, toggles the grid lines on/off
ax.plot(x,y)
fig = plt.figure()
fig, (ax1,ax2)=plt.subplots(1,2)
ax1.plot(x)
ax2.plot(x,y)
x = range(1,10)
y= [1,2,3,4,0.5,4,3,2,1]
plt.bar(x,y)
wide=[0.5,0.5,0.5,0.9,0.9,0.5,0.5,0.9,0.9]
color = ['salmon']
plt.bar(x, y, width=wide, color=color, align='center')
z = [1,2,3,4,0.5]
color_theme = ['#A9A9A9', '#FFA07A', '#B0E0E6','#FFE4CA','#BDB76B'] #hex codes
plt.pie(z, colors = color_theme)
plt.show()
#line styles
x1= range(0,10)
y1=[10,9,8,7,6,5,4,3,2,1]
plt.plot(x, y, drawstyle='steps', lw=5)
plt.plot(x1,y1, ls = '--', lw = 10)
#plot markers
plt.plot(x,y,marker = '1', mew=20)
plt.plot(x1,y1, marker = '+', mew=15)
#functional method
x= range(1,10)
y=[1,2,3,4,0.5,4,3,2,1]
plt.bar(x,y)
plt.xlabel('your x-axis label')
plt.ylabel('your y-axis label')
z= [1,2,3,4,0.5]
veh_type = ['bicycle', 'motorbike','car','van','stroller']
plt.pie(z, labels= veh_type)
plt.show()
#object oriented has ways of doing this too
#uses cars dataset
#didn't copy down everything
#fig = plt.figure()
#ax = fig.add_axes([])
#mpg.plot()
#ax.set_xticks(range(32))
#ax.set_xticklabels()
#ax.set_title('Title goes here')
#ax.set_xlabel('car names')
#ax.set_ylabel('miles/gal')
#ax.legend(loc='best')
#add legend
plt.pie(z)
plt.legend(veh_type, loc='best')
plt.show()
#annotate
#object oriented: ax.annotate('Toyota Corolla', xy=(19,33,9), xytext=(21,35),
#arrowprops = dict(facecolor='black',shrink=0.05))
#address = 'address'
#df = pd.read_csv(address, index_col='Order Date', parse_dates=True)
#df.head()
#df2 = df.sample(n=100, random_state=25, axis=0)
#plt.xlabel('Order Date')
#plt.ylabel('Order Quantity')
#plt.title('Superstore Sales')
#df2['Order Quantity'].plot()
from pandas.plotting import scatter_matrix
#pandas dataset import
#mpg.plot(kind='hist')
# or plt.hist(mpg) with plt.show()
#with seaborn
#sb.distplot(mpg)
#scatterplots
#cars.plot(kind='scatter',x='hp',y='mpg',c=['darkgray'], s=150)
#sb.regplot(x='hp', y='mpg', data=cars, scatter=True)
#seaborn automatically creates trend line
#sb.pairplot(cars)
#get subset of data using dataframes:
#cars_df = pd.DataFrame((cars.ix[:,(1,3,4,6)].values), columns = ['mpg', 'disp','hp','wt'])
#boxplots
#cars.boxplot(column='mpg', by='am')
#in seaborn
#sb.boxplot(x='am', y='mpg', data=cars, palette='hls')
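Putting the annotation pieces together, a small self-contained sketch using the same toy series as above (the `Agg` backend is used so it also renders off-screen, without a display):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen so this runs headless too
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = range(1, 10)
y = [1, 2, 3, 4, 0.5, 4, 3, 2, 1]
ax.plot(x, y, marker='o')
ax.set_title('Annotated peak')
# Point an arrow at one of the maxima of the series.
ax.annotate('peak', xy=(4, 4), xytext=(6, 4.5),
            arrowprops=dict(facecolor='black', shrink=0.05))
fig.savefig('annotated.png')
```

The same `annotate` call works on the object-oriented `ax` used throughout this notebook.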
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What was the most popular type of complaint, and how many times was it filed?
Step2: Make a horizontal bar graph of the top 5 most frequent complaint types.
Step3: Which borough has the most complaints per capita? Since there are only 5 boroughs, you can do the math manually.
Step4: According to your selection of data, how many cases were filed in March? How about May?
Step5: I'd like to see all of the 311 complaints called in on April 1st.
Step6: What was the most popular type of complaint on April 1st?
Step7: What month has the most reports filed? How many? Graph it.
|
<ASSISTANT_TASK:>
Python Code:
df=pd.read_csv("311-2014.csv",nrows=20000)
df.head()
df.columns
df.info()
dateutil.parser.parse('07/16/1990').month
def parse_date (str_date):
return dateutil.parser.parse(str_date)#dateutil is a module, import parser class, then transform a string into a python time object
df['Created Date']= df['Created Date'].apply(parse_date)
df.head(3)
df['Complaint Type'].value_counts()
# the most popular type of complaint is blocked driveway, and it was filed 2534 times
df['Complaint Type'].value_counts().head(5).sort_values().plot(kind='barh')
df['Borough'].value_counts()
people_bronx= 1438159
people_queens= 2321580
people_manhattan=1636268
people_brooklyn= 2621793
people_staten_island= 473279
complaints_per_capita_bronx= 29610/people_bronx
complaints_per_capita_bronx
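The same per-capita calculation can be done for all five boroughs at once by dividing a complaint-count Series by a population Series. The population figures match the ones assumed above; the complaint counts here are hypothetical stand-ins for `df['Borough'].value_counts()`:

```python
import pandas as pd

population = pd.Series({
    'BRONX': 1438159, 'QUEENS': 2321580, 'MANHATTAN': 1636268,
    'BROOKLYN': 2621793, 'STATEN ISLAND': 473279,
})
# Hypothetical counts; in the notebook this would be df['Borough'].value_counts().
complaints = pd.Series({
    'BRONX': 29610, 'QUEENS': 22000, 'MANHATTAN': 18000,
    'BROOKLYN': 25000, 'STATEN ISLAND': 4000,
})
per_capita = (complaints / population).sort_values(ascending=False)
print(per_capita)
```

Element-wise division aligns the two Series on their borough labels, so no manual per-borough arithmetic is needed.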
[x for x in list(df['Created Date'].values)]# if np.datetime64(x, 'M') == 3]
date_index = pd.DatetimeIndex(df['Created Date'].values)#for dataframe, each column is a series(object), call a values method
len([x for x in date_index.month if x == 3])
iterable = filter(lambda x: x == 3, list(date_index.month))
march_days = 0
for x in iterable:
march_days += 1
march_days
df.index=df['Created Date']
df.head()
df.loc['2015-04-01']
df.loc['2015-04-01']['Complaint Type'].value_counts().head(3)
df.resample('M').count().plot(y="Unique Key",legend=False)
#http://pandas.pydata.org/pandas-docs/stable/timeseries.html#up-and-downsampling
#resample is a time-based groupby, followed by a reduction method on each of its groups
ax= df.groupby(df.index.month).count().plot(y='Unique Key', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Number of Complaints")
ax.set_title("311 complains filed monthly in 2015")
#september has the most complaints cases filed
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
|
<ASSISTANT_TASK:>
Python Code:
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for left-visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
# Use 'MEG 2343' as seed
seed_ch = 'MEG 2343'
picks_ch_names = [raw.ch_names[i] for i in picks]
# Create seed-target indices for connectivity computation
seed = picks_ch_names.index(seed_ch)
targets = np.arange(len(picks))
indices = seed_target_indices(seed, targets)
# Define wavelet frequencies and number of cycles
cwt_freqs = np.arange(7, 30, 2)
cwt_n_cycles = cwt_freqs / 7.
# Run the connectivity analysis using 2 parallel jobs
sfreq = raw.info['sfreq'] # the sampling frequency
con, freqs, times, _, _ = spectral_connectivity(
epochs, indices=indices,
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,
cwt_freqs=cwt_freqs, cwt_n_cycles=cwt_n_cycles, n_jobs=1)
# Mark the seed channel with a value of 1.0, so we can see it in the plot
con[np.where(indices[1] == seed)] = 1.0
# Show topography of connectivity from seed
title = 'WPLI2 - Visual - Seed %s' % seed_ch
layout = mne.find_layout(epochs.info, 'meg') # use full layout
tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))
tfr.plot_topo(fig_facecolor='w', font_color='k', border='k')
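`seed_target_indices` simply pairs the seed channel with every target channel. A pure-NumPy sketch of the index arrays it produces for a single seed (an illustration, not MNE's implementation):

```python
import numpy as np

def seed_target_indices_sketch(seed, targets):
    # One seed index repeated against every target index.
    targets = np.asarray(targets)
    seeds = np.full(targets.shape, seed, dtype=int)
    return seeds, targets

seeds, targets = seed_target_indices_sketch(3, np.arange(5))
print(seeds, targets)  # [3 3 3 3 3] [0 1 2 3 4]
```

Connectivity is then computed between each `(seeds[k], targets[k])` channel pair.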
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: $y = Xw$
Step2: Results
Step3: 梯度下降法求解
Step4: 随机梯度下降法求解
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy.linalg import inv, qr

df['x4'] = 1
X = df.iloc[:,(0,1,2,4)].values
y = df.y.values
inv_XX_T = inv(X.T.dot(X))
w = inv_XX_T.dot(X.T).dot(df.y.values)
w
qr(inv_XX_T)
X.shape
# solve(X, y)  # np.linalg.solve only handles square (invertible) systems
def f(w,X,y):
return ((X.dot(w)-y)**2/(2*1000)).sum()
def grad_f(w,X,y):
return (X.dot(w) - y).dot(X)/1000
w0 = np.array([100.0,100.0,100.0,100.0])
epsilon = 1e-10
alpha = 0.1
check_condition = 1
while check_condition > epsilon:
w0 += -alpha*grad_f(w0,X,y)
check_condition = abs(grad_f(w0,X,y)).sum()
print(w0)
def cost_function(w,X,y):
return (X.dot(w)-y)**2/2
def grad_cost_f(w,X,y):
return (np.dot(X, w) - y)*X
w0 = np.array([1.0, 1.0, 1.0, 1.0])
epsilon = 1e-3
alpha = 0.01
# Generate a random index order, used to visit the data in random order.
random_index = np.arange(1000)
np.random.shuffle(random_index)
cost_value = np.inf  # initialize the objective value
while abs(grad_f(w0,X,y)).sum() > epsilon:
for i in range(1000):
w0 += -alpha*grad_cost_f(w0,X[random_index[i]],y[random_index[i]])
    # Monitor the objective: if the improvement falls below the threshold, shrink the step size and continue.
difference = cost_value - f(w0, X, y)
if difference < 1e-10:
alpha *= 0.9
cost_value = f(w0, X, y)
print(w0)
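All three solutions above (normal equations, batch gradient descent, SGD) should agree with NumPy's built-in least-squares solver. A quick cross-check on synthetic data (generated here, since the notebook's `df` is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=(1000, 3)), np.ones(1000)])  # 3 features + intercept
true_w = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.01, size=1000)

w_normal = np.linalg.inv(X.T @ X) @ X.T @ y       # normal equations, as in the notebook
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)   # NumPy's QR/SVD-based solver
print(w_normal, w_lstsq)
```

`lstsq` is the numerically safer choice in practice, since it avoids explicitly inverting `X.T @ X`.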
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We create a small 2D grid where P is a tracer that we want to interpolate. In each grid cell, P has a random value between 0.1 and 1.1. We then set P[1,1] to 0, which for Parcels specifies that this is a land cell
Step2: We create a Particle class that can sample this field
Step3: Now, we perform four different interpolation on P, which we can control by setting fieldset.P.interp_method. Note that this can always be done after the FieldSet creation. We store the results of each interpolation method in an entry in the dictionary pset.
Step4: And then we can show each of the four interpolation methods, by plotting the interpolated values on the Particle locations (circles) on top of the Field values (background colors)
Step5: The white box is here the 'land' point where the tracer is set to zero and the crosses are the locations of the grid points. As you see, the interpolated value is always equal to the field value if the particle is exactly on the grid point (circles on crosses).
Step6: So in summary, Parcels has four different interpolation schemes for tracers
Step7: Let's define a simple FieldSet with two timesteps, a 0.5 m/s zonal velocity and no meridional velocity.
Step8: Now create four particles and a Sampling class so we can sample the Field P
Step10: The key is to create a sampling Kernel in which the Field P is sampled at time+particle.dt, and to concatenate this kernel after the AdvectionRK4 Kernel
Step11: We can now run these kernels on the ParticleSet
Step12: And we can check whether the Particle.p values indeed are consistent with the calc_p() values
Step13: And the same for the netcdf file (note that we need to convert time from nanoseconds to seconds)
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from parcels import FieldSet, ParticleSet, JITParticle, Variable, AdvectionRK4
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import xarray as xr
dims = [5, 4]
dx, dy = 1./dims[0], 1./dims[1]
dimensions = {'lat': np.linspace(0., 1., dims[0], dtype=np.float32),
'lon': np.linspace(0., 1., dims[1], dtype=np.float32)}
data = {'U': np.zeros(dims, dtype=np.float32),
'V': np.zeros(dims, dtype=np.float32),
'P': np.random.rand(dims[0], dims[1])+0.1}
data['P'][1, 1] = 0.
fieldset = FieldSet.from_data(data, dimensions, mesh='flat')
class SampleParticle(JITParticle):
p = Variable('p', dtype=np.float32)
def SampleP(particle, fieldset, time):
particle.p = fieldset.P[time, particle.depth, particle.lat, particle.lon]
pset = {}
for p_interp in ['linear', 'linear_invdist_land_tracer', 'nearest', 'cgrid_tracer']:
fieldset.P.interp_method = p_interp # setting the interpolation method for fieldset.P
xv, yv = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
pset[p_interp] = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())
pset[p_interp].execute(SampleP, endtime=1, dt=1)
fig, ax = plt.subplots(1, 4, figsize=(18, 5))
for i, p in enumerate(pset.keys()):
data = fieldset.P.data[0, :, :]
data[1, 1] = np.nan
x = np.linspace(-dx/2, 1+dx/2, dims[0]+1)
y = np.linspace(-dy/2, 1+dy/2, dims[1]+1)
if p == 'cgrid_tracer':
for lat in fieldset.P.grid.lat:
ax[i].axhline(lat, color='k', linestyle='--')
for lon in fieldset.P.grid.lon:
ax[i].axvline(lon, color='k', linestyle='--')
ax[i].pcolormesh(y, x, data, vmin=0.1, vmax=1.1)
ax[i].scatter(pset[p].lon, pset[p].lat, c=pset[p].p, edgecolors='k', s=50, vmin=0.1, vmax=1.1)
xp, yp = np.meshgrid(fieldset.P.lon, fieldset.P.lat)
ax[i].plot(xp, yp, 'kx')
ax[i].set_title("Using interp_method='%s'" % p)
plt.show()
plt.scatter(pset['linear'].lon, pset['linear'].lat, c=pset['linear_invdist_land_tracer'].p-pset['linear'].p,
edgecolors='k', s=50, cmap=cm.bwr, vmin=-0.25, vmax=0.25)
plt.colorbar()
plt.title("Difference between 'interp_method=linear' and 'interp_method=linear_invdist_land_tracer'")
plt.show()
def calc_p(t, y, x):
return 10*t+x+0.2*y
dims = [2, 4, 5]
dimensions = {'lon': np.linspace(0., 1., dims[2], dtype=np.float32),
'lat': np.linspace(0., 1., dims[1], dtype=np.float32),
'time': np.arange(dims[0], dtype=np.float32)}
p = np.zeros(dims, dtype=np.float32)
for i, x in enumerate(dimensions['lon']):
for j, y in enumerate(dimensions['lat']):
for n, t in enumerate(dimensions['time']):
p[n, j, i] = calc_p(t, y, x)
data = {'U': 0.5*np.ones(dims, dtype=np.float32),
'V': np.zeros(dims, dtype=np.float32),
'P': p}
fieldset = FieldSet.from_data(data, dimensions, mesh='flat')
xv, yv = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))
class SampleParticle(JITParticle):
p = Variable('p', dtype=np.float32)
pset = ParticleSet(fieldset, pclass=SampleParticle, lon=xv.flatten(), lat=yv.flatten())
def SampleP(particle, fieldset, time):
# offset sampling by dt
particle.p = fieldset.P[time+particle.dt, particle.depth, particle.lat, particle.lon]
kernels = AdvectionRK4 + pset.Kernel(SampleP) # Note that the order of concatenation matters here!
pfile = pset.ParticleFile("interpolation_offset.nc", outputdt=1)
pset.execute(kernels, endtime=1, dt=1, output_file=pfile)
pfile.close()
for p in pset:
assert np.isclose(p.p, calc_p(p.time, p.lat, p.lon))
ds = xr.open_dataset("interpolation_offset.nc").isel(obs=1)
for i in range(len(ds['p'])):
assert np.isclose(ds['p'].values[i], calc_p(float(ds['time'].values[i])/1e9, ds['lat'].values[i], ds['lon'].values[i]))
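The default `'linear'` scheme is plain bilinear interpolation on the tracer grid. A standalone NumPy sketch of that formula on a regular grid (an illustration, not Parcels' internal implementation):

```python
import numpy as np

def bilinear(field, lats, lons, lat, lon):
    """Bilinear interpolation of field[j, i] at (lat, lon) on a regular grid."""
    j = np.clip(np.searchsorted(lats, lat) - 1, 0, len(lats) - 2)
    i = np.clip(np.searchsorted(lons, lon) - 1, 0, len(lons) - 2)
    ty = (lat - lats[j]) / (lats[j + 1] - lats[j])
    tx = (lon - lons[i]) / (lons[i + 1] - lons[i])
    return ((1 - ty) * (1 - tx) * field[j, i] + (1 - ty) * tx * field[j, i + 1]
            + ty * (1 - tx) * field[j + 1, i] + ty * tx * field[j + 1, i + 1])

lats = np.array([0.0, 1.0])
lons = np.array([0.0, 1.0])
field = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear(field, lats, lons, 0.5, 0.5))  # midpoint -> mean of the four corners = 1.5
```

The other schemes differ only in the weights: `nearest` snaps to the closest corner, and `linear_invdist_land_tracer` reweights by inverse distance while dropping zero-valued land corners.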
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter CM360 Conversion Upload From BigQuery Recipe Parameters
Step3: 4. Execute CM360 Conversion Upload From BigQuery
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'account':'',
'auth_cm':'user', # Credentials used for CM.
'floodlight_activity_id':'',
'auth_bigquery':'user', # Credentials for BigQuery.
'floodlight_conversion_type':'encryptedUserId', # Must match the values specifed in the last column.
'encryption_entity_id':'', # Typically the same as the account id.
'encryption_entity_type':'DCM_ACCOUNT',
'encryption_entity_source':'DATA_TRANSFER',
'dataset':'Source containing the conversion data.',
'table':'Source containing the conversion data.',
'legacy':False, # Matters if source is a view.
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'conversion_upload':{
'auth':{'field':{'name':'auth_cm','kind':'authentication','order':1,'default':'user','description':'Credentials used for CM.'}},
'account_id':{'field':{'name':'account','kind':'string','order':0,'default':''}},
'activity_id':{'field':{'name':'floodlight_activity_id','kind':'integer','order':1,'default':''}},
'conversion_type':{'field':{'name':'floodlight_conversion_type','kind':'choice','order':2,'choices':['encryptedUserId','encryptedUserIdCandidates','dclid','gclid','matchId','mobileDeviceId'],'default':'encryptedUserId','description':'Must match the values specifed in the last column.'}},
'encryptionInfo':{
'encryptionEntityId':{'field':{'name':'encryption_entity_id','kind':'integer','order':3,'default':'','description':'Typically the same as the account id.'}},
'encryptionEntityType':{'field':{'name':'encryption_entity_type','kind':'choice','order':4,'choices':['ADWORDS_CUSTOMER','DBM_ADVERTISER','DBM_PARTNER','DCM_ACCOUNT','DCM_ADVERTISER','DFP_NETWORK_CODE'],'default':'DCM_ACCOUNT'}},
'encryptionSource':{'field':{'name':'encryption_entity_source','kind':'choice','order':5,'choices':['AD_SERVING','DATA_TRANSFER'],'default':'DATA_TRANSFER'}}
},
'from':{
'bigquery':{
'auth':{'field':{'name':'auth_bigquery','kind':'authentication','order':1,'default':'user','description':'Credentials for BigQuery.'}},
'dataset':{'field':{'name':'dataset','kind':'string','order':6,'default':'Source containing the conversion data.'}},
'table':{'field':{'name':'table','kind':'string','order':7,'default':'Source containing the conversion data.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':8,'default':False,'description':'Matters if source is a view.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For snakemake the workflow definition needs to be specified in a Snakefile and can be executed by calling snakemake in a terminal in the same location as the Snakefile. Here the example rule above has been exported into a Snakefile using the %%writefile jupyter magic command.
Step2: This fails with snakemake complaining about
Step3: And ask snakemake again to generate the fileB.txt for us
Step4: Internally snakemake is first resolving the set of rules into a directed acyclic graph (dag) to determine in which order the rules need to be executed. We can generate a visualization of the workflow using the --dag flag in combination with dot and display (for local notebook instances) or save the graph as svg (e.g. for remote instances).
Step5: The resulting graph shows the dependencies between the two rules, which were automatically enumerated. The line style (continuous/dashed) indicated whether the rules were already executed or not.
Step6: Here we already used a different notation to specify in the shell command cp {input} {output} instead of explicitly repeating the input and output filenames. These placeholders will be substituted by snakemake during execution by the filenames defined as input / output. We can use the same notation to generalize the required input of the rule depending on the output, e.g., we permit the copy rule to work for arbitrary files having a certain naming scheme. Here a new folder new_folder is automatically generated for the copied files.
Step7: For running the workflow now, we need to specify which file we actually need as a final result and snakemake takes care of the individual steps to generate that file. We specify the desired output file as a snakemake argument
Step8: To generate a set of output files, we can either request these individually when running snakemake, e.g. using snakemake -np new_folder/file{0,1,2,3,4,5,6,7,8,9}.txt. In case the workflow output is not being changed frequently, it is also possible to add a final rule (conventionally named 'all'), which requests all desired output files of the workflow
Step9: Here I used the snakemake function expand, which extends a given statement (here new_folder_file{id}.txt) for all combinations of parameters provided (here id values from 0 to 10). This permits to easily applied a set of rules to a number of different files.
Step10: The corresponding snakemake rule now needs to provide the argument to the generate_data.py script
Step11: Additional features worth having a look at
Step12: Neo and Elephant are handling physical units consistently during the analysis by using the Python module quantities. This also requires parameters to be supplied in the correct dimension, such that the physical units can be matched during analysis. In general Neo objects capture all minimal information relevant for the interpretation of the data. In case of the spiketrain, this encompasses the start and stop times of recording/spike extraction as well the sampling rate and potential custom annotations in form of a dictionary.
Step13: An a first rule in our new workflow, let's generate multiple datasets with spiking activity and save them for future analysis steps. From the variety of file formats supported by Neo NIX has an hdf5 backend. Let's implement a virtual expiment, generating 100 Poisson spiketrains stored in the NIX framework
Step14: We export this piece of code into a standalone script so we can use it in the snakemake workflow
Step15: The first rule in the workflow knows how to utilize the script to generate 10 new datasets. We limit the ids of the generated datasets from 0 to 9 to be able to use higher ids for different datasets.
Step16: And running the workflow. The output files should appear here. Have a look at the shell output of snakemake. It should list the number and type of jobs which are going to be run and the wildcards and status of each job.
Step17: To visualize the generated data, we implement a utility script to load the data and a second script plot the data using Matplotlib.
Step18: Now we can visualize the previously generated data. The introduced correlation should be visible already by eye in this arrangement of the spiketrains. In case of random reordering however this looks different.
Step19: For the context of this analysis we teach snakemake how to go from arbitrary datasets in the nix format to a plot of the data. Now the exact location of the nix file is not relevant any more, since we introduced an additional folder wildcard.
Step20: Running SPADE
Step21: We add a rule to the snakemake workflow for running SPADE
Step22: Now we can run spade by simply requesting the analysis result from snakemake
Step23: Applying the analysis to experimental data
Step24: For fetching the data we download it directly using wget. A more elegant approach would involve installing git-annex or the gin client, to stay consistent with the version management handled by gin. We also add a generic rule that converts Blackrock files to nix files while keeping only a selection of the data. The final rule run_complete requests visualizations of all analysis runs and therefore triggers the whole workflow.
Step25: The whole workflow now encompasses artificial data generation and retrieval of experimental data, as well as a common analysis and visualization.
|
<ASSISTANT_TASK:>
Python Code:
%%writefile Snakefile
rule:
input: 'fileA.txt'
output: 'fileB.txt'
shell: 'cp fileA.txt fileB.txt'
%%sh
snakemake fileB.txt
%%writefile -a Snakefile
rule:
output: 'fileA.txt'
shell: 'touch fileA.txt'
%%sh
snakemake
%%sh
snakemake --dag | dot | display
snakemake --dag | dot -Tsvg > dag0.svg
%%writefile Snakefile
rule copy_A_to_B:
input: 'fileA.txt'
output: 'fileB.txt'
shell: 'cp {input} {output}'
rule create_A:
output: 'fileA.txt'
shell: 'touch fileA.txt'
%%sh
snakemake --dag | dot | display
snakemake --dag | dot -Tsvg > dag1.svg
%%writefile Snakefile
rule copy_to_new_folder:
input: 'original_data/file{id}.txt'
output: 'new_data/file{id}.txt'
shell: 'cp {input} {output}'
rule create_file:
output: 'original_data/file{id}.txt'
shell: 'touch {output}'
%%sh
snakemake new_data/fileZ.txt --dag | dot | display
snakemake new_data/fileZ.txt --dag | dot -Tsvg > dag2.svg
snakemake new_data/fileZ.txt
%%writefile Snakefile
rule all:
input: expand('new_data/file{id}.txt', id=range(10))
rule copy_to_new_folder:
input: 'original_data/file{id}.txt'
output: 'new_data/file{id}.txt'
shell: 'cp {input} {output}'
rule create_file:
output: 'original_data/file{id}.txt'
shell: 'touch {output}'
%%sh
snakemake --dag | dot | display
snakemake --dag | dot -Tsvg > dag3.svg
%%writefile generate_data.py
import sys
import numpy as np
def generate_random_data(output_filename):
# write a random number in an output file
with open(output_filename, "w") as f:
f.write(str(np.random.random()))  # write() expects a string, not a float
# extracting the output filename from the command line parameters provided
output_filename = sys.argv[1]
generate_random_data(output_filename)
%%writefile Snakefile
rule all:
input: expand('new_data/file{id}.txt', id=range(10))
rule copy_to_new_folder:
input: 'original_data/file{id}.txt'
output: 'new_data/file{id}.txt'
shell: 'cp {input} {output}'
rule generate_data:
output: 'original_data/file{id}.txt'
shell: 'python generate_data.py {output}'
%%sh
snakemake --dag | dot | display
snakemake --dag | dot -Tsvg > dag4.svg
# imports for data handling and visualization
from quantities import Hz, ms
from elephant.spike_train_generation import homogeneous_poisson_process, compound_poisson_process
import neo
import numpy as np
import matplotlib.pyplot as plt
spiketrain = homogeneous_poisson_process(20*Hz, 0*ms, 1000*ms)
print('The spiketrain', spiketrain)
print('Spiketrain attributes and physical units')
print(['{}: {}'.format(att, getattr(spiketrain, att))
for att in ['t_start', 't_stop', 'sampling_rate', 'annotations']])
print(['{}: {}'.format(att, getattr(spiketrain, att).units.dimensionality)
for att in ['t_start', 't_stop', 'sampling_rate']])
import neo
import quantities as pq
from elephant.spike_train_generation import homogeneous_poisson_process, compound_poisson_process
def generate_data(output_file, n=10):
with neo.io.NixIO(output_file, 'ow') as io:
# generate neo structure
block = neo.Block(experiment='poisson')
block.segments.append(neo.Segment(name='trial 1'))
# generate correlated spike trains
sts = compound_poisson_process(rate=5*pq.Hz, A=[0]+[0.90]+[0]*9+[0.1], t_stop=10*pq.s)
# add background poisson spike trains
for i in range(89):
sts.append(homogeneous_poisson_process(rate=5*pq.Hz, t_stop=10*pq.s))
block.segments[0].spiketrains.extend(sts)
block.create_relationship()
io.write_block(block)
generate_data('original_data/example_dataset.nix')
%%writefile generate_poisson.py
import sys
import neo
import quantities as pq
from quantities import Hz, ms
from elephant.spike_train_generation import homogeneous_poisson_process, compound_poisson_process
def generate_data(output_file, n=10):
with neo.io.NixIO(output_file, 'ow') as io:
# generate neo structure
block = neo.Block(experiment='poisson')
block.segments.append(neo.Segment(name='trial 1'))
# generate correlated spike trains
sts = compound_poisson_process(rate=5*pq.Hz, A=[0]+[0.90]+[0]*9+[0.1], t_stop=10*pq.s)
# add background poisson spike trains
for i in range(89):
sts.append(homogeneous_poisson_process(rate=5*pq.Hz, t_stop=10*pq.s))
block.segments[0].spiketrains.extend(sts)
block.create_relationship()
io.write_block(block)
if __name__=='__main__':
generate_data(*sys.argv[1:])
%%sh
python generate_poisson.py original_data/example_dataset.nix
%%writefile Snakefile
rule all:
input: expand('original_data/dataset{id}.nix', id=range(10))
rule generate_data:
output: 'original_data/dataset{id,[0-9]}.nix'
shell: 'python generate_poisson.py {output}'
%%sh
snakemake -pr
%%sh
snakemake --dag | dot | display
snakemake --dag | dot -Tsvg > dag5.svg
%%writefile plot_data.py
import sys
import numpy as np
import neo
import matplotlib.pyplot as plt
def plot_data(data_filename, plot_filename):
# opening the data file
with neo.io.NixIO(data_filename, 'ro') as io:
block = io.read_block()
# plot spiketrains
for i, spiketrain in enumerate(block.segments[0].spiketrains):
plt.plot(spiketrain, [i]*len(spiketrain), 'C0.')
# this is for later steps in the analysis: plot patterns if already detected
if 'pattern' in spiketrain.annotations:
pattern_spikes = spiketrain[np.where(spiketrain.annotations['pattern'])]
plt.plot(pattern_spikes, [i]*len(pattern_spikes), 'r.')
plt.xlabel('Time [{}]'.format(spiketrain[0].units.dimensionality.latex))
plt.ylabel('Spiketrains')
plt.savefig(plot_filename)
if __name__=='__main__':
plot_data(*sys.argv[1:])
from plot_data import plot_data
plot_data('original_data/dataset2.nix', 'original_data/dataset2.png')
%%writefile -a Snakefile
rule plot_data:
input: '{folder}/dataset{id}.nix'
output: '{folder}/dataset{id}.png'
shell: 'python plot_data.py {input} {output}'
%%sh
snakemake original_data/dataset0.png
%%writefile run_spade.py
import sys
import numpy as np
import neo
import quantities as pq
import elephant.spade
def clean_nix_annotations(block):
'''helper function needed for neo 0.7.1'''
# removing unnecessary annotations so the block can be saved again via nix
objs = [block] + block.filter(data=True, container=True)
for obj in objs:
if 'nix_name' in obj.annotations:
obj.annotations.pop('nix_name')
if 'neo_name' in obj.annotations:
obj.annotations.pop('neo_name')
def run_spade(data_filename, output_filename):
# loading dataset
with neo.io.get_io(data_filename, 'rw') as io:
block = io.read_block()
spiketrains = block.segments[0].spiketrains
# run spade analysis
patterns = elephant.spade.spade(
data=spiketrains, binsize=1*pq.ms, winlen=1, dither=5*pq.ms,
min_spikes=3, n_surr=10, psr_param=[0,0,3],
output_format='patterns')['patterns']
# convert first pattern into annotations of spiketrains
# default: no spike belongs to any pattern
for spiketrain in spiketrains:
spiketrain.annotate(pattern=np.array([False]*len(spiketrain)))
# convert first detected pattern to annotations of spiketrains
for pattern_id, pattern in enumerate(patterns):
neurons = pattern['neurons']
times = pattern['times']
lags = pattern['lags']
for n, neuron in enumerate(neurons):
if n == 0:
pattern_times = times
else:
pattern_times = times + lags[n-1]
for pattern_time in pattern_times:
spike_idx = np.abs(spiketrains[neuron]-pattern_time).argmin()
spiketrains[neuron].annotations['pattern'][spike_idx] = True
# overwrite original neo file
clean_nix_annotations(block)
with neo.io.NixIO(output_filename, 'ow') as io2:
io2.write_block(block)
if __name__=='__main__':
run_spade(*sys.argv[1:])
%%writefile -a Snakefile
rule run_spade:
input: 'original_data/dataset{id}.nix'
output: 'results/dataset{id}.nix'
shell: 'python run_spade.py {input} {output}'
%%sh
snakemake results/dataset0.nix
%%writefile convert_to_nix.py
import sys
import neo
import quantities as pq
def convert_data_to_nix(input_filename, nix_filename):
'''extracting first seconds of first spiketrains into separate nix file'''
print(input_filename)
io = neo.io.BlackrockIO(input_filename)
original_block = io.read_block()
# Extracting spiketrain and append to new block
block = neo.Block(experiment='monkey')
block.segments.append(neo.Segment(name='first seconds of recording'))
new_spiketrains = [st.time_slice(0*pq.s,10*pq.s) for st in original_block.segments[-1].spiketrains[:100]]
block.segments[0].spiketrains = new_spiketrains
# save new block
with neo.io.NixIO(nix_filename, 'ow') as io2:
io2.write_block(block)
if __name__=='__main__':
convert_data_to_nix(*sys.argv[1:])
%%writefile -a Snakefile
rule download_data:
output:
nev = 'original_data/dataset10.nev',
ns2 = 'original_data/dataset10.ns2'
shell: '''
wget -O {output.nev} https://web.gin.g-node.org/INT/multielectrode_grasp/raw/24cd5caee3ae79066ca37844cab931d04dcad977/datasets/i140703-001-03.nev
wget -O {output.ns2} https://web.gin.g-node.org/INT/multielectrode_grasp/raw/24cd5caee3ae79066ca37844cab931d04dcad977/datasets/i140703-001.ns2
'''
rule convert_to_nix:
input: '{filename}.nev',
'{filename}.ns2'
output: '{filename}.nix'
shell: 'python convert_to_nix.py {wildcards.filename} {output}'
rule run_complete:
input: expand('results/dataset{id}.png', id=range(11))
%%sh
snakemake run_complete
%%sh
snakemake run_complete --dag | dot | display
snakemake run_complete --dag | dot -Tsvg > run_complete.svg
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: More info on notebooks and cells is here.
Step2: The magic %pylab sets up the interactive namespace from numpy and matplotlib and inline adds the plots to the notebook. These plots are rendered in PNG format by default.
Step3: Advanced magic commands
Step4: The improvement is especially significant when you have to use vectors or matrices
Step5: A very useful feature of numpy is masks and masked arrays. You can easily select all the values of a vector or an array that fulfill a certain condition using a mask.
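As a quick illustration of boolean masks (a sketch with toy data, separate from the exercise below):

```python
import numpy as np

# Boolean masks select elements satisfying a condition, no explicit loop needed.
a = np.arange(10)
mask = (a % 2 == 0) & (a > 2)   # combine elementwise conditions with & / |
print(a[mask])                  # → [4 6 8]
```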
Step6: Let's compare to a traditional brute-force approach
Step7: 4) Plotting
Step8: Exercise
Step9: 4.2) Seaborn (https
Step10: Warning messages
Step11: Exercise
Step12: 5) Use interactive documentation
Step13: Now we are going to use astropy.table to read the BinaryTableHDU. We could also read it using hdulist[1].data but let's make use of this nice package
Step14: We can also select any column by simply using table['NAME_OF_THE_COLUMN']
Step15: Exercise
Step16: Exercise
Step17: 7) Keep track of the units. Use astropy.units
Step18: Let's see an example where some units are assumed
Step19: 8) A complete example
Step20: Exercise
Step21: Optional exercise
Step22: 8.b) Extra
Step23: 9) Create a lognormal simulation and compute its correlation function
Step24: We will try to calculate the correlation function in the direction of the line-of-sight
Step25: 10) Create sky plots
Step26: Exercise
Step27: 11) Using astropy.cosmology (http
|
<ASSISTANT_TASK:>
Python Code:
import datetime
print(datetime.datetime.now())
%pylab inline
!ls
#Example of how to compute the sum of two lists
def add(x,y):
add=0
for element_x in x:
add=add+element_x
for element_y in y:
add=add+element_y
return add
my_list = range(0,100)
print(my_list)
%timeit -n10 sum_1=add(my_list,my_list) #I compute 10 iterations
#Example using numpy arrays
my_array = np.arange(0,100,1)
print(my_array)
%timeit -n10 np.sum(my_array+my_array) #I compute 10 iterations
#%load ex1.py
#my_list = [[1,2],[3,4]]
#my_array=np.arange(1,5,1)
#my_array=my_array.reshape(2,2)
#In this example we will split a random array into three different categories taking advantage of the numpy masks
#We generate an array with 1000 random elements in the interval [0,1)
my_array = np.random.random(1000)
%time mask = [np.logical_and(my_array>i/3.,my_array<(i+1)/3.) for i in range(0,3)]
print(len(my_array[mask[0]]), len(my_array[mask[1]]), len(my_array[mask[2]]))
#This is a very simple implementation.
#Maybe sorting the list first or using a matrix instead of lists it would be faster
%%time
arr1=[]
arr2=[]
arr3=[]
for element in my_array:
if(element>0 and element<1./3.):
arr1.append(element)
elif(element>1./3. and element<2./3.):
arr2.append(element)
else:
arr3.append(element)
#First we are going to set up the plots to be SVGs instead of the default PNGs
### Uncomment this cell to use SVG
#%config InlineBackend.figure_formats = {'svg',}
#We will sample the function in 100 points from 0 to pi
x = np.linspace(0,np.pi,100)
#We compute the sine of the numpy array x
y = np.sin(x)
#We make the plot (it automatically generates the figure)
plt.plot(x,y,'-',color='green',label='$\sin(x)$')
#We add the label to the X and Y axes
plt.xlabel('$x$')
plt.ylabel('$\sin(x)$')
#We generate the legend
plt.legend()
#We change the limits of the X and Y axes
plt.xlim(-0.05,np.pi+0.05)
plt.ylim(-0.05,1.05)
#plt.hist2d
# %load ex2.py
def ex2():
rs = np.random.RandomState(112)
x=np.linspace(0,10,11)
y=np.linspace(0,10,11)
X,Y = np.meshgrid(x,y)
X=X.flatten()
Y=Y.flatten()
weights=np.random.random(len(X))
plt.hist2d(X,Y,weights=weights); #The semicolon here avoids that Jupyter shows the resulting arrays
ex2()
#First let's import seaborn (a warning will appear because it conflicts with %pylab inline)
import seaborn as sns
#Compare with matplotlib style (you can still use the same commands but they will render in seaborn style)
#We make the plot (it automatically generates the figure)
plt.plot(x,y,'-',color='green',label='$\sin(x)$')
#We add the label to the X and Y axes
plt.xlabel('$x$')
plt.ylabel('$\sin(x)$')
#We generate the legend
plt.legend()
#We change the limits of the X and Y axes
plt.xlim(-0.05,np.pi+0.05)
plt.ylim(-0.05,1.05)
#sns.jointplot()
# %load ex3.py
def ex3():
rs = np.random.RandomState(112)
x=np.linspace(0,10,11)
y=np.linspace(0,10,11)
X,Y = np.meshgrid(x,y)
X=X.flatten()
Y=Y.flatten()
weights=np.random.random(len(X))
sns.jointplot(X,Y,kind='hex',joint_kws={'C':weights}); #The semicolon here avoids that Jupyter shows the resulting arrays
ex3()
#We import the package needed to read the file
import astropy.io.fits as fits
path = './downloaded_data/LSST_i_trimmed.fits.gz'
#We open the file and it gives us an hdulist
hdulist = fits.open(path)
#We can check what this hdulist has using print
print(hdulist)
#We are going to see what is in the image, we use imshow and select a gray colormap
#we also select a minimum of 0 in the colorbar (vmin) and a maximum of 250 (vmax)
plt.imshow(hdulist[0].data,vmin=0,vmax=250,cmap='gray')
#Show the colorbar
plt.colorbar()
#Importing astropy.table
import astropy.table
#reading the table. In a multi-hdu file we can specify the hdu with read(path,hdu=num_hdu)
table = astropy.table.Table.read(path)
#we show the contents of the table
table
#We print the purity column of the table
print(table['purity'])
plt.hist
# %load ex4.py
def ex4():
masks = [np.logical_and(table['purity']>i/4.,table['purity']<(i+1)/4.) for i in range(0,4)]
for i in range(0,4):
label = str(i/4.)+' < purity < '+str((i+1)/4.)
plt.hist(table['snr_iso'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True)
plt.legend()
plt.figure()
for i in range(0,4):
label = str(i/4.)+' < purity < '+str((i+1)/4.)
plt.hist(table['snr_grpf'][masks[i]],range=(0,20),bins=40, label=label, alpha=0.5, normed=True)
plt.legend()
ex4()
#We are going to use some columns of the table above to produce a useful pairplot
#We make use of numpy masks!
selection = np.empty(len(table['snr_grpf']),dtype='a20')
mask_03 = table['purity']<=0.3
mask_06 = np.logical_and(table['purity']>0.3,table['purity']<=0.6)
mask_09 = np.logical_and(table['purity']>0.6,table['purity']<=0.9)
mask_1 = table['purity']>0.9
selection[mask_03]="purity<=0.3"
selection[mask_06]="0.3<purity<=0.6"
selection[mask_09]="0.6<purity<=0.9"
selection[mask_1]="purity>0.9"
#We require the values dg1 and dg2 to be finite in order that seaborn creates automatically the histograms
masked_array = np.logical_not(np.logical_or(np.isinf(table['dg1_grp']),np.isinf(table['dg2_grp'])))
#We are going to plot just 1000 points
nobj=500
#We will use certain columns of the table
cols = [selection[masked_array][0:nobj],table['dg1_grp'][masked_array][0:nobj], \
table['dg2_grp'][masked_array][0:nobj],table['e1'][masked_array][0:nobj], \
table['e2'][masked_array][0:nobj]]
new_table = astropy.table.Table(cols,names=('selection','dg1_grp','dg2_grp','e1','e2'))
#Seaborn pairplot requires a pandas data frame
df = new_table.to_pandas()
sns.pairplot(df, hue='selection')
#We are going to check the correlations using heatmap
corr = df.corr()
sns.heatmap(corr)
import astropy.units as u
x = 10*u.km
x.to(u.imperial.mile) + 10*u.Mpc
#We read a quasar-catalog data table
quasar_table = astropy.table.Table.read('./downloaded_data/quasar_table.fits')
#We import speclite to compute magnitudes
import speclite
import speclite.filters
sdss = speclite.filters.load_filters('sdss2010-*')
#Spectrum of quasar #40
wave = np.load('./downloaded_data/wave.npy') #No units included but units are Angstroms
flux = np.load('./downloaded_data/flux.npy') #It comes without units but they're 1e-17 erg/cm**2/s/AA
#We use get magnitudes to compute the magnitudes. If the units are not included, it assumes (erg/cm**2/s/AA, AA)<-(flux, wave)
mags = sdss.get_ab_magnitudes(flux*1e-17*u.erg/u.cm**2/u.s/u.AA,wave*u.AA)
#If we don't use the correct units...
mags_wrong = sdss.get_ab_magnitudes(flux,wave)
mags_boss = np.hstack(quasar_table['PSFMAG_%d' %f][40] for f in range(0,5))
print(mags)
print(mags_boss)
print(mags_wrong)
#Now we are going to prepare a Boosted decision tree photo-z estimator
from sklearn.ensemble import GradientBoostingRegressor
#Prepare the training array
mags = np.vstack([quasar_table['PSFMAG_%d' % f] for f in range(0,5)]).T
z = quasar_table['Z_VI']
print(len(z))
#train on 20% of the points
mag_train = mags[::5]
z_train = z[::5]
print(len(z_train))
#test on 5% of the points
mag_test = mags[::18]
z_test = z[::18]
#Set up the tree
clf = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1,max_depth=3, random_state=0)
#Train the tree
clf.fit(mag_train, z_train)
#Test it!
z_fit_train = clf.predict(mag_train)
z_fit = clf.predict(mag_test)
#Compute rms in the training set and test set
rms_train = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test = np.mean(np.sqrt((z_fit - z_test) ** 2))
plt.scatter(z_test,z_fit, color='k', s=0.1)
plt.plot([-0.1, 6], [-0.1, 6], ':k')
plt.text(0.04, 5, "rms = %.3f" % (rms_test))
plt.xlabel('$z_{true}$')
plt.ylabel('$z_{fit}$')
# %load ex6.py
def ex6():
colors = np.vstack([quasar_table['PSFMAG_%d' % f]-quasar_table['PSFMAG_%d' % (f+1)] for f in range(0,4)]).T
color_train = colors[::5]
color_test = colors[::18]
clf.fit(color_train, z_train)
#Test it!
z_fit_train = clf.predict(color_train)
z_fit = clf.predict(color_test)
#Compute rms in the training set and test set
rms_train = np.mean(np.sqrt((z_fit_train - z_train) ** 2))
rms_test = np.mean(np.sqrt((z_fit - z_test) ** 2))
plt.scatter(z_test,z_fit, color='k', s=0.1)
plt.plot([-0.1, 6], [-0.1, 6], ':k')
plt.text(0.04, 5, "rms = %.3f" % (rms_test))
plt.xlabel('$z_{true}$')
plt.ylabel('$z_{fit}$')
ex6()
# %load opt_ex1.py
# %load opt_nn.py
import randomfield
%time generator = randomfield.Generator(8, 128, 1024, grid_spacing_Mpc_h=1.0, verbose=True)
delta = generator.generate_delta_field(smoothing_length_Mpc_h=2.0, seed=123, show_plot=True)
%%time
#Let's compute a simple version of the correlation function in the direction of the direction of the line-of-sight
corr = np.zeros(delta.shape[2])
for i in range(1,delta.shape[2]-1):
corr[i]=np.sum(delta[:,:,i:]*delta[:,:,:-i])/(delta.shape[0]*delta.shape[1]*(delta.shape[2]-1))
r = np.linspace(0,delta.shape[2],delta.shape[2]+1)
plt.plot(r[1:-1],r[1:-1]**2*corr[1:])
plt.xlim(0,200)
plt.xlabel(r'$r_{\parallel}$ [Mpc h$^{-1}$]')
plt.ylabel(r'$r_{\parallel}^{2}*\xi_{\parallel}(r_{\parallel})$ [Mpc$^{2}$ h$^{-2}$]')
plt.ylim(-4500,300);
def plot_sky(ra, dec, data=None, nside=4, label='', projection='eck4', cmap=plt.get_cmap('jet'), norm=None,
hide_galactic_plane=False, healpy=False):
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import PolyCollection
from astropy.coordinates import SkyCoord
ra=ra.to(u.deg).value
dec=dec.to(u.deg).value
if(healpy):
import healpy as hp
# get pixel area in degrees
pixel_area = hp.pixelfunc.nside2pixarea(nside, degrees=True)
# find healpixels associated with input vectors
pixels = hp.ang2pix(nside, 0.5*np.pi-np.radians(dec), np.radians(ra))
# find unique pixels
unique_pixels = np.unique(pixels)
# count number of points in each pixel
bincounts = np.bincount(pixels)
# if no data provided, show counts per sq degree
# otherwise, show mean per pixel
if data is None:
values = bincounts[unique_pixels]/pixel_area
else:
weighted_counts = np.bincount(pixels, weights=data)
values = weighted_counts[unique_pixels]/bincounts[unique_pixels]
# find pixel boundaries
corners = hp.boundaries(nside, unique_pixels, step=1)
corner_theta, corner_phi = hp.vec2ang(corners.transpose(0,2,1))
corner_ra, corner_dec = np.degrees(corner_phi), np.degrees(np.pi/2-corner_theta)
# set up basemap
m = Basemap(projection=projection, lon_0=-90, resolution='c', celestial=True)
m.drawmeridians(np.arange(0, 360, 30), labels=[0,0,1,0], labelstyle='+/-')
m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
m.drawmapboundary()
# convert sky coords to map coords
x,y = m(corner_ra, corner_dec)
# regroup into pixel corners
verts = np.array([x.reshape(-1,4), y.reshape(-1,4)]).transpose(1,2,0)
# Make the collection and add it to the plot.
coll = PolyCollection(verts, array=values, cmap=cmap, norm=norm, edgecolors='none')
plt.gca().add_collection(coll)
plt.gca().autoscale_view()
if not hide_galactic_plane:
# generate vector in galactic coordinates and convert to equatorial coordinates
galactic_l = np.linspace(0, 2*np.pi, 1000)
galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
# project to map coordinates
galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
# Add a colorbar for the PolyCollection
plt.colorbar(coll, orientation='horizontal', pad=0.01, aspect=40, label=label)
else:
nx, ny = nside, nside
ra_bins = numpy.linspace(-180, 180, nx+1)
cth_bins = numpy.linspace(-1., 1., ny+1)
ra[ra>180]=ra[ra>180]-360
density, _, _ = numpy.histogram2d(ra, np.sin(dec*np.pi/180.), [ra_bins, cth_bins])
ra_bins_2d, cth_bins_2d = numpy.meshgrid(ra_bins, cth_bins)
m = Basemap(projection=projection, lon_0=0, resolution='l', celestial=True)
m.drawmeridians(np.arange(0, 360, 60), labels=[0,0,1,0], labelstyle='+/-')
m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
m.drawmapboundary()
xs, ys = m(ra_bins_2d, np.arcsin(cth_bins_2d)*180/np.pi)
pcm = plt.pcolormesh(xs, ys, density)
plt.colorbar(pcm,orientation='horizontal', pad=0.04, label=label)
if not hide_galactic_plane:
# generate vector in galactic coordinates and convert to equatorial coordinates
galactic_l = np.linspace(0, 2*np.pi, 1000)
galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
# project to map coordinates
galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
ra = 360*np.random.random(10000)*u.deg
dec = np.arcsin(-1+2*np.random.random(10000))*180/np.pi*u.deg
plot_sky(ra,dec,healpy=False, nside=16, projection='eck4', label='Galaxies per pixel')
# %load ex7.py
def ex7():
plot_sky(quasar_table['RA']*u.deg,quasar_table['DEC']*u.deg,nside=128, healpy=False)
ex7()
from astropy.cosmology import Planck15
print(Planck15.__doc__)
z=np.logspace(-4,4,30)
om=Planck15.Om(z)
ob=Planck15.Ob(z)
plt.plot(z,om,label=r'$\Omega_{m}(z)$')
plt.plot(z,ob,label=r'$\Omega_{b}(z)$')
plt.legend(loc=2)
plt.xscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$\Omega(z)$')
h=Planck15.H(z)
plt.plot(z,h,label=r'$H(z)$')
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H(z)$ %s' % h.unit)
from astropy.cosmology import z_at_value
z_at_value(Planck15.comoving_distance, 1200 *u.Mpc)
from astropy.cosmology import w0waCDM
cosmo = w0waCDM(H0=75*u.km/u.s/u.Mpc,Om0=0.3,Ode0=0.7,w0=-1.2,wa=-3,Neff=4,Ob0=0.044,m_nu=1e-5*u.eV)
h_cosmo = cosmo.H(z)
plt.plot(z,h_cosmo, label='Random cosmology')
plt.plot(z,h, label='Planck15')
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H(z)$ %s' % h.unit)
plt.plot(z,h_cosmo/h-1)
plt.legend(loc=2)
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$z$')
plt.ylabel(r'$H_{cosmo}(z)/H_{Planck15}(z)$')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian Solution
Step2: Short Cut #1
Step3: Similarly, one can derive expressions for the uncertainties of the least-squares fit parameters, cf. Ivezic Ch. 8.2. These expressions can be thought of as propagating the data errors into parameter errors (using standard error propagation, i.e. the chain rule).
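As a sketch of this error propagation (with made-up toy data, not the notebook's dataset): for a design matrix A already weighted by 1/sigma, as used below, the parameter covariance is simply (AᵀA)⁻¹, and its diagonal gives the variances of b and m.

```python
import numpy as np

# Toy data: y ~ 2x with uniform measurement error sigma = 0.2
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])
sigmay = np.full(4, 0.2)

# Weighted design matrix, same construction as the fit in this notebook
A = np.vstack([1.0 / sigmay, x / sigmay]).T
theta, *_ = np.linalg.lstsq(A, y / sigmay, rcond=None)

# Parameter covariance and 1-sigma uncertainties on (b, m)
cov = np.linalg.inv(A.T @ A)
b_err, m_err = np.sqrt(np.diag(cov))
print(theta, b_err, m_err)
```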
|
<ASSISTANT_TASK:>
Python Code:
from straightline_utils import *
%matplotlib inline
from matplotlib import rcParams
rcParams['savefig.dpi'] = 100
(x,y,sigmay) = get_data_no_outliers()
plot_yerr(x, y, sigmay)
def straight_line_log_likelihood(x, y, sigmay, m, b):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values are
*y_t = m * x + b*
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
Returns: scalar log likelihood
'''
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b))**2 / sigmay**2))
def straight_line_log_prior(m, b):
return 0.
def straight_line_log_posterior(x,y,sigmay, m,b):
return (straight_line_log_likelihood(x,y,sigmay, m,b) +
straight_line_log_prior(m, b))
# Evaluate log P(m,b | x,y,sigmay) on a grid.
# Set up grid
mgrid = np.linspace(mlo, mhi, 100)
bgrid = np.linspace(blo, bhi, 101)
log_posterior = np.zeros((len(mgrid),len(bgrid)))
# Evaluate log probability on grid
for im,m in enumerate(mgrid):
for ib,b in enumerate(bgrid):
log_posterior[im,ib] = straight_line_log_posterior(x, y, sigmay, m, b)
# Convert to probability density and plot
posterior = np.exp(log_posterior - log_posterior.max())
plt.imshow(posterior, extent=[blo,bhi, mlo,mhi],cmap='Blues',
interpolation='nearest', origin='lower', aspect=(bhi-blo)/(mhi-mlo),
vmin=0, vmax=1)
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
i = np.argmax(posterior)
im,ib = np.unravel_index(i, posterior.shape)
print('Grid maximum posterior values: b =', bgrid[ib], 'm =', mgrid[im])
plt.title('Straight line: posterior PDF for parameters');
#plt.plot(b_ls, m_ls, 'w+', ms=12, mew=4);
plot_mb_setup();
# Linear algebra: weighted least squares
N = len(x)
A = np.zeros((N,2))
A[:,0] = 1. / sigmay
A[:,1] = x / sigmay
b = y / sigmay
theta,nil,nil,nil = np.linalg.lstsq(A, b)
plot_yerr(x, y, sigmay)
b_ls,m_ls = theta
print('Least Squares (maximum likelihood) estimator:', b_ls, m_ls)
plot_line(m_ls, b_ls);
def straight_line_posterior(x, y, sigmay, m, b):
return np.exp(straight_line_log_posterior(x, y, sigmay, m, b))
# initial m, b
m,b = 2, 0
# step sizes
mstep, bstep = 0.1, 10.
# how many steps?
nsteps = 10000
chain = []
probs = []
naccept = 0
print('Running MH for', nsteps, 'steps')
# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
prob_old = np.exp(L_old + p_old)
for i in range(nsteps):
# step
mnew = m + np.random.normal() * mstep
bnew = b + np.random.normal() * bstep
# evaluate probabilities
# prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
p_new = straight_line_log_prior(mnew, bnew)
prob_new = np.exp(L_new + p_new)
if (prob_new / prob_old > np.random.uniform()):
# accept
m = mnew
b = bnew
L_old = L_new
p_old = p_new
prob_old = prob_new
naccept += 1
else:
# Stay where we are; m,b stay the same, and we append them
# to the chain below.
pass
chain.append((b,m))
probs.append((L_old,p_old))
print('Acceptance fraction:', naccept/float(nsteps))
# Pull m and b arrays out of the Markov chain and plot them:
mm = [m for b,m in chain]
bb = [b for b,m in chain]
# Scatterplot of m,b posterior samples
plt.clf()
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
plt.gca().set_aspect((bhi-blo)/(mhi-mlo))
plt.plot(bb, mm, 'b.', alpha=0.1)
plot_mb_setup()
plt.show()
# 1 and 2D marginalised distributions:
import triangle
triangle.corner(chain, labels=['b','m'], range=[(blo,bhi),(mlo,mhi)],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95], color='b', bins=40, smooth=1.0);
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm, 'k-')
plt.ylim(mlo,mhi)
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'k-')
plt.ylabel('b')
plt.ylim(blo,bhi)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step9: Construction
Step20: DataFrame
Step21: Accessing values
Step22: Slicing
Step30: DataFrame
Step33: * Assigning Values in DataFrames
Step36: Masks
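A quick illustration of boolean masks on a pandas Series (a sketch with toy data mirroring the shopping-list examples in this notebook):

```python
import pandas as pd

s = pd.Series({"Leite": 2, "Ovos": 12, "Carne": 1, "Arroz": 5, "Feijão": 2})
mask = s > 2          # elementwise comparison yields a boolean Series
print(s[mask])        # keeps only "Ovos" (12) and "Arroz" (5)
```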
Step38: Application
Step41: * DataFrame
Step45: Vectorized Operations
Step46: Numerical Manipulations
Step47: * 10% Discount on the Unit Price
Step48: * Computing the Total Price per Item
Step49: * Computing the Price per Kg
Step50: * Filling NaNs
Step51: * Sum
Step52: * Mean
Step53: * Standard Deviation
Step54: * Median
Step55: * Mode (most frequent values)
Step56: Data Analysis
Step61: Data description
Step63: Challenge 1
Step65: Original Dataset
Step66: The Accumulator
Step68: Data insertion
Step70: [ B ] Elements from another distribution
Step72: [ C ] Elements Close to the Mean
Step74: Evaluating the Series
Step75: Challenge 2
Step77: Encoded Dataset
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
Default constructor
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
data=[2, 12, 1, 5, 2]
)
Default constructor: unknown data
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
)
Default constructor: default value
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
data="fill here"
)
Receiving a Dictionary
s = pd.Series({"Leite": 2, "Ovos": 12, "Carne": 1, "Arroz": 5, "Feijão": 2})
s.name = "Compras"
s
Receiving a List
s = pd.Series([2, 12, 1, 5, 2])
s
editing parameters
s.name="Compras"
s.index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
s
Sorting: Indices
s.sort_index()
Sorting: Data
s.sort_values(ascending=False)
Default constructor
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"],
data=[
[ 2, "L"],
[12, "Ud"],
[ 1, "Kg"],
[ 5, "Kg"],
[ 2, "Kg"]
]
)
Default constructor: unknown data
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"]
)
Default constructor: default fill value
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"],
data="?"
)
Building from a dictionary
pd.DataFrame(
{
"quantidade": {
"Leite": 2,
"Ovos": 12,
"Carne": 1,
"Arroz": 5,
"Feijão": 2
},
"unidade": {
"Leite": "L",
"Ovos": "Ud",
"Carne": "Kg",
"Arroz": "Kg",
"Feijão": "Kg"
}
}
)
Building from a dictionary of Series
index = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
pd.DataFrame(
{
"quantidade": pd.Series(index=index, data=[2, 12, 1, 5, 2]),
"unidade": pd.Series(index=index, data=["L", "Ud", "Kg", "Kg", "Kg"])
}
)
Building from a list of Series
index = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
df = pd.DataFrame(
[
pd.Series(name="quantidade", index=index, data=[2, 12, 1, 5, 2]),
pd.Series(name="unidade", index=index, data=["L", "Ud", "Kg", "Kg", "Kg"])
]
)
df
Transposing to fix the table layout
df = df.T
df
editing parameters
df.index = ["Leite tipo A", "Ovos Orgânicos", "Patinho", "Arroz Arbóreo", "Feijão Preto"]
df.columns = ["Quantidade", "Unidade"]
df
Sorting: by index
df.sort_index()
Sorting: by values
df.sort_values(by="Unidade", ascending=False)
index = pd.Index(data=["Leite", "Ovos", "Carne", "Arroz", "Feijão"], name="Itens")
index
sq = pd.Series(index=index, data=[2, 12, 1, 5, 2]).sort_values()
sq
su = pd.Series(index=index, data=["L", "Ud", "Kg", "Kg", "Kg"]).sort_index()
su
df = pd.DataFrame({"Quantidade": sq, "Unidade": su}).sort_values(by="Unidade")
df
df["Preço p/ Ud"] = [5.00, 29.99, 6.50, 3.30, 0.50]
df["Preço Total"] = [25.00, 29.99, 13.00, 6.60, 6.00]
df
sq
sq[2]
sq[5:2:-1]
sq["Leite"]
sq["Leite":"Arroz"]
df
df["Unidade"]
df.Quantidade
A DataFrame column is a Series
df["Preço Total"][2]
Accessing more than one column
df[["Preço Total", "Quantidade"]]
row access: the 'loc' method
df.loc["Leite"]
item access: the 'loc' method
df.loc["Ovos", "Preço Total"]
item access: the 'iloc' method
df.iloc[4, 3]
slice access: the 'loc' method
df.loc["Leite":, "Preço p/ Ud":]
slice access: the 'iloc' method
df.iloc[3:, 2:]
Assigning values through chained indexing raises warnings
df["Unidade"][[0, 2]] = "Pacote"
df
Use 'loc' or 'iloc' instead
df.loc["Carne", "Unidade"] = "Kilograma"
df.iloc[3, 1] = "Litro"
df
mask => boolean array
sq > 2
mask => boolean array
df > 2
assigning values in a copy
s_tmp = sq.copy()
s_tmp
s_tmp[s_tmp == 2]
s_tmp[s_tmp == 2] = 3
s_tmp
assigning values in a copy
df_tmp = df[["Preço p/ Ud", "Preço Total"]].copy()
df_tmp
mask
mask = (df_tmp > 5) & (df_tmp < 10)
mask
df_tmp[mask]
tmp2 = df_tmp.copy()
tmp2[mask] = "?"
tmp2
df = pd.DataFrame(
index=pd.Index(data=["Leite", "Ovos", "Carne", "Arroz", "Feijão"], name="Itens"),
columns=["Unidade", "Quantidade", "Preço Unitário"],
data=np.array([
["Litro", "Dúzia", "Kilograma", "Kilograma", "Kilograma"],
[4, 3, 1, 5, 2],
[3.00, 6.50, 25.90, 5.00, 3.80]
]).T,
)
df
checking dtypes
df.dtypes
Conversion is needed because pandas interprets mixed-type columns as strings
df[["Quantidade", "Preço Unitário"]] = df[["Quantidade", "Preço Unitário"]].astype(float)
df
checking dtypes
df.dtypes
df["Preço Unitário"] += 1.
df
df["Preço Unitário"] *= 0.90
df
df["Preço Total"] = df["Preço Unitário"] * df["Quantidade"]
df
df["Preço Médio Por Kg"] = np.nan
df
mask = df["Unidade"] == "Kilograma"
df[mask]
df.loc[mask, "Preço Médio Por Kg"] = (df.loc[mask, "Preço Unitário"] / df.loc[mask, "Quantidade"]).sum()
df
df.fillna(0)
df.sum()
df.mean()
df.std()
df.median()
df.mode()
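As a quick, self-contained check of what these aggregations return — a minimal sketch on a tiny hand-made Series, not the shopping DataFrame above:

```python
import pandas as pd

s = pd.Series([1, 2, 2, 4])
print(s.sum())            # 9
print(s.mean())           # 2.25
print(s.median())         # 2.0
print(s.mode().tolist())  # [2]
print(s.std())            # sample standard deviation (ddof=1)
```

Note that `std()` uses the sample (ddof=1) convention by default, unlike NumPy's `np.std`.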
cols=["c1", "c2", "c3", "c4", "c5"]
data = np.random.rand(100, 5)
data *= np.array([ 10, 20, 30, 40, 50])
data += np.array([100, 200, 300, 400, 500])
data = np.ceil(data)
df = pd.DataFrame(columns=cols, data=data)
df.head(10)
describing the data distributions
df.describe()
same thing, adjusting the percentiles
df.describe(percentiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
Checking the unique values of c3
df.c3.unique()
Checking the frequency of the unique values of c3
df.c3.value_counts()
Do not change this value; it keeps the random generation identical for everyone
np.random.seed(123456789)
Original dataset, already created for the solution
media = 150
sigma = 10
serie = pd.Series(np.random.randn(1000)) * sigma + media
accum = pd.DataFrame(
index=range(2600),
columns=["original"],
data=serie
)
accum.head().append(accum.tail())
Write your solution here
Write your solution here
Write your solution here
Write your solution here
classes = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
labels = pd.Series(np.random.choice(classes, 100))
Write your solution here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Day 3 Planck problem
Step2: so good agreement with 10000 points -- make a plot as well
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from matplotlib import pyplot as plt
#
# get Stull's c_1 and c_2 from fundamental constants
#
c=2.99792458e+08 #m/s -- speed of light in vacuum
h=6.62606876e-34 #J s -- Planck's constant
kb=1.3806503e-23 # J/K -- Boltzman's constant
c1=2.*h*c**2.*np.pi
c2=h*c/kb
sigma=2.*np.pi**5.*kb**4./(15*h**3.*c**2.)
def planckwavelen(wavel,Temp):
Calculate the blackbody radiant exitance (Stull 2.13)
Parameters
----------
wavel: float or array
wavelength (meters)
Temp: float
temperature (K)
Returns
-------
Elambda: float or arr
monochromatic radiant exitance (W/m^2/m)
Elambda=c1/(wavel**5.*(np.exp(c2/(wavel*Temp)) -1))
return Elambda
Temp = 300.
SBflux = sigma*Temp**4.
print('Stefan-Boltzmann says flux is {:8.3f} W/m^2'.format(SBflux))
#
# get wavelength in meters between 0.1 and 500 microns
#
npoints = 10000
wavelengths=np.linspace(0.1,500.,npoints)*1.e-6
flux = planckwavelen(wavelengths,Temp)
avg_flux = (flux[1:] + flux[:-1])/2.
integral = np.sum(avg_flux*np.diff(wavelengths))
print('numerically integrated flux is: {:8.3f} W/m^2'.format(integral))
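The agreement with Stefan-Boltzmann rests on the identity behind that law: with the substitution x = c2/(wavel*Temp), the Planck integral reduces to the dimensionless integral of x^3/(e^x - 1) from 0 to infinity, which equals pi^4/15. A quick numeric sketch of that identity, using the same trapezoid-style sum as above:

```python
import numpy as np

# integrate x^3 / (e^x - 1) over a range wide enough to capture the integrand
x = np.linspace(1e-6, 50.0, 200000)
f = x**3 / np.expm1(x)          # expm1 is accurate near x = 0
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))
print(integral, np.pi**4 / 15)  # the two values agree closely
```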
%matplotlib inline
fig,ax = plt.subplots(1,1)
#
# change units from meters to microns to get prettier numbers
#
ax.semilogx(wavelengths*1.e6,flux*1.e-6)
_=ax.set(xlim=(1,100),xlabel="wavelength ($\mu m$)",ylabel=r"$E^*_\lambda\ (W/m^2/\mu m$)",
title='Radiant exitance for Temp = 300 K')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: T = 0.05
Step2: T=10
Step3: Correlation plots
Step4: Result
Step5: $\alpha$
Step6: $\beta$
Step7: $\alpha+\beta$
Step8: pol
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
D = np.random.rand(100,100)
## This is not symmetric, so we make it symmetric
D = (D+D.T)/2
print (D)
import math
N_steps = 10000
def L(sigma):
s=0
for i in range(0, len(sigma)-1):
s+=D[sigma[i], sigma[i+1]]
return s
def propose(sigma):
r = np.random.choice(len(sigma), 2)
rs = np.sort(r)
j,k=rs[0],rs[1]
x=(sigma[j:k])#.reverse()
x=x[::-1]
x0= sigma[:j]
x1 = sigma[k:]
y=np.concatenate((x0,x,x1))
return y
def pi(sigma,T):
return math.exp(-L(sigma)/T)
def metropolis(sigma,T,L_0):
sigma_n = propose(sigma)
L_n = L(sigma_n)
pi_ab = math.exp(-(L_n-L_0)/T)
q = min(1, pi_ab)
b = np.random.uniform(size=1)
if (b<q):
return sigma_n
else:
return sigma
sigma_0 = np.random.choice(100,100)
L_0 = L(sigma_0)
print(sigma_0)
T = [0.05,10]
def plotter(t):
L_history = []
sigma_history = []
sigma_0 = np.random.choice(100,100)
L_0 = L(sigma_0)
L_history.append(L_0)
sigma_history.append(sigma_0)
sigma = metropolis(sigma_0,t,L_0)
for i in range(1, N_steps):
sigma_t = metropolis(sigma_history[i-1],t,L_history[i-1])
L_1 = L(sigma_t)
L_history.append(L_1)
sigma_history.append(sigma_t)
plt.figure(0)
plt.hist(L_history, 20)
#plt.xlim(min(L_history)-25, max(L_history)+0.5)
plt.xlabel('Length')
plt.ylabel('Frequency')
plt.title('Frequency of L')
plt.figure(1)
plt.plot(range(1, N_steps+1),L_history)
plt.ylim(min(L_history), max(L_history))
plt.xlabel('N_steps')
plt.ylabel('L')
plt.title('Variation of L with N_steps')
return L_history
L_t0=plotter(T[0])
L_t1= plotter(T[1])
def autocorr(x):
xunbiased = x - np.mean(x)
xnorm = np.sum(xunbiased**2)
acor = np.correlate(xunbiased, xunbiased, "same")/xnorm
acor = acor[len(acor)//2:] # integer division: with 'from __future__ import division', '/' would give a float index
return acor
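An independent sanity check on the estimator idea: for white noise the lag-0 autocorrelation is exactly 1 and higher lags hover near zero. This sketch uses `np.correlate` in "full" mode (rather than the "same" mode above) so the lag-0 index is unambiguous:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(5000)
xu = x - x.mean()
# in "full" mode, index len(xu)-1 is lag 0; later entries are positive lags
acf = np.correlate(xu, xu, "full")[len(xu) - 1:] / np.sum(xu**2)
print(acf[0])                   # 1.0 by construction
print(np.abs(acf[1:50]).max())  # small for uncorrelated noise
```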
cov_t0 = autocorr(L_t0)
cov_t1 = autocorr(L_t1)
plt.plot(cov_t0)
plt.ylabel('Autocorrelation')
plt.xlabel('N_steps')
plt.title('Autocorrelation of L_i for T=0.05')
plt.plot(cov_t1)
plt.ylabel('Autocorrelation')
plt.xlabel('N_steps')
plt.title('Autocorrelation of L_i for T=10')
k_a=0.2
k_b=0.2
k_p=0.5
P = np.matrix([[1-k_a-k_b, k_a ,k_b, 0, 0, 0],
[k_a, 1-k_a-k_b, 0, k_b, 0, 0],
[k_b, 0, 1-k_a-k_b, k_a, 0, 0],
[0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0]])
Q=P[1:5,1:5]
iq = np.eye(4)-Q
iqi = np.linalg.inv(iq)
print(iq)
print(iqi)
print('U={}'.format(iqi[:,-1]))
u=iqi[:,-1]
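As a sanity check on the fundamental-matrix machinery used above, here is a minimal absorbing chain — two transient states A and B plus one absorbing state, a toy example rather than the polymerase chain itself — where the expected absorption times t = (I - Q)^-1 * 1 can be verified by hand:

```python
import numpy as np

# Transient-to-transient block Q:
#   from A: go to B with prob 0.5 (absorb with prob 0.5)
#   from B: absorb with prob 1
Q = np.array([[0.0, 0.5],
              [0.0, 0.0]])
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)  # [1.5, 1.0]: from A, half the walks take one extra step through B
```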
PP = {}
states = ['phi', 'alpha', 'beta', 'ab', 'pol', 'd']
PP['phi']= [1-k_a-k_b, k_a ,k_b, 0, 0, 0]
PP['alpha'] = [k_a, 1-k_a-k_b, 0, k_b, 0, 0]
PP['beta'] = [k_b, 0, 1-k_a-k_b, k_a, 0, 0]
PP['ab']= [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0]
PP['pol']= [0, 0, 0, 0, 0, 1]
PP['d']= [0, 0, 0, 1, 0, 0]
def h(x):
s=0
ht=0
cc=0
for j in range(1,100):
new_state=x
for i in range(1,10000):
old_state=new_state
probs = PP[old_state]
z=np.random.choice(6, 1, p=probs)
new_state = states[z[0]]
s+=z[0]
if new_state=='d':
ht+=i
cc+=1
break
else:
continue
return s/1000, ht/cc
print('Simulation: {}\t Calculation: {}'.format(h('alpha')[1],u[0]))
print('Simulation: {}\t Calculation: {}'.format(h('beta')[1],u[1]))
print('Simulation: {}\t Calculation: {}'.format(h('ab')[1],u[2]))
print('Simulation: {}\t Calculation: {}'.format(h('pol')[1],u[3]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: (1b) Sparse vectors
Step3: (1c) OHE features as sparse vectors
Step5: (1d) Define a OHE function
Step6: (1e) Apply OHE to a dataset
Step7: Part 2
Step8: (2b) OHE Dictionary from distinct features
Step10: (2c) Automated creation of an OHE dictionary
Step11: Part 3
Step12: (3a) Loading and splitting the data
Step14: (3b) Extract features
Step15: (3c) Create an OHE dictionary from the dataset
Step17: (3d) Apply OHE to the dataset
Step20: Visualization 1
Step22: (3e) Handling unseen features
Step23: Part 4
Step25: (4b) Log loss
Step26: (4c) Baseline log loss
Step28: (4d) Predicted probability
Step30: (4e) Evaluate the model
Step31: (4f) Validation log loss
Step32: Visualization 2
Step34: Part 5
Step36: (5b) Creating hashed features
Step38: (5c) Sparsity
Step39: (5d) Logistic model with hashed features
Step40: Visualization 3
Step41: (5e) Evaluate on the test set
|
<ASSISTANT_TASK:>
Python Code:
labVersion = 'cs190_week4_v_1_3'
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'mouse')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4, [1, 3], [3., 4.])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4, [(3, 1.)])
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = SparseVector(7, [(2, 1.), (3, 1.)])
sampleTwoOHEFeatManual = SparseVector(7, [(1, 1.), (4, 1.), (5, 1.)])
sampleThreeOHEFeatManual = SparseVector(7, [(0, 1.), (3, 1.), (6, 1.)])
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
myList = [OHEDict[f] for f in rawFeats]
sortedMyList = sorted(myList)
valueList = [1 for f in rawFeats]
return SparseVector(numOHEFeats, sortedMyList, valueList)
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual)
# Run oneHotEncoding on sampleOne
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
# TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.map(lambda x : oneHotEncoding(x, sampleOHEDictManual, len(sampleOHEDictManual)))
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = (sampleDataRDD.
flatMap(lambda x : x).distinct())
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats.
zipWithIndex().collectAsMap())
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
# TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
return (inputData.flatMap(lambda x : x).distinct().zipWithIndex().collectAsMap())
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
# Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
import os.path
import tarfile
import urllib
import urlparse
# Paste url, url should end with: dac_sample.tar.gz
url = 'http://labs.criteo.com/wp-content/uploads/2015/04/dac_sample.tar.gz'
url = url.strip()
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
inputDir = os.path.split(fileName)[0]
def extractTar(check = False):
# Find the zipped archive and extract the dataset
tars = glob.glob('dac_sample*.tar.gz*')
if check and len(tars) == 0:
return False
if len(tars) > 0:
try:
tarFile = tarfile.open(tars[0])
except tarfile.ReadError:
if not check:
print 'Unable to open tar.gz file. Check your URL.'
return False
tarFile.extract('dac_sample.txt', path=inputDir)
print 'Successfully extracted: dac_sample.txt'
return True
else:
print 'You need to retry the download with the correct url.'
print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +
'directory')
return False
if os.path.isfile(fileName):
print 'File is already available. Nothing to do.'
elif extractTar(check = True):
print 'tar.gz file was already available.'
elif not url.endswith('dac_sample.tar.gz'):
print 'Check your download url. Are you downloading the Sample dataset?'
else:
# Download the file and store it in the same directory as this notebook
try:
urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))
except IOError:
print 'Unable to download and store: {0}'.format(url)
extractTar()
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(point):
Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
mypoints = point.split(',')
return [(i, item) for i, item in enumerate(mypoints[1:])]
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect())
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
# TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
parsedPoints = parsePoint(point)
label = point.split(',')[0]
features = oneHotEncoding(parsedPoints, OHEDict, numOHEFeats)
return LabeledPoint(label, features)
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
def bucketFeatByCount(featCount):
Bucket the counts by powers of two.
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
Template for generating the plot layout.
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
myList = [OHEDict[f] for f in rawFeats if f in OHEDict]
sortedMyList = sorted(myList)
valueList = [1 for f in rawFeats if f in OHEDict]
return SparseVector(numOHEFeats, sortedMyList, valueList)
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = LogisticRegressionWithSGD.train(OHETrainData, numIters, stepSize, regParam=regParam, regType=regType, intercept=includeIntercept)
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
# TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
Calculates the value of log loss for a given probabilty and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probabilty between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
epsilon = 10e-12
logLoss = None
# For undefined values of log(p)
if p == 0:
p += epsilon
elif p == 1:
p -= epsilon
if y == 1:
logLoss = -log(p)
else:
logLoss = -log(1-p)
return logLoss
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
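The same loss can be computed for many observations at once, with `np.clip` playing the role of the epsilon bounds — a NumPy sketch, not part of the lab's reference solution:

```python
import numpy as np

def log_loss_vec(p, y, eps=1e-11):
    # clipping keeps log() finite at p = 0 and p = 1
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(log_loss_vec([0.5, 0.01, 0.01], [1, 0, 1]))
```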
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = OHETrainData.map(lambda x : x.label).reduce(lambda x, y: x + y) / OHETrainData.count()
print classOneFracTrain
logLossTrBase = OHETrainData.map(lambda x : computeLogLoss(classOneFracTrain, x.label)).sum() / OHETrainData.count()
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
# TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
rawPrediction = x.dot(w) + intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
return 1. / (1. + exp(-rawPrediction))
trainingPredictions = OHETrainData.map(lambda x : getP(x.features, model0.weights, model0.intercept))
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
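The clamping inside getP can be shown in isolation; this is a hypothetical standalone helper illustrating why bounding the raw prediction at ±20 keeps exp() well behaved, not part of the assignment API:

```python
from math import exp

def sigmoid_bounded(raw, bound=20.0):
    # Clamp the raw prediction so exp(-raw) can neither overflow nor underflow
    raw = max(min(raw, bound), -bound)
    return 1.0 / (1.0 + exp(-raw))
```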
# TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
    """Calculates the log loss for the data given the model.

    Args:
        model (LogisticRegressionModel): A trained logistic regression model.
        data (RDD of LabeledPoint): Labels and features for each observation.

    Returns:
        float: Log loss for the data.
    """
return data.map(lambda x : computeLogLoss(getP(x.features, model.weights, model.intercept), x.label)).sum() / data.count()
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
# TODO: Replace <FILL IN> with appropriate code
logLossValBase = OHEValidationData.map(lambda x : computeLogLoss(classOneFracTrain, x.label)).sum() / OHEValidationData.count()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
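The cumulative-sum construction above generalizes beyond this plot; a small NumPy sketch (hypothetical helper name) of turning score-sorted labels into ROC points:

```python
import numpy as np

def roc_points(labels_by_score_desc):
    # labels sorted by predicted score, highest first
    labels = np.asarray(labels_by_score_desc, dtype=float)
    tp = labels.cumsum()                         # true positives per threshold
    fp = np.arange(1, labels.size + 1) - tp      # false positives per threshold
    return fp / (labels.size - tp[-1]), tp / tp[-1]
```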
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
    """Calculate a feature dictionary for an observation's features based on hashing.

    Note:
        Use printMapping=True for debug purposes and to better understand how the hashing works.

    Args:
        numBuckets (int): Number of buckets to use as features.
        rawFeats (list of (int, str)): A list of features for an observation. Represented as
            (featureID, value) tuples.
        printMapping (bool, optional): If true, the mappings of featureString to index will be
            printed.

    Returns:
        dict of int to float: The keys will be integers which represent the buckets that the
            features have been hashed to. The value for a given key will contain the count of the
            (featureID, value) tuples that have hashed to that key.
    """
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)
sampThreeFourBuckets = hashFunction(4, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
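The bucket index itself is deterministic; a tiny sketch of the same md5-based mapping in Python 3 spelling, where the string must be encoded before hashing (helper name is hypothetical):

```python
import hashlib

def hash_bucket(feature_string, num_buckets):
    # Same md5-based bucketing as hashFunction above
    digest = hashlib.md5(feature_string.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets
```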
# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
    """Create a LabeledPoint for this observation using hashing.

    Args:
        point (str): A comma separated string where the first value is the label and the rest are
            features.
        numBuckets: The number of buckets to hash to.

    Returns:
        LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
            features.
    """
parsedPoints = parsePoint(point)
label = point.split(',')[0]
features = hashFunction(numBuckets, parsedPoints, printMapping=False)
return LabeledPoint(label, SparseVector(numBuckets, features))
numBucketsCTR = 2 ** 15
hashTrainData = rawTrainData.map(lambda x : parseHashPoint(x, numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda x : parseHashPoint(x, numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda x : parseHashPoint(x, numBucketsCTR))
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
    """Calculates the average sparsity for the features in an RDD of LabeledPoints.

    Args:
        data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
        d (int): The total number of features.
        n (int): The number of observations in the RDD.

    Returns:
        float: The average of the ratio of features in a point to total features.
    """
return float(data.map(lambda x: len(x.features.indices)).sum()) / d / n
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
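The ratio being tested above is just the total count of active features divided by d·n; a plain-Python sketch of the same arithmetic, with a hypothetical name:

```python
def average_sparsity_sketch(active_counts, d):
    # active_counts: number of non-zero features in each observation
    return sum(active_counts) / float(d * len(active_counts))
```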
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = [1, 10]
regParams = [1e-6, 1e-3]
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD
.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,
intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
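The nested loop above is a plain grid search over (stepSize, regParam); a library-free sketch of the pattern with toy train/eval callables standing in for the Spark ones:

```python
def grid_search_sketch(train_fn, eval_fn, step_sizes, reg_params):
    # Keep the hyperparameter pair with the lowest validation loss
    best, best_loss = None, float('inf')
    for step in step_sizes:
        for reg in reg_params:
            loss = eval_fn(train_fn(step, reg))
            if loss < best_loss:
                best, best_loss = (step, reg), loss
    return best, best_loss
```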
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
[ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],
[ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],
[ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],
[ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],
[ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])
numRows, numCols = len(stepSizes), len(regParams)
logLoss = np.array(logLoss)
logLoss.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),
hideLabels=True, gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(logLoss,interpolation='nearest', aspect='auto',
cmap = colors)
pass
# TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
logLossTest = evaluateResults(bestModel, hashTestData)
# Log loss for the baseline model
logLossTestBaseline = hashTestData.map(lambda x: computeLogLoss(classOneFracTrain, x.label)).sum() / hashTestData.count()
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Log, run_cmd
Step2: The function run_cmd runs a command line and returns the standard output and error
Step3: Ask something to the user in a notebook
Step4: Explore a folder
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu(header="Plan")
from pyquickhelper.loghelper import fLOG
fLOG(OutputPrint=False) # by default
fLOG("not printed")
fLOG(OutputPrint=True)
fLOG("printed")
from pyquickhelper.loghelper import run_cmd
out,err=run_cmd("help", wait=True, shell=True)
print(out)
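run_cmd is pyquickhelper's wrapper around a subprocess call; a rough standard-library equivalent (an assumption about its observable behaviour, not its actual implementation):

```python
import subprocess

def run_cmd_sketch(cmd):
    # Run a shell command and capture both streams, decoding bytes leniently
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    return out.decode(errors='replace'), err.decode(errors='replace')
```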
from pyquickhelper.ipythonhelper import open_html_form
params= {"module":"", "version":"v..."}
open_html_form(params, "fill the fields", "form1")
form1
from jyquickhelper.helper_in_notebook import set_notebook_name_theNotebook
set_notebook_name_theNotebook()
import os
path = os.path.join(os.path.abspath(os.path.dirname(theNotebook)), "..")
from pyquickhelper.filehelper import explore_folder
d,f =explore_folder(path)
[ os.path.split(_)[-1] for _ in f[:5]]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Use the Q8 class that places these 4 numbers in 8 slots like so
Step3: If you are unfamiliar with this notation, the $I^2 = -1,\, i^3=-i,\, j^3=-j,\, k^3=-k$. Only positive numbers are used, with additive inverse put in these placeholders.
Step4: Should we expect the first term to change? Look into the triple product first term, where I use capital variables for 3-vectors to simplify the presentation
Step5: The first term does change! At least in the non-reduced Q8 format, there is a change because it is composed of the positive and negative terms we saw in the algebra problem. For example, there is the vector identity $W \cdot (W \times R) = 0$. The cross product makes a vector that is 90 degrees to both W and R. The dot product of that cross product with W is zero because nothing is in the direction of W anymore. This shows up algebraically because the 6 terms of the cross product have three positive terms and three negative terms that exactly cancel when dotted with W. But the values remain in the $I^0$ and $I^2$ terms until Q8 is reduced.
Step6: If squared, the reduced interval should be the same too
Step7: But what should we make of these non-reduced calculations? Here is my speculation. In classical physics, one always, always, always uses the reduced form of a Q8 quaternion measurement. Classical physics involves one thing doing something. Physics gets odd when dealing with relativistic quantum field theory. That is a rare sport played only when one packet of protons collides with another inside an atom smasher. In those entirely odd situations, one must start thinking about multiple particles because we cannot know what happened, there is too much energy around, so we sum over all possible histories.
Step8: This looks well-behaved: if U and V form a product before being applied, it results in the same answer as applying one after the other. I was a bit surprised this worked without having to reduce the results.
Step9: The reduced interval is $124 \,I^2$, whether boosted or not. The norm will shrink because all the numbers are a little smaller, no longer quite (5, 6, 7, 8).
Step10: Rotations in Space and Time
Step11: Because of the rotation, the z value was larger. It is a safe bet that the norm turns out to be smaller as happened before
Step12: Ratios at Work
|
<ASSISTANT_TASK:>
Python Code:
%%capture
from Q_tool_devo import Q8;
U=Q8([1,2,-3,4])
V=Q8([4,-2,3,1])
R=Q8([5,6,7,-8])
print(U)
print(R)
def rotate_R_by_U(R, U):
    """Given a space-time number R, rotate it by U."""
return U.triple_product(R, U.invert())
R_rotated = rotate_R_by_U(R,U)
print(R_rotated)
print(R_rotated.reduce())
print(R.norm_squared())
print(R_rotated.norm_squared())
print(R_rotated.norm_squared().reduce())
print(R.square().reduce())
print(R_rotated.square())
print(R_rotated.square().reduce())
product_UV = rotate_R_by_U(R, V.product(U))
product_rotations = rotate_R_by_U(rotate_R_by_U(R, V), U)
print(product_UV)
print(product_rotations)
print(product_UV.reduce())
print(product_rotations.reduce())
R_boosted=R.boost(0.01,0.02, 0.003)
print("boosted: {}".format(R_boosted.reduce()))
print(R.square().reduce())
print(R_boosted.square())
print(R_boosted.square().reduce())
print(R.norm_squared().reduce())
print(R_boosted.norm_squared())
print(R_boosted.norm_squared().reduce())
R_rotated_and_boosted = R_rotated.boost(0.01,0.02, 0.003)
print("rotated and boosted: {}".format(R_rotated_and_boosted.reduce()))
print(R.square().reduce())
print(R_rotated_and_boosted.square())
print(R_rotated_and_boosted.square().reduce())
print(R.norm_squared().reduce())
print(R_rotated_and_boosted.norm_squared())
print(R_rotated_and_boosted.norm_squared().reduce())
print(R.product(U).dif(U.product(R)))
print(R.vahlen_conj().product(U.vahlen_conj()).dif(U.vahlen_conj().product(R.vahlen_conj())))
print(R.vahlen_conj("'").product(U.vahlen_conj("'")).dif(U.vahlen_conj("'").product(R.vahlen_conj("'"))))
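The triple product $U R U^{-1}$ at the heart of rotate_R_by_U can be checked with plain tuples; a sketch assuming Hamilton's multiplication rules, independent of the Q8 class (all names here are hypothetical). Conjugation preserves the scalar part of R and the norm of its vector part:

```python
def qmult(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def qrotate(r, u):
    # u r u^{-1}: a rotation of the vector part of r, for any non-zero u
    n = float(sum(c * c for c in u))
    inv = tuple(c / n for c in qconj(u))
    return qmult(qmult(u, r), inv)
```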
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 13.3. External Mixture
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Step60: 14.2. Shortwave Bands
Step61: 14.3. Longwave Bands
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Step63: 15.2. Twomey
Step64: 15.3. Twomey Minimum Ccn
Step65: 15.4. Drizzle
Step66: 15.5. Cloud Lifetime
Step67: 15.6. Longwave Bands
Step68: 16. Model
Step69: 16.2. Processes
Step70: 16.3. Coupling
Step71: 16.4. Gas Phase Precursors
Step72: 16.5. Scheme Type
Step73: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-3', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Training with a Validation Set and Early Stopping
Step2: Calculate Classification Accuracy
Step3: Calculate Classification Log Loss
Step4: Log loss is calculated as follows
Step5: Evaluating Regression Results
Step6: Mean Square Error
Step7: Root Mean Square Error
Step8: Training with Cross Validation
Step9: Training with Cross Validation and a Holdout Set
Step10: How Kaggle Competitions are Scored
Step11: Grid Search
Step12: The following code performs a grid search. Your system is queried for the number of cores available, and those cores are used to scan through the combinations of hyperparameters that you specify.
Step13: The best combination of hyperparameters is displayed.
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df,name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name,x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df,name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df,name,mean=None,sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name]-mean)/sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df,target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
print(target_type)
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)
else:
# Regression
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart, we will see more of this chart in the next class.
def chart_regression(pred,y):
    t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y.flatten()})  # use the y argument, not the global y_test
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
path = "./data/"
filename = os.path.join(path,"iris.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
# Encode feature vector
encode_numeric_zscore(df,'petal_w')
encode_numeric_zscore(df,'petal_l')
encode_numeric_zscore(df,'sepal_w')
encode_numeric_zscore(df,'sepal_l')
species = encode_text_index(df,"species")
num_classes = len(species)
# Create x & y for training (feature vectors and targets)
x, y = to_xy(df,'species')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
# as much as I would like to use 42, it gives a perfect result, and a boring confusion matrix!
# Create a deep neural network with 3 hidden layers of 10, 20, 10
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[20, 10, 5], n_classes=num_classes,
steps=10000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50, n_classes=num_classes)
# Fit/train neural network
classifier.fit(x_train, y_train, monitor=early_stop)
from sklearn import metrics
# Evaluate success using accuracy
pred = classifier.predict(x_test)
score = metrics.accuracy_score(y_test, pred)
print("Accuracy score: {}".format(score))
pred = classifier.predict_proba(x_test)
np.set_printoptions(precision=4)
print("Numpy array of predictions")
print(pred[0:5])
print("As percent probability")
(pred[0:5]*100).astype(int)
score = metrics.log_loss(y_test, pred)
print("Log loss score: {}".format(score))
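For intuition, the log loss score above can be reproduced by hand. The sketch below is not part of the original notebook and the class probabilities are made up: for each sample, take the predicted probability of its true class, and average the negative logs.

```python
import numpy as np

def manual_log_loss(y_true, probs, eps=1e-15):
    # Clip so log(0) never occurs, then average -log(p) of each true class
    probs = np.clip(probs, eps, 1 - eps)
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))

# Hypothetical 3-class predictions for three samples
y_true = np.array([0, 2, 1])
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.2, 0.6, 0.2]])
print(manual_log_loss(y_true, probs))  # ~0.4594
```

A confident wrong answer (low probability on the true class) is punished much harder than an unconfident one, which is why log loss rewards well-calibrated probabilities.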
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange, sin, pi
t = arange(0.0, 5.0, 0.00001)
#t = arange(1.0, 5.0, 0.00001) # computer scientists
#t = arange(0.0, 1.0, 0.00001) # data scientists
fig = figure(1,figsize=(12, 10))
ax1 = fig.add_subplot(211)
ax1.plot(t, np.log(t))
ax1.grid(True)
ax1.set_ylim((-8, 1.5))
ax1.set_xlim((-0.1, 2))
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('log(x)')
show()
import tensorflow.contrib.learn as skflow
from sklearn.cross_validation import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.20, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
pred = regressor.predict(x_test)
# Measure MSE error.
score = metrics.mean_squared_error(pred,y_test)
print("Final score (MSE): {}".format(score))
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
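Both error measures can be checked by hand. A minimal sketch with made-up predictions: MSE is the mean of squared differences, and RMSE is its square root, which puts the score back in the units of the target.

```python
import numpy as np

# Hypothetical predictions vs. actual values (toy data, not from the notebook)
pred = np.array([2.5, 0.0, 2.0])
actual = np.array([3.0, -0.5, 2.0])

mse = np.mean((pred - actual) ** 2)   # average squared difference
rmse = np.sqrt(mse)                   # same units as the target
print(mse, rmse)
```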
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.cross_validation import KFold
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
filename_write = os.path.join(path,"auto-mpg-out-of-sample.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Cross validate
kf = KFold(len(x), n_folds=5)
oos_y = []
oos_pred = []
fold = 1
for train, test in kf:
print("Fold #{}".format(fold))
fold+=1
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
# Create a deep neural network with 3 hidden layers of 10, 20, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Add the predictions to the oos prediction list
pred = regressor.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure accuracy
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Fold score (RMSE): {}".format(score))
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print("Final, out of sample score (RMSE): {}".format(score))
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
oosDF.to_csv(filename_write,index=False)
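Conceptually, KFold just partitions the row indices into folds, each taking one turn as the test set. A hand-rolled sketch of the same 5-fold splitting (an illustration only; the code above should keep using sklearn's KFold):

```python
import numpy as np

def kfold_indices(n, n_folds):
    # Split [0, n) into n_folds contiguous test folds; train on the rest
    folds = np.array_split(np.arange(n), n_folds)
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train, test

splits = list(kfold_indices(10, 5))
for train, test in splits:
    print("train:", train, "test:", test)
```

Every row appears in exactly one test fold, which is what makes the concatenated predictions a true out-of-sample estimate.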
import tensorflow.contrib.learn as skflow
from sklearn.cross_validation import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.cross_validation import KFold
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
filename_write = os.path.join(path,"auto-mpg-holdout.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Shuffle
np.random.seed(42)
df = df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross validate
kf = KFold(len(x_main), n_folds=5)
oos_y = []
oos_pred = []
fold = 1
for train, test in kf:
print("Fold #{}".format(fold))
fold+=1
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
# Create a deep neural network with 3 hidden layers of 10, 20, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[10, 20, 10], steps=500)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Add the predictions to the OOS prediction list
pred = regressor.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure accuracy
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Fold score (RMSE): {}".format(score))
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print("Cross-validated score (RMSE): {}".format(score))
# Write the cross-validated prediction
holdout_pred = regressor.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print("Holdout score (RMSE): {}".format(score))
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
import matplotlib.pyplot as plt
from sklearn.cross_validation import train_test_split
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
learning_rate=0.01,
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(x_train, y_train, monitor=early_stop)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
# Plot the chart
chart_regression(pred,y_test)
import multiprocessing
print("Your system has {} cores.".format(multiprocessing.cpu_count()))
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
from sklearn.grid_search import GridSearchCV
import multiprocessing
import time
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
def main():
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
start_time = time.time()
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# The hyperparameters specified here will be searched. Every combination.
param_grid = {
'learning_rate': [0.1, 0.01, 0.001],
'batch_size': [8, 16, 32]
}
# Create a deep neural network. The hyperparameters specified here remain fixed.
model = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Startup grid search
threads = 1 #multiprocessing.cpu_count()
print("Using {} cores.".format(threads))
regressor = GridSearchCV(model, verbose=True, n_jobs=threads,
param_grid=param_grid,fit_params={'monitor':early_stop})
# Fit/train neural network
regressor.fit(x_train, y_train)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
print("Final options: {}".format(regressor.best_params_))
# Plot the chart
chart_regression(pred,y_test)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
# Allow windows to multi-thread (unneeded on advanced OS's)
# See: https://docs.python.org/2/library/multiprocessing.html
if __name__ == '__main__':
main()
%matplotlib inline
from matplotlib.pyplot import figure, show
from numpy import arange
import tensorflow.contrib.learn as skflow
import pandas as pd
import os
import numpy as np
import tensorflow as tf
from sklearn import metrics
from scipy.stats import zscore
from scipy.stats import randint as sp_randint
from sklearn.grid_search import RandomizedSearchCV
import multiprocessing
import time
from sklearn.cross_validation import train_test_split
import matplotlib.pyplot as plt
def main():
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
start_time = time.time()
# create feature vector
missing_median(df, 'horsepower')
df.drop('name',1,inplace=True)
encode_numeric_zscore(df, 'horsepower')
encode_numeric_zscore(df, 'weight')
encode_numeric_zscore(df, 'cylinders')
encode_numeric_zscore(df, 'displacement')
encode_numeric_zscore(df, 'acceleration')
encode_text_dummy(df, 'origin')
# Encode to a 2D matrix for training
x,y = to_xy(df,'mpg')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# The hyperparameters specified here will be searched. A random sample will be searched.
param_dist = {
'learning_rate': [0.1, 0.01, 0.001],
'batch_size': sp_randint(4, 32),
}
model = skflow.TensorFlowDNNRegressor(
hidden_units=[50, 25, 10],
batch_size = 32,
optimizer='SGD',
steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(x_test, y_test,
early_stopping_rounds=200, print_steps=50)
# Random search
threads = 1 #multiprocessing.cpu_count()
print("Using {} cores.".format(threads))
regressor = RandomizedSearchCV(model, verbose=True, n_iter = 10,
n_jobs=threads, param_distributions=param_dist,
fit_params={'monitor':early_stop})
# Fit/train neural network
regressor.fit(x_train, y_train)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x_test)
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
print("Final options: {}".format(regressor.best_params_))
# Plot the chart
chart_regression(pred,y_test)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(hms_string(elapsed_time)))
# Allow windows to multi-thread (unneeded on advanced OS's)
# See: https://docs.python.org/2/library/multiprocessing.html
if __name__ == '__main__':
main()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely.
Step3: Overview
Step5: Defining the model
Step6: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays.
Step7: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers.
Step8: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors.
Step9: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and out of the model.
Step10: Training the model
Step11: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions.
Step12: Helper functions for training and evaluation
Step13: The training loop
|
<ASSISTANT_TASK:>
Python Code:
!pip install "thinc>=8.0.0a0" transformers torch "ml_datasets>=0.2.0a0" "tqdm>=4.41"
from thinc.api import prefer_gpu, use_pytorch_for_gpu_memory
is_gpu = prefer_gpu()
print("GPU:", is_gpu)
if is_gpu:
use_pytorch_for_gpu_memory()
CONFIG = """
[model]
@layers = "TransformersTagger.v1"
starter = "bert-base-multilingual-cased"
[optimizer]
@optimizers = "Adam.v1"
[optimizer.learn_rate]
@schedules = "warmup_linear.v1"
initial_rate = 0.01
warmup_steps = 3000
total_steps = 6000
[loss]
@losses = "SequenceCategoricalCrossentropy.v1"
[training]
batch_size = 128
words_per_subbatch = 2000
n_epoch = 10
"""
from typing import Optional, List
import numpy
from thinc.types import Ints1d, Floats2d
from dataclasses import dataclass
import torch
from transformers import BatchEncoding, TokenSpan
@dataclass
class TokensPlus:
batch_size: int
tok2wp: List[Ints1d]
input_ids: torch.Tensor
token_type_ids: torch.Tensor
attention_mask: torch.Tensor
def __init__(self, inputs: List[List[str]], wordpieces: BatchEncoding):
self.input_ids = wordpieces["input_ids"]
self.attention_mask = wordpieces["attention_mask"]
self.token_type_ids = wordpieces["token_type_ids"]
self.batch_size = self.input_ids.shape[0]
self.tok2wp = []
for i in range(self.batch_size):
spans = [wordpieces.word_to_tokens(i, j) for j in range(len(inputs[i]))]
self.tok2wp.append(self.get_wp_starts(spans))
def get_wp_starts(self, spans: List[Optional[TokenSpan]]) -> Ints1d:
Calculate an alignment mapping each token index to its first wordpiece.
alignment = numpy.zeros((len(spans)), dtype="i")
for i, span in enumerate(spans):
if span is None:
raise ValueError(
"Token did not align to any wordpieces. Was the tokenizer "
"run with is_split_into_words=True?"
)
else:
alignment[i] = span.start
return alignment
def test_tokens_plus(name: str="bert-base-multilingual-cased"):
from transformers import AutoTokenizer
inputs = [
["Our", "band", "is", "called", "worlthatmustbedivided", "!"],
["We", "rock", "!"]
]
tokenizer = AutoTokenizer.from_pretrained(name)
wordpieces = tokenizer(
inputs,
is_split_into_words=True,
add_special_tokens=True,
return_token_type_ids=True,
return_attention_mask=True,
return_length=True,
return_tensors="pt",
padding="longest"
)
tplus = TokensPlus(inputs, wordpieces)
assert len(tplus.tok2wp) == len(inputs) == len(tplus.input_ids)
for i, align in enumerate(tplus.tok2wp):
assert len(align) == len(inputs[i])
for j in align:
assert j >= 0 and j < tplus.input_ids.shape[1]
test_tokens_plus()
import thinc
from thinc.api import Model
from transformers import AutoTokenizer
@thinc.registry.layers("transformers_tokenizer.v1")
def TransformersTokenizer(name: str) -> Model[List[List[str]], TokensPlus]:
def forward(model, inputs: List[List[str]], is_train: bool):
tokenizer = model.attrs["tokenizer"]
wordpieces = tokenizer(
inputs,
is_split_into_words=True,
add_special_tokens=True,
return_token_type_ids=True,
return_attention_mask=True,
return_length=True,
return_tensors="pt",
padding="longest"
)
return TokensPlus(inputs, wordpieces), lambda d_tokens: []
return Model("tokenizer", forward, attrs={"tokenizer": AutoTokenizer.from_pretrained(name)})
from typing import List, Tuple, Callable
from thinc.api import ArgsKwargs, torch2xp, xp2torch
from thinc.types import Floats2d
def convert_transformer_inputs(model, tokens: TokensPlus, is_train):
kwargs = {
"input_ids": tokens.input_ids,
"attention_mask": tokens.attention_mask,
"token_type_ids": tokens.token_type_ids,
}
return ArgsKwargs(args=(), kwargs=kwargs), lambda dX: []
def convert_transformer_outputs(
model: Model,
inputs_outputs: Tuple[TokensPlus, Tuple[torch.Tensor]],
is_train: bool
) -> Tuple[List[Floats2d], Callable]:
tplus, trf_outputs = inputs_outputs
wp_vectors = torch2xp(trf_outputs[0])
tokvecs = [wp_vectors[i, idx] for i, idx in enumerate(tplus.tok2wp)]
def backprop(d_tokvecs: List[Floats2d]) -> ArgsKwargs:
# Restore entries for BOS and EOS markers
d_wp_vectors = model.ops.alloc3f(*trf_outputs[0].shape, dtype="f")
for i, idx in enumerate(tplus.tok2wp):
d_wp_vectors[i, idx] += d_tokvecs[i]
return ArgsKwargs(
args=(trf_outputs[0],),
kwargs={"grad_tensors": xp2torch(d_wp_vectors)},
)
return tokvecs, backprop
import thinc
from thinc.api import PyTorchWrapper
from transformers import AutoModel
@thinc.registry.layers("transformers_model.v1")
def Transformer(name: str) -> Model[TokensPlus, List[Floats2d]]:
return PyTorchWrapper(
AutoModel.from_pretrained(name),
convert_inputs=convert_transformer_inputs,
convert_outputs=convert_transformer_outputs,
)
from thinc.api import chain, with_array, Softmax
@thinc.registry.layers("TransformersTagger.v1")
def TransformersTagger(starter: str, n_tags: int = 17) -> Model[List[List[str]], List[Floats2d]]:
return chain(
TransformersTokenizer(starter),
Transformer(starter),
with_array(Softmax(n_tags)),
)
from thinc.api import Config, registry
C = registry.resolve(Config().from_str(CONFIG))
model = C["model"]
optimizer = C["optimizer"]
calculate_loss = C["loss"]
cfg = C["training"]
import ml_datasets
(train_X, train_Y), (dev_X, dev_Y) = ml_datasets.ud_ancora_pos_tags()
train_Y = list(map(model.ops.asarray, train_Y)) # convert to cupy if needed
dev_Y = list(map(model.ops.asarray, dev_Y)) # convert to cupy if needed
model.initialize(X=train_X[:5], Y=train_Y[:5])
def minibatch_by_words(pairs, max_words):
pairs = list(zip(*pairs))
pairs.sort(key=lambda xy: len(xy[0]), reverse=True)
batch = []
for X, Y in pairs:
batch.append((X, Y))
n_words = max(len(xy[0]) for xy in batch) * len(batch)
if n_words >= max_words:
yield batch[:-1]
batch = [(X, Y)]
if batch:
yield batch
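The subbatching helper above can be exercised on toy data (the sentence lengths below are hypothetical; the function is repeated so the snippet is self-contained). Each yielded batch keeps max-sentence-length × batch-size under the word budget, which bounds the padded tensor size per step:

```python
def minibatch_by_words(pairs, max_words):
    # Same logic as in the notebook: sort by length, cap padded size per batch
    pairs = list(zip(*pairs))
    pairs.sort(key=lambda xy: len(xy[0]), reverse=True)
    batch = []
    for X, Y in pairs:
        batch.append((X, Y))
        n_words = max(len(xy[0]) for xy in batch) * len(batch)
        if n_words >= max_words:
            yield batch[:-1]
            batch = [(X, Y)]
    if batch:
        yield batch

# Toy "sentences" of lengths 6, 4, 3 and 2 with matching dummy tags
Xs = [["w"] * n for n in (6, 4, 3, 2)]
Ys = [[0] * n for n in (6, 4, 3, 2)]
batches = list(minibatch_by_words((Xs, Ys), max_words=10))
print([len(b) for b in batches])
```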
def evaluate_sequences(model, Xs: List[Floats2d], Ys: List[Floats2d], batch_size: int) -> float:
correct = 0.0
total = 0.0
for X, Y in model.ops.multibatch(batch_size, Xs, Ys):
Yh = model.predict(X)
for yh, y in zip(Yh, Y):
correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum()
total += y.shape[0]
return float(correct / total)
from tqdm.notebook import tqdm
from thinc.api import fix_random_seed
fix_random_seed(0)
for epoch in range(cfg["n_epoch"]):
batches = model.ops.multibatch(cfg["batch_size"], train_X, train_Y, shuffle=True)
for outer_batch in tqdm(batches, leave=False):
for batch in minibatch_by_words(outer_batch, cfg["words_per_subbatch"]):
inputs, truths = zip(*batch)
inputs = list(inputs)
guesses, backprop = model(inputs, is_train=True)
backprop(calculate_loss.get_grad(guesses, truths))
model.finish_update(optimizer)
optimizer.step_schedules()
score = evaluate_sequences(model, dev_X, dev_Y, cfg["batch_size"])
print(epoch, f"{score:.3f}")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
Step3: Explore the Data
Step6: Implement Preprocessing Functions
Step9: Tokenize Punctuation
Step11: Preprocess all the data and save it
Step13: Check Point
Step15: Build the Neural Network
Step18: Input
Step21: Build RNN Cell and Initialize
Step24: Word Embedding
Step27: Build RNN
Step30: Build the Neural Network
Step33: Batches
Step35: Neural Network Training
Step37: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Implement Generate Functions
Step49: Choose Word
Step51: Generate TV Script
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = 'data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
text[0:500]
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 1)}
return (vocab_to_int, int_to_vocab)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
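For intuition, here is a minimal, self-contained sketch of the same frequency-ranked lookup idea on a toy word list (the `build_lookups` name is illustrative, not part of the project):

```python
from collections import Counter

def build_lookups(words):
    # Most frequent word gets id 1; id 0 is left free for padding
    counts = Counter(words)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
    int_to_vocab = {ii: word for ii, word in enumerate(vocab, 1)}
    return vocab_to_int, int_to_vocab

v2i, i2v = build_lookups("the cat sat on the mat the end".split())
print(v2i["the"])       # 1 -- "the" occurs most often
print(i2v[v2i["cat"]])  # "cat" -- the two tables round-trip
```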
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punct_list = {'.': '||period||',
',': '||comma||',
'"': '||quotation_mark||',
';': '||semicolon||',
'!': '||exclamation_mark||',
'?': '||question_mark||',
'(': '||left_parentheses||',
')': '||right_parentheses||',
'--': '||dash||',
'\n': '||return||'}
return punct_list
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
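To see why these tokens help, here is a small sketch of how such a dictionary is typically applied during preprocessing (the `tokenize` helper below is illustrative, not the project's actual helper code):

```python
token_dict = {'.': ' ||period|| ', ',': ' ||comma|| ', '!': ' ||exclamation_mark|| '}

def tokenize(text, token_dict):
    # Replacing punctuation with spaced-out tokens makes str.split()
    # treat each punctuation mark as its own "word"
    for key, token in token_dict.items():
        text = text.replace(key, token)
    return text.split()

print(tokenize("Hi, Moe! Nice bar.", token_dict))
# ['Hi', '||comma||', 'Moe', '||exclamation_mark||', 'Nice', 'bar', '||period||']
```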
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
len(int_text)
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (inputs, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# Build a separate LSTM cell per layer; reusing one cell object
# for both layers raises an error in TensorFlow 1.1+
cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)])
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name= "initial_state")
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.truncated_normal((vocab_size, embed_dim), stddev=0.25))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
inputs = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, inputs)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
slice_size = batch_size * seq_length
n_batches = len(int_text) // slice_size
# We will drop the last few words to keep the batches in equal size
used_data = int_text[0:n_batches * slice_size + 1]
batches = []
for i in range(n_batches):
input_batch = []
target_batch = []
for j in range(batch_size):
start_idx = i * batch_size + j * seq_length
end_idx = i * batch_size + (j + 1) * seq_length
input_batch.append(used_data[start_idx: end_idx])
target_batch.append(used_data[start_idx + 1: end_idx + 1])
batches.append([input_batch, target_batch])
return np.array(batches)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
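The essential property of the batches, that targets are the inputs shifted one word into the future, can be seen in a small list-only sketch (`make_pairs` is illustrative, not the graded function):

```python
def make_pairs(ids, seq_length):
    # y is x shifted one position forward: the network learns to
    # predict the next word at every time step
    pairs = []
    for start in range(0, len(ids) - seq_length, seq_length):
        x = ids[start:start + seq_length]
        y = ids[start + 1:start + seq_length + 1]
        pairs.append((x, y))
    return pairs

print(make_pairs(list(range(10)), 3)[0])  # ([0, 1, 2], [1, 2, 3])
```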
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 1024
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 11
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]])
)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
inputs = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probs = loaded_graph.get_tensor_by_name("probs:0")
return (inputs, initial_state, final_state, probs)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
from random import randint
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
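Always taking `np.argmax` makes generation deterministic. A common alternative is to sample in proportion to the probabilities; a minimal sketch with a plain word list (names here are illustrative):

```python
import random

def sample_word(probabilities, vocab, rng=None):
    # random.choices draws one word weighted by its predicted
    # probability, so likely words still dominate but ties vary
    rng = rng or random.Random(0)
    return rng.choices(vocab, weights=probabilities, k=1)[0]

print(sample_word([0.0, 1.0, 0.0], ['moe', 'homer', 'beer']))  # homer
```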
gen_length = 300
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 8.1 Data Structures
Step2: The index of a Series can be specified during its creation and giving it a similar function to a dictionary.
Step3: Data can be extracted by specifying the element position or index (similar to list/dictionary).
Step4: Series can also be constructed from a dictionary.
Step5: The elements can be sorted with the order function (sort_values in modern pandas).
Step6: Another sorting function is sort, which sorts the Series in place.
Step7: Conditions can be applied to the elements.
Step8: A DataFrame is a 2-dimensional data structure with named rows and columns. It is similar to R's data.frame object and functions like a spreadsheet. A DataFrame can be thought of as a collection of Series, one per column name. A DataFrame can be created by passing a 2-dimensional array of data and specifying the row and column names.
Step9: Some of the useful functions that can be applied to a DataFrame include
Step10: NaN denotes empty, missing, or unavailable data.
Step11: 8.2 Data Operations
Step12: Other statistical functions can be checked by typing df.< TAB >.
Step13: Replacing, rearranging and operations of data between columns can be done much like spreadsheet.
Step14: Tables can be split, rearranged and combined.
Step15: 8.3 Plotting Functions
Step16: In the above example, repositioning the legend requires the legend function from matplotlib.pyplot.
Step17: A 3-dimensional plot can be projected on a canvas but requires the Axes3D class from mpl_toolkits.mplot3d.
Step18: 8.4 Reading And Writing Data To File
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Creating a series (with different type of data)
s1 = pd.Series([34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW'])
s1
# Creating a series with specified index
lt = [34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW']
s2 = pd.Series(lt, index = ['b1', 'r1', 'solid angle', 18, 'reactor power', 'unit'])
s2
s1[3], s2['solid angle']
pop_cities = {'Kuala Lumpur':1588750, 'Seberang Perai':818197, 'Kajang':795522,
'Klang':744062, 'Subang Jaya':708296}
cities = pd.Series(pop_cities)
cities
cities.sort_values(ascending=False) # Series.order() in pandas < 0.17
cities
# Sorting with descending values
cities = cities.sort_values(ascending=False) # Series.sort() in pandas < 0.17
cities
cities
# cities with population less than 800,000
cities[cities<800000]
# cities with population between 750,000 and 800,000
cities[cities<800000][cities>750000]
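The boolean masks above behave like filtering a plain dictionary; the equivalent comprehensions (for intuition only) would be:

```python
pop_cities = {'Kuala Lumpur': 1588750, 'Seberang Perai': 818197,
              'Kajang': 795522, 'Klang': 744062, 'Subang Jaya': 708296}

# cities[cities < 800000] keeps the entries whose value passes the test
small = {name: pop for name, pop in pop_cities.items() if pop < 800000}
# Chaining two masks is an 'and' of both conditions
mid_sized = {name: pop for name, pop in pop_cities.items()
             if 750000 < pop < 800000}
print(sorted(small))      # ['Kajang', 'Klang', 'Subang Jaya']
print(sorted(mid_sized))  # ['Kajang']
```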
# Creating a DataFrame by passing a 2-D numpy array of random number
# Creating first the date-time index using date_range function
# and checking it.
dates = pd.date_range('20140801', periods = 8, freq = 'D')
dates
# Creating the column names as list
Kedai = ['Kedai A', 'Kedai B', 'Kedai C', 'Kedai D', 'Kedai E']
# Creating the DataFrame with specified rows and columns
df = pd.DataFrame(np.random.randn(8,5),index=dates,columns=Kedai)
df
df.head() # Displaying the first five (default) rows
df.head(3) # Displaying the first three (specified) rows
df.tail(2) # Displaying the last two (specified) rows
df.index # Showing the index of rows
df.columns # Showing the fields of columns
df.values # Showing the data only in its original 2-D array
df.describe() # Simple statistical data for each column
df.T # Transposing the DataFrame (index becomes column and vice versa)
df.sort_index(axis=1,ascending=False) # Sorting with descending column
df.sort(columns='Kedai D') # Sorting according to ascending specific column
df['Kedai A'] # Extract specific column (using python list syntax)
df['Kedai A'][2:4] # Slicing specific column (using python list syntax)
df[2:4] # Slicing specific row data (using python list syntax)
# Slicing specific index range
df['2014-08-03':'2014-08-05']
# Slicing specific index range for a particular column
df['2014-08-03':'2014-08-05']['Kedai B']
# Using the loc() function
# Slicing specific index and column ranges
df.loc['2014-08-03':'2014-08-05','Kedai B':'Kedai D']
# Slicing specific index range with specific column names
df.loc['2014-08-03':'2014-08-05',['Kedai B','Kedai D']]
# Mixing a list of labels with a slice in one indexer is not supported
# (the line below is invalid Python syntax, shown only as illustration):
# df.loc[['2014-08-01', '2014-08-03':'2014-08-05'], ['Kedai B', 'Kedai D']]
# Using the iloc() function
df.iloc[3] # Specific row location
df.iloc[:,3] # Specific column location (all rows)
df.iloc[2:4,1:3] # Python like slicing for range
df.iloc[[2,4],[1,3]] # Slicing with python like list
# Conditionals on the data
df>0 # Array values > 0 OR
df[df>0] # Directly getting the value
df[df['Kedai B']<0] # With reference to specific value in a column (e.g. Kedai B)
df2 = df.copy() # Made a copy of a database
df2
# Adding column
df2['Tambah'] = ['satu','satu','dua','tiga','empat','tiga','lima','enam']
df2
# Adding row using append() function. The previous loc() is possibly deprecated.
# Assign a new name to the new row (with the same format)
new_row_name = pd.date_range('20140809', periods = 1, freq = 'D')
# Appending new row with new data
# append() returns a new DataFrame, so the result must be assigned;
# the appended row arrives with the integer label 0
df2 = df2.append(pd.DataFrame([list(np.random.randn(5)) + ['sembilan']], columns=df2.columns))
# Renaming the new row (here actually is a reassignment)
df2 = df2.rename(index={0: new_row_name[0]})
df2
# Assigning new data to a row
df2.loc['2014-08-05'] = list(np.random.randn(5))+['tujuh']
df2
# Assigning new data to a specific element
df2.loc['2014-08-05','Tambah'] = 'lapan'
df2
# Using the isin() function (returns boolean data frame)
df2.isin(['satu','tiga'])
# Select specific row based on additonal column
df2[df2['Tambah'].isin(['satu','tiga'])]
# Use previous command - select certain column based on selected additional column
df2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D']
# Select > 0 from previous cell...
(df2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D']>0)
df.mean() # Statistical mean (column) - same as df.mean(0), 0 means column
df.mean(1) # Statistical mean (row) - 1 means row
df.mean()['Kedai C':'Kedai E'] # Statistical mean (range of columns)
df.max() # Statistical max (column)
df.max()['Kedai C'] # Statistical max (specific column)
df.max(1)['2014-08-04':'2014-08-07'] # Statistical max (specific row)
df.max(1)[dates[3]] # Statistical max (specific row by variable)
df.apply(lambda x: x.max() - x.min()) # Operating array values with function
df.apply(lambda z: np.log(z)) # Operating array values with function
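`df.apply` runs the function once per column; in plain Python the max-minus-min spread per column would read (toy data, for intuition only):

```python
data = {'Kedai A': [3, 1, 4], 'Kedai B': [1, 5, 9]}

# Same idea as df.apply(lambda x: x.max() - x.min()): one result per column
spread = {col: max(vals) - min(vals) for col, vals in data.items()}
print(spread)  # {'Kedai A': 3, 'Kedai B': 8}
```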
df3 = df.copy()
df3[r'Kedai A^2/Kedai E'] = df3['Kedai A']**2/df3['Kedai E']
df3
df4 = df.copy()
df4
pieces = [df4[6:], df4[3:6], df4[:3]] # split row 2+3+3
pieces
df5 = pd.concat(pieces) # concantenate (rearrange/combine)
df5
df4+df5 # Operation between tables with original index sequence
df0 = df.loc[:,'Kedai A':'Kedai C'] # Slicing and extracting columns
pd.concat([df4, df0], axis = 1) # Concatenating columns (axis = 1 -> refers to column)
df_add = df.copy()
# Simple auto plotting
%matplotlib inline
df_add.cumsum().plot()
# Reposition the legend
import matplotlib.pyplot as plt
df_add.cumsum().plot()
plt.legend(bbox_to_anchor=[1.3, 1])
df_add.cumsum().plot(kind='bar')
plt.legend(bbox_to_anchor=[1.3, 1])
df_add.cumsum().plot(kind='barh', stacked=True)
df_add.cumsum().plot(kind='hist', alpha=0.5)
df_add.cumsum().plot(kind='area', alpha=0.4, stacked=False)
plt.legend(bbox_to_anchor=[1.3, 1])
# Plotting a 3D bar plot
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
# Convert the time format into ordinary strings
time_series = pd.Series(df.index.format())
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection='3d')
# Plotting the bar graph column by column
for c, z in zip(['r', 'g', 'b', 'y','m'], np.arange(len(df.columns))):
xs = df.index
ys = df.values[:,z]
ax.bar(xs, ys, zs=z, zdir='y', color=c, alpha=0.5)
ax.set_zlabel('Z')
ax.set_xticklabels(time_series, va = 'baseline', ha = 'right', rotation = 15)
ax.set_yticks(np.arange(len(df.columns)))
ax.set_yticklabels(df.columns, va = 'center', ha = 'left', rotation = -42)
ax.view_init(30, -30)
fig.tight_layout()
# Export data to a csv file but separated with < TAB > rather than comma
# the default separation is with comma
df.to_csv('Tutorial8/Kedai.txt', sep='\t')
# Export to Excel file
df.to_excel('Tutorial8/Kedai.xlsx', sheet_name = 'Tarikh', index = True)
# Importing data from csv file (without header)
from_file = pd.read_csv('Tutorial8/Malaysian_Town.txt',sep='\t',header=None)
from_file.head()
# Importing data from Excel file (with header (the first row) that became the column names)
from_excel = pd.read_excel('Tutorial8/Malaysian_Town.xlsx','Sheet1')
from_excel.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Chapter 12
Step2: Show a bunch of 4s
Step3: Classifying with tree based models
|
<ASSISTANT_TASK:>
Python Code:
import os
import struct
import numpy as np
def load_mnist(path, kind='train'):
Load MNIST data from `path`
labels_path = os.path.join(path,
'%s-labels-idx1-ubyte' % kind)
images_path = os.path.join(path,
'%s-images-idx3-ubyte' % kind)
with open(labels_path, 'rb') as lbpath:
magic, n = struct.unpack('>II',
lbpath.read(8))
labels = np.fromfile(lbpath,
dtype=np.uint8)
with open(images_path, 'rb') as imgpath:
magic, num, rows, cols = struct.unpack(">IIII",
imgpath.read(16))
images = np.fromfile(imgpath,
dtype=np.uint8).reshape(len(labels), 784)
return images, labels
X_train, y_train = load_mnist('mnist', kind='train')
print('Rows: %d, columns: %d' % (X_train.shape[0], X_train.shape[1]))
X_test, y_test = load_mnist('mnist', kind='t10k')
print('Rows: %d, columns: %d' % (X_test.shape[0], X_test.shape[1]))
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True,)
ax = ax.flatten()
for i in range(10):
img = X_train[y_train == i][0].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True,)
ax = ax.flatten()
for i in range(25):
img = X_train[y_train == 4][i].reshape(28, 28)
ax[i].imshow(img, cmap='Greys', interpolation='nearest')
ax[0].set_xticks([])
ax[0].set_yticks([])
plt.tight_layout()
plt.show()
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
tree10 = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0)
tree100 = DecisionTreeClassifier(criterion='entropy', max_depth=100, random_state=0)
rf10 = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1)
rf100 = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=1)
labeled_models = [
('decision tree depth 10', tree10),
('decision tree depth 100', tree100),
('random forest 10 estimators', rf10),
('random forest 100 estimators', rf100),
]
import time
import subprocess
def say_done(label):
subprocess.call("say 'done with {}'".format(label), shell=True)
for label, model in labeled_models:
before = time.time()
model.fit(X_train, y_train)
after = time.time()
print("{} fit the dataset in {:.1f} seconds".format(label, after - before))
say_done(label)
from sklearn.metrics import accuracy_score
for label, model in labeled_models:
print("{} training fit: {:.3f}".format(label, accuracy_score(y_train, model.predict(X_train))))
print("{} test accuracy: {:.3f}".format(label, accuracy_score(y_test, model.predict(X_test))))
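`accuracy_score` is simply the fraction of matching predictions; a plain-Python version (for intuition, not a replacement for sklearn) is:

```python
def accuracy(y_true, y_pred):
    # Fraction of positions where the prediction equals the true label
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

print(accuracy([4, 1, 9, 7], [4, 1, 3, 7]))  # 0.75
```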
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2
Step2: Exercise 3
Step3: Exercise 4
Step4: Exercise 5
|
<ASSISTANT_TASK:>
Python Code:
def n_divide(n):
pass
n_divide(10)
def sen2word(xs):
pass
sen2word("I am learning Python. It's quite interesting.")
def fibo(n):
pass
fibo(5)
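The exercise leaves `fibo` blank on purpose; one possible reading of it (return the first n Fibonacci numbers) could be sketched as:

```python
def fibo_sketch(n):
    # First n Fibonacci numbers, built iteratively
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibo_sketch(5))  # [1, 1, 2, 3, 5]
```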
def second(t):
return t[1]
def sort_notes(xs):
pass
L = [("Lee", 45), ("Kim", 30), ("Kang", 70), ("Park", 99), ("Cho", 65)]
sort_notes(L)
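Given the `second` helper above, one way the sort-by-score exercise could look (a sketch only, since the exercise body is intentionally left blank):

```python
def second(t):
    return t[1]

def sort_notes_sketch(xs):
    # Sort the (name, score) pairs by the score, using second() as key
    return sorted(xs, key=second)

L = [("Lee", 45), ("Kim", 30), ("Kang", 70), ("Park", 99), ("Cho", 65)]
print(sort_notes_sketch(L))
# [('Kim', 30), ('Lee', 45), ('Cho', 65), ('Kang', 70), ('Park', 99)]
```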
def num_sum(xs):
pass
L = [5, 'abc', 2, [2,3]]
num_sum(L)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Step5: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
Step8: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step10: Embedding
Step11: LSTM cell
Step12: RNN forward pass
Step13: Output
Step14: Validation accuracy
Step15: Batching
Step16: Training
Step17: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
print(labels[:5])
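The label encoding is a one-line mapping; a self-contained sketch without numpy (helper name is illustrative):

```python
def encode_labels(raw):
    # 'positive' -> 1, everything else ('negative') -> 0, one per line
    return [1 if line == 'positive' else 0 for line in raw.split('\n')]

print(encode_labels("positive\nnegative\npositive"))  # [1, 0, 1]
```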
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
print(len(reviews_ints))
print(len(labels))
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
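The padding rule above (keep the first seq_len ids, left-pad shorter reviews with zeros) can be written without numpy as a sketch (hypothetical helper name):

```python
def left_pad(seq, seq_len=200):
    # Truncate to the first seq_len ids, then left-pad with zeros
    seq = seq[:seq_len]
    return [0] * (seq_len - len(seq)) + seq

print(left_pad([7, 8, 9], 5))              # [0, 0, 7, 8, 9]
print(len(left_pad(list(range(300)), 5)))  # 5
```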
split_frac = 0.8
split_idx = int(len(features)*split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
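The split logic above, 80% train and then the remainder halved into validation and test, in a standalone sketch (function name is illustrative):

```python
def split_data(xs, train_frac=0.8):
    # train gets train_frac of the data; val and test each take half
    # of what is left over
    split = int(len(xs) * train_frac)
    train, rest = xs[:split], xs[split:]
    half = len(rest) // 2
    return train, rest[:half], rest[half:]

train, val, test = split_data(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```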
print(features.shape)
print(labels.shape)
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
print(n_words)
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
with graph.as_default():
# Stack up multiple LSTM layers; build a fresh BasicLSTMCell (wrapped
# in dropout) per layer -- reusing one cell object for every layer
# raises an error in TensorFlow 1.1+
cell = tf.contrib.rnn.MultiRNNCell([
    tf.contrib.rnn.DropoutWrapper(
        tf.contrib.rnn.BasicLSTMCell(lstm_size),
        output_keep_prob=keep_prob)
    for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In order to work with Matplotlib, the library must be imported first. So we do not have to type so much, we give it a shorter name
Step2: Matplotlib works best with numpy arrays, so we import numpy as well
Step3: Line plots <a id=line_plots></a>
Step4: Different styles <a id=different_styles></a>
Step5: All styles and colors
Step6: Plot labels <a id=plot_labels></a>
Step7: A side note on units
Step8: LaTeX labels <a id=latex_labels></a>
Step9: Legends <a id=legends></a>
Step10: Grids <a id=grids></a>
Step11: Axis scales <a id=axis_scales></a>
Step12: Ticks <a id=ticks></a>
Step13: Multiple lines <a id=multiple_lines></a>
Step14: Remember
Step15: Shared axes <a id=shared_axes></a>
Step16: Tight layout <a id=tight_layout></a>
Step17: Inset plots <a id=inset_plots></a>
Step18: Error bars <a id=error_bars></a>
Step19: Asymmetric errors <a id=asym_errors></a>
Step20: Upper limits <a id=upper_limits></a>
Step21: Polar plots <a id=polar_plots></a>
Step22: Histograms <a id=histograms></a>
Step23: 2D histograms <a id=hist2d></a>
Step24: Colormaps
Step25: Object-oriented syntax <a id=oo_syntax></a>
Step26: MJD <a id="mjd"></a>
Step27: Publication plots <a id=publication_plots></a>
Step28: Saving figures <a id=save_figures></a>
|
<ASSISTANT_TASK:>
Python Code:
# only for the notebook
%matplotlib inline
# only in the ipython shell
# %matplotlib
import matplotlib.pyplot as plt
# Make the size and fonts larger for this presentation
plt.rcParams['figure.figsize'] = (10, 8)
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2
import numpy as np
x = np.linspace(0, 1, 100) # 100 numbers from 0 to 1
plt.plot(x, x**2)
# If not interactive, e.g. in a script:
# plt.show()
t = np.linspace(0, 2 * np.pi) # 50 points between 0 and 2π
plt.plot(t, np.sin(t));
plt.plot(t, np.sin(t), '--');
plt.plot(t, np.sin(t), 'go')
# plt.plot(t, np.sin(t), color='green', marker='o', linestyle=''); # same thing!
# new in matplotlib 2.0, all colors of the color rotation available as C<N>
x = np.linspace(0, 1, 100)
for n in range(9):
plt.plot(x**(n + 1), color='C{}'.format(n))
plt.plot(t, np.sin(t))
plt.xlim(0, 2*np.pi)
plt.ylim(-1.2, 1.2);
with plt.xkcd():
plt.title('Axes with labels')
plt.plot(t, np.sin(t))
plt.xlabel('t / s')
plt.ylabel('U / V')
plt.ylim(-1.1, 1.1)
plt.xlim(0, 2*np.pi)
plt.plot(t, np.sin(t))
title_font = {'fontsize': 24, 'fontweight': 'bold', 'family': 'serif'}
axes_font = {'fontsize': 18, 'fontstyle': 'italic'}
plt.xlabel('t / s', axes_font)
plt.ylabel('U / V', axes_font)
plt.title('Always label your plots!', title_font);
plt.plot(t, np.sin(t))
plt.xlabel(r'$t / \mathrm{s}$') # leading r means "raw", so that '\' is handled correctly
plt.ylabel(r"$\int_0^t \cos(t') \, \mathrm{d}t'$");
plt.plot(t, np.sin(t), label=r'$\sin(t)$')
plt.plot(t, np.cos(t), label=r'$\cos(t)$')
plt.legend()
#plt.legend(loc='upper center')
None # only to avoid cluttering the notebook
plt.plot(t, np.sin(t))
plt.grid()
x = np.linspace(0, 10)
# x = np.logspace(-1, 2, 100)
plt.plot(x, np.exp(-x))
plt.yscale('log')
# plt.xscale('log')
x = np.linspace(0, 2*np.pi)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi)
# First argument: position, second argument: labels
plt.xticks(
np.arange(0, 2*np.pi + 0.1, np.pi/2),
[r"$0$", r"$\frac{1}{4}\tau$", r"$\frac{1}{2}\tau$", r"$\frac{3}{4}\tau$", r"$\tau$"]
)
plt.title(r"$\tau$ FTW!") # https://tauday.com/tau-manifesto
None
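The tick positions every quarter turn can be computed without numpy; a quick check of the arithmetic behind `np.arange(0, 2*np.pi + 0.1, np.pi/2)`:

```python
import math

# Five ticks at multiples of pi/2, from 0 up to one full turn
ticks = [k * math.pi / 2 for k in range(5)]
print(len(ticks))  # 5
print(ticks[-1])   # 2*pi
```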
months = ['January',
'February',
'March',
'April',
'May',
'June',
'July',
'August',
'September',
'October',
'November',
'December']
plt.bar(np.arange(12), np.random.rand(12))
plt.xticks(
np.arange(12),
months,
rotation=45,
rotation_mode='anchor',
horizontalalignment='right', # or ha
verticalalignment='top', # or va
);
x = np.linspace(0, 1)
plt.plot(x, x**2, label=r'$x^2$')
plt.plot(x, x**4)
plt.plot(x, x**6, 'o', label=r'$x^6$')
plt.legend(loc='best');
x = np.linspace(0, 2*np.pi)
# subplot arguments: # of rows, # of columns, plot index (row * (#cols) + col)
plt.subplot(2, 1, 1)
plt.plot(x, x**2)
plt.xlim(0, 2*np.pi)
plt.subplot(2, 1, 2)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi);
from math import factorial  # np.math is deprecated/removed in recent NumPy releases

def poisson(x, k):
    return np.exp(-x) * x**k / factorial(k)
x = np.linspace(0, 12, 40)
y = poisson(x, 2)
y_noise = y + np.random.normal(0, 0.01, len(y))
z = np.linspace(0, 12, 100)
gridspec = {'height_ratios': [2, 1]}
fig, (ax1, ax2) = plt.subplots(2, sharex=True, gridspec_kw=gridspec)
ax1.plot(x, y_noise, 'ko')
ax1.plot(z, poisson(z, 2))
ax1.set_ylim(-0.05, 0.30)
ax1.set_ylabel('Flux')
ax1.set_yticks(ax1.get_yticks()[1:]) # remove bottom y-tick
ax2.plot(x, y_noise - y, 'ko')
ax2.axhline(y=0, color='black', linestyle='--', linewidth=1)
ax2.set_xlabel('Energy')
ax2.set_ylim(-0.03, 0.04)
ax2.set_ylabel('Residuals')
ax2.set_yticks(ax2.get_yticks()[:-2]) # remove top y-tick
fig.subplots_adjust(hspace=0)
fig.suptitle('\nFake Spectrum', fontweight='bold');
x = np.linspace(0, 2*np.pi)
plt.subplot(2, 1, 1)
plt.plot(x, x**2)
plt.xlim(0, 2*np.pi)
plt.title(r"$f(x)=x^2$")
plt.subplot(2, 1, 2)
plt.plot(x, np.sin(x))
plt.xlim(0, 2*np.pi)
plt.title(r"$f(x)=\sin(x)$")
plt.tight_layout() # try commenting this line out!
plt.plot(x, x**2)
plt.title("Outer Plot")
# axes coordinates: (0,0) is lower left, (1,1) upper right
plt.axes([0.2, 0.45, 0.3, 0.3])
plt.plot(x, x**3)
plt.title("Inner Plot");
x = np.linspace(0, 2*np.pi, 10)
errX = np.random.normal(0, 0.4, 10)
errY = np.random.normal(0, 0.4, 10)
plt.errorbar(x + errX, x + errY, xerr=0.4, yerr=np.abs(errY), fmt='o');  # error magnitudes must be non-negative
x = np.linspace(0, 1, 10)
plt.errorbar(
x,
np.sin(2 * np.pi * x),
yerr=[np.full_like(x, 0.5), np.full_like(x, 0.1)],
linestyle='',
marker='o',
)
bins = np.logspace(2, 4, 15)
x = (bins[:-1] + bins[1:]) / 2
y = x**(-2.7)
yerr = y * 0.3
y += np.random.normal(0, yerr)
# mask for which points are upper limits
uplims = np.full_like(x, False, dtype=bool)
# last points are only upper limits
y[-3:] += 3 * y[-3:]
yerr[-3:] = 0.3 * y[-3:] # yerr determines length of limit arrow
uplims[-3:] = True
plt.errorbar(
x,
y,
    xerr=np.diff(bins) / 2,  # half bin widths
yerr=yerr,
uplims=uplims,
ls='none',
)
plt.xlabel(r'$E \ / \ \mathrm{GeV}$')
plt.ylabel(r'$\mathrm{Flux} \ / \ \mathrm{GeV}^{-1} \mathrm{s}^{-1} \mathrm{m}^{-2} \mathrm{sr}^{-1}$')
plt.xscale('log')
plt.yscale('log')
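A hedged aside on the bin centers used above: on a log-scaled axis, the arithmetic midpoints of logarithmic bins sit to the right of the visual bin center, while geometric means land in the middle. Sketch:

```python
import numpy as np

bins = np.logspace(2, 4, 15)
arith_centers = (bins[:-1] + bins[1:]) / 2       # as used above
geom_centers = np.sqrt(bins[:-1] * bins[1:])     # visually centered on a log axis
```

Either choice is defensible; geometric centers just match the log-axis tick spacing.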
r = np.linspace(0, 10, 50)
# r = np.linspace(0, 10, 1000)
theta = 2*np.pi*r
plt.polar(theta, r);
# Generate random data:
x = np.random.normal(0, 1, 1000)
plt.hist(x, bins=25);
x1 = np.random.normal(-1, 1, 1000)
x2 = np.random.normal(1, 1, 1000)
bin_edges = np.linspace(-6, 6, 51) # 50 bins between -6 and 6
plt.hist(x1, bins=bin_edges, histtype='step', label='x1')
plt.hist(x2, bins=bin_edges, histtype='step', label='x2')
plt.legend();
mean = [2, 1]
cov = [[9, 2],
[2, 4]]
x, y = np.random.multivariate_normal(mean, cov, size=10000).T
plt.hist2d(x, y)
# plt.hist2d(x, y, bins=50)
# plt.hist2d(x, y, bins=[25, 50], range=[[-10, 14], [-5, 7]])
plt.colorbar(label='Counts');
from matplotlib.colors import LogNorm
plt.hist2d(x, y, bins=50, norm=LogNorm())
plt.colorbar();
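plt.hist2d delegates the binning to np.histogram2d; calling it directly is handy when the count matrix itself is needed (the generated data and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(2, 3, 10000)
y = rng.normal(1, 2, 10000)

# counts[i, j] is the number of points in x-bin i and y-bin j
counts, xedges, yedges = np.histogram2d(x, y, bins=50)
```

plt.pcolormesh(xedges, yedges, counts.T) would then reproduce the hist2d image.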
x1, y1 = np.random.multivariate_normal([1, 1], [[1, 0], [0, 1]], 1000).T
x2, y2 = np.random.multivariate_normal([-1, -1], [[1, 0], [0, 1]], 1000).T
plt.scatter(x1, y1)
plt.scatter(x2, y2);
x = np.append(x1, x2)
y = np.append(y1, y2)
s = np.random.uniform(5, 50, 2000)
label = np.append(np.ones_like(x1), np.zeros_like(x2))
plt.scatter(x, y, c=label, s=s);
import matplotlib.pyplot as plt
import numpy as np
t = np.linspace(0, 2*np.pi, 1000)
fig, (ax1, ax2) = plt.subplots(2, 1)
# note that plot is now a method of ax1, not the global plt object
ax1.plot(t, np.sin(t), 'r-')
ax1.set_title(r"$f(t)=\sin(t)$") # use object-oriented get/set syntax
ax1.set_xlabel("$t$")
ax1.set_xlim(0, 2*np.pi)
ax1.set_ylim(-1.1, 1.1)
ax2.plot(t, np.cos(t), 'b-')
ax2.set_title(r"$f(t)=\cos(t)$")
ax2.set_xlabel("$t$")
ax2.set_xlim(0, 2*np.pi)
ax2.set_ylim(-1.1, 1.1)
fig.tight_layout()
from datetime import datetime, timedelta
# constants for ordinal and mjd date representation
MJD_EPOCH = datetime(1858, 11, 17)
ORDINAL_EPOCH = datetime(1, 1, 1)
def ordinal_to_mjd(ordinal):
''' Converts ordinal date (days since 0001-01-01T00:00) to MJD (days since 1858-11-17T00:00)'''
return ordinal - (MJD_EPOCH - ORDINAL_EPOCH).total_seconds() / 86400
# create some random "Crab nebula" data
n_on = np.random.poisson(60, 25)
n_off = np.random.poisson(30, 25)
n_signal = n_on - 0.2 * n_off
n_signal_err = np.sqrt(n_on + 0.2**2 * n_off)
# create some dates
dates = [datetime(2017, 1, 1) + timedelta(days=i) for i in range(25)]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.errorbar(dates, n_signal, yerr=n_signal_err, ls='')
ax.axhline(n_signal.mean(), color='C1')
ax.set_ylim(0, 80)
ax.set_ylabel(r'Signal Rate / $\mathrm{h}^{-1}$')
fig.autofmt_xdate()
# create a second axis, using the same y-axis
ax_mjd = ax.twiny()
# set its xlims to the same values of the date axis, but convert to mjd
ax_mjd.set_xlim(*map(ordinal_to_mjd, ax.get_xlim()))
ax_mjd.set_xlabel('MJD')
fig.tight_layout()
print(plt.style.available)
from scipy import stats
def plot_stuff():
plt.subplot(2, 2, 1)
x = np.linspace(-1, 1, 1000)
plt.plot(x, np.sin(50*x**3)/(x))
plt.grid()
plt.subplot(2, 2, 2)
x = np.linspace(-1, 1, 10)
y = np.exp(-2.2*x) + np.random.normal(0, 0.1, 10)
yerr = np.random.normal(0, 0.2, 10)
plt.errorbar(x, y, yerr, fmt='o', capsize=3)
plt.yscale('log')
plt.subplot(2, 2, 3)
x = stats.skewnorm.rvs(10, size=1000)
plt.hist(x, bins=50)
plt.subplot(2, 2, 4)
x, y = np.mgrid[-1:1:.01, -1:1:.01]
pos = np.dstack((x, y))
    z = stats.multivariate_normal([0.1, 0.3], [[0.2, 0.1], [0.1, 0.4]])  # covariance must be symmetric
plt.contourf(x, y, z.pdf(pos))
for plot_style in ['classic', 'bmh', 'fivethirtyeight', 'ggplot', 'seaborn']:
plt.figure()
with plt.style.context(plot_style): # use context manager so that changes are temporary
plot_stuff()
plt.suptitle('Plot Style: ' + plot_style, fontweight='bold')
pwd()
x = np.linspace(-5, 5)
plt.plot(x, x**3, marker='s')
plt.title("My Awesome Plot")
# save in current directory; extension determines file type
plt.savefig('awesome_plot.pdf')
plt.savefig('awesome_plot.eps')
plt.savefig('awesome_plot.png', dpi=300) # bitmap graphics; don't use me for publications!
plt.savefig('awesome_plot.jpg', dpi=300) # bitmap graphics; don't use me either!
# relative path with subdirectory
# plt.savefig('build/awesome_plot.pdf')
# absolute path
# plt.savefig('/path/to/output/directory/awesome_plot.pdf')
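Two more savefig-related options worth knowing (an aside, not from the original notebook): the Agg backend renders without a display, and bbox_inches='tight' trims the whitespace margin. The temp-file path is purely illustrative:

```python
import os
import tempfile

import matplotlib
matplotlib.use('Agg')              # headless backend: safe on servers and in CI
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-5, 5)
plt.plot(x, x**3, marker='s')
out_path = os.path.join(tempfile.gettempdir(), 'awesome_plot.pdf')
plt.savefig(out_path, bbox_inches='tight')   # trim excess whitespace around the axes
plt.close()
```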
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's download and import our primary Canadian Immigration dataset using pandas' read_excel() method. Normally, before we can do that, we would need to install a module that pandas requires to read Excel files: xlrd. For your convenience, we have pre-installed this module, so you do not have to worry about that. Otherwise, you would need to run the following line of code to install the xlrd module
Step2: Let's take a look at the first five items in our dataset
Step3: Let's find out how many entries there are in our dataset
Step4: Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to Introduction to Matplotlib and Line Plots and Area Plots, Histograms, and Bar Plots for a detailed description of this preprocessing.
Step5: Visualizing Data using Matplotlib<a id="4"></a>
Step6: Waffle Charts <a id="6"></a>
Step7: Unfortunately, unlike R, waffle charts are not built into any of the Python visualization libraries. Therefore, we will learn how to create them from scratch.
Step 1. The first step in creating a waffle chart is determining the proportion of each category with respect to the total.
Step8: Step 2. The second step is defining the overall size of the waffle chart.
Step9: Step 3. The third step is using the proportion of each category to determine its respective number of tiles.
Step10: Based on the calculated proportions, Denmark will occupy 129 tiles of the waffle chart, Norway will occupy 77 tiles, and Sweden will occupy 194 tiles.
Step 4. The fourth step is creating a matrix that resembles the waffle chart and populating it.
Step11: Let's take a peek at what the matrix looks like.
Step12: As expected, the matrix consists of three categories and the total number of each category's instances matches the total number of tiles allocated to each category.
Step 5. Map the waffle chart matrix into a visual.
Step13: Step 6. Prettify the chart.
Step14: Step 7. Create a legend and add it to the chart.
Step15: And there you go! What a good looking delicious waffle chart, don't you think?
Step16: Now to create a waffle chart, all we have to do is call the function create_waffle_chart. Let's define the input parameters
Step17: And now let's call our function to create a waffle chart.
Step18: There seems to be a new Python package for generating waffle charts called PyWaffle, but it looks like the repository is still being built. But feel free to check it out and play with it.
Step19: Word clouds are commonly used to perform high-level analysis and visualization of text data. Accordingly, let's digress from the immigration dataset and work with an example that involves analyzing text data. Let's try to analyze a short novel written by Lewis Carroll titled Alice's Adventures in Wonderland. Let's go ahead and download a .txt file of the novel.
Step20: Next, let's use the stopwords that we imported from word_cloud. We use the function set to remove any redundant stopwords.
Step21: Create a word cloud object and generate a word cloud. For simplicity, let's generate a word cloud using only the first 2000 words in the novel.
Step22: Awesome! Now that the word cloud is created, let's visualize it.
Step23: Interesting! So in the first 2000 words in the novel, the most common words are Alice, said, little, Queen, and so on. Let's resize the cloud so that we can see the less frequent words a little better.
Step24: Much better! However, 'said' isn't really an informative word. So let's add it to our stopwords and re-generate the cloud.
Step25: Excellent! This looks really interesting! Another cool thing you can implement with the word_cloud package is superimposing the words onto a mask of any shape. Let's use a mask of Alice and her rabbit. We already created the mask for you, so let's go ahead and download it and call it alice_mask.png.
Step26: Let's take a look at what the mask looks like.
Step27: Shaping the word cloud according to the mask is straightforward using word_cloud package. For simplicity, we will continue using the first 2000 words in the novel.
Step28: Really impressive!
Step29: And what was the total immigration from 1980 to 2013?
Step30: Using countries with single-word names, let's duplicate each country's name based on how much they contribute to the total immigration.
Step31: We are not dealing with any stopwords here, so there is no need to pass them when creating the word cloud.
Step32: According to the above word cloud, it looks like the majority of the people who immigrated came from one of 15 countries that are displayed by the word cloud. One cool visual that you could build is to use a mask of the map of Canada and superimpose the word cloud on top of it. That would be an interesting visual to build!
Step33: Create a new dataframe that stores the total number of landed immigrants to Canada per year from 1980 to 2013.
Step34: With seaborn, generating a regression plot is as simple as calling the regplot function.
Step35: This is not magic; it is seaborn! You can also customize the color of the scatter plot and regression line. Let's change the color to green.
Step36: You can always customize the marker shape, so instead of circular markers, let's use '+'.
Step37: Let's blow up the plot a little bit so that it is more appealing to the sight.
Step38: And let's increase the size of markers so they match the new size of the figure, and add a title and x- and y-labels.
Step39: And finally increase the font size of the tickmark labels, the title, and the x- and y-labels so they don't feel left out!
Step40: Amazing! A complete scatter plot with a regression fit with 5 lines of code only. Isn't this really amazing?
Step41: Or to a white background with gridlines.
Step42: Question
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
from PIL import Image # converting images into arrays
df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2)
print('Data downloaded and read into a dataframe!')
df_can.head()
# print the dimensions of the dataframe
print(df_can.shape)
# clean up the dataset to remove unnecessary columns (eg. REG)
df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis = 1, inplace = True)
# let's rename the columns so that they make sense
df_can.rename (columns = {'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace = True)
# for sake of consistency, let's also make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace = True)
# add total column
df_can['Total'] = df_can.sum(axis=1, numeric_only=True)
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
print ('data dimensions:', df_can.shape)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches # needed for waffle Charts
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
# let's create a new dataframe for these three countries
df_dsn = df_can.loc[['Denmark', 'Norway', 'Sweden'], :]
# let's take a look at our dataframe
df_dsn
# compute the proportion of each category with respect to the total
total_values = sum(df_dsn['Total'])
category_proportions = [(float(value) / total_values) for value in df_dsn['Total']]
# print out proportions
for i, proportion in enumerate(category_proportions):
print (df_dsn.index.values[i] + ': ' + str(proportion))
width = 40 # width of chart
height = 10 # height of chart
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is ', total_num_tiles)
# compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
print (df_dsn.index.values[i] + ': ' + str(tiles))
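Note that round() can leave the grid over- or under-filled when the per-category rounding errors don't cancel. A largest-remainder allocation (my own sketch, not part of the original notebook) always sums to exactly the grid size:

```python
import numpy as np

def allocate_tiles(proportions, total_tiles):
    # Largest-remainder method: floor every share first, then hand the
    # leftover tiles to the categories with the largest fractional remainders.
    raw = np.asarray(proportions, dtype=float) * total_tiles
    tiles = np.floor(raw).astype(int)
    leftover = total_tiles - tiles.sum()
    for idx in np.argsort(raw - tiles)[::-1][:leftover]:
        tiles[idx] += 1
    return tiles

tiles = allocate_tiles([0.323, 0.191, 0.486], 400)
```

The proportions above are illustrative; in the notebook they would come from category_proportions.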
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
print ('Waffle chart populated!')
waffle_chart
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
values_cumsum = np.cumsum(df_dsn['Total'].values)
total_values = values_cumsum[-1]
# create legend
legend_handles = []
for i, category in enumerate(df_dsn.index.values):
    label_str = category + ' (' + str(df_dsn['Total'].iloc[i]) + ')'
    color_val = colormap(float(values_cumsum[i]) / total_values)
    legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(handles=legend_handles,
loc='lower center',
ncol=len(df_dsn.index.values),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
def create_waffle_chart(categories, values, height, width, colormap, value_sign=''):
# compute the proportion of each category with respect to the total
total_values = sum(values)
category_proportions = [(float(value) / total_values) for value in values]
# compute the total number of tiles
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is', total_num_tiles)
    # compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
print (df_dsn.index.values[i] + ': ' + str(tiles))
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category
# is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
    # add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
    values_arr = np.asarray(values)  # works for lists and pandas Series alike
    values_cumsum = np.cumsum(values_arr)
    total_values = values_cumsum[-1]
    # create legend
    legend_handles = []
    for i, category in enumerate(categories):
        if value_sign == '%':
            label_str = category + ' (' + str(values_arr[i]) + value_sign + ')'
        else:
            label_str = category + ' (' + value_sign + str(values_arr[i]) + ')'
        color_val = colormap(float(values_cumsum[i]) / total_values)
        legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(
handles=legend_handles,
loc='lower center',
ncol=len(categories),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
width = 40 # width of chart
height = 10 # height of chart
categories = df_dsn.index.values # categories
values = df_dsn['Total'] # correponding values of categories
colormap = plt.cm.coolwarm # color map class
create_waffle_chart(categories, values, height, width, colormap)
# install wordcloud
!conda install -c conda-forge wordcloud==1.4.1 --yes
# import package and its set of stopwords
from wordcloud import WordCloud, STOPWORDS
print ('Wordcloud is installed and imported!')
# download file and save as alice_novel.txt
!wget --quiet https://ibm.box.com/shared/static/m54sjtrshpt5su20dzesl5en9xa5vfz1.txt -O alice_novel.txt
# open the file and read it into a variable alice_novel
alice_novel = open('alice_novel.txt', 'r').read()
print ('File downloaded and saved!')
stopwords = set(STOPWORDS)
# instantiate a word cloud object
alice_wc = WordCloud(
background_color='white',
max_words=2000,
stopwords=stopwords
)
# generate the word cloud
alice_wc.generate(alice_novel)
# display the word cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
# display the cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
stopwords.add('said') # add the words said to stopwords
# re-generate the word cloud
alice_wc.generate(alice_novel)
# display the cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
# download image
!wget --quiet https://ibm.box.com/shared/static/3mpxgaf6muer6af7t1nvqkw9cqj85ibm.png -O alice_mask.png
# save mask to alice_mask
alice_mask = np.array(Image.open('alice_mask.png'))
print('Image downloaded and saved!')
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_mask, cmap=plt.cm.gray, interpolation='bilinear')
plt.axis('off')
plt.show()
# instantiate a word cloud object
alice_wc = WordCloud(background_color='white', max_words=2000, mask=alice_mask, stopwords=stopwords)
# generate the word cloud
alice_wc.generate(alice_novel)
# display the word cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
df_can.head()
total_immigration = df_can['Total'].sum()
total_immigration
max_words = 90
word_string = ''
for country in df_can.index.values:
# check if country's name is a single-word name
if len(country.split(' ')) == 1:
repeat_num_times = int(df_can.loc[country, 'Total']/float(total_immigration)*max_words)
word_string = word_string + ((country + ' ') * repeat_num_times)
# display the generated text
word_string
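Repeating each name an integer number of times throws away precision for small contributors. The word_cloud package also provides generate_from_frequencies, which accepts a word-to-weight dict directly; the totals below are made up and the WordCloud call is commented out so the snippet stands alone:

```python
# Hypothetical totals standing in for df_can['Total'] of a few single-word countries.
totals = {'China': 659962, 'India': 691904, 'Philippines': 511391, 'Empty': 0}

# Any positive weights work; generate_from_frequencies rescales them internally.
frequencies = {country: total for country, total in totals.items() if total > 0}

# from wordcloud import WordCloud
# wc = WordCloud(background_color='white').generate_from_frequencies(frequencies)
```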
# create the word cloud
wordcloud = WordCloud(background_color='white').generate(word_string)
print('Word cloud created!')
# display the cloud
fig = plt.figure()
fig.set_figwidth(14)
fig.set_figheight(18)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
# install seaborn
!pip install seaborn
# import library
import seaborn as sns
print('Seaborn installed and imported!')
# we can use the sum() method to get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))
# change the years to type float (useful for regression later on)
df_tot.index = list(map(float, df_tot.index))  # materialize the map object for Python 3
# reset the index to put in back in as a column in the df_tot dataframe
df_tot.reset_index(inplace = True)
# rename columns
df_tot.columns = ['year', 'total']
# view the final dataframe
df_tot.head()
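For reference, the straight line that regplot draws is an ordinary least-squares fit; np.polyfit returns the same slope and intercept. A sketch with fabricated, exactly linear data:

```python
import numpy as np

x = np.arange(34.0)        # stand-in for the 34 years, shifted to start at 0
y = 2.0 * x + 5.0          # made-up, perfectly linear "totals"

slope, intercept = np.polyfit(x, y, deg=1)
```

On real, noisy data the fitted coefficients quantify the trend that the seaborn plot only shows visually.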
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot)
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green')
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration') # add x- and y-labels
ax.set_title('Total Immigration to Canada from 1980 - 2013') # add title
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('ticks') # change background to white background
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
### type your answer here
import folium
folium.Map(location=[40.4637, -3.7492], zoom_start=6, tiles='Stamen Toner')  # positive latitude; the original negative value points to the South Atlantic
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download all required data
Step2: Extract list of human protein-coding genes
Step3: Parse human GO annotations
Step4: Get information about a specific GO term
Step5: Find all GO terms that the MYC gene is annotated with
|
<ASSISTANT_TASK:>
Python Code:
# get package versions
from pkg_resources import require
print('Package versions')
print('----------------')
print(require('genometools')[0])
print(require('goparser')[0])
gene_annotation_file = 'Homo_sapiens.GRCh38.82.gtf.gz'
protein_coding_gene_file = 'protein_coding_genes_human.tsv'
go_annotation_file = 'gene_association.goa_human.149.gz'
go_ontology_file = 'go-basic_2015-10-12.obo'
# download gene annotations
!curl -o "$gene_annotation_file" \
"ftp://ftp.ensembl.org/pub/release-82/gtf/homo_sapiens/Homo_sapiens.GRCh38.82.gtf.gz"
# download UniProt-GOA GO annotation file
!curl -o "$go_annotation_file" \
"ftp://ftp.ebi.ac.uk/pub/databases/GO/goa/old/HUMAN/gene_association.goa_human.149.gz"
# look at which version of the Gene Ontology was used for the GO annotation file
# (answer: 2015-10-12)
!gunzip -c "$go_annotation_file" | head -n 12
# download gene ontology file
!curl -o "$go_ontology_file" \
http://viewvc.geneontology.org/viewvc/GO-SVN/ontology-releases/2015-10-12/go-basic.obo?revision=29122
# generate list of human protein-coding genes
species = 'human'
!gunzip -c $gene_annotation_file | \
extract_protein_coding_genes.py -s $species -o $protein_coding_gene_file
import sys
from genometools import misc
from goparser import GOParser
# configure a logger that prints to stdout
misc.configure_logger('goparser',log_stream=sys.stdout)
# instantiate a GOParser object
P = GOParser()
# parse the ontology (.obo) file from the Gene Ontology Consortium
P.parse_ontology(go_ontology_file)
# parse the GO annotation (.gaf) file from the UniProt-GOA database
# --- only include annotations with certain evidence codes
select_evidence = ['IDA','IGI','IMP','ISO','ISS','IC','NAS','TAS']
P.parse_annotations(go_annotation_file,protein_coding_gene_file,\
select_evidence=select_evidence)
term = P.get_term_by_name('canonical Wnt signaling pathway')
print('Term name:')
print('----------')
print(term.get_pretty_format())
print('')
term_id = term.id
term_name = term.name
annotated_genes = P.get_goterm_genes(term_id)
print('Genes annotated with GO term "%s", %s:' % (term_id, term_name))
print('------------------------------------------------------------------------')
print('Total number of genes: %d' % len(annotated_genes))
print(', '.join(sorted(annotated_genes)))
associated_terms = P.get_gene_goterms('MYC')
print('GO terms associated with MYC:')
print('-----------------------------')
print('\n'.join(sorted([t.get_pretty_format() for t in associated_terms])))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the component using KFP SDK
Step2: Sample
Step3: Set sample parameters
Step4: Run the component as a single pipeline
Step5: Compile the pipeline
Step6: Submit the pipeline for execution
Step7: Inspect the output
|
<ASSISTANT_TASK:>
Python Code:
%%capture --no-stderr
!pip3 install kfp --upgrade
import kfp.components as comp
bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml')
help(bigquery_query_op)
QUERY = 'SELECT * FROM `bigquery-public-data.stackoverflow.posts_questions` LIMIT 10'
# Required Parameters
PROJECT_ID = '<Please put your project ID here>'
GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash
# Optional Parameters
EXPERIMENT_NAME = 'Bigquery -Query'
OUTPUT_PATH = '{}/bigquery/query/questions.csv'.format(GCS_WORKING_DIR)
import kfp.dsl as dsl
import json
@dsl.pipeline(
name='Bigquery query pipeline',
description='Bigquery query pipeline'
)
def pipeline(
query=QUERY,
project_id = PROJECT_ID,
dataset_id='',
table_id='',
output_gcs_path=OUTPUT_PATH,
dataset_location='US',
job_config=''
):
bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id=table_id,
output_gcs_path=output_gcs_path,
dataset_location=dataset_location,
job_config=job_config)
pipeline_func = pipeline
pipeline_filename = pipeline_func.__name__ + '.zip'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename)
#Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
!gsutil cat $OUTPUT_PATH
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that there are two nested MeterGroups
Step2: Putting these meters into a MeterGroup allows us to easily sum together the power demand recorded by both meters to get the total power demand for the entire appliance (but it's also very easy to see the individual meter power demand too).
Step3: We can easily get the power data for both mains meters summed together
Step4: Stats for MeterGroups
Step5: Note that NILMTK has raised a warning that Mains uses a different type of power measurement than all the submeters, so it's not an entirely accurate comparison. Which raises the question
Step6: Total Energy
Step7: Energy per submeter
Step8: column headings are the ElecMeter instance numbers.
Step9: Plot fraction of energy consumption of each appliance
Step10: Draw wiring diagram
Step11: It's not very pretty but it shows that meters (1,2) (the site meters) are upstream of all other meters.
Step12: Plot appliances when they are in use
Step13: Stats and info for individual meters
Step14: Get upstream meter
Step15: Metadata about the class of meter
Step16: Dominant appliance
Step17: Total energy
Step18: Get good sections
Step19: We can automatically identify the 'good sections' (i.e. the sections where every pair of consecutive samples is less than max_sample_period specified in the dataset metadata)
Step20: The blue chunks show where the data is good. The white gap is the large gap seen in the raw power data. There are lots of smaller gaps that we cannot see at this zoom level.
Step21: Dropout rate
Step22: Note that the dropout rate has gone down (which is good!) now that we are ignoring the gaps. This value is probably more representative of the performance of the wireless system.
Step23: Or select multiple appliance types
Step24: Or all appliances in the 'heating' category
Step25: Or all appliances in building 1 with a single-phase induction motor(!)
Step26: (NILMTK imports the 'common metadata' from the NILM Metadata project, which includes a wide range of different category taxonomies)
Step27: Select a group of meters from properties of the meters (not the appliances)
Step28: Select a single meter from a MeterGroup
Step29: Appliances are uniquely identified within a building by a type (fridge, kettle, television, etc.) and an instance number. If we do not specify an instance number then ElecMeter retrieves instance 1 (instance numbering starts from 1). If you want a different instance then just do this
Step30: To uniquely identify an appliance in nilmtk.global_meter_group then we must specify the dataset name, building instance number, appliance type and appliance instance in a dict
Step31: Search for a meter using details of the ElecMeter
Step32: Instance numbering
Step33: Select nested MeterGroup
Step34: We can also specify the mains by asking for meter instance 0
Step35: which is equivalent to elec.mains()
Step36: Plot sub-metered data for a single day
Step37: Autocorrelation Plot
Step38: Daily energy consumption across fridges in the dataset
Step39: Correlation dataframe of the appliances
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import rcParams
import matplotlib.pyplot as plt
import pandas as pd
import nilmtk
from nilmtk import DataSet, MeterGroup
plt.style.use('ggplot')
rcParams['figure.figsize'] = (13, 10)
redd = DataSet('/data/redd.h5')
elec = redd.buildings[1].elec
elec
elec.nested_metergroups()
elec.mains()
elec.mains().power_series_all_data().head()
elec.submeters()
elec.proportion_of_energy_submetered()
mains = elec.mains()
mains.available_ac_types('power')
elec.submeters().available_ac_types('power')
next(elec.load())
elec.mains().total_energy() # returns kWh
energy_per_meter = elec.submeters().energy_per_meter() # kWh, again
energy_per_meter
# energy_per_meter is a DataFrame where each row is a
# power type ('active', 'reactive' or 'apparent').
# All appliance meters in REDD record 'active', so just select
# the 'active' row:
energy_per_meter = energy_per_meter.loc['active']
more_than_20 = energy_per_meter[energy_per_meter > 20]
more_than_20
instances = more_than_20.index
instances
fraction = elec.submeters().fraction_per_meter().dropna()
# Create convenient labels
labels = elec.get_labels(fraction.index)
plt.figure(figsize=(10,30))
fraction.plot(kind='pie', labels=labels);
elec.draw_wiring_graph()
elec.meters_directly_downstream_of_mains()
#sns.set_palette("Set3", n_colors=12)
# Set a threshold to remove residual power noise when devices are off
elec.plot_when_on(on_power_threshold = 40)
fridge_meter = elec['fridge']
fridge_meter.upstream_meter() # happens to be the mains meter group!
fridge_meter.device
fridge_meter.dominant_appliance()
fridge_meter.total_energy() # kWh
fridge_meter.plot()
good_sections = fridge_meter.good_sections(full_results=True)
# specifying full_results=False would give us a simple list of
# TimeFrames. But we want the full GoodSectionsResults object so we can
# plot the good sections...
good_sections.plot()
good_sections.combined()
fridge_meter.dropout_rate()
import nilmtk
nilmtk.global_meter_group.select_using_appliances(type='washer dryer')
elec.select_using_appliances(type=['fridge', 'microwave'])
nilmtk.global_meter_group.select_using_appliances(category='heating')
nilmtk.global_meter_group.select_using_appliances(building=1, category='single-phase induction motor')
nilmtk.global_meter_group.select_using_appliances(building=2, category='laundry appliances')
elec.select(device_model='REDD_whole_house')
elec.select(sample_period=3)
elec['fridge']
elec.select_using_appliances(type='fridge')
elec['light', 2]
import nilmtk
nilmtk.global_meter_group[{'dataset': 'REDD', 'building': 1, 'type': 'fridge', 'instance': 1}]
elec[1]
from nilmtk.elecmeter import ElecMeterID
# ElecMeterID is a namedtuple for uniquely identifying each ElecMeter
nilmtk.global_meter_group[ElecMeterID(instance=8, building=1, dataset='REDD')]
elec[[ElecMeterID(instance=3, building=1, dataset='REDD'),
ElecMeterID(instance=4, building=1, dataset='REDD')]]
elec[ElecMeterID(instance=(3,4), building=1, dataset='REDD')]
elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
elec.mains() == elec[ElecMeterID(instance=0, building=1, dataset='REDD')]
redd.set_window(start='2011-04-21', end='2011-04-22')
elec.plot();
plt.xlabel("Time");
from pandas.plotting import autocorrelation_plot
elec.mains().plot_autocorrelation();
fridges_restricted = nilmtk.global_meter_group.select_using_appliances(type='fridge')
daily_energy = pd.Series([meter.average_energy_per_period(offset_alias='D')
for meter in fridges_restricted.meters])
# daily_energy.plot(kind='hist');
# plt.title('Histogram of daily fridge energy');
# plt.xlabel('energy (kWh)');
# plt.ylabel('occurences');
# plt.legend().set_visible(False)
daily_energy
correlation_df = elec.pairwise_correlation()
correlation_df
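# For reference, the same idea with plain pandas: DataFrame.corr() returns
# a pairwise correlation matrix (tiny illustrative data here, not REDD).
import pandas as pd
demo = pd.DataFrame({"fridge": [100, 0, 100, 0], "light": [10, 10, 60, 60]})
print(demo.corr())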
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's take a look at what we can do with an AR. First, we will simulate some data.
Step2: MLE Results
Step3: We can get our errors as always, but the results cannot be propagated (error propagation assumes Gaussian errors, i.e., symmetric errors)
Step4: We need to get the AnalysisResults object that is created after a fit is performed. The AR object is a member of the JointLikelihood object
Step5: We can display the results of the analysis. Note that when a fit is performed, the post-fit display is actually from the internal AR.
Step6: By default, the equal tail intervals are displayed. We can instead display highest posterior densities (equal in the MLE case)
Step7: The AR stores several properties from the analysis
Step8: Saving results to disk
Step9: The FITS file can be examined with any normal FITS reader.
Step10: However, to easily pull the results back into the 3ML framework, we use the ${\tt load_analysis_results}$ function
Step11: You can get a DataFrame with the saved results
Step12: Analysis Result Sets
Step13: Error propagation
Step14: The propagation accounts for covariances. For example this
Step15: You can use arbitrary (np) functions
Step16: Errors can become asymmetric. For example, the ratio of two Gaussians is
Step17: You can always use it with arbitrary functions
Step18: This is an example of an error propagation to get the plot of the model with its errors
Step19: Bayesian Analysis Results
Step20: Again, we grab the results from the BayesianAnalysis object
Step21: We can write and read the results to/from a file
Step22: The AR holds the posterior samples from the analysis. We can see the saved and live results are the same
Step23: NOTE
Step24: We can return pandas DataFrames with equal tail or HPD results.
Step25: Error propagation operates the same way. Internally, the process is the same as for the MLE results; however, the samples are those of the posterior rather than of the (assumed) covariance matrix.
Step26: To demonstrate how the two objects (MLE and Bayes) are the same, we see that our plotting function written for the MLE result works on our Bayesian results seamlessly.
|
<ASSISTANT_TASK:>
Python Code:
from threeML import *
from threeML.analysis_results import *
from threeML.io.progress_bar import progress_bar
from jupyterthemes import jtplot
%matplotlib inline
jtplot.style(context="talk", fscale=1, ticks=True, grid=False)
import matplotlib.pyplot as plt
plt.style.use("mike")
import astropy.units as u
gen_function = Line(a=0.0, b=2) + Gaussian(F=30.0, mu=25.0, sigma=1)
# Generate a dataset using the line and a gaussian.
# constant 20% error
x = np.linspace(0, 50, 50)
xy = XYLike.from_function(
"sim_data", function=gen_function, x=x, yerr=0.2 * gen_function(x)
)
xy.plot()
fitfun = Line() + Gaussian()
fitfun.a_1.bounds = (-10, 10.0)
fitfun.b_1.bounds = (-100, 100.0)
fitfun.F_2 = 25.0
fitfun.F_2.bounds = (1e-3, 200.0)
fitfun.mu_2 = 25.0
fitfun.mu_2.bounds = (0.0, 100.0)
fitfun.sigma_2.bounds = (1e-3, 10.0)
model = Model(PointSource("fake", 0.0, 0.0, fitfun))
data = DataList(xy)
jl = JointLikelihood(model, DataList(xy))
_ = jl.fit()
jl.get_errors();
ar = jl.results
ar.display()
ar.display("hpd")
ar.analysis_type
ar.covariance_matrix
ar.get_point_source_flux(1*u.keV, .1*u.MeV)
ar.optimized_model
ar.write_to("test_mle.fits", overwrite=True)
import astropy.io.fits as fits
ar_fits = fits.open('test_mle.fits')
ar_fits.info()
ar_reloaded = load_analysis_results("test_mle.fits")
ar_reloaded.get_statistic_frame()
ar_reloaded.get_data_frame()
from threeML.analysis_results import AnalysisResultsSet
analysis_set = AnalysisResultsSet([ar, ar_reloaded])
# index as time bins
analysis_set.set_bins("testing", [-1, 1], [3, 5], unit="s")
# write to disk
analysis_set.write_to("analysis_set_test.fits", overwrite=True)
analysis_set = load_analysis_results("analysis_set_test.fits")
analysis_set[0].display()
p1 = ar.get_variates("fake.spectrum.main.composite.a_1")
p2 = ar.get_variates("fake.spectrum.main.composite.b_1")
print("Propagating a+b, with a and b respectively:")
print(p1)
print(p2)
print("\nThis is the result (with errors):")
res = p1 + p2
print(res)
print(res.equal_tail_interval())
print("\nThis is 50 * a/a:")
print(50 * p1/p1)
print("\nThis is arcsinh(a + 5*b) / np.log10(b) (why not?)")
print(np.arcsinh(p1 + 5 * p2) / np.log10(p2))
print("\nRatio a/b:")
print(p2 / p1)
def my_function(x, a, b):
return b * x ** a
print("\nPropagating using a custom function:")
print(my_function(2.3, p1, p2))
def go(fitfun, ar, model):
fig, ax = plt.subplots()
# Gather the parameter variates
arguments = {}
for par in fitfun.parameters.values():
if par.free:
this_name = par.name
this_variate = ar.get_variates(par.path)
# Do not use more than 1000 values (would make computation too slow for nothing)
if len(this_variate) > 1000:
this_variate = np.random.choice(this_variate, size=1000)
arguments[this_name] = this_variate
# Prepare the error propagator function
pp = ar.propagate(
ar.optimized_model.fake.spectrum.main.shape.evaluate_at, **arguments
)
# You can just use it as:
print(pp(5.0))
# Make the plot
energies = np.linspace(0, 50, 100)
low_curve = np.zeros_like(energies)
middle_curve = np.zeros_like(energies)
hi_curve = np.zeros_like(energies)
free_parameters = model.free_parameters
with progress_bar(len(energies), title="Propagating errors") as p:
with use_astromodels_memoization(False):
for i, e in enumerate(energies):
this_flux = pp(e)
low_bound, hi_bound = this_flux.equal_tail_interval()
low_curve[i], middle_curve[i], hi_curve[i] = (
low_bound,
this_flux.median,
hi_bound,
)
p.increase()
ax.plot(energies, middle_curve, "--", color="black")
ax.fill_between(energies, low_curve, hi_curve, alpha=0.5, color="blue")
go(fitfun, ar, model)
for parameter in ar.optimized_model:
model[parameter.path].value = parameter.value
model.fake.spectrum.main.composite.a_1.set_uninformative_prior(Uniform_prior)
model.fake.spectrum.main.composite.b_1.set_uninformative_prior(Uniform_prior)
model.fake.spectrum.main.composite.F_2.set_uninformative_prior(Log_uniform_prior)
model.fake.spectrum.main.composite.mu_2.set_uninformative_prior(Uniform_prior)
model.fake.spectrum.main.composite.sigma_2.set_uninformative_prior(Log_uniform_prior)
bs = BayesianAnalysis(model, data)
samples = bs.sample(20, 100, 1000)
ar2 = bs.results
ar2.write_to("test_bayes.fits", overwrite=True)
ar2_reloaded = load_analysis_results("test_bayes.fits")
np.allclose(ar2_reloaded.samples, ar2.samples)
#ar2.corner_plot();
# with chain consumer (pretty!)
ar2.corner_plot_cc();
ar2.get_data_frame("equal tail")
ar2.get_data_frame("hpd")
p1 = ar2.get_variates("fake.spectrum.main.composite.a_1")
p2 = ar2.get_variates("fake.spectrum.main.composite.b_1")
print(p1)
print(p2)
res = p1 + p2
print(res)
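# The machinery behind this propagation, sketched with plain numpy: apply
# the operation elementwise to the samples, then summarise the resulting
# sample. The two normal distributions below are made up for illustration.
import numpy as np
a_samples = np.random.normal(0.0, 1.0, size=5000)
b_samples = np.random.normal(2.0, 0.5, size=5000)
q16, q50, q84 = np.percentile(a_samples + b_samples, [16, 50, 84])
print(q50, "+", q84 - q50, "-", q50 - q16)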
go(fitfun, ar2, model)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load geometric and material properties
Step2: Number of samples
Step3: Generate the capacity curves
Step4: Plot the capacity curves
Step5: Include additional information
Step6: Save capacity curves
|
<ASSISTANT_TASK:>
Python Code:
import DBELA
from rmtk.vulnerability.common import utils
%matplotlib inline
building_model_file = "../../../../../rmtk_data/DBELA/bare_frames.csv"
damage_model_file = "../../../../../rmtk_data/damage_model_dbela_low_code.csv"
no_assets = 100
building_class_model = DBELA.read_building_class_model(building_model_file)
assets = DBELA.generate_assets(building_class_model, no_assets)
damage_model = utils.read_damage_model(damage_model_file)
capacity_curves = DBELA.generate_capacity_curves(assets, damage_model)
utils.plot_capacity_curves(capacity_curves)
gamma = 1.2
yielding_point_index = 1.0
capacity_curves = utils.add_information(capacity_curves, "gamma", "value", gamma)
capacity_curves = utils.add_information(capacity_curves, "yielding point", "point", yielding_point_index)
output_file = "../../../../../rmtk_data/capacity_curves_dbela.csv"
utils.save_SdSa_capacity_curves(capacity_curves, output_file)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: how many rows contain null values?
Step2: Data Visualization
Step3: Let's compare the share of men and women in the engineering major.
Step4: From the plot above, we find that most students in the engineering major are male.
|
<ASSISTANT_TASK:>
Python Code:
recent_grads = pd.read_csv('recent-grads.csv')
recent_grads.head()
recent_grads.tail()
recent_grads.describe()
recent_grads.shape
recent_grads.shape[0] - recent_grads.dropna().shape[0]
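# A hypothetical follow-up on the same question: isnull().sum() breaks the
# missing values down per column, shown here on a tiny illustrative frame.
import pandas as pd
demo = pd.DataFrame({"a": [1.0, None], "b": [None, None]})
print(demo.isnull().sum())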
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed in pandas 0.25
scatter_matrix(recent_grads[['ShareWomen', 'Unemployment_rate']], figsize=(12,8))
plt.show()
sns.pairplot(recent_grads[['ShareWomen', 'Unemployment_rate']].dropna(), size=4)
sns.plt.show()
grads_eng_share = recent_grads[recent_grads['Major_category'] == 'Engineering']
grads_eng_share['ShareMen'] = 1 - grads_eng_share['ShareWomen']
grads_eng_share = grads_eng_share.set_index('Major')
grads_eng_share = grads_eng_share[['ShareMen', 'ShareWomen']]
grads_eng_share.head()
grads_eng_share.plot(kind='bar', figsize=(12,12))
plt.show()
grads_eng_num = recent_grads[recent_grads['Major_category'] == 'Engineering']
grads_eng_num.head()
grads_eng_num = grads_eng_num.set_index('Major')
grads_eng_num = grads_eng_num[['Men', 'Women']]
grads_eng_num.head()
grads_eng_num.plot(kind='bar', stacked=True, figsize=(12,12))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get information about the datatype 'Geotecnische codering'
Step2: A description is provided for the 'Geotechnische codering' datatype
Step3: The different fields that are available for objects of the 'Geotechnische codering' datatype can be requested with the get_fields() method
Step4: You can get more information of a field by requesting it from the fields dictionary
Step5: Example use cases
Step6: The dataframe contains one 'Geotechnische codering' interpretation where ten layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe.
Step7: Get 'Geotechnische codering' with specific properties within a distance from a point
Step8: In this example we build a query using the PropertyIsGreaterThan and PropertyIsEqualTo operators to find all interpretations that are at least 20 m deep and of good reliability ('goed'), within a distance of 1 km from a defined point
Step9: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations
Step10: Get 'Geotechnische codering' in a bounding box based on specific properties
Step11: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie
Step12: Get 'Geotechnische codering' based on fields not available in the standard output dataframe
Step13: Get 'Geotechnische codering' data, returning fields not available in the standard output dataframe
Step14: Visualize results
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import inspect, sys
# check pydov path
import pydov
from pydov.search.interpretaties import GeotechnischeCoderingSearch
itp = GeotechnischeCoderingSearch()
itp.get_description()
fields = itp.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
fields['Datum']
from pydov.util.location import Within, Box
df = itp.search(location=Within(Box(108281, 197850, 108282, 197851)))
df.head()
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
from owslib.fes import And, PropertyIsGreaterThan, PropertyIsEqualTo
from pydov.util.location import WithinDistance, Point
query = And([PropertyIsEqualTo(propertyname='Betrouwbaarheid',
literal='goed'),
PropertyIsGreaterThan(propertyname='diepte_tot_m',
literal='20'),
])
df = itp.search(query=query,
location=WithinDistance(Point(153145, 206930), 1000))
df.head()
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(
propertyname='Type_proef',
literal='Boring')
df = itp.search(
location=Within(Box(153145, 206930, 154145, 207930)),
query=query
)
df.head()
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
from owslib.fes import And, PropertyIsEqualTo, PropertyIsLessThan
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
PropertyIsLessThan(propertyname='Datum',
literal='2010-01-01')]
)
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'Datum'))
df.head()
query = PropertyIsEqualTo(
propertyname='gemeente',
literal='Leuven')
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'pkey_boring',
'x', 'y', 'Z_mTAW', 'gemeente', 'Auteurs', 'Proefnummer'))
df.head()
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
folium.Marker(loclist[loc], popup=df['Proefnummer'][loc]).add_to(marker_cluster)
fmap
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Lorenz system
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def lorentz_derivs(yvec, t, sigma, rho, beta):
"""Compute the derivatives for the Lorentz system at yvec(t)."""
# YOUR CODE HERE
raise NotImplementedError()
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade solve_lorenz
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
"""
# YOUR CODE HERE
raise NotImplementedError()
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
# YOUR CODE HERE
raise NotImplementedError()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's call the fit method. It automatically finds appropriate transformers for the fields of the jsonlines file. The parameter 'limit' specifies how many objects will be used to detect the right transformers.
Step2: Let's remove the SVDOneHotTransformer
|
<ASSISTANT_TASK:>
Python Code:
import datapot as dp
datapot = dp.DataPot()
from datapot.utils import csv_to_jsonlines
csv_to_jsonlines('../data/transactions.csv', '../data/transactions.jsonlines')
ftr = open('../data/transactions.jsonlines')
datapot.detect(ftr, limit=100)
datapot.fit(ftr)
datapot
datapot.remove_transformer('merchant_id', 0)
data = datapot.transform(ftr)
data.head()
data.columns
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame([["A", "Z-Y"], ["B", "X"], ["C", "W-U-V"]], index=[1,2,3], columns=['var1', 'var2'])
def g(df):
return df.join(pd.DataFrame(df.var2.str.split('-', expand=True).stack().reset_index(level=1, drop=True),columns=['var2 '])).\
drop('var2', axis=1).rename(columns=str.strip).reset_index(drop=True)
result = g(df.copy())
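# A hypothetical alternative on pandas >= 0.25: DataFrame.explode performs
# the same row expansion without the stack/reset_index dance used in g.
import pandas as pd
df2 = pd.DataFrame([["A", "Z-Y"], ["B", "X"], ["C", "W-U-V"]], columns=['var1', 'var2'])
alt = df2.assign(var2=df2.var2.str.split('-')).explode('var2').reset_index(drop=True)
print(alt)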
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, we are going to cluster all of the links found in the Wikipedia page "Vital articles." First, we will load the page into main, then add the raw text of each linked article to a dictionary called token_dict.
Step2: You can save the text to disk in the following cell.
Step3: later you can retrieve it like this
Step4: The next sell will calculate the SVD decomposition of the tf-idf matrix.
Step5: You can save the calculation to disk in the following cell.
Step6: You can load it back later like this
Step7: Now calculate t-SNE on the reduced feature vectors and normalize to (0,1).
Step8: Save to json for future-keeping.
Step9: We can also convert the t-SNE to an nx by ny grid assignment.
Step10: The next cell will create an HTML file with the gridded wikipedia articles arranged by similarity.
|
<ASSISTANT_TASK:>
Python Code:
import string
import os
import time
import pickle
import json
import re
import wikipedia
import nltk
import numpy as np
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
from sklearn.preprocessing import normalize
from sklearn.decomposition import TruncatedSVD
nltk.download('punkt')
nltk.download('stopwords')
main = wikipedia.page('Wikipedia:Vital articles')
#https://en.wikipedia.org/wiki/Wikipedia:1,000_core_topics
#https://en.wikipedia.org/wiki/User:West.andrew.g/2016_Popular_pages
token_dict = {}
for i, article in enumerate(main.links):
if article not in token_dict:
if i%20==0:
print "getting text for article %d/%d : %s"%(i, len(main.links), article)
try:
text = wikipedia.page(article)
token_dict[article] = text.content
except:
print " ==> error processing "+article
pickle.dump(token_dict, open('fulltext_WikiVitalArticles.p', 'wb'))
token_dict = pickle.load(open('fulltext_WikiVitalArticles.p', 'rb'))
def tokenize(text):
text = text.lower() # lower case
text = re.sub(r"[%s\n\t]+"%string.punctuation, ' ', text) # remove punctuation
text = re.sub(r"[ ]+", " ", text) # remove extra spaces
text = text.translate(string.punctuation) # punctuation
tokens = nltk.word_tokenize(text)
tokens = [t for t in tokens if not t in stopwords.words('english')] # stopwords
stems = [PorterStemmer().stem(t) for t in tokens]
return stems
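# A quick standalone look at just the regex cleanup stage (no NLTK data
# needed); the sample string here is purely illustrative.
import re
import string
sample = "Hello,  World!"
cleaned = re.sub(r"[ ]+", " ", re.sub(r"[%s\n\t]+" % string.punctuation, ' ', sample.lower()))
print(cleaned)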
# calculate tfidf (might take a while)
print("calculating tf-idf")
tfidf = TfidfVectorizer(tokenizer=tokenize, stop_words='english')
tfs = tfidf.fit_transform(token_dict.values())
print("reducing tf-idf to 500 dim")
tfs_reduced = TruncatedSVD(n_components=500, random_state=0).fit_transform(tfs)
print("done")
pickle.dump(tfs_reduced, open('tfidf_WikiVitalArticles.p', 'wb'))
tfs_reduced = pickle.load(open('tfidf_WikiVitalArticles.p', 'rb'))
# calculate t-SNE
tsne = TSNE(n_components=2, perplexity=50, verbose=2).fit_transform(tfs_reduced)
# save to json file
x_axis, y_axis = tsne[:, 0], tsne[:, 1]
x_norm = (x_axis-np.min(x_axis)) / (np.max(x_axis) - np.min(x_axis))
y_norm = (y_axis-np.min(y_axis)) / (np.max(y_axis) - np.min(y_axis))
data = {"x":[float(x) for x in x_norm.tolist()], "y":[float(y) for y in y_norm.tolist()], "names":token_dict.keys()}
with open('tsne_wikiVitalArticles.json', 'w') as outfile:
json.dump(data, outfile)
nx, ny = 32, 31
import rasterfairy
grid_assignment = rasterfairy.transformPointCloud2D(tsne[0:nx*ny, :], target=(nx, ny))[0]
grid_sorted = sorted(range(len(grid_assignment)), key=lambda k: grid_assignment[k][1]*nx + grid_assignment[k][0])
keys = list(token_dict.keys())
links_grid = [[0 for x in range(nx)] for y in range(ny)]
for i, g in enumerate(grid_assignment):
links_grid[int(g[1])][int(g[0])] = keys[i]
table_html = '<table>\n'
for row in links_grid:
table_html += '\t<tr>\n'
for col in row:
table_html += '\t\t<td><a href=\"https://en.wikipedia.org/wiki/%s\">%s</a></td>\n' % (col, col)
table_html += '\t</tr>\n'
table_html += '</table>\n'
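# Note (hypothetical hardening): article titles can contain characters that
# are unsafe inside the href URLs above; the stdlib can percent-encode them.
from urllib.parse import quote
print(quote("Diesel engine"))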
html = '''
<head>
<style>
body {
padding-top: 80px;
text-align: center;
font-family: monaco, monospace;
background-size: cover;
}
table {
text-align: center;
}
tr {
background-color:#ff0;
}
td {
padding:10px;
}
</style>
</head>
<body>
%s
</body>
''' % table_html
with open('index.html', 'wb') as text_file:
text_file.write(html.encode('utf-8'))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This shows how exceptions are raised and caught, but this approach is somewhat limited. Suppose now, that we weren't expecting this expected unexpected behaviour and we wanted to compute everything before displaying our results.
Step2: Ooops! Let's fix that.
Step3: That's also not what we want. We wasted all this time computing nice reciprocals of numbers, only to find all of our results being thrown away because of one stupid zero in the input list. We can fix this.
Step4: That's better! We skipped right over the error and continued to more interesting results. So how are we going to make this solution more generic? Subsequent functions may not know how to handle that little nan in our list.
Step5: Hmmmpf. There we go again.
Step7: This seems Ok, but there are two problems here. For one, it feels like we're doing too much work! We have a repeating code pattern here. That's always a moment to go back and consider making parts of our code more generic. At the same time, this is when we need some more advanced Python concepts to get us out of trouble. We're going to define a function in a function!
Step8: Consider what happens here. The function secure_function takes a function something_dangerous as an argument and returns a new function something_safe. This new function executes something_dangerous within a try-except block to deal with the possibility of failure. Let's see how this works.
Step11: Ok, so that works! However, the documentation of safe_sqrt is not yet very useful. There is a nice library routine that may help us here
Step12: Now it is very easy to also rewrite our function computing the reciprocals safely
Step15: There is a second problem to this approach, which is a bit more subtle. How do we know where the error occured? We got two values of nan and are desperate to find out what went wrong. We'll need a little class to capture all aspects of failure.
Step18: We will adapt our earlier design for secure_function. If the given argument is a Fail, we don't even attempt to run the next function. In stead, we extend the trace of the failure, so that we can see what happened later on.
Step19: Now we can rewrite our little program entirely from scratch
Step20: See how we retain a trace of the functions that were involved in creating the failed state, even though the execution of that produced those values is entirely decoupled. This is exactly what we need to trace errors in Noodles.
Step21: The maybe decorator works well together with schedule. The following workflow is full of errors!
Step22: Both the reciprocal and the square root functions will fail. Noodles is smart enough to report on both errors.
Step23: Example
Step24: If a file does not exist, stat returns an error-code of 1.
Step25: We can wrap the execution of the stat command in a helper function.
Step26: The run function runs the given command and returns a CompletedProcess object. The check=True argument enables checking of the child process's return value. If the return value is anything other than 0, a CalledProcessError is raised. Because we decorated our function with noodles.maybe, such an error will be caught and a Fail object will be returned.
Step27: We can now run this workflow and print the output in a table.
|
<ASSISTANT_TASK:>
Python Code:
import sys
def something_dangerous(x):
print("computing reciprocal of", x)
return 1 / x
try:
for x in [2, 1, 0, -1]:
print("1/{} = {}".format(x, something_dangerous(x)))
except ArithmeticError as error:
print("Something went terribly wrong:", error)
input_list = [2, 1, 0, -1]
reciprocals = [something_dangerous(item)
for item in input_list]
print("The reciprocal of", input_list, "is", reciprocals)
try:
reciprocals = [something_dangerous(item)
for item in input_list]
except ArithmeticError as error:
print("Something went terribly wrong:", error)
else:
print("The reciprocal of\n\t", input_list,
"\nis\n\t", reciprocals)
import math
def something_safe(x):
try:
return something_dangerous(x)
except ArithmeticError as error:
return math.nan
reciprocals = [something_safe(item)
for item in input_list]
print("The reciprocal of\n\t", input_list,
"\nis\n\t", reciprocals)
square_roots = [math.sqrt(item) for item in reciprocals]
def safe_sqrt(x):
try:
return math.sqrt(x)
except ValueError as error:
return math.nan
[safe_sqrt(item) for item in reciprocals]
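# Why nan works as a poison value here (illustrative check): arithmetic
# with nan yields nan, so an upstream failure stays visible downstream.
import math
print(math.nan + 1)
print(math.isnan(math.sqrt(math.nan)))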
def secure_function(dangerous_function):
def something_safe(x):
"""A safer version of something dangerous."""
try:
return dangerous_function(x)
except (ArithmeticError, ValueError):
return math.nan
return something_safe
safe_sqrt = secure_function(math.sqrt)
print("⎷2 =", safe_sqrt(2))
print("⎷-1 =", safe_sqrt(-1))
print()
help(safe_sqrt)
import functools
def secure_function(dangerous_function):
"""Create a function that doesn't raise ValueErrors."""
@functools.wraps(dangerous_function)
def something_safe(x):
"""A safer version of something dangerous."""
try:
return dangerous_function(x)
except (ArithmeticError, ValueError):
return math.nan
return something_safe
safe_sqrt = secure_function(math.sqrt)
help(safe_sqrt)
something_safe = secure_function(something_dangerous)
[safe_sqrt(something_safe(item)) for item in input_list]
class Fail:
    """Keep track of failures."""
def __init__(self, exception, trace):
self.exception = exception
self.trace = trace
def extend_trace(self, f):
        """Grow a stack trace."""
self.trace.append(f)
return self
def __str__(self):
return "Fail in " + " -> ".join(
f.__name__ for f in reversed(self.trace)) \
+ ":\n\t" + type(self.exception).__name__ \
+ ": " + str(self.exception)
def secure_function(dangerous_function):
    """Create a function that doesn't raise ValueErrors."""
@functools.wraps(dangerous_function)
def something_safe(x):
        """A safer version of something dangerous."""
if isinstance(x, Fail):
return x.extend_trace(dangerous_function)
try:
return dangerous_function(x)
except Exception as error:
return Fail(error, [dangerous_function])
return something_safe
@secure_function
def reciprocal(x):
return 1 / x
@secure_function
def square_root(x):
return math.sqrt(x)
reciprocals = map(reciprocal, input_list)
square_roots = map(square_root, reciprocals)
for x, result in zip(input_list, square_roots):
print("sqrt( 1 /", x, ") =", result)
import noodles
import math
from noodles.tutorial import display_workflows
@noodles.maybe
def reciprocal(x):
return 1 / x
@noodles.maybe
def square_root(x):
return math.sqrt(x)
results = [square_root(reciprocal(x)) for x in [2, 1, 0, -1]]
for result in results:
print(str(result))
@noodles.schedule
@noodles.maybe
def add(a, b):
return a + b
workflow = add(noodles.schedule(reciprocal)(0),
noodles.schedule(square_root)(-1))
display_workflows(arithmetic=workflow, prefix='errors')
result = noodles.run_single(workflow)
print(result)
!stat -t -c '%A %10s %n' /dev/null
!stat -t -c '%A %10s %n' does-not-exist
from subprocess import run, PIPE, CalledProcessError
@noodles.schedule
@noodles.maybe
def stat_file(filename):
p = run(['stat', '-t', '-c', '%A %10s %n', filename],
check=True, stdout=PIPE, stderr=PIPE)
return p.stdout.decode().strip()
files = ['/dev/null', 'does-not-exist', '/home', '/usr/bin/python3']
workflow = noodles.gather_all(stat_file(f) for f in files)
display_workflows(stat=workflow, prefix='errors')
result = noodles.run_parallel(workflow, n_threads=4)
for file, stat in zip(files, result):
print('stat {:18} -> {}'.format(
file, stat if not noodles.failed(stat)
else 'failed: ' + stat.exception.stderr.decode().strip()))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create a table, fill it with data, and run a query!
Step2: Let's look at the resulting table.
Step3: A bit of SQL terminology
Step4: Let's look at examples of projection, i.e. retrieving only a few attributes of a query.
Step5: The output is still a table, together with its schema -
Step6: The result of a query against a table is again a table
Step7: A few small details
Step8: LIKE
Step9: Removing duplicates
Step10: Sorting results
Step11: Joins - JOIN
Step12: Foreign keys
Step13: Foreign keys are table constraints
Step14: Foreign keys and deletion
Step15: A second variant
Step16: Another way to write the query.
Step17: Notes
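The foreign-key mechanics walked through in the steps above can also be reproduced from plain Python with the stdlib `sqlite3` driver. Note that SQLite leaves enforcement off unless `PRAGMA foreign_keys = ON` is issued per connection (a minimal sketch, not part of the original notebook):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # enforcement is OFF by default in SQLite
con.execute("CREATE TABLE company (cname TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE product (
                   pname TEXT PRIMARY KEY,
                   manufacturer TEXT REFERENCES company(cname))""")
con.execute("INSERT INTO company VALUES ('Acme')")
con.execute("INSERT INTO product VALUES ('Widget', 'Acme')")   # accepted

rejected = False
try:
    con.execute("INSERT INTO product VALUES ('Gadget', 'NoSuchCo')")
except sqlite3.IntegrityError:
    rejected = True   # the foreign key constraint fires
print("insert rejected:", rejected)
```

Deleting a `company` row that is still referenced from `product` fails in exactly the same way, which is the behavior the notebook demonstrates with the `Zua` row.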
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql sqlite://
%%sql drop table if exists product;
create table product(
    pname varchar primary key, -- product name
    price money, -- product price
    category varchar, -- category
    manufacturer varchar NOT NULL -- manufacturer
);
insert into product values('Пряник', 19.99, 'Еда', 'Тульский пряник');
insert into product values('Тетрадь', 29.99, 'Канцелярия', 'Эксмо');
insert into product values('Печенье', 149.99, 'Еда', 'WagonWiels');
insert into product values('Планшет', 203.99, 'Техника', 'Zua');
%sql select * from product;
%%sql
SELECT * from Product
WHERE category = 'Канцелярия' and manufacturer = 'Эксмо'
%%sql
SELECT Pname, Price, Manufacturer
FROM Product;
%%sql
SELECT Pname, Price, Manufacturer
FROM Product
WHERE category='Еда';
%%sql
SELECT
p.manufacturer, p.pname, p.price
FROM
(SELECT distinct p0.category
FROM Product p0
WHERE p0.price < 20.00) cp, -- this is a nested query!
Product p
WHERE
p.category = cp.category and p.price > 20.00
%sql select * from product;
%%sql
SELECT * FROM product
where category LIKE '%да%'
%sql SELECT category from product;
%%sql
-- to remove duplicates, use the DISTINCT keyword
SELECT DISTINCT category from product;
%%sql
-- sorting results
-- the default sort order is ascending
SELECT pname, price, manufacturer
FROM Product
WHERE price > 50
ORDER BY price, pname
%%sql
-- sorting results
-- the sort direction of each component is specified individually
SELECT price, manufacturer
FROM Product
ORDER BY manufacturer ASC, price DESC
%%sql
drop table if exists product; -- This needs to be dropped if exists, see why further down!
drop table if exists company;
create table company (
cname varchar primary key, -- company name uniquely identifies the company.
stockprice money, -- stock price is in money
country varchar); -- country is just a string
insert into company values ('Тульский пряник', 25.0, 'Россия');
insert into company values ('Эксмо', 65.0, 'Россия');
insert into company values ('Zua', 15.0, 'Китай');
%sql select * from company;
%%sql drop table if exists product;
pragma foreign_keys = ON; -- WARNING by default off in sqlite
create table product(
pname varchar primary key, -- name of the product
price money, -- price of the product
category varchar, -- category
manufacturer varchar, -- manufacturer
foreign key (manufacturer) references company(cname));
insert into product values('Пряник', 19.99, 'Еда', 'Тульский пряник');
insert into product values('Тетрадь', 29.99, 'Канцелярия', 'Эксмо');
insert into product values('Печенье', 149.99, 'Еда', 'Тульский пряник');
insert into product values('Планшет', 203.99, 'Техника', 'Zua');
try:
%sql insert into product values('MultiTouch', 203.99, 'Household', 'Google');
except Exception as e:
print (e)
print ("Rejected!")
%%sql
-- the update is rejected!
select * from product;
try:
%sql delete from company where cname = 'Zua';
except Exception as e:
print (e)
print ("Disallowed!")
%%sql
SELECT pname, price
FROM product, company
where manufacturer=cname and country='Россия' and price <= 200;
%%sql -- Part 1: select all the companies from Russia
SELECT distinct cname -- do we need uniqueness here?
from company where country='Россия';
%%sql -- Part 2: products priced up to 200
select distinct pname, price, manufacturer
from product
where price <= 200;
%%sql -- the join expressed as a cross join
SELECT *
FROM
(SELECT DISTINCT pname, price, manufacturer
FROM product
WHERE price <= 200) CheapProducts,
(SELECT DISTINCT cname
FROM company
   WHERE country='Россия') RussianCompanies;
%%sql
-- filtering the cross join
SELECT DISTINCT pname, price
FROM
(SELECT DISTINCT pname, price, manufacturer
FROM product
WHERE price <= 200) CheapProducts,
(SELECT distinct cname
FROM company
  WHERE country='Россия') RussianCompanies
WHERE cname = manufacturer;
%%sql -- duplicate answer
SELECT Country
FROM Product, Company
WHERE Manufacturer=CName AND Category='Еда';
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IMDB Movie Review
Step2: Step 2
Step3: Step 3
Step4: Step 4
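The model in this notebook is compiled with `BinaryCrossentropy(from_logits=True)`, meaning the final `Dense(1)` layer emits raw scores and the sigmoid is folded into the loss. A pure-Python sanity check of the numerically stable logit form of that loss (illustrative only, no TensorFlow required):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce_from_logits(logit, label):
    # Numerically stable binary cross-entropy computed directly on a raw
    # logit; algebraically equal to
    # -label*log(sigmoid(l)) - (1-label)*log(1-sigmoid(l)).
    return max(logit, 0.0) - logit * label + math.log(1.0 + math.exp(-abs(logit)))

logit, label = 1.5, 1.0
direct = -label * math.log(sigmoid(logit)) - (1 - label) * math.log(1 - sigmoid(logit))
print(bce_from_logits(logit, label), direct)
```

The stable form never evaluates `log` near zero, which is why frameworks prefer taking logits over pre-squashed probabilities.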
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import tensorflow_hub as hub
import tensorflow_datasets as ds
## general checks
print("Tensor Flow Version : {}".format(tf.__version__))
print("Eager Mode : {}".format(tf.executing_eagerly()))
print("Hub Version : {}".format(hub.__version__))
print("GPU availablility : {}".format(True if tf.config.experimental.list_physical_devices("GPU") else False))
## check if the dataset holds the imdb dataset or not
"imdb_reviews" in ds.list_builders()
## Load the dataset
if "imdb_reviews" in ds.list_builders():
imdb_ds = ds.load("imdb_reviews")
for key,value in imdb_ds.items():
print(key, value)
### Using TF DS to split the training data into 60:40 validation dataset.
if "imdb_reviews" in ds.list_builders():
    train_data, validation_data, test_data = ds.load(name="imdb_reviews", split=('train[:60%]', 'train[60%:]', 'test'), as_supervised=True)
type(train_data)
## Explore the data set
train_examples_batch, train_labels_batch = next(iter(train_data.batch(10)))
train_examples_batch
train_labels_batch
## Embedding the word from TF Hub pre-build BoW
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding,trainable=True,input_shape=[],dtype=tf.string)
hub_layer(train_examples_batch[:3])
### Define the layers
model = tf.keras.Sequential(name='IMDBSEQModel')
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
### Define the Loss Function and optimizer
model.compile(optimizer='adam',loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy'])
history = model.fit(train_data.shuffle(10000).batch(512), epochs=20, validation_data=validation_data.batch(512), verbose=1)
results = model.evaluate(test_data.batch(512),verbose=2)
for name, value in zip(model.metrics_names, results):
print("{} {}".format(name,value))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set Configuration
Step2: 3. Enter BigQuery Query to Sheet Recipe Parameters
Step3: 4. Execute BigQuery Query to Sheet
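`json_set_fields` works by walking the recipe tree and substituting each `{'field': ...}` placeholder with the matching entry from `FIELDS` (or the placeholder's declared default). A simplified stand-in for that substitution, not the actual StarThinker implementation:

```python
def set_fields(node, values):
    # Recursively replace {'field': {...}} placeholders in a recipe tree
    # with concrete values, falling back to the declared default.
    if isinstance(node, dict):
        for key, val in node.items():
            if isinstance(val, dict) and set(val) == {'field'}:
                spec = val['field']
                node[key] = values.get(spec['name'], spec.get('default'))
            else:
                set_fields(val, values)
    elif isinstance(node, list):
        for item in node:
            set_fields(item, values)

recipe = {'to': {'tab': {'field': {'name': 'tab', 'kind': 'string', 'default': ''}},
                 'range': {'field': {'name': 'range', 'kind': 'string', 'default': 'A1'}}}}
set_fields(recipe, {'tab': 'Sheet1'})
print(recipe)
```

After substitution the recipe contains only plain values, which is what the task executor then consumes.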
|
<ASSISTANT_TASK:>
Python Code:
!pip install git+https://github.com/google/starthinker
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'sheet':'', # Either sheet url or sheet name.
'tab':'', # Name of the tab where to put the data.
'range':'', # Range in the sheet to place the data, leave blank for whole sheet.
'dataset':'', # Existing BigQuery dataset.
'query':'', # Query to pull data from the table.
'legacy':True, # Use Legacy SQL
}
print("Parameters Set To: %s" % FIELDS)
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'from':{
'auth':'service',
'dataset':{'field':{'name':'dataset','kind':'string','order':4,'default':'','description':'Existing BigQuery dataset.'}},
'query':{'field':{'name':'query','kind':'text','order':5,'default':'','description':'Query to pull data from the table.'}},
'legacy':{'field':{'name':'legacy','kind':'boolean','order':6,'default':True,'description':'Use Legacy SQL'}}
},
'to':{
'sheet':{'field':{'name':'sheet','kind':'string','order':1,'default':'','description':'Either sheet url or sheet name.'}},
'tab':{'field':{'name':'tab','kind':'string','order':2,'default':'','description':'Name of the tab where to put the data.'}},
'range':{'field':{'name':'range','kind':'string','order':3,'default':'','description':'Range in the sheet to place the data, leave blank for whole sheet.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
Step2: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
Step3: Note that all orbits are drawn with respect to the center of mass of all interior particles. This coordinate system is known as Jacobi coordinates. It requires that the particles are sorted by ascending semi-major axis within the REBOUND simulation's particle array.
Step4: To get an idea of the three dimensional distribution of orbits, use the slices=True option. This will plot the orbits three times, from different perspectives. You can adjust the dimensions in the z direction using the limz keyword.
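The periods of the orbits above follow from Kepler's third law. In the G = 1 code units that REBOUND defaults to, with a central mass of 1 as in this setup, a one-liner reproduces them (a sketch under those unit assumptions, ignoring the small planet masses):

```python
import math

def orbital_period(a, gm=1.0):
    # Kepler's third law, T = 2*pi*sqrt(a**3 / GM).
    return 2.0 * math.pi * math.sqrt(a ** 3 / gm)

for a in (0.4, 1.0, 1.5):
    print(f"a = {a}: T = {orbital_period(a):.4f}")
```

This is also why the `sim.integrate(sim.t + 0.31)` loop advances the inner planet by a visible fraction of its orbit per frame.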
|
<ASSISTANT_TASK:>
Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)
sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)
sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)
sim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit
%matplotlib inline
fig = rebound.OrbitPlot(sim)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, periastron=True)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2)
from IPython.display import display, clear_output
import matplotlib.pyplot as plt
sim.move_to_com()
for i in range(3):
sim.integrate(sim.t+0.31)
fig = rebound.OrbitPlot(sim,color=True,unitlabel="[AU]",lim=2.)
display(fig)
plt.close(fig)
clear_output(wait=True)
fig = rebound.OrbitPlot(sim,slices=True,color=True,unitlabel="[AU]",lim=2.,limz=0.36)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: While performing CPT measurements, different techniques can be used. Since these can have an impact on the results, it can be interesting to download this additional information in order to better comprehend the CPT data.
Step2: A new subtype has to be a subclass of the AbstractDovSubType class and implement two class variables
Step3: In order to be able to use this subtype in a search query, we have to create a subclass of the original main type (Sondering) and register our new subtype
Step4: The next step is to instantiate the SonderingSearch class with our newly created type
Step5: If everything worked out, you should be able to see the new fields in the get_fields output
Step6: Querying is exactly the same as with the default Sondering type
Step7: One can use the values from the XSD type to add a human-readable column with the different techniques
Step8: Adding location and height details to Boring dataframe
Step9: When instantiating our BoringSearch object, we now explicitly set our new type as objecttype to search
Step10: Searching for boreholes remains exactly the same, but will reveal the extra information in the output dataframe
Step11: Listing GxG for GrondwaterFilters
Step12: Adding color to the Borehole dataframe
Step13: List all glauconite fields for Grondmonsters
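Under the hood, each `XmlField` amounts to evaluating its `source_xpath` against the DOV XML document and casting the text node to `datatype`. A stdlib sketch of that idea, using a hypothetical minimal XML snippet rather than real DOV output:

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal stand-in for a DOV XML document (not real DOV output).
xml = """<sondering>
  <sondeonderzoek><penetratietest>
    <technieken>
      <diepte_techniek>12.5</diepte_techniek>
      <techniek>B</techniek>
    </technieken>
  </penetratietest></sondeonderzoek>
</sondering>"""

root = ET.fromstring(xml)
node = root.find(".//technieken")
depth = float(node.find("diepte_techniek").text)   # cast, like datatype='float'
technique = node.find("techniek").text
print(depth, technique)
```

pydov additionally resolves the `xsd_type` schema to translate codes such as `B` into human-readable labels, which is what the `.map(...)` call on the dataframe uses.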
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from pydov.types.fields import XmlField, XsdType
from pydov.types.abstract import AbstractDovSubType
from pydov.types.sondering import Sondering
class Techniek(AbstractDovSubType):
rootpath = './/sondering/sondeonderzoek/penetratietest/technieken'
fields = [
XmlField(name='techniek_diepte_van',
source_xpath='/diepte_van',
definition='Enkel van toepassing voor het plaatsen van voerbuizen - '
'(code V) of het boren door een harde laag (code B).',
datatype='float'),
XmlField(name='techniek_diepte',
source_xpath='/diepte_techniek',
definition='Diepte waarop techniek toegepast werd.',
datatype='float'),
XmlField(name='techniek',
source_xpath='/techniek',
definition='De gebruikte techniek.',
datatype='string',
xsd_type=XsdType(
xsd_schema='https://www.dov.vlaanderen.be/xdov/schema/latest/xsd/kern/sondering/SonderingDataCodes.xsd',
typename='SondeerTechniekEnumType')),
XmlField(name='techniek_andere',
source_xpath='/techniek_andere',
definition="De gebruikte techniek (enkel van toepassing indien de techniek = 'andere').",
datatype='string')
]
class SonderingTechnieken(Sondering):
subtypes = [Techniek]
from pydov.search.sondering import SonderingSearch
cpts = SonderingSearch(objecttype=SonderingTechnieken)
cpts.get_fields()['techniek']
from pydov.util.location import WithinDistance, Point
df = cpts.search(location=WithinDistance(Point(150000, 150000), 10000, 'meter'))
df['techniek_label'] = df['techniek'].map(cpts.get_fields()['techniek']['values'])
df.head()
from pydov.types.fields import XmlField, XsdType
from pydov.types.boring import Boring
class BoringMethodeXyz(Boring):
__generiekeDataCodes = 'https://www.dov.vlaanderen.be/xdov/schema/latest/xsd/kern/generiek/GeneriekeDataCodes.xsd'
fields = Boring.extend_fields([
XmlField(name='methode_xy',
source_xpath='/boring/xy/methode_opmeten',
definition='Methode waarop de x en y-coordinaat opgemeten werden.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='MethodeOpmetenXyEnumType')),
XmlField(name='betrouwbaarheid_xy',
source_xpath='/boring/xy/betrouwbaarheid',
definition='Betrouwbaarheid van het opmeten van de x en y-coordinaat.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='BetrouwbaarheidXyzEnumType')),
XmlField(name='methode_mv',
source_xpath='/boring/oorspronkelijk_maaiveld/methode_opmeten',
definition='Methode waarop de Z-coördinaat van het maaiveld opgemeten werd.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='MethodeOpmetenZEnumType')),
XmlField(name='betrouwbaarheid_mv',
source_xpath='/boring/oorspronkelijk_maaiveld/betrouwbaarheid',
definition='Betrouwbaarheid van het opmeten van de z-coordinaat van het maaiveld.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='BetrouwbaarheidXyzEnumType')),
XmlField(name='aanvangspeil_mtaw',
source_xpath='/boring/aanvangspeil/waarde',
definition='Hoogte in mTAW van het startpunt van de boring (boortafel, bouwput etc).',
datatype='float'),
XmlField(name='methode_aanvangspeil',
source_xpath='/boring/aanvangspeil/methode_opmeten',
definition='Methode waarop de Z-coördinaat van het aanvangspeil opgemeten werd.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='MethodeOpmetenZEnumType')),
XmlField(name='betrouwbaarheid_aanvangspeil',
source_xpath='/boring/aanvangspeil/betrouwbaarheid',
definition='Betrouwbaarheid van het opmeten van de z-coordinaat van het aanvangspeil.',
datatype='string',
xsd_type=XsdType(
xsd_schema=__generiekeDataCodes,
typename='MethodeOpmetenZEnumType')),
])
from pydov.search.boring import BoringSearch
bs = BoringSearch(objecttype=BoringMethodeXyz)
bs.get_fields()['mv_mtaw']
from pydov.util.location import WithinDistance, Point
df = bs.search(location=WithinDistance(Point(150000, 150000), 10000, 'meter'),
return_fields=('pkey_boring', 'boornummer', 'x', 'y', 'methode_xy', 'betrouwbaarheid_xy',
'mv_mtaw', 'methode_mv', 'betrouwbaarheid_mv', 'aanvangspeil_mtaw',
'methode_aanvangspeil', 'betrouwbaarheid_aanvangspeil', 'start_boring_mtaw'))
df.head()
df_plot = df.groupby('methode_xy').size().sort_values()
ax = df_plot.plot.barh()
ax.set_xlabel("count");
ax.set_title("Number of boreholes per coordinate digitization method");
df_plot = df.groupby('methode_mv').size().sort_values()
ax = df_plot.plot.barh()
ax.set_xlabel("count");
ax.set_title("Number of boreholes per height digitization method");
df_plot = df.groupby('methode_aanvangspeil').size().sort_values()
ax = df_plot.plot.barh()
ax.set_xlabel("count");
ax.set_title("Number of boreholes per `aanvangspeil` digitization method");
from pydov.types.fields import XmlField, XsdType
from pydov.types.abstract import AbstractDovSubType
from pydov.types.grondwaterfilter import GrondwaterFilter
class Gxg(AbstractDovSubType):
rootpath = './/filtermeting/gxg'
fields = [
XmlField(name='gxg_jaar',
source_xpath='/jaar',
definition='jaar (hydrologisch jaar voor lg3 en hg3, kalenderjaar voor vg3)',
datatype='integer'),
XmlField(name='gxg_hg3',
source_xpath='/hg3',
definition='gemiddelde van de drie hoogste grondwaterstanden in een hydrologisch '
'jaar (1 april t/m 31 maart) bij een meetfrequentie van tweemaal per maand',
datatype='float'),
XmlField(name='gxg_lg3',
source_xpath='/lg3',
definition='gemiddelde van de drie laagste grondwaterstanden in een hydrologisch jaar '
'(1 april t/m 31 maart) bij een meetfrequentie van tweemaal per maand',
datatype='float'),
XmlField(name='gxg_vg3',
source_xpath='/vg3',
definition='gemiddelde van de grondwaterstanden op 14 maart, 28 maart en 14 april in '
'een bepaald kalenderjaar',
datatype='float')
]
class GrondwaterFilterGxg(GrondwaterFilter):
subtypes = [Gxg]
from pydov.search.grondwaterfilter import GrondwaterFilterSearch
from owslib.fes import PropertyIsEqualTo
fs = GrondwaterFilterSearch(objecttype=GrondwaterFilterGxg)
fs.get_fields()['gxg_vg3']
df = fs.search(
query=PropertyIsEqualTo('pkey_filter', 'https://www.dov.vlaanderen.be/data/filter/1999-009146')
)
df.head()
from pydov.types.abstract import AbstractDovSubType
from pydov.types.boring import Boring
class Kleur(AbstractDovSubType):
rootpath = './/boring/details/kleur'
fields = [
XmlField(name='diepte_kleur_van',
source_xpath='/van',
definition='Bovenkant van de laag met een bepaalde bekisting, in meter.',
datatype='float'),
XmlField(name='diepte_kleur_tot',
source_xpath='/tot',
definition='Onderkant van de laag met een bepaalde bekisting, in meter.',
datatype='float'),
XmlField(name='kleur',
source_xpath='/kleur',
definition='Grondkleur voor het diepte-interval',
datatype='string')
]
class BoringKleur(Boring):
subtypes = [Kleur]
from pydov.search.boring import BoringSearch
from owslib.fes import PropertyIsEqualTo
bs = BoringSearch(objecttype=BoringKleur)
bs.get_fields()['kleur']
df = bs.search(
query=PropertyIsEqualTo('pkey_boring', 'https://www.dov.vlaanderen.be/data/boring/2004-103984')
)
df.head()
from pydov.types.fields import XmlField
from owslib.fes import PropertyIsEqualTo
from pydov.search.grondmonster import GrondmonsterSearch
from pydov.types.grondmonster import Grondmonster
class GlauconietWaarden(Grondmonster):
fields = Grondmonster.extend_fields([
XmlField(name='glauconiet_gt500',
source_xpath='/grondmonster/observatieData/observatie['
'parameter="GLAUCONIET_GT500"]/'
'waarde_numeriek',
definition='Glauconiet fractie groter 500 micron (%)',
datatype='float'),
XmlField(name='glauconiet_tss',
source_xpath='/grondmonster/observatieData/observatie['
'parameter="GLAUCONIET_TSS"]/'
'waarde_numeriek',
definition='Glauconiet fractie kleiner 500micron en groter 63micron (%)',
datatype='float'),
XmlField(name='glauconiet_kl63',
source_xpath='/grondmonster/observatieData/observatie['
'parameter="GLAUCONIET_KL63"]/'
'waarde_numeriek',
definition='Glauconiet fractie kleiner 63micron (%)',
datatype='float')
])
gm = GrondmonsterSearch(objecttype=GlauconietWaarden)
df = gm.search(
query=PropertyIsEqualTo(
propertyname='pkey_grondmonster', literal='https://www.dov.vlaanderen.be/data/grondmonster/2017-172111')
)
df.head()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are also other types of cells, for example, "Markdown". Double click this cell to view raw Markdown markup content.
Step2: If you are not sure about a function, class, or variable, use its name with '?' at the end to get the available documentation. Here is an example for the common numpy module
Step3: OK, it seems that now we are ready to start our Hi-C data analysis! I've placed a Go top shortcut in each section so that you can navigate quickly throughout the notebook.
Step4: 1.2 Genome
Step5: For Hi-C data mapping we will use hiclib. It utilizes bowtie 2 read mapping software. Bowtie 2 indexes the genome prior to reads mapping in order to reduce memory usage. Usually, you have to run genome indexing, but I've already done this time-consuming step. That's why code for this step is included but commented.
Step6: 1.3 Iterative mapping
Step7: Then we need to set some parameters and prepare our environment
Step8: Let's take a look at .sam files that were created during iterative mapping
Step9: 1.4 Making sense of mapping output
Step10: Let's take a look at the created file
Step11: <a id="filtering"></a>
Step12: Nice visualisation of the data
Step13: <a id="binning"></a>
Step14: <a id="visualisation"></a>
Step15: <a id="correction"></a>
Step16: <a id="meta"></a>
Step17: Seems to be nothing special with compartments. What if we had much better coverage by reads? Let's take a look at the dataset from Rao et al. 2014, GEO GSE63525, HIC069
Step18: 7.2 Topologically associating domains (TADs)
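The `iterativeCorrectWithoutSS` step above performs ICE-style matrix balancing. A toy pure-Python version of the idea, repeatedly rescaling by row-sum biases until every row of the symmetric contact matrix carries the same total, looks roughly like this (a sketch, not hiclib's actual implementation):

```python
def iterative_correct(m, n_iter=200):
    # Toy ICE/Sinkhorn-style balancing: divide each entry by the product of
    # its row and column biases, where biases are mean-normalized row sums.
    size = len(m)
    m = [row[:] for row in m]
    for _ in range(n_iter):
        sums = [sum(row) for row in m]
        mean = sum(sums) / size
        bias = [s / mean for s in sums]
        for i in range(size):
            for j in range(size):
                m[i][j] /= bias[i] * bias[j]
    return m

mtx = [[1.0, 4.0, 2.0],
       [4.0, 9.0, 1.0],
       [2.0, 1.0, 3.0]]
balanced = iterative_correct(mtx)
row_sums = [sum(row) for row in balanced]
print([round(s, 6) for s in row_sums])
```

Equal row sums are the "equal visibility" assumption behind ICE: after correction, differences in the heatmap reflect contact structure rather than per-bin coverage biases.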
|
<ASSISTANT_TASK:>
Python Code:
# This is regular Python comment inside Jupyter "Code" cell.
# You can easily run "Hello world" in the "Code" cell (focus on the cell and press Shift+Enter):
print("Hello world!")
%%bash
echo "Current directory is: "; pwd
echo "List of files in the current directory is: "; ls
# Module import under custom name
import numpy as np
# You've started asking questions about it
np?
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R1.fastq'
%%bash
head -n 8 '../DATA/FASTQ/K562_B-bulk_R2.fastq'
%%bash
ls ../GENOMES/HG19_FASTA
#%%bash
#bowtie2-build /home/jovyan/GENOMES/HG19_FASTA/chr1.fa /home/jovyan/GENOMES/HG19_IND/hg19_chr1
#Time consuming step
%%bash
ls ../GENOMES/HG19_IND
import os
from hiclib import mapping
from mirnylib import h5dict, genome
%%bash
which bowtie2
# Bowtie 2 path
%%bash
pwd
# Current working directory path
# Setting parameters and environmental variables
bowtie_path = '/opt/conda/bin/bowtie2'
enzyme = 'DpnII'
bowtie_index_path = '/home/jovyan/GENOMES/HG19_IND/hg19_chr1'
fasta_path = '/home/jovyan/GENOMES/HG19_FASTA/'
chrms = ['1']
# Reading the genome
genome_db = genome.Genome(fasta_path, readChrms=chrms)
# Creating directories for further data processing
if not os.path.exists('tmp/'):
    os.mkdir('tmp/')
if not os.path.exists('../DATA/SAM/'):
os.mkdir('../DATA/SAM/')
# Set parameters for iterative mapping
min_seq_len = 25
len_step = 5
nthreads = 2
temp_dir = 'tmp'
bowtie_flags = '--very-sensitive'
infile1 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R1.fastq'
infile2 = '/home/jovyan/DATA/FASTQ1/K562_B-bulk_R2.fastq'
out1 = '/home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam'
out2 = '/home/jovyan/DATA/SAM/K562_B-bulk_R2.chr1.sam'
# Iterative mapping itself. Time consuming step!
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile1,
out_sam_path = out1,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
mapping.iterative_mapping(
bowtie_path = bowtie_path,
bowtie_index_path = bowtie_index_path,
fastq_path = infile2,
out_sam_path = out2,
min_seq_len = min_seq_len,
len_step = len_step,
nthreads = nthreads,
temp_dir = temp_dir,
bowtie_flags = bowtie_flags)
%%bash
ls /home/jovyan/DATA/SAM/
%%bash
head -n 10 /home/jovyan/DATA/SAM/K562_B-bulk_R1.chr1.sam.25
# Create the directory for output
if not os.path.exists('../DATA/HDF5/'):
os.mkdir('../DATA/HDF5/')
# Define file name for output
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
# Open output file
mapped_reads = h5dict.h5dict(out)
# Parse mapping data and write to output file
mapping.parse_sam(
sam_basename1 = out1,
sam_basename2 = out2,
out_dict = mapped_reads,
genome_db = genome_db,
enzyme_name = enzyme,
save_seqs = False,
keep_ids = False)
%%bash
ls /home/jovyan/DATA/HDF5/
import h5py
# Reading the file
a = h5py.File('/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5', 'r')
# "a" variable has dictionary-like structure, we can view its keys, for example:
list( a.keys() )
# Mapping positions for forward reads are stored under 'cuts1' key:
a['cuts1'].value
from hiclib import fragmentHiC
inp = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments.hdf5'
out = '/home/jovyan/DATA/HDF5/K562_B-bulk.fragments_filtered.hdf5'
# Create output file
fragments = fragmentHiC.HiCdataset(
filename = out,
genome = genome_db,
maximumMoleculeLength= 500,
mode = 'w')
# Parse input data
fragments.parseInputData(
dictLike=inp)
# Filtering
fragments.filterRsiteStart(offset=5) # reads map too close to restriction site
fragments.filterDuplicates() # remove PCR duplicates
fragments.filterLarge() # remove too large restriction fragments
fragments.filterExtreme(cutH=0.005, cutL=0) # remove fragments with too high and low counts
# Some hidden filteres were also applied, we can check them all:
fragments.printMetadata()
import pandas as pd
df_stat = pd.DataFrame(list(fragments.metadata.items()), columns=['Feature', 'Count'])
df_stat
df_stat['Ratio of total'] = 100*df_stat['Count']/df_stat.loc[2,'Count']
df_stat
# Define file name for binned data. Note "{}" prepared for string formatting
out_bin = '/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'
res_kb = [100, 20] # Several resolutions in Kb
for res in res_kb:
print(res)
outmap = out_bin.format(str(res)+'kb') # String formatting
fragments.saveHeatmap(outmap, res*1000) # Save heatmap
del fragments # delete unwanted object
# Importing visualisation modules
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
%matplotlib inline
from hiclib.binnedData import binnedDataAnalysis
res = 100 # Resolution in Kb
# prepare to read the data
data_hic = binnedDataAnalysis(resolution=res*1000, genome=genome_db)
# read the data
data_hic.simpleLoad('/home/jovyan/DATA/HDF5/K562_B-bulk.binned_{}.hdf5'.format(str(res)+'kb'),'hic')
mtx = data_hic.dataDict['hic']
# show heatmap
plt.figure(figsize=[15,15])
plt.imshow(mtx[0:200, 0:200], cmap='jet', interpolation='None')
# Additional data filtering
data_hic.removeDiagonal()
data_hic.removePoorRegions()
data_hic.removeZeros()
data_hic.iterativeCorrectWithoutSS(force=True)
data_hic.restoreZeros()
mtx = data_hic.dataDict['hic']
plt.figure(figsize=[15,15])
plt.imshow(mtx[200:500, 200:500], cmap='jet', interpolation='None')
# Load compartments computed previously based on K562 dataset from Rao et al. 2014
eig = np.loadtxt('/home/jovyan/DATA/ANNOT/comp_K562_100Kb_chr1.tsv')
eig
from matplotlib import gridspec
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto')
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
mtx_Rao = np.genfromtxt('../DATA/ANNOT/Rao_K562_chr1.csv', delimiter=',')
bgn = 0
end = 500
fig = plt.figure(figsize=(10,10))
gs = gridspec.GridSpec(2, 1, height_ratios=[20,2])
gs.update(wspace=0.0, hspace=0.0)
ax = plt.subplot(gs[0,0])
ax.matshow(mtx_Rao[bgn:end, bgn:end], cmap='jet', origin='lower', aspect='auto', vmax=1000)
ax.set_xticks([])
ax.set_yticks([])
axl = plt.subplot(gs[1,0])
plt.plot(range(end-bgn), eig[bgn:end] )
plt.xlim(0, end-bgn)
plt.xlabel('Eigenvector values')
ticks = range(bgn, end+1, 100)
ticklabels = ['{} Kb'.format(x) for x in ticks]
plt.xticks(ticks, ticklabels)
print('')
# Import Python package
import lavaburst
good_bins = mtx.astype(bool).sum(axis=0) > 1 # We have to mask rows/cols if data is missing
gam=[0.15, 0.25, 0.5, 0.75, 1.0] # set of parameters gamma for TADs calling
segments_dict = {}
for gam_current in gam:
print(gam_current)
S = lavaburst.scoring.armatus_score(mtx, gamma=gam_current, binmask=good_bins)
model = lavaburst.model.SegModel(S)
segments = model.optimal_segmentation() # Positions of TADs for input matrix
segments_dict[gam_current] = segments.copy()
A = mtx.copy()
good_bins = A.astype(bool).sum(axis=0) > 0
At = lavaburst.utils.tilt_heatmap(mtx, n_diags=100)
start_tmp = 0
end_tmp = 500
import seaborn as sns     # used below for the cubehelix palette
import matplotlib as mpl  # used below for mpl.cm.get_cmap
f = plt.figure(figsize=(20, 6))
ax = f.add_subplot(111)
blues = sns.cubehelix_palette(0.4, gamma=0.5, rot=-0.3, dark=0.1, light=0.9, as_cmap=True)
ax.matshow(np.log(At[start_tmp: end_tmp]), cmap=blues)
cmap = mpl.cm.get_cmap('brg')
gammas = segments_dict.keys()
for n, gamma in enumerate(gammas):
segments = segments_dict[gamma]
for a in segments[:-1]:
if a[1]<start_tmp or a[0]>end_tmp:
continue
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
a = segments[-1]
ax.plot([a[0]-start_tmp, a[0]+(a[1]-a[0])/2-start_tmp], [0, -(a[1]-a[0])], c=cmap(n/len(gammas)), alpha=0.5, label=gamma)
ax.plot([a[0]+(a[1]-a[0])/2-start_tmp, a[1]-start_tmp], [-(a[1]-a[0]), 0], c=cmap(n/len(gammas)), alpha=0.5)
ax.set_xlim([0,end_tmp-start_tmp])
ax.set_ylim([100,-100])
ax.legend(bbox_to_anchor=(1.1, 1.05))
ax.set_aspect(0.5)
# Let's check what the mean TAD sizes are with the different parameters:
res = 100  # Hi-C bin size in kb (the matrix is binned at 100 kb)
for gam_current in gam:
segments = segments_dict[gam_current]
tad_lens = segments[:,1]-segments[:,0]
good_lens = (tad_lens>=200/res)&(tad_lens<100)
print(res*1000*np.mean(tad_lens[good_lens]))
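# Not part of the original notebook: the segment arrays returned by lavaburst can
# also be turned into TAD boundary positions. A minimal sketch (the helper name
# and the 100 kb bin size are assumptions):

```python
import numpy as np

def segments_to_boundaries(segments, res_kb=100):
    """Convert lavaburst-style segments (N x 2 array of bin indices)
    into a sorted array of unique boundary positions in kb."""
    segments = np.asarray(segments)
    bounds = np.unique(np.concatenate([segments[:, 0], segments[:, 1]]))
    return bounds * res_kb

# toy example: three consecutive domains
toy = np.array([[0, 5], [5, 12], [12, 20]])
print(segments_to_boundaries(toy))  # boundaries at 0, 500, 1200, 2000 kb
```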
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Columns Interested
Step2: Descriptive Analysis
Step3: Fig 1a shows the sorted issued loan amounts from low to high.<br/>
Step4: Fig 2a and Fig 2b both show the sorted applicant annual income from low to high. The former indicates extreme values, and the latter plots only those values below the 75% quantile, which looks more sensible.<br/>
Step5: Fig 3a shows the approved loan amount against the applicants' annual income. <br/>
Step6: Fig 3b is plot of the loan amount VS applicant annual income with all extreme income amounts being excluded.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.svm import LinearSVC
from sklearn.svm import LinearSVR
import seaborn as sns
import matplotlib.pylab as pl
%matplotlib inline
#import matplotlib.pyplot as plt
## 2015
df_app_2015 = pd.read_csv('data/LoanStats3d_securev1.csv.zip', compression='zip', low_memory=False,\
header=1)
df_app_2015.loan_status.unique()
df_app_2015.head(5)
df_app_2015['delinq_amnt'].unique()
df_app_2015.info(max_cols=111)
df_app_2015.groupby('title').loan_amnt.mean()
df_app_2015.groupby('purpose').loan_amnt.mean()
df_app_2015['emp_length'].unique()
## selected columns
df = df_app_2015.loc[:, ['loan_status','loan_amnt', 'int_rate', 'grade', 'sub_grade',\
'purpose',\
'annual_inc', 'emp_length', 'home_ownership',\
'fico_range_low','fico_range_high',\
'num_actv_bc_tl', 'tot_cur_bal', 'mort_acc','num_actv_rev_tl',\
'pub_rec_bankruptcies','dti' ]]
df.head(3)
len(df.dropna())
df.shape
df.loan_status.unique()
len(df[df['loan_status']=='Fully Paid'])
len(df[df['loan_status']=='Default'])
len(df[df['loan_status']=='Charged Off'])
len(df[df['loan_status']=='Late (31-120 days)'])
df.info()
df.loan_status.unique()
## Convert applicable fields to numeric (I only select "Interest Rate" to use for this analysis)
df.loc[:, 'int_rate'] = df.loc[:, ['int_rate']]\
.applymap(lambda e: pd.to_numeric(str(e).rstrip()[:-1], errors='coerce'))
df.info()
df = df.rename(columns={"int_rate": "int_rate(%)"})
df.head(3)
#len(df.dropna(thresh= , axis=1).columns)
df.describe()
# 1. Loan Amount distribution
# # create plots and histogram to visualize total loan amounts
fig = pl.figure(figsize=(8,10))
ax1 = fig.add_subplot(211)
ax1.plot(range(len(df)), sorted(df.loan_amnt), '.', color='purple')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Loan Amount ($)')
ax1.set_title('Fig 1a - Sorted Issued Loan Amount (2015)', size=15)
# all_ histogram
# pick upper bound 900 to exclude too large numbers
ax2 = fig.add_subplot(212)
ax2.hist(df.loan_amnt, range=(df.loan_amnt.min(), 36000), color='purple')
ax2.set_xlabel('Loan Amount -$', size=12)
ax2.set_ylabel('Counts',size=12)
ax2.set_title('Fig 1b - Sorted Issued Loan Amount (2015)', size=15)
inc_75 = df.describe().loc['75%', 'annual_inc']
count_75 = int(len(df)*0.75)
# 2. Applicant Anual Income Distribution
fig = pl.figure(figsize=(8,16))
ax0 = fig.add_subplot(311)
ax0.plot(range(len(df.annual_inc)), sorted(df.annual_inc), '.', color='blue')
ax0.set_xlabel('Loan Applicant Count')
ax0.set_ylabel('Applicant Annual Income ($)')
ax0.set_title('Fig 2a - Sorted Applicant Annual Income-all ($) (2015)', size=15)
# use 75% quantile to plot the graph and histograms -- excluding extreme values
inc_75 = df.describe().loc['75%', 'annual_inc']
inc_below75 = df.annual_inc[df.annual_inc <= inc_75]
count_75 = int(len(df)*0.75)
ax1 = fig.add_subplot(312)
ax1.plot(range(count_75), sorted(df.annual_inc)[:count_75], '.', color='blue')
ax1.set_xlabel('Loan Applicant Count')
ax1.set_ylabel('Applicant Annual Income ($)')
ax1.set_title('Fig 2b - Sorted Applicant Annual Income-75% ($) (2015)',size=15)
# all_ histogram
# pick upper bound 900 to exclude too large numbers
ax2 = fig.add_subplot(313)
ax2.hist(df.annual_inc, range=(df.annual_inc.min(), inc_75), color='blue')
ax2.set_xlabel('Applicant Annual Income -$', size=12)
ax2.set_ylabel('Counts',size=12)
ax2.set_title('Fig 2c - Sorted Applicant Income-75% ($) (2015)',size=15)
# 3. Loan amount and Applicant Annual Income
# View all
pl.figure(figsize=(6,4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, 0.2e7) # df.annual_inc.max()
pl.title('Fig 3a - Loan Amount VS Applicant Annual Income_all', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
# 3b
pl.figure(figsize=(6,4))
pl.plot(df.annual_inc, df.loan_amnt, '.')
pl.ylim(0, 40000)
pl.xlim(0, inc_75)
pl.title('Fig 3b - Loan Amount VS Applicant Annual Income_75%', size=15)
pl.ylabel('Loan Amount ($)', size=15)
pl.xlabel('Applicant Annual Income ($)', size=15)
pl.plot(np.log(df.annual_inc), np.log(df.loan_amnt), '.')
# 4. Average loan amount groupby grade
mean_loan_grade = df.groupby('grade')['loan_amnt'].mean()
mean_loan_grade
sum_loan_grade = df.groupby('grade')['loan_amnt'].sum()
sum_loan_grade
fig = pl.figure(figsize=(8,12)) #16,5
ax0 = fig.add_subplot(211)
ax0.plot(range(len(mean_loan_grade)), mean_loan_grade, 'o', color='blue')
ax0.set_ylim(0, 23000)
ax0.set_xlim(-0.5, len(mean_loan_grade))
ax0.set_xticks(range(len(mean_loan_grade)))
ax0.set_xticklabels(('A','B','C','D','E','F','G'))
ax0.set_xlabel('Grade')
ax0.set_ylabel('Average Loan Amount ($)')
ax0.set_title('Fig 4a - Average Loan Amount by Grade ($) (2015)', size=15)
ax1 = fig.add_subplot(212)
ax1.plot(range(len(sum_loan_grade)), sum_loan_grade, 'o', color='brown')
ax1.set_ylim(0, 2.3e9)
ax1.set_xlim(-0.5, len(sum_loan_grade))
ax1.set_xticks(range(len(sum_loan_grade)))
ax1.set_xticklabels(('A','B','C','D','E','F','G'))
ax1.set_xlabel('Grade')
ax1.set_ylabel('Total Loan Amount ($)')
ax1.set_title('Fig 4b - Total Loan Amount by Grade ($) (2015)', size=15)
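# Not part of the original notebook: the same groupby pattern used above for loan
# amounts can summarize interest rates per grade. A minimal sketch on synthetic
# data (the column names mirror the ones above; the values are made up):

```python
import pandas as pd

toy = pd.DataFrame({
    "grade": ["A", "A", "B", "B", "B"],
    "int_rate(%)": [6.0, 8.0, 10.0, 11.0, 12.0],
})
# mean interest rate per grade, same shape of result as mean_loan_grade above
mean_rate_grade = toy.groupby("grade")["int_rate(%)"].mean()
print(mean_rate_grade)  # A -> 7.0, B -> 11.0
```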
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this chapter we consider our first system of hyperbolic conservation laws. We study the acoustics equations that were introduced briefly in Introduction. We first describe the physical context of this system and then investigate its characteristic structure and the solution to the Riemann problem. This system is described in more detail in Chapter 3 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque 2002)</a></cite>.
Step2: In the second and third steps, we evolve the characteristic variables $w$ and then transform back to the original variables. We take as initial pressure a Gaussian, with zero initial velocity. We visualize this below, where the time evolution in the characteristic variables is shown in the first plot, and the time evolution of the velocity is shown in the second plot.
Step3: In the live notebook, you can advance the above solutions in time and select which of the two characteristic variables to display.
Step4: The three constant states are related by the jumps
Step5: One way to visualize the Riemann solution for a system of two equations is by looking at the $p-u$ phase plane. In the figure below, we show the two initial conditions of the Riemann problem $q_\ell$ and $q_r$ as points in the phase space; the lines passing through these points correspond to the eigenvectors, $r_1$ and $r_2$.
Step6: Note that the eigenvectors are given in terms of the impedance $Z$, which depends on the density $\rho$
Step7: We can also observe the structure of the solution in the phase plane. In the second plot, we show the structure of the solution in the phase plane.
Step8: We again show the Riemann solution in space and in the phase plane, where the symmetry is also evident.
Step9: Gaussian initial condition
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import numpy as np
from exact_solvers import acoustics, acoustics_demos
from IPython.display import IFrame, HTML, Image
acoustics_demos.decompose_q_interactive()
acoustics_demos.char_solution_interactive()
Image('figures/acoustics_xt_plane.png', width=350)
# Initial data for Riemann problem
rho = 0.5 # density
bulk = 2. # bulk modulus
ql = np.array([3,2]) # Left state
qr = np.array([3,-2]) # Right state
# Calculated parameters
c = np.sqrt(bulk/rho) # calculate sound speed
Z = np.sqrt(bulk*rho) # calculate impedance
print("With density rho = %g, bulk modulus K = %g" \
% (rho,bulk))
print("We compute: sound speed c = %g, impedance Z = %g \n" \
% (c,Z))
# Call and print Riemann solution
states, speeds, reval = \
acoustics.exact_riemann_solution(ql ,qr, [rho, bulk])
print("The states ql, qm and qr are: ")
print(states, "\n")
print("The left and right wave speeds are:")
print(speeds)
acoustics_demos.interactive_phase_plane(ql,qr,rho,bulk)
ql = np.array([5,0])
qr = np.array([1,0])
rho = 1.0
bulk = 4.0
acoustics_demos.riemann_plot_pplane(ql,qr,rho,bulk)
ql = np.array([2,1])
qr = np.array([2,-1])
rho = 1.0
bulk = 1.5
acoustics_demos.riemann_plot_pplane(ql,qr,rho,bulk)
IFrame(src='phase_plane/acoustics_small_notitle.html',
width=980, height=340)
anim = acoustics_demos.bump_animation(numframes = 50)
HTML(anim)
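# Not part of the original notebook: the middle state qm shown in the phase-plane
# plots above has a closed form for linear acoustics. A sketch using the standard
# formulas for constant impedance Z (the helper name is mine):

```python
import numpy as np

def middle_state(ql, qr, Z):
    """Middle state (pm, um) of the linear acoustics Riemann problem,
    obtained by intersecting the two Hugoniot lines in the p-u plane."""
    pl, ul = ql
    pr, ur = qr
    pm = 0.5 * (pl + pr) - 0.5 * Z * (ur - ul)
    um = 0.5 * (ul + ur) - 0.5 * (pr - pl) / Z
    return np.array([pm, um])

# symmetric colliding-flow data as in the example above (with Z = 1)
print(middle_state([3.0, 2.0], [3.0, -2.0], Z=1.0))  # -> [5. 0.]
```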
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The file has been saved in the current directory
Step2: Launching pytest is as easy as moving to the right directory and running it from the command line
Step3: To get less information, use the quick option -q
Step4: To get more information of which test has been run, use the verbose option -v
Step5: The basic test test_a has passed.
Step6: We see pytest has collected and run 4 items, 1 from the first file, and 3 from the second.
Step7: Using fixtures
Step8: Extended fixtures
Step9: Two tests have been written but pytest has collected 6 items, meaning 3 items per test. Using the verbose option, one can see at the end of each test which parameter of the fixture has been used for this given test.
Step10: numbers has 3 parameters, sign has 2 parameters, hence the simultaneous use of these fixtures probes 3 x 2 = 6 combinations of parameters. As expected, the test has failed for the -50 value.
Step11: This directory contains a Makefile with three utility commands
Step12: In order to visualize the directory tree of the module and test directory, I recommend using the Linux utility tree, which can be installed in the VM with
Step16: Except from the Makefile, there are two directories, the module euclid and the test directory tests.
Step23: conftest.py contains 3 fixtures
Step24: test_mytrigo.py contains 6 tests, some of which rely on 2 fixtures. However, as mentioned, the fixtures can be used without being imported.
Step25: These tests summarize well all the properties explained in the previous paragraphs
Step26: One can see that hello.py and mytime.py are not covered by tests.
|
<ASSISTANT_TASK:>
Python Code:
%%file my_first_test.py
def f(a):
return a
def test_a():
assert f(1) == 1
!ls *.py
!py.test
!py.test -q
!py.test -v
%%file my_second_test.py
def f(a):
return a
def test_a():
assert f(1) == 1
def test_b():
assert f(2) == 1
def test_c():
assert f(3) == 1 + 1 + 1
!py.test -v
%%file my_third_test.py
import pytest
def h(n):
if n < 0:
raise ValueError("Negative value detected")
return n
def test_h():
assert h(1) == 1
def test_exception_h():
with pytest.raises(ValueError):
h(-1)
!py.test -v my_third_test.py
%%file my_first_fixture.py
import pytest
def g(a):
return 2 * a
@pytest.fixture
def numbers():
return 42
def test_g(numbers):
assert g(numbers) == numbers + numbers
def test_2g(numbers):
assert g(2*numbers) == 4 * numbers
!py.test -v my_first_fixture.py
%%file my_parametrized_fixture.py
import pytest
def g(a):
return 2 * a
@pytest.fixture(params=[10, 50, 100])
def numbers(request):
return request.param
def test_g(numbers):
assert g(numbers) == numbers + numbers
def test_2g(numbers):
assert g(2*numbers) == 4 * numbers
!py.test -v my_parametrized_fixture.py
%%file my_combined_fixture.py
import pytest
def w(a):
if a == -50:
raise ValueError("The value cannot be -50")
return a
@pytest.fixture(params=[10, 50, 100])
def numbers(request):
return request.param
@pytest.fixture(params=[-1, 1])
def sign(request):
return request.param
def test_w(numbers, sign):
value = numbers * sign
assert w(value) == value
!py.test -v my_combined_fixture.py
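# Not part of the original notebook: the 6 collected items above correspond to the
# Cartesian product of the two fixture parameter lists, which can be enumerated
# directly to see which combination fails:

```python
import itertools

numbers_params = [10, 50, 100]
sign_params = [-1, 1]
combos = list(itertools.product(numbers_params, sign_params))
print(len(combos), combos)  # 3 x 2 = 6 combinations

# the failing case is the combination producing -50
failing = [(n, s) for n, s in combos if n * s == -50]
print(failing)  # [(50, -1)]
```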
# Depending on where you are at this point do not run this
%cd ../euclid/
!make clean
!tree
# %load tests/conftest.py
#!/usr/bin/env python
import pytest
import numpy as np
@pytest.fixture
def simplearray():
    """Basic fixture: a simple numpy array for general testing purposes"""
    return np.array([1, 2, 3])

@pytest.fixture(params=[10, 100, 1000])
def arraysize(request):
    """Parametrized fixture: a numpy array with a varying size

    The parameters should be set as a list under the `params` keyword.
    Then in the fixture definition, the `request` argument must be used
    in order to retrieve the parameters.
    """
    return np.arange(request.param)

@pytest.fixture(params=[np.int32, np.int64, np.float32, np.float64])
def dtypes(request):
    """Parametrized fixture: returns numpy data types

    More information on fixtures can be found on
    http://pytest.org/latest/fixture.html
    and
    http://pytest.org/latest/builtin.html#_pytest.python.fixture
    """
    return request.param
# %load tests/test_mytrigo.py
#!/usr/bin/env python
import pytest
import numpy as np
from numpy.testing import assert_almost_equal
from euclid.maths import trigo, trigonp
def test_trigo_simple():
    """The simplest test:
    assert <condition that should be met>
    """
    assert trigo(10) == 1

def test_trigo_simple_fail():
    """Catching a specific Exception:
    with pytest.raises(Exception):
        <call that should raise the exception>
    """
    with pytest.raises(ValueError):
        trigo(-40)

def test_trigonp_simple(simplearray):
    """Using a simple fixture from the conftest.py file.

    The fixture to be used in the test should be given as an argument
    of the test: here a basic numpy array.
    The fixture is then called during the test.
    It avoids hardcoding the same array for every test.
    In this specific test, since the equality test "==" on numpy arrays
    returns an array of booleans, one must check that all the elements
    are `True` with the np.all() method.
    However, due to floating point errors in the calculation of trigonp,
    the returned values are not always equal to one. Thus the use of
    np.allclose() allows for some tiny departure around the checked
    value.
    """
    assert np.allclose(trigonp(simplearray),
                       np.ones_like(simplearray, dtype=float))

def test_trigonp_size(arraysize):
    """This time, the fixture `arraysize` takes several parameters in input
    (see conftest.py).

    This means that every test using the fixture will be run for every
    parameter of the fixture.
    In this case the test will be run on arrays with different sizes.
    Moreover, we introduce here the test triggers provided in the
    `numpy.testing` submodule.
    The various numpy assert methods can be parametrized in many ways to
    ensure both the precision and the accuracy of the tests.
    """
    assert_almost_equal(trigonp(arraysize),
                        np.ones_like(arraysize, dtype=float))

def test_trigonp_dtype(simplearray, dtypes):
    """Various fixtures can be used for a given test. Again they need
    to be mentioned as arguments of the test.
    Here we test the method on the `simplearray` and different data
    types `dtypes`.
    """
    assert_almost_equal(trigonp(simplearray),
                        np.ones_like(simplearray, dtype=dtypes))

def test_trigonp_dim_and_dtype(arraysize, dtypes):
    """In the specific case where several fixtures are parametrized,
    a single test will be run for each combination of the full parameter
    set.
    Here we test both the array size and the data type, and for each
    size of array, all data types will be tested, that is
    3 sizes x 4 dtypes = 12 combinations.
    """
    assert_almost_equal(trigonp(arraysize),
                        np.ones_like(arraysize, dtype=dtypes))
!make test # or !py.test -v
!py.test --cov euclid/
from IPython.core.display import HTML
HTML(open('../styles/notebook.css', 'r').read())
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Read associations
Step2: 2b. Or you can read 'gene2go' once and load all species...
Step3: 3. Import protein-coding information for human and fly
Step4: 4. Calculate Gene Ontology coverage
Step5: 5 Report Gene Ontology coverage for human and fly
|
<ASSISTANT_TASK:>
Python Code:
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
from goatools.associations import read_ncbi_gene2go
geneid2gos_human = read_ncbi_gene2go(gene2go, taxids=[9606])
geneid2gos_fly = read_ncbi_gene2go(gene2go, taxids=[7227])
from collections import defaultdict, namedtuple
taxid2asscs = defaultdict(lambda: defaultdict(lambda: defaultdict(set)))
geneid2gos_all = read_ncbi_gene2go(
gene2go,
taxids=[9606, 7227],
taxid2asscs=taxid2asscs)
from goatools.test_data.genes_NCBI_9606_ProteinCoding import GeneID2nt as GeneID2nt_human
from goatools.test_data.genes_NCBI_7227_ProteinCoding import GeneID2nt as GeneID2nt_fly
lst = [
(9606, GeneID2nt_human),
(7227, GeneID2nt_fly)
]
cov_data = []
NtCov = namedtuple("NtCov", "taxid num_GOs num_covgenes coverage num_allgenes")
for taxid, pcGeneID2nt in lst:
# Get GeneID2GOs association for current species
geneid2gos = taxid2asscs[taxid]['GeneID2GOs']
# Restrict GeneID2GOs to only protein-coding genes for this report
pcgene_w_gos = set(geneid2gos.keys()).intersection(set(pcGeneID2nt.keys()))
num_pcgene_w_gos = len(pcgene_w_gos)
num_pc_genes = len(pcGeneID2nt)
# Number of GO terms annotated to protein-coding genes
gos_pcgenes = set()
for geneid in pcgene_w_gos:
gos_pcgenes |= geneid2gos[geneid]
# Print report data
cov_data.append(NtCov(
taxid = taxid,
num_GOs = len(gos_pcgenes),
num_covgenes = num_pcgene_w_gos,
coverage = 100.0*num_pcgene_w_gos/num_pc_genes,
num_allgenes = num_pc_genes))
from __future__ import print_function
print(" taxid GOs GeneIDs Coverage")
print("------ ------ ------- ----------------------")
fmtstr = "{TAXID:>6} {N:>6,} {M:>7,} {COV:2.0f}% GO coverage of {TOT:,} protein-coding genes"
for nt in cov_data:
print(fmtstr.format(
TAXID = nt.taxid,
N = nt.num_GOs,
M = nt.num_covgenes,
COV = nt.coverage,
TOT = nt.num_allgenes))
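# Not part of the original notebook: the coverage figure computed above reduces to
# a set intersection between annotated genes and protein-coding genes. A
# self-contained toy version (the helper name and toy data are mine):

```python
def go_coverage(geneid2gos, pc_geneids):
    """Return (num_GOs, num_covered, coverage_pct) restricted to
    protein-coding genes, mirroring the loop above."""
    covered = set(geneid2gos) & set(pc_geneids)
    gos = set()
    for geneid in covered:
        gos |= geneid2gos[geneid]
    return len(gos), len(covered), 100.0 * len(covered) / len(pc_geneids)

# gene 99 is annotated but not protein-coding, genes 3 and 4 have no annotation
toy_assoc = {1: {"GO:1", "GO:2"}, 2: {"GO:2"}, 99: {"GO:3"}}
toy_pc = {1, 2, 3, 4}
print(go_coverage(toy_assoc, toy_pc))  # (2, 2, 50.0)
```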
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Producer sends data to server for processing
Step2: Using Spark Streaming
Step3: Monitor a directory for new or renamed files
Step4: Usage
Step5: Usage
|
<ASSISTANT_TASK:>
Python Code:
%%file consumer.py
import sys
import socket
from collections import Counter
HOST = sys.argv[1]
PORT = int(sys.argv[2])
s = socket.socket()
s.bind((HOST, PORT))
s.listen(4)
connection, address = s.accept()
c = Counter()
while True:
    line = connection.recv(64).decode('utf-8', errors='ignore')  # recv returns bytes in Python 3
words = line.split()
if words:
c.update(words)
print(c.most_common(5))
%%file client.py
import socket
import time
import sys
HOST = sys.argv[1]
PORT = int(sys.argv[2])
s = socket.socket()
s.connect((HOST, PORT))
while True:
for line in open('data/Ulysses.txt'):
s.sendall(str.encode(line))
time.sleep(1)
from pyspark import SparkContext
sc = SparkContext('local[*]')
lines = sc.textFile('data/Ulysses.txt')
counts = (lines.flatMap(lambda line: line.split())
.map(lambda word: (word, 1))
.reduceByKey(lambda x,y: x+ y))
counts.takeOrdered(5, key=lambda x: -x[1])
%%file file_consumer.py
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
sc = SparkContext('local[*]')
sc.setLogLevel("WARN")
ssc = StreamingContext(sc, 2)
lines = ssc.textFileStream(sys.argv[1])
counts = (lines.flatMap(lambda line: line.split())
.map(lambda word: (word, 1))
.reduceByKey(lambda x,y: x+ y))
counts.pprint()
ssc.start()
ssc.awaitTermination()
%%file socket_consumer.py
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
sc = SparkContext('local[*]')
sc.setLogLevel("WARN")
ssc = StreamingContext(sc, 2)
lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))
counts = (lines.flatMap(lambda line: line.split())
.map(lambda word: (word, 1))
.reduceByKey(lambda x,y: x+ y))
counts.pprint()
ssc.start()
ssc.awaitTermination()
%%file stateful_socket_consumer.py
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
def updateFunc(new, last):
if last is None:
last = 0
return sum(new) + last
sc = SparkContext('local[*]')
sc.setLogLevel("WARN")
ssc = StreamingContext(sc, 2)
ssc.checkpoint("checkpoint")
lines = ssc.socketTextStream(sys.argv[1], int(sys.argv[2]))
counts = (lines.flatMap(lambda line: line.split())
.map(lambda word: (word, 1))
.updateStateByKey(updateFunc)
.transform(lambda x: x.sortByKey()))
counts.pprint()
ssc.start()
ssc.awaitTermination()
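# Not part of the original notebook: without a Spark cluster, the per-key semantics
# of updateStateByKey can be checked on plain dicts. A sketch processing two
# "batches" of words (the helper names are mine):

```python
def update_func(new, last):
    # same signature as updateFunc above: new counts for the current batch,
    # last accumulated state (None the first time a key is seen)
    if last is None:
        last = 0
    return sum(new) + last

def run_batch(state, words):
    # group the batch by key, then apply update_func key by key
    batch = {}
    for w in words:
        batch.setdefault(w, []).append(1)
    for w, counts in batch.items():
        state[w] = update_func(counts, state.get(w))
    return state

state = {}
state = run_batch(state, "a b a".split())
state = run_batch(state, "b b c".split())
print(state)  # {'a': 2, 'b': 3, 'c': 1}
```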
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we will take a look at the fluorescence "base line"
Step2: Now compare the fluorescence of a regular signal to its relative fluorescence
Step3: Now let's compare the maximum of each individual time signal within the relative fluorescences with the maximum of the regular data and the signal mean
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from load_environment import * # python file with imports and basics to set up this computing environment
f0 = np.average(data[:32], axis=0)
plt.imshow(f0); plt.title("Average of First 32 Frames"); plt.show()
plt.subplot(121)
f41 = data[41]
plt.imshow(f41); plt.title("Unprocessed Fluorescence")
plt.subplot(122)
plt.imshow((f41-f0)/f0); plt.title("Relative Fluorescence"); plt.show()
cpy = data[32:].copy() # Not to ruin future experiments on the original data
plt.subplot(131); plt.imshow(cpy.max(axis=0)); plt.title("Maximum F");
cpy = (cpy - f0) / f0  # broadcasting over all frames at once; note that rebinding
                       # the loop variable in "for f in cpy" would not modify cpy
maxFluorescence = cpy.max(axis=0)
plt.subplot(132); plt.imshow(maxFluorescence); plt.title("Maximum $\Delta F$");
plt.subplot(133); plt.imshow(np.average(cpy, axis=0)); plt.title("Temporal Mean"); plt.show()
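# Not part of the original notebook: the relative fluorescence (delta-F over F)
# computed above is a single broadcast operation on the full stack. A sketch on
# synthetic data:

```python
import numpy as np

f0 = np.full((4, 4), 2.0)               # synthetic baseline frame
stack = np.stack([f0, 2 * f0, 3 * f0])  # three synthetic frames
dff = (stack - f0) / f0                 # delta-F over F for every frame at once
print(dff.max(axis=0)[0, 0])  # 2.0 (the brightest frame is 3x the baseline)
```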
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: numpy is multithreaded. For an accurate comparison, this needs to be disabled. This can be done as follows or by setting environment variable MKL_NUM_THREADS=1.
Step2: First experiment
Step3: The third line is also repeated 10 times to avoid the loop being too significant.
Step4: Times are not very conclusive on such small lists.
Step5: The article suggests that some implementations will be slower if the values are not sorted.
Step6: It seems that inline tests (cond ? value1 : value2) are faster than the equivalent if branches.
Step7: sorted / not sorted does not seem to have a real impact in this case. It shows branching really slows down the execution of a program. Branching happens whenever the program meets a loop condition or a test. Iterators (*it) are faster than accessing an array with the notation [i], which adds a cost due to an extra addition.
Step8: numpy vs C++
Step9: numpy.einsum
Step10: The function einsum is slower here (see Einsum - Einstein summation in deep learning); it appears to be slower on a single dot product, but it is usually faster when it comes to chained operations, as it reduces the number of intermediate allocations.
Step11: Very small. It should not pollute our experiments.
Step12: Pretty slow. Let's see what it does to compute dot product 16 by 16.
Step13: Use of branching
Step14: We are far from numpy but the branching has clearly a huge impact and the fact the loop condition is evaluated only every 16 iterations does not explain this gain. Next experiment with SSE instructions.
Step15: Weird, branching did not happen when the code is not inside a separate function.
Step16: Better, even though it is still slower than numpy; it is closer. Maybe the compilation options are not optimized; numpy was also compiled with the Intel compiler. To be accurate, multi-threading must be disabled on numpy's side. That's the purpose of the first two lines.
Step17: If the time is the same, it means that options AVX512 are not available.
Step18: Last call with OpenMP
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
try:
import mkl
mkl.set_num_threads(1)
except ModuleNotFoundError as e:
print('mkl not found', e)
import os
os.environ['MKL_NUM_THREADS']='1'
# int nb = 0;
# for(auto it = values.begin(); it != values.end(); ++it)
# if (*it >= th) nb++; // this line changes
# if (*it >= th) nb++; // and is repeated 10 times inside the loop.
# // ... 10 times
# return nb;
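# Not part of the original notebook: the C++ scenarios above all count how many
# values reach a threshold. A Python reference of what each variant computes
# (the helper name is mine):

```python
def count_ge(values, th):
    # reference implementation of the counting loop benchmarked above
    return sum(1 for v in values if v >= th)

print(count_ge(range(100000), 50000))  # 50000
```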
from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_A, measure_scenario_B
from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_C, measure_scenario_D
from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_E, measure_scenario_F
from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_G, measure_scenario_H
from cpyquickhelper.numbers.cbenchmark_dot import measure_scenario_I, measure_scenario_J
import pandas
def test_benchmark(label, values, th, repeat=10, number=20):
funcs = [(k, v) for k, v in globals().copy().items() if k.startswith("measure_scenario")]
rows = []
for k, v in funcs:
exe = v(values, th, repeat, number)
d = exe.todict()
d['doc'] = v.__doc__.split('``')[1]
d['label'] = label
d['name'] = k
rows.append(d)
df = pandas.DataFrame(rows)
return df
test_benchmark("sorted", list(range(10)), 5)
values = list(range(100000))
df_sorted = test_benchmark("sorted", values, len(values)//2, repeat=200)
df_sorted
import random
random.shuffle(values)
values = values.copy()
values[:10]
df_shuffled = test_benchmark("shuffled", values, len(values)//2, repeat=200)
df_shuffled
df = pandas.concat([df_sorted, df_shuffled])
dfg = df[["doc", "label", "average"]].pivot(index="doc", columns="label", values="average")
ax = dfg.plot.bar(rot=30)
labels = [l.get_text() for l in ax.get_xticklabels()]
ax.set_xticklabels(labels, ha='right')
ax.set_title("Comparison of all implementations");
sdf = df[["doc", "label", "average"]]
dfg2 = sdf[sdf.doc.str.contains('[?^]')].pivot(index="doc", columns="label", values="average")
ax = dfg2.plot.bar(rot=30)
labels = [l.get_text() for l in ax.get_xticklabels()]
ax.set_xticklabels(labels, ha='right')
ax.set_title("Comparison of implementations using ? :");
sdf = df[["doc", "label", "average"]]
dfg2 = sdf[sdf.doc.str.contains('if')].pivot(index="doc", columns="label", values="average")
ax = dfg2.plot.bar(rot=30)
labels = [l.get_text() for l in ax.get_xticklabels()]
ax.set_xticklabels(labels, ha='right')
ax.set_ylim([0.0004, 0.0020])
ax.set_title("Comparison of implementations using tests");
# float vector_dot_product_pointer(const float *p1, const float *p2, size_t size)
# {
# float sum = 0;
# const float * end1 = p1 + size;
# for(; p1 != end1; ++p1, ++p2)
# sum += *p1 * *p2;
# return sum;
# }
#
#
# float vector_dot_product(py::array_t<float> v1, py::array_t<float> v2)
# {
# if (v1.ndim() != v2.ndim())
# throw std::runtime_error("Vector v1 and v2 must have the same dimension.");
# if (v1.ndim() != 1)
# throw std::runtime_error("Vector v1 and v2 must be vectors.");
# return vector_dot_product_pointer(v1.data(0), v2.data(0), v1.shape(0));
# }
%matplotlib inline
import numpy
def simple_dot(values):
return numpy.dot(values, values)
values = list(range(10000000))
values = numpy.array(values, dtype=numpy.float32)
vect = values / numpy.max(values)
simple_dot(vect)
vect.dtype
from timeit import Timer
def measure_time(stmt, context, repeat=10, number=50):
tim = Timer(stmt, globals=context)
res = numpy.array(tim.repeat(repeat=repeat, number=number))
mean = numpy.mean(res)
dev = numpy.mean(res ** 2)
dev = (dev - mean**2) ** 0.5
return dict(average=mean, deviation=dev, min_exec=numpy.min(res),
max_exec=numpy.max(res), repeat=repeat, number=number,
size=context['values'].shape[0])
measure_time("simple_dot(values)", context=dict(simple_dot=simple_dot, values=vect))
res = []
for i in range(10, 200000, 2500):
t = measure_time("simple_dot(values)", repeat=100,
context=dict(simple_dot=simple_dot, values=vect[:i].copy()))
res.append(t)
import pandas
dot = pandas.DataFrame(res)
dot.tail()
res = []
for i in range(100000, 10000000, 1000000):
t = measure_time("simple_dot(values)", repeat=10,
context=dict(simple_dot=simple_dot, values=vect[:i].copy()))
res.append(t)
huge_dot = pandas.DataFrame(res)
huge_dot.head()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0])
huge_dot.plot(x='size', y="average", ax=ax[1], logy=True)
ax[0].set_title("numpy dot product execution time");
ax[1].set_title("numpy dot product execution time");
def simple_dot_einsum(values):
return numpy.einsum('i,i->', values, values)
values = list(range(10000000))
values = numpy.array(values, dtype=numpy.float32)
vect = values / numpy.max(values)
simple_dot_einsum(vect)
measure_time("simple_dot_einsum(values)",
context=dict(simple_dot_einsum=simple_dot_einsum, values=vect))
res = []
for i in range(10, 200000, 2500):
t = measure_time("simple_dot_einsum(values)", repeat=100,
context=dict(simple_dot_einsum=simple_dot_einsum, values=vect[:i].copy()))
res.append(t)
import pandas
einsum_dot = pandas.DataFrame(res)
einsum_dot.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(7,4))
dot.plot(x='size', y="average", ax=ax, label="numpy.dot", logy=True)
einsum_dot.plot(x='size', y="average", ax=ax, logy=True,label="numpy.einsum")
ax.set_title("numpy einsum / dot dot product execution time");
from cpyquickhelper.numbers.cbenchmark_dot import empty_vector_dot_product
empty_vector_dot_product(vect, vect)
def empty_c11_dot(vect):
return empty_vector_dot_product(vect, vect)
measure_time("empty_c11_dot(values)",
context=dict(empty_c11_dot=empty_c11_dot, values=vect), repeat=10)
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product
vector_dot_product(vect, vect)
def c11_dot(vect):
return vector_dot_product(vect, vect)
measure_time("c11_dot(values)",
context=dict(c11_dot=c11_dot, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot(values)", repeat=10,
context=dict(c11_dot=c11_dot, values=vect[:i].copy()))
res.append(t)
import pandas
cus_dot = pandas.DataFrame(res)
cus_dot.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot.plot(x='size', y="average", ax=ax[0], label="pybind11")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
from cpyquickhelper.numbers.direct_blas_lapack import cblas_sdot
def blas_dot(vect):
return cblas_sdot(vect, vect)
measure_time("blas_dot(values)", context=dict(blas_dot=blas_dot, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("blas_dot(values)", repeat=10,
context=dict(blas_dot=blas_dot, values=vect[:i].copy()))
res.append(t)
import pandas
blas_dot = pandas.DataFrame(res)
blas_dot.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot.plot(x='size', y="average", ax=ax[0], label="pybind11")
blas_dot.plot(x='size', y="average", ax=ax[0], label="blas")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
blas_dot.plot(x='size', y="average", ax=ax[1], label="blas")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
# float vector_dot_product_pointer16(const float *p1, const float *p2)
# {
# float sum = 0;
#
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
#
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
# sum += *(p1++) * *(p2++);
#
# return sum;
# }
#
# #define BYN 16
#
# float vector_dot_product_pointer16(const float *p1, const float *p2, size_t size)
# {
# float sum = 0;
# size_t i = 0;
# if (size >= BYN) {
# size_t size_ = size - BYN;
# for(; i < size_; i += BYN, p1 += BYN, p2 += BYN)
# sum += vector_dot_product_pointer16(p1, p2);
# }
# for(; i < size; ++p1, ++p2, ++i)
# sum += *p1 * *p2;
# return sum;
# }
#
# float vector_dot_product16(py::array_t<float> v1, py::array_t<float> v2)
# {
# if (v1.ndim() != v2.ndim())
# throw std::runtime_error("Vector v1 and v2 must have the same dimension.");
# if (v1.ndim() != 1)
# throw std::runtime_error("Vector v1 and v2 must be vectors.");
# return vector_dot_product_pointer16(v1.data(0), v2.data(0), v1.shape(0));
# }
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16
vector_dot_product16(vect, vect)
def c11_dot16(vect):
return vector_dot_product16(vect, vect)
measure_time("c11_dot16(values)",
context=dict(c11_dot16=c11_dot16, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot16(values)", repeat=10,
context=dict(c11_dot16=c11_dot16, values=vect[:i].copy()))
res.append(t)
cus_dot16 = pandas.DataFrame(res)
cus_dot16.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot.plot(x='size', y="average", ax=ax[0], label="pybind11")
cus_dot16.plot(x='size', y="average", ax=ax[0], label="pybind11x16")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
cus_dot16.plot(x='size', y="average", ax=ax[1], label="pybind11x16")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
# float vector_dot_product_pointer16_nofcall(const float *p1, const float *p2, size_t size)
# {
# float sum = 0;
# const float * end = p1 + size;
# if (size >= BYN) {
# #if(BYN != 16)
# #error "BYN must be equal to 16";
# #endif
# unsigned int size_ = (unsigned int) size;
# size_ = size_ >> 4; // division by 16=2^4
# size_ = size_ << 4; // multiplication by 16=2^4
# const float * end_ = p1 + size_;
# for(; p1 != end_;)
# {
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
#
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
#
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
#
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# sum += *p1 * *p2; ++p1, ++p2;
# }
# }
# for(; p1 != end; ++p1, ++p2)
# sum += *p1 * *p2;
# return sum;
# }
#
# float vector_dot_product16_nofcall(py::array_t<float> v1, py::array_t<float> v2)
# {
# if (v1.ndim() != v2.ndim())
# throw std::runtime_error("Vector v1 and v2 must have the same dimension.");
# if (v1.ndim() != 1)
# throw std::runtime_error("Vector v1 and v2 must be vectors.");
# return vector_dot_product_pointer16_nofcall(v1.data(0), v2.data(0), v1.shape(0));
# }
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_nofcall
vector_dot_product16_nofcall(vect, vect)
def c11_dot16_nofcall(vect):
return vector_dot_product16_nofcall(vect, vect)
measure_time("c11_dot16_nofcall(values)",
context=dict(c11_dot16_nofcall=c11_dot16_nofcall, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot16_nofcall(values)", repeat=10,
context=dict(c11_dot16_nofcall=c11_dot16_nofcall, values=vect[:i].copy()))
res.append(t)
cus_dot16_nofcall = pandas.DataFrame(res)
cus_dot16_nofcall.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot.plot(x='size', y="average", ax=ax[0], label="pybind11")
cus_dot16.plot(x='size', y="average", ax=ax[0], label="pybind11x16")
cus_dot16_nofcall.plot(x='size', y="average", ax=ax[0], label="pybind11x16_nofcall")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
cus_dot16.plot(x='size', y="average", ax=ax[1], label="pybind11x16")
cus_dot16_nofcall.plot(x='size', y="average", ax=ax[1], label="pybind11x16_nofcall")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
# #include <xmmintrin.h>
#
# float vector_dot_product_pointer16_sse(const float *p1, const float *p2)
# {
# __m128 c1 = _mm_load_ps(p1);
# __m128 c2 = _mm_load_ps(p2);
# __m128 r1 = _mm_mul_ps(c1, c2);
#
# p1 += 4;
# p2 += 4;
#
# c1 = _mm_load_ps(p1);
# c2 = _mm_load_ps(p2);
# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));
#
# p1 += 4;
# p2 += 4;
#
# c1 = _mm_load_ps(p1);
# c2 = _mm_load_ps(p2);
# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));
#
# p1 += 4;
# p2 += 4;
#
# c1 = _mm_load_ps(p1);
# c2 = _mm_load_ps(p2);
# r1 = _mm_add_ps(r1, _mm_mul_ps(c1, c2));
#
# float r[4];
# _mm_store_ps(r, r1);
#
# return r[0] + r[1] + r[2] + r[3];
# }
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_sse
vector_dot_product16_sse(vect, vect)
def c11_dot16_sse(vect):
return vector_dot_product16_sse(vect, vect)
measure_time("c11_dot16_sse(values)",
context=dict(c11_dot16_sse=c11_dot16_sse, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot16_sse(values)", repeat=10,
context=dict(c11_dot16_sse=c11_dot16_sse, values=vect[:i].copy()))
res.append(t)
cus_dot16_sse = pandas.DataFrame(res)
cus_dot16_sse.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot16_sse.plot(x='size', y="average", ax=ax[0], label="pybind11x16_sse")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot16_sse.plot(x='size', y="average", ax=ax[1], label="pybind11x16_sse")
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
cus_dot16.plot(x='size', y="average", ax=ax[1], label="pybind11x16")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
import platform
platform.processor()
import numpy
values = numpy.array(list(range(10000000)), dtype=numpy.float32)
vect = values / numpy.max(values)
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product16_avx512
vector_dot_product16_avx512(vect, vect)
def c11_dot16_avx512(vect):
return vector_dot_product16_avx512(vect, vect)
measure_time("c11_dot16_avx512(values)",
context=dict(c11_dot16_avx512=c11_dot16_avx512, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot16_avx512(values)", repeat=10,
context=dict(c11_dot16_avx512=c11_dot16_avx512, values=vect[:i].copy()))
res.append(t)
cus_dot16_avx512 = pandas.DataFrame(res)
cus_dot16_avx512.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot16.plot(x='size', y="average", ax=ax[0], label="pybind11x16")
cus_dot16_sse.plot(x='size', y="average", ax=ax[0], label="pybind11x16_sse")
cus_dot16_avx512.plot(x='size', y="average", ax=ax[0], label="pybind11x16_avx512")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot16.plot(x='size', y="average", ax=ax[1], label="pybind11x16")
cus_dot16_sse.plot(x='size', y="average", ax=ax[1], label="pybind11x16_sse")
cus_dot16_avx512.plot(x='size', y="average", ax=ax[1], label="pybind11x16_avx512")
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
from cpyquickhelper.numbers.cbenchmark import get_simd_available_option
get_simd_available_option()
from cpyquickhelper.numbers.cbenchmark_dot import vector_dot_product_openmp
vector_dot_product_openmp(vect, vect, 2)
vector_dot_product_openmp(vect, vect, 4)
def c11_dot_openmp2(vect):
return vector_dot_product_openmp(vect, vect, nthreads=2)
def c11_dot_openmp4(vect):
return vector_dot_product_openmp(vect, vect, nthreads=4)
measure_time("c11_dot_openmp2(values)",
context=dict(c11_dot_openmp2=c11_dot_openmp2, values=vect), repeat=10)
measure_time("c11_dot_openmp4(values)",
context=dict(c11_dot_openmp4=c11_dot_openmp4, values=vect), repeat=10)
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot_openmp2(values)", repeat=10,
context=dict(c11_dot_openmp2=c11_dot_openmp2, values=vect[:i].copy()))
res.append(t)
cus_dot_openmp2 = pandas.DataFrame(res)
cus_dot_openmp2.tail()
res = []
for i in range(10, 200000, 2500):
t = measure_time("c11_dot_openmp4(values)", repeat=10,
context=dict(c11_dot_openmp4=c11_dot_openmp4, values=vect[:i].copy()))
res.append(t)
cus_dot_openmp4 = pandas.DataFrame(res)
cus_dot_openmp4.tail()
fig, ax = plt.subplots(1, 2, figsize=(14,4))
dot.plot(x='size', y="average", ax=ax[0], label="numpy")
cus_dot16.plot(x='size', y="average", ax=ax[0], label="pybind11x16")
cus_dot16_sse.plot(x='size', y="average", ax=ax[0], label="pybind11x16_sse")
cus_dot_openmp2.plot(x='size', y="average", ax=ax[0], label="cus_dot_openmp2")
cus_dot_openmp4.plot(x='size', y="average", ax=ax[0], label="cus_dot_openmp4")
dot.plot(x='size', y="average", ax=ax[1], label="numpy", logy=True)
cus_dot16.plot(x='size', y="average", ax=ax[1], label="pybind11x16")
cus_dot16_sse.plot(x='size', y="average", ax=ax[1], label="pybind11x16_sse")
cus_dot_openmp2.plot(x='size', y="average", ax=ax[1], label="cus_dot_openmp2")
cus_dot_openmp4.plot(x='size', y="average", ax=ax[1], label="cus_dot_openmp4")
cus_dot.plot(x='size', y="average", ax=ax[1], label="pybind11")
ax[0].set_title("numpy and custom dot product execution time");
ax[1].set_title("numpy and custom dot product execution time");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: color by a category
Step4: make plot horizontal
Step5: Saturation
Step6: Various palettes
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['figure.figsize'] = (20.0, 10.0)
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classificaitons
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
# add an indicator whether a movie is short or long, split at 100 minutes runtime
df['islong'] = df.duration.transform(lambda x: int(x > 100))
# sort in descending order
#df = df.loc[df.groupby('Category').transform(sum).sort_values('Count', ascending=False).index]
df.head()
p = sns.countplot(data=df, x = 'Category')
p = sns.countplot(data=df,
x = 'Category',
hue = 'islong')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.5)
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'deep')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'muted')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'pastel')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'bright')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'dark')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'colorblind')
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = ((50/255, 132/255.0, 191/255.0), (255/255.0, 232/255.0, 0/255.0)))
p = sns.countplot(data=df,
y = 'Category',
hue = 'islong',
saturation=.9,
palette = 'Dark2')
help(sns.color_palette)
help(sns.countplot)
p = sns.countplot(data=df, x = 'Category')
plt.text(9,2000, "Color Palettes", fontsize = 95, color='black', fontstyle='italic')
p.get_figure().savefig('../../figures/colors.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Pseudospectra of matrices
Step3: 1. Jordan block
Step4: 2. Limacon
Step5: 3. Grcar matrix
Step6: 4. Wilkinson matrix
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import svds, eigs
import matplotlib.pyplot as plt
%matplotlib notebook
def pseudo_spec(x, y, mat_A):
    """Compute the pseudospectra of `mat_A` around the point $x + iy$."""
    ps_spec = np.zeros_like(x)
for row in range(npts):
for col in range(npts):
z_num = x[row, col] + 1j*y[row, col]
sv_min = np.min(svds((mat_A - z_num*np.eye(N)), k=10, which='SM',
return_singular_vectors=False))
ps_spec[row, col] = sv_min
return ps_spec
def plot_pseudo_spec(x, y, mat_A, ps_spec, levels=8):
    """Plot the pseudospectra and eigenvalues for matrix `mat_A`."""
    n_eigs = np.shape(mat_A)[0] - 2
vals = eigs(mat_A, k=n_eigs, which='SM', return_eigenvectors=False)
ps_spec[np.isnan(ps_spec)] = 0
plt.plot(np.real(vals), np.imag(vals), 'k.')
plt.contourf(x, y, ps_spec, levels, cmap="YlGnBu_r")
C = plt.contour(x, y, ps_spec, levels, colors='k')
plt.axis('image')
return C
npts = 51
N = 200
x1, y1 = np.mgrid[-1:1:npts*1j, -1:1:npts*1j]
A1 = diags([1.], [1], shape=(N, N))
ps_spec1 = pseudo_spec(x1, y1, A1)
plt.figure(figsize=(8,8))
plot_pseudo_spec(x1, y1, A1, ps_spec1)
plt.show()
x2, y2 = np.mgrid[-2:2:npts*1j, -2:2:npts*1j]
A2 = diags([1.], [1], shape=(N, N))
ps_spec2 = pseudo_spec(x2, y2, A2)
plt.figure(figsize=(8,8))
plot_pseudo_spec(x2, y2, A2, ps_spec2)
plt.show()
x3, y3 = np.mgrid[-4:4:npts*1j, -4:4:npts*1j]
A3 = diags([-1., 1., 1., 1, 1], [-1, 0, 1, 2, 3], shape=(N, N))
ps_spec3 = pseudo_spec(x3, y3, A3)
plt.figure(figsize=(8,8))
plot_pseudo_spec(x3, y3, A3, ps_spec3)
plt.show()
x4, y4 = np.mgrid[-1:2:npts*1j, -1.5:1.5:npts*1j]
diag_0 = [k/N for k in range(1,N+1)]
diag_1 = [1 for k in range(N-1)]
A4 = diags([diag_0, diag_1], [0, 1], shape=(N, N))
ps_spec4 = pseudo_spec(x4, y4, A4)
plt.figure(figsize=(8,8))
plot_pseudo_spec(x4, y4, A4, ps_spec4)
plt.show()
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the size of the original train dataset.
Step2: Shuffle and split the train filenames
Step3: Visualize the size of the processed train dataset
Step4: Create symbolic link of images
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.model_selection import train_test_split
import seaborn as sns
import os
import shutil
import pandas as pd
%matplotlib inline
df = pd.read_csv('list.txt', sep=' ')
df.loc[2000:2005]
train_cat = df[df['SPECIES'] == 1]
train_dog = df[df['SPECIES'] == 2]
x = ['cat', 'dog']
y = [len(train_cat), len(train_dog)]
ax = sns.barplot(x=x, y=y)
mytrain, myvalid = train_test_split(df, test_size=0.1)
print(len(mytrain), len(myvalid))
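A quick sanity check of the 90/10 proportions on a toy list (the `random_state` here is an arbitrary choice, not from the original notebook):

```python
from sklearn.model_selection import train_test_split

# 100 toy samples -> 90 train / 10 validation, matching test_size=0.1 above.
tr, va = train_test_split(list(range(100)), test_size=0.1, random_state=0)
print(len(tr), len(va))  # 90 10
```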
mytrain_cat = mytrain[mytrain['SPECIES'] == 1]
mytrain_dog = mytrain[mytrain['SPECIES'] == 2]
myvalid_cat = myvalid[myvalid['SPECIES'] == 1]
myvalid_dog = myvalid[myvalid['SPECIES'] == 2]
x = ['mytrain_cat', 'mytrain_dog', 'myvalid_cat', 'myvalid_dog']
y = [len(mytrain_cat), len(mytrain_dog), len(myvalid_cat), len(myvalid_dog)]
ax = sns.barplot(x=x, y=y)
def remove_and_create_class(dirname):
if os.path.exists(dirname):
shutil.rmtree(dirname)
os.mkdir(dirname)
os.mkdir(dirname+'/cat')
os.mkdir(dirname+'/dog')
remove_and_create_class('mytrain_ox')
remove_and_create_class('myvalid_ox')
for filename in mytrain_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/cat/'+filename+'.jpg')
for filename in mytrain_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/dog/'+filename+'.jpg')
for filename in myvalid_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/cat/'+filename+'.jpg')
for filename in myvalid_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/dog/'+filename+'.jpg')
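The same `os.symlink` pattern, demonstrated self-contained on a temporary directory so it does not touch the real dataset (on Windows, creating symlinks may require elevated privileges):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    src = os.path.join(root, "cat_0.jpg")
    open(src, "w").close()                 # stand-in for an image file
    link = os.path.join(root, "link_cat_0.jpg")
    os.symlink(src, link)                  # same call as the loops above
    is_link = os.path.islink(link)
print(is_link)  # True
```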
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Representation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 2
Step2: Problem 3
Step3: Problem 4
Step4: OK, so there are about 50000 in each class in the training set
Step5: And about 1870 in each class in the test set
Step6: What about near duplicates between datasets? (images that are almost identical)
Step7: Problem 6
|
<ASSISTANT_TASK:>
Python Code:
# assumed imports for this notebook; `cache_file` (a path to an HDF5 cache
# built in an earlier, elided cell) is assumed to be defined already
import h5py
import numpy as np
import matplotlib.pylab as pl

label_map = list('abcdefghij')
fig, axes = pl.subplots(3, 3, figsize=(5, 5), sharex=True, sharey=True)
with h5py.File(cache_file, 'r') as f:
    for i in range(9):
        ax = axes.flat[i]
        idx = np.random.randint(f['test']['images'].shape[0])
        ax.imshow(f['test']['images'][idx],
                  cmap='Greys', interpolation='nearest')
        ax.set_title(label_map[int(f['test']['labels'][idx])])
# Solution:
with h5py.File(cache_file, 'r') as f:
    # get a unique list of the classes
    classes = np.unique(f['test']['labels'])
    classes.sort()
    nclasses = len(classes)
    images = f['test']['images'][:]
    for i, cls in enumerate(classes):
        fig, ax = pl.subplots(1, 1, figsize=(2, 2))
        # select all images for a given class and take the pixel-wise mean
        mean_img = images[f['test']['labels'][:] == cls].mean(axis=0)
        ax.imshow(mean_img, cmap='Greys', interpolation='nearest')  # greyscale colormap, no interpolation
        ax.set_title(label_map[i])
def randomize(data, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_data = data[permutation]
    shuffled_labels = labels[permutation]
    return shuffled_data, shuffled_labels
with h5py.File(cache_file, 'r') as f:
    train_dataset, train_labels = randomize(f['train']['images'][:], f['train']['labels'][:])
    test_dataset, test_labels = randomize(f['test']['images'][:], f['test']['labels'][:])
np.histogram(train_labels, bins=np.arange(0,nclasses+1,1))
np.histogram(test_labels, bins=np.arange(0,nclasses+1,1))
n_overlaps = []
# the data has been randomized, so let's just check the first 100 images and assume that
# is a representative sample
for test_img in test_dataset[:100]:
    # sum absolute per-pixel differences so that positive and negative
    # differences cannot cancel out to a false zero
    diff = np.abs(train_dataset - test_img[None]).sum(axis=-1).sum(axis=-1)
    n_overlap = (diff == 0).sum()
    n_overlaps.append(n_overlap)
print("Typical overlap:", np.median(n_overlaps))
pl.hist(n_overlaps)
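A faster alternative to the pairwise-difference scan above is to hash each training image's raw bytes into a set and test membership. The sketch below uses tiny synthetic arrays in place of the real `train_dataset`/`test_dataset` purely to illustrate the idea:

```python
import numpy as np

def count_exact_duplicates(train, test):
    # identical images produce identical byte strings, so set membership
    # finds exact duplicates without a full pixel comparison per pair
    train_hashes = {img.tobytes() for img in train}
    return sum(img.tobytes() in train_hashes for img in test)

# tiny synthetic stand-ins for the real datasets
train = np.zeros((4, 2, 2))
train[0] += 1.0                      # one all-ones image, three all-zeros images
test = np.stack([np.zeros((2, 2)), np.full((2, 2), 2.0)])
dup = count_exact_duplicates(train, test)   # only the all-zeros test image matches
```

With ~50000 training images this avoids materialising a full difference array for every test image, though it only catches exact (not near) duplicates.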
n_overlaps = []
threshold = 1E-2  # define an arbitrary threshold -- play with this
# the data has been randomized, so let's just check the first 100 images and assume that
# is a representative sample
for test_img in test_dataset[:100]:
    # take per-pixel absolute values before summing so differences cannot cancel
    diff = np.abs(train_dataset - test_img[None]).sum(axis=-1).sum(axis=-1)
    n_overlap = (diff < threshold).sum()
    n_overlaps.append(n_overlap)
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
image_size = train_dataset.shape[-1]
subset = 50  # replace with 100, 1000, 5000
# sample without replacement so the training subset has no repeated rows
idx = np.random.choice(np.arange(train_dataset.shape[0]), size=subset, replace=False)
train_subset_data = train_dataset[idx].reshape(subset, image_size * image_size)
train_subset_labels = train_labels[idx]
model.fit(train_subset_data, train_subset_labels)
predict_labels = model.predict(test_dataset.reshape(test_dataset.shape[0], image_size * image_size))
(predict_labels != test_labels).sum() / float(test_labels.size)  # misclassification rate
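The trailing expression computes the misclassification rate by hand; `np.mean` over the boolean mask gives the same number more directly. A tiny sketch with made-up labels:

```python
import numpy as np

y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 2, 2, 2, 1])
error_rate = np.mean(y_pred != y_true)   # fraction of wrong predictions
accuracy = 1.0 - error_rate
```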
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Initial example
Step2: Spark ML
Step3: Load Newsgroup Data
Step4: Spark ML
Step5: Train-Test Split
Step6: Spark
Step7: Feature engineering
Step8: Spark ML
Step9: Modelling & Prediction
Step10: Spark ML
Step11: Pipeline
Step12: Spark ML
Step13: Model Evaluation
Step14: Spark ML
Step15: Scores are different mainly because default parameters are not the same in scikit-learn and Spark ML
Step16: Spark ML
|
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
import pandas as pd
from sklearn.datasets import load_iris
data = pd.DataFrame(data=load_iris().data, columns=['sepal_length', 'sepal_width', 'petal_length', 'petal_width'])
data.head()
from sklearn.preprocessing import Binarizer
binarizer = Binarizer(threshold=5)
binarizer.fit_transform(data.sepal_length.values.reshape(-1, 1))
df = sqlContext.createDataFrame(data)
from pyspark.ml.feature import Binarizer
binarizer = Binarizer(threshold=5.0, inputCol='sepal_length', outputCol='sepal_length_bin')
binarizer.transform(df).show(5)
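One detail worth noting when comparing the two APIs: both binarizers use a strict greater-than comparison, so a value exactly equal to the threshold maps to 0. The plain-NumPy equivalent below makes that explicit (made-up values, not the iris data):

```python
import numpy as np

x = np.array([4.9, 5.0, 5.1, 6.8])
binarized = (x > 5.0).astype(float)   # strict '>': 5.0 itself becomes 0.0
```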
# Import data
from sklearn.datasets import fetch_20newsgroups
categories = ['rec.autos', 'rec.sport.baseball', 'comp.graphics', 'comp.sys.mac.hardware',
'sci.space', 'sci.crypt', 'talk.politics.guns', 'talk.religion.misc']
newsgroup = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42)
print(newsgroup.data[0])
# Create pandas DataFrames for values and targets
import pandas as pd
pdf_newsgroup = pd.DataFrame(data=newsgroup.data, columns=['news']) # Texts
pdf_newsgroup_target = pd.DataFrame(data=newsgroup.target, columns=['target']) # Targets
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df_newsgroup = sqlContext.createDataFrame(pd.concat([pdf_newsgroup, pdf_newsgroup_target], axis=1))
df_newsgroup.printSchema()
df_newsgroup.show(3)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(newsgroup.data, newsgroup.target, train_size=0.8, random_state=42)
(df_train, df_test) = df_newsgroup.randomSplit([0.8, 0.2])
# Tokenizing and Occurrence Counts
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
# TF-IDF
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
# Tokenizing
from pyspark.ml.feature import Tokenizer
tokenizer = Tokenizer(inputCol='news', outputCol='news_words')
df_train_words = tokenizer.transform(df_train)
# Hashing Term-Frequency
from pyspark.ml.feature import HashingTF
hashing_tf = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol='news_tf', numFeatures=10000)
df_train_tf = hashing_tf.transform(df_train_words)
# Inverse Document Frequency
from pyspark.ml.feature import IDF
idf = IDF(inputCol=hashing_tf.getOutputCol(), outputCol="news_tfidf")
idf_model = idf.fit(df_train_tf) # fit to build the model on all the data, and then apply it line by line
df_train_tfidf = idf_model.transform(df_train_tf)
df_train_tfidf.show(5)
# Training a Decision Tree on training set
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth=10).fit(X_train_tfidf, y_train)
# Transform test set
X_test_counts = count_vect.transform(X_test)
X_test_tfidf = tfidf_transformer.transform(X_test_counts)
# Predictions on the test set
y_test_pred = clf.predict(X_test_tfidf)
# Indexing the target
from pyspark.ml.feature import StringIndexer
string_indexer = StringIndexer(inputCol='target', outputCol='target_indexed')
string_indexer_model = string_indexer.fit(df_train_tfidf)
df_train_final = string_indexer_model.transform(df_train_tfidf)
# Training a Decision Tree on training set
from pyspark.ml.classification import DecisionTreeClassifier
dt = DecisionTreeClassifier(featuresCol=idf.getOutputCol(), labelCol=string_indexer.getOutputCol())
dt_model = dt.fit(df_train_final)
# Transform the test set
df_test_words = tokenizer.transform(df_test)
df_test_tf = hashing_tf.transform(df_test_words)
df_test_tfidf = idf_model.transform(df_test_tf)
df_test_final = string_indexer_model.transform(df_test_tfidf)
# Preditions on the test set
df_test_pred = dt_model.transform(df_test_final)
df_test_pred.select('news', 'target', 'prediction', 'probability').show(5)
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
# Instantiate a Pipeline
text_clf = Pipeline([('vect', CountVectorizer()),
                     ('tfidf', TfidfTransformer()),
                     ('clf', DecisionTreeClassifier(max_depth=10)),
                     ])
# Transform the data and train the classifier on the training set
text_clf = text_clf.fit(X_train, y_train)
# Transform the data and perform predictions on the test set
y_test_pred = text_clf.predict(X_test)
from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StringIndexer
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml import Pipeline
# Instantiate all the Estimators and Transformers necessary
tokenizer = Tokenizer(inputCol='news', outputCol='news_words')
hashing_tf = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol='news_tf', numFeatures=10000)
idf = IDF(inputCol=hashing_tf.getOutputCol(), outputCol="news_tfidf")
string_indexer = StringIndexer(inputCol='target', outputCol='target_indexed')
dt = DecisionTreeClassifier(featuresCol=idf.getOutputCol(), labelCol=string_indexer.getOutputCol(), maxDepth=10)
# Instantiate a Pipeline
pipeline = Pipeline(stages=[tokenizer,
                            hashing_tf,
                            idf,
                            string_indexer,
                            dt])
# Transform the data and train the classifier on the training set
pipeline_model = pipeline.fit(df_train)
# Transform the data and perform predictions on the test set
df_test_pred = pipeline_model.transform(df_test)
df_test_pred.show(5)
from sklearn.metrics import precision_score
# Evaluate the predictions done on the test set
precision_score(y_test_pred, y_test, average='micro')
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# Instantiate a MulticlassClassificationEvaluator with the precision metric
evaluator = MulticlassClassificationEvaluator(predictionCol='prediction', labelCol='target_indexed',
                                              metricName='precision')
# Evaluate the predictions done on the test set
evaluator.evaluate(df_test_pred)
from sklearn.model_selection import GridSearchCV
# Create the parameters grid
parameters = {'tfidf__use_idf': (True, False),
              'clf__max_depth': (10, 20)
              }
# Instantiate a GridSearchCV object with the pipeline, the parameters grid and the scoring metric
gs_clf = GridSearchCV(text_clf, parameters, scoring='precision_micro', n_jobs=-1)
# Transform the data and train the classifier on the training set
gs_clf = gs_clf.fit(X_train, y_train)
# Transform the data and perform predictions on the test set
y_test_pred = gs_clf.predict(X_test)
# Evaluate the predictions done on the test set
precision_score(y_test_pred, y_test, average='micro')
from pyspark.ml.tuning import ParamGridBuilder
from pyspark.ml.tuning import CrossValidator
# Instantiation of a ParamGridBuilder
grid = (ParamGridBuilder()
        .baseOn([evaluator.metricName, 'precision'])
        .addGrid(dt.maxDepth, [10, 20])
        .build())
# Instantiation of a CrossValidator
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=grid, evaluator=evaluator)
# Transform the data and train the classifier on the training set
cv_model = cv.fit(df_train)
# Transform the data and perform predictions on the test set
df_test_pred = cv_model.transform(df_test)
# Evaluate the predictions done on the test set
evaluator.evaluate(df_test_pred)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1.
Step2: b.
Step3: Exercise 2.
Step4: Exercise 3.
Step5: X would be -30 when the sample $\omega$ has no 6s, 50 when it has one, 75 when it has two, and 100 when it has three. The probability distribution of such a variable is the binomial with $p = 1/6$, $n = 3$, and $k$ the number of 6s.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy.special import binom
import matplotlib.pylab as plt
from scipy.special import factorial as fact
%matplotlib inline
def binomial(p, n, k):
    return binom(n, k) * p ** k * (1 - p) ** (n - k)
p = 4. / 100
np.sum(binomial(p, 150, np.arange(5)))
np.sum(binomial(p, 150, np.arange(5, 10)))
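As a sanity check on the `binomial` helper, the probabilities of all 151 possible outcomes should sum to one. Here is a self-contained, pure-stdlib version of the same pmf (using `math.comb` in place of `scipy.special.binom`):

```python
from math import comb

def binomial_pmf(p, n, k):
    # same formula as the binomial() helper above
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

total = sum(binomial_pmf(4 / 100, 150, k) for k in range(151))   # should be ~1.0
```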
plt.bar(np.arange(20), binomial(p, 150, np.arange(20)))
plt.bar(np.arange(5), binomial(p, 150, np.arange(5)))
plt.bar(np.arange(5, 10), binomial(p, 150, np.arange(5,10)))
plt.xlabel('# defectives')
plt.ylabel('P(X=k)')
def not_same_birthday(q):
    return np.prod((365 - np.arange(q)) / 365)
q = 45
p = np.empty(q - 1)
for i in range(1, q):
    p[i - 1] = 1 - not_same_birthday(i)
plt.plot(np.arange(1, q), p)
plt.plot(23, 1 - not_same_birthday(23), 'r+', label='23 people')
plt.grid()
plt.ylabel('Probability')
plt.xlabel('q')
plt.legend()
1 - not_same_birthday(23)
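The analytic answer (about 0.507 for 23 people) can be cross-checked with a quick seeded Monte-Carlo simulation; this sketch is an illustration added here, not part of the original exercise:

```python
import random

random.seed(0)                       # seeded so the estimate is reproducible
trials = 20000
hits = 0
for _ in range(trials):
    birthdays = [random.randrange(365) for _ in range(23)]
    if len(set(birthdays)) < 23:     # at least two people share a birthday
        hits += 1
estimate = hits / trials             # should land close to 0.507
```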
import itertools
x = [1, 2, 3, 4, 5, 6]
omega = set([p for p in itertools.product(x, repeat=3)])
print(r'Omega has', len(omega), 'elements and they are:')
print(omega)
g = binomial(1 / 6, 3, np.arange(4)) * np.array([-30, 50, 75, 100])
np.sum(g)
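The same expectation can be worked out exactly with fractions: P(no 6) = 125/216, P(one 6) = 75/216, P(two) = 15/216, P(three) = 1/216, so E[X] = (125*(-30) + 75*50 + 15*75 + 1*100)/216 = 1225/216, roughly 5.67. A quick check of that arithmetic:

```python
from fractions import Fraction

probs = [Fraction(125, 216), Fraction(75, 216), Fraction(15, 216), Fraction(1, 216)]
payoffs = [-30, 50, 75, 100]
expected = sum(p * x for p, x in zip(probs, payoffs))   # exact expected winnings
```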
plt.bar(np.arange(4), g)
plt.plot([-.5, 3.5], np.ones(2) * np.sum(g), 'r')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Of course, we need a better way to figure out how well we’ve fit the data than staring at the graph.
Step2: Doing Statistics with statsmodels
|
<ASSISTANT_TASK:>
Python Code:
num_friends_good = [49,41,40,25,21,21,19,19,18,18,16,15,15,15,15,14,14,13,13,13,13,12,12,11,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,8,8,8,8,8,8,8,8,8,8,8,8,8,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
daily_minutes_good = [68.77,51.25,52.08,38.36,44.54,57.13,51.4,41.42,31.22,34.76,54.01,38.79,47.59,49.1,27.66,41.03,36.73,48.65,28.12,46.62,35.57,32.98,35,26.07,23.77,39.73,40.57,31.65,31.21,36.32,20.45,21.93,26.02,27.34,23.49,46.94,30.5,33.8,24.23,21.4,27.94,32.24,40.57,25.07,19.42,22.39,18.42,46.96,23.72,26.41,26.97,36.76,40.32,35.02,29.47,30.2,31,38.11,38.18,36.31,21.03,30.86,36.07,28.66,29.08,37.28,15.28,24.17,22.31,30.17,25.53,19.85,35.37,44.6,17.23,13.47,26.33,35.02,32.09,24.81,19.33,28.77,24.26,31.98,25.73,24.86,16.28,34.51,15.23,39.72,40.8,26.06,35.76,34.76,16.13,44.04,18.03,19.65,32.62,35.59,39.43,14.18,35.24,40.13,41.82,35.45,36.07,43.67,24.61,20.9,21.9,18.79,27.61,27.21,26.61,29.77,20.59,27.53,13.82,33.2,25,33.1,36.65,18.63,14.87,22.2,36.81,25.53,24.62,26.25,18.21,28.08,19.42,29.79,32.8,35.99,28.32,27.79,35.88,29.06,36.28,14.1,36.63,37.49,26.9,18.58,38.48,24.48,18.95,33.55,14.24,29.04,32.51,25.63,22.22,19,32.73,15.16,13.9,27.2,32.01,29.27,33,13.74,20.42,27.32,18.23,35.35,28.48,9.08,24.62,20.12,35.26,19.92,31.02,16.49,12.16,30.7,31.22,34.65,13.13,27.51,33.2,31.57,14.1,33.42,17.44,10.12,24.42,9.82,23.39,30.93,15.03,21.67,31.09,33.29,22.61,26.89,23.48,8.38,27.81,32.35,23.84]
alpha, beta = 22.9475, 0.90386
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(num_friends_good, daily_minutes_good)
plt.plot(num_friends_good, [alpha + beta*i for i in num_friends_good], 'b-')
plt.xlabel('# of friends', fontsize = 20)
plt.ylabel('minutes per day', fontsize = 20)
plt.title('simple linear regression model', fontsize = 20)
plt.show()
# https://github.com/computational-class/machinelearninginaction/blob/master/Ch08/regression.py
import pandas as pd
import random
dat = pd.read_csv('../data/ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi*.3 + .5*random.random() for yi in dat['y']]
dat.head()
from numpy import mat, linalg, corrcoef
def standRegres(xArr, yArr):
    xMat = mat(xArr); yMat = mat(yArr).T
    xTx = xMat.T * xMat
    if linalg.det(xTx) == 0.0:
        print("This matrix is singular, cannot do inverse")
        return
    ws = xTx.I * (xMat.T * yMat)
    return ws
xs = [[dat.x1[i], dat.x2[i], dat.x3[i]] for i in dat.index]
y = dat.y
print(xs[:2])
ws = standRegres(xs, y)
print(ws)
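`standRegres` solves the normal equations $\hat{w} = (X^T X)^{-1} X^T y$; on well-conditioned data this should agree with NumPy's least-squares solver. A self-contained check on small synthetic data (not the `ex0.txt` columns):

```python
import numpy as np

# small synthetic design matrix (intercept column + one feature) and response
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.1, 4.9, 7.0])

w_normal = np.linalg.inv(X.T @ X) @ (X.T @ y)        # normal equations, as in standRegres
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)      # NumPy's least-squares solver
```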
xMat=mat(xs)
yMat=mat(y)
yHat = xMat*ws
xCopy=xMat.copy()
xCopy.sort(0)
yHat=xCopy*ws
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(xMat[:,1].flatten().A[0], yMat.T[:,0].flatten().A[0])
ax.plot(xCopy[:,1],yHat, 'r-')
plt.ylim(0, 5)
plt.show()
yHat = xMat*ws
corrcoef(yHat.T, yMat)
import statsmodels.api as sm
import statsmodels.formula.api as smf
dat = pd.read_csv('ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi * .3 - .1 * random.random() for yi in dat.y]  # use the freshly loaded dat.y, not the stale y from the earlier cell
dat.head()
results = smf.ols('y ~ x2 + x3', data=dat).fit()
results.summary()
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(results, fig = fig)
plt.show()
import numpy as np
X = np.array(num_friends_good)
X = sm.add_constant(X, prepend=False)
mod = sm.OLS(daily_minutes_good, X)
res = mod.fit()
print(res.summary())
fig = plt.figure(figsize=(6,8))
fig = sm.graphics.plot_partregress_grid(res, fig = fig)
plt.show()
<END_TASK>
|