| text_prompt (stringlengths 168–30.3k) | code_prompt (stringlengths 67–124k) |
|---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating an instance of MeasurementControl
Step2: The InstrumentMonitor can be used to see the parameters of any instrument connected to the station; it updates during the loop initiated by MeasurementControl.
Step3: Create instruments used in the experiment
Step4: A 1D hard measurement
Step5: By setting persist_mode = True we can see a copy of the last measurements
Step6: A simple 1D soft measurement
Step7: You can play around a bit with the options in the MC
Step8: A simple 2D measurement
Step9: 2D combination of a hard inner and soft outer loop
Step10: A Hard measurement that uses soft averaging
Step11: 2D soft averaging
Step12: Starting an adaptive measurement
|
<ASSISTANT_TASK:>
Python Code:
import pycqed as pq
import numpy as np
from pycqed.measurement import measurement_control
from pycqed.measurement.sweep_functions import None_Sweep
import pycqed.measurement.detector_functions as det
from qcodes import station
station = station.Station()
MC = measurement_control.MeasurementControl('MC',live_plot_enabled=True, verbose=True)
MC.station = station
station.add_component(MC)
from pycqed.instrument_drivers.virtual_instruments import instrument_monitor as im
IM = im.InstrumentMonitor('IM', station)
station.add_component(IM)
# Link the instrument monitor to the MC so that it gets updated in the loop
MC.instrument_monitor('IM')
IM.update()
IM.update_interval(.1)
IM.update()
from pycqed.instrument_drivers.physical_instruments.dummy_instruments import DummyParHolder
dummy_instrument = DummyParHolder('dummy_instrument')
station.add_component(dummy_instrument)
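# soft_avg(N) makes the MC repeat the full measurement N times in software and average the results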
MC.soft_avg(15)
MC.persist_mode(True)
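# In a 'hard' sweep the detector acquires data for all sweep points in a single call;
# in a 'soft' sweep the MC sets each sweep point from software and reads the detector point by point.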
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=0.5, delay=.02))
dat2 = MC.run('dummy_hard persistent')
data_set2 = dat2['dset']
dummy_instrument.x(145/134545)
IM.update()
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dummy_instrument.noise(1)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test')
data_set = dat['dset']
# the second plot will also show the first line
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
dat2= MC.run('1D test-persist')
data_set2 = dat2['dset']
dummy_instrument.delay(.01)
MC.soft_avg(15)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_points(np.linspace(-1,1,30))
MC.set_detector_function(det.Dummy_Detector_Soft())
dat = MC.run('1D test')
data_set = dat['dset']
MC.persist_mode(True) # Turns on and off persistent plotting
MC.verbose(True)
MC.plotting_interval(.2)
MC.live_plot_enabled(True)
dummy_instrument.delay(.0001)
MC.soft_avg(4)
sweep_pts = np.linspace(-2, 2, 30)
sweep_pts_2D = np.linspace(-2, 2, 5)
MC.set_sweep_function(dummy_instrument.x)
MC.set_sweep_function_2D(dummy_instrument.y)
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(dummy_instrument.parabola)
dat=MC.run('test', mode='2D')
data_set = dat['dset']
MC.soft_avg(1)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 30)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(delay=.05, noise=.1))
dat = MC.run('2D_hard', mode='2D')
data_set = dat['dset']
MC.soft_avg(4)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_points(np.linspace(0, 10, 30))
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.02))
dat = MC.run('dummy_hard')
data_set = dat['dset']
MC.soft_avg(10)
sweep_pts = np.linspace(0, 10, 30)
sweep_pts_2D = np.linspace(0, 10, 5)
MC.set_sweep_function(None_Sweep(sweep_control='hard'))
MC.set_sweep_function_2D(None_Sweep(sweep_control='soft'))
MC.set_sweep_points(sweep_pts)
MC.set_sweep_points_2D(sweep_pts_2D)
MC.set_detector_function(det.Dummy_Detector_Hard(noise=1.5, delay=.001))
dat = MC.run('dummy_hard_2D', mode='2D')
data_set = dat['dset']
dummy_instrument.delay(.05)
dummy_instrument.noise(2)
from pycqed.measurement.optimization import nelder_mead
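# In adaptive mode the MC does not use fixed sweep points: the adaptive function (here Nelder-Mead)
# picks the next point to measure based on previously measured detector values,
# starting from 'x0' with the given 'initial_step'.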
MC.soft_avg(1)
dummy_instrument
MC.set_sweep_functions([dummy_instrument.x, dummy_instrument.y])
MC.set_adaptive_function_parameters({'adaptive_function':nelder_mead,
'x0':[-5,-5], 'initial_step': [2.5, 2.5]})
dummy_instrument.noise(2)
MC.set_detector_function(dummy_instrument.parabola)
dat = MC.run('1D test', mode='adaptive')
data_set = dat['dset']
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Description
Step7: 1.4. Land Atmosphere Flux Exchanges
Step8: 1.5. Atmospheric Coupling Treatment
Step9: 1.6. Land Cover
Step10: 1.7. Land Cover Change
Step11: 1.8. Tiling
Step12: 2. Key Properties --> Conservation Properties
Step13: 2.2. Water
Step14: 2.3. Carbon
Step15: 3. Key Properties --> Timestepping Framework
Step16: 3.2. Time Step
Step17: 3.3. Timestepping Method
Step18: 4. Key Properties --> Software Properties
Step19: 4.2. Code Version
Step20: 4.3. Code Languages
Step21: 5. Grid
Step22: 6. Grid --> Horizontal
Step23: 6.2. Matches Atmosphere Grid
Step24: 7. Grid --> Vertical
Step25: 7.2. Total Depth
Step26: 8. Soil
Step27: 8.2. Heat Water Coupling
Step28: 8.3. Number Of Soil layers
Step29: 8.4. Prognostic Variables
Step30: 9. Soil --> Soil Map
Step31: 9.2. Structure
Step32: 9.3. Texture
Step33: 9.4. Organic Matter
Step34: 9.5. Albedo
Step35: 9.6. Water Table
Step36: 9.7. Continuously Varying Soil Depth
Step37: 9.8. Soil Depth
Step38: 10. Soil --> Snow Free Albedo
Step39: 10.2. Functions
Step40: 10.3. Direct Diffuse
Step41: 10.4. Number Of Wavelength Bands
Step42: 11. Soil --> Hydrology
Step43: 11.2. Time Step
Step44: 11.3. Tiling
Step45: 11.4. Vertical Discretisation
Step46: 11.5. Number Of Ground Water Layers
Step47: 11.6. Lateral Connectivity
Step48: 11.7. Method
Step49: 12. Soil --> Hydrology --> Freezing
Step50: 12.2. Ice Storage Method
Step51: 12.3. Permafrost
Step52: 13. Soil --> Hydrology --> Drainage
Step53: 13.2. Types
Step54: 14. Soil --> Heat Treatment
Step55: 14.2. Time Step
Step56: 14.3. Tiling
Step57: 14.4. Vertical Discretisation
Step58: 14.5. Heat Storage
Step59: 14.6. Processes
Step60: 15. Snow
Step61: 15.2. Tiling
Step62: 15.3. Number Of Snow Layers
Step63: 15.4. Density
Step64: 15.5. Water Equivalent
Step65: 15.6. Heat Content
Step66: 15.7. Temperature
Step67: 15.8. Liquid Water Content
Step68: 15.9. Snow Cover Fractions
Step69: 15.10. Processes
Step70: 15.11. Prognostic Variables
Step71: 16. Snow --> Snow Albedo
Step72: 16.2. Functions
Step73: 17. Vegetation
Step74: 17.2. Time Step
Step75: 17.3. Dynamic Vegetation
Step76: 17.4. Tiling
Step77: 17.5. Vegetation Representation
Step78: 17.6. Vegetation Types
Step79: 17.7. Biome Types
Step80: 17.8. Vegetation Time Variation
Step81: 17.9. Vegetation Map
Step82: 17.10. Interception
Step83: 17.11. Phenology
Step84: 17.12. Phenology Description
Step85: 17.13. Leaf Area Index
Step86: 17.14. Leaf Area Index Description
Step87: 17.15. Biomass
Step88: 17.16. Biomass Description
Step89: 17.17. Biogeography
Step90: 17.18. Biogeography Description
Step91: 17.19. Stomatal Resistance
Step92: 17.20. Stomatal Resistance Description
Step93: 17.21. Prognostic Variables
Step94: 18. Energy Balance
Step95: 18.2. Tiling
Step96: 18.3. Number Of Surface Temperatures
Step97: 18.4. Evaporation
Step98: 18.5. Processes
Step99: 19. Carbon Cycle
Step100: 19.2. Tiling
Step101: 19.3. Time Step
Step102: 19.4. Anthropogenic Carbon
Step103: 19.5. Prognostic Variables
Step104: 20. Carbon Cycle --> Vegetation
Step105: 20.2. Carbon Pools
Step106: 20.3. Forest Stand Dynamics
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
Step109: 22.2. Growth Respiration
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
Step111: 23.2. Allocation Bins
Step112: 23.3. Allocation Fractions
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
Step115: 26. Carbon Cycle --> Litter
Step116: 26.2. Carbon Pools
Step117: 26.3. Decomposition
Step118: 26.4. Method
Step119: 27. Carbon Cycle --> Soil
Step120: 27.2. Carbon Pools
Step121: 27.3. Decomposition
Step122: 27.4. Method
Step123: 28. Carbon Cycle --> Permafrost Carbon
Step124: 28.2. Emitted Greenhouse Gases
Step125: 28.3. Decomposition
Step126: 28.4. Impact On Soil Properties
Step127: 29. Nitrogen Cycle
Step128: 29.2. Tiling
Step129: 29.3. Time Step
Step130: 29.4. Prognostic Variables
Step131: 30. River Routing
Step132: 30.2. Tiling
Step133: 30.3. Time Step
Step134: 30.4. Grid Inherited From Land Surface
Step135: 30.5. Grid Description
Step136: 30.6. Number Of Reservoirs
Step137: 30.7. Water Re Evaporation
Step138: 30.8. Coupled To Atmosphere
Step139: 30.9. Coupled To Land
Step140: 30.10. Quantities Exchanged With Atmosphere
Step141: 30.11. Basin Flow Direction Map
Step142: 30.12. Flooding
Step143: 30.13. Prognostic Variables
Step144: 31. River Routing --> Oceanic Discharge
Step145: 31.2. Quantities Transported
Step146: 32. Lakes
Step147: 32.2. Coupling With Rivers
Step148: 32.3. Time Step
Step149: 32.4. Quantities Exchanged With Rivers
Step150: 32.5. Vertical Grid
Step151: 32.6. Prognostic Variables
Step152: 33. Lakes --> Method
Step153: 33.2. Albedo
Step154: 33.3. Dynamics
Step155: 33.4. Dynamic Lake Extent
Step156: 33.5. Endorheic Basins
Step157: 34. Lakes --> Wetlands
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'land')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
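# Every property below follows the same two-step pattern: DOC.set_id(...) selects the
# CMIP6 property being documented, and the author then records its value with DOC.set_value(...).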
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we start making changes to the driver log and build a new driver log file.
Step2: We randomly sample 40% of the examples with non-zero steering angles, flip them, and reverse their steering angle.
Step3: We have to upsample under-represented examples, which we do by shifting images and adjusting their corresponding steering angles.
Step4: The histogram looks more balanced after the upsampling, but this caused us to create steering angles outside the bounds of [-1, 1].
Step5: Now the histogram is mostly evenly distributed, and the question becomes: how much data can we trim and still train the model?
Step6: Now we have a decent starting point and we can start the pipeline process!!
Step7: Last step is to create the shift image and load image methods.
Step8: We can actually load everything into memory, so we don't need to use a generator!
Step16: We are now ready to go into creating our model!
Step18: Here is the final 52 param model that we are using for this project.
Step19: To run the model training below you'll need to download and run the vis tool
Step20: Notice that the loss doesn't change greatly, and because we use an aggressive learning rate we can benefit from early termination.
|
<ASSISTANT_TASK:>
Python Code:
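# Drop 90% of the rows with exactly zero steering angle to reduce the strong bias towards driving straight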
zero_steering = drive_log_df[drive_log_df.steering == 0].sample(frac=0.9)
drive_log_df = drive_log_df.drop(zero_steering.index)
plt.figure(figsize=(10,4))
drive_log_df.steering.hist(bins=100, color='r')
plt.xlabel('steering angle bins')
plt.ylabel('counts')
plt.show()
print("Current Dataset Size: ", len(drive_log_df.steering))
def update_left_right_steering_correction(df):
records = []
for index, row in df.iterrows():
left = row.left
center = row.center
right = row.right
steering = row.steering
records.append({
'image': left,
'steering': steering + 0.23
})
records.append({
'image': right,
'steering': steering - 0.23
})
records.append({
'image': center,
'steering': steering
})
return pd.DataFrame(data=records, columns=['image', 'steering'])
new_drive_log = update_left_right_steering_correction(drive_log_df)
new_drive_log.tail()
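# Randomly pick 40% of the non-zero-steering rows, mark them as flipped (the image itself is
# flipped later when it is loaded) and negate their steering angle, then append them to the frame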
def flip_images_augmentation(df):
new_df = df[df.steering != 0].sample(frac=0.4)
df.loc[:,'is_flipped'] = False
new_df.loc[:,'is_flipped'] = True
left_rows = (new_df.steering < 0)
right_rows = (new_df.steering > 0)
new_df.loc[left_rows,'steering'] = new_df[left_rows].steering.abs()
new_df.loc[right_rows, 'steering'] = new_df[right_rows].steering * -1
return pd.concat([df, new_df])
augmented = flip_images_augmentation(new_drive_log)
plt.figure(figsize=(10,4))
augmented.steering.hist(bins=100, color='r')
plt.xlabel('steering angle bins')
plt.ylabel('counts')
plt.show()
print("Current Dataset Size: ", len(augmented.steering))
def shift_img_augmentation(df):
df.loc[:,'random_shift'] = 0
new_df = df[df.steering != 0].copy()
df.loc[:,'is_shift'] = False
new_df.loc[:,'is_shift'] = True
max_shift = 30
max_ang = 0.17
def row_shift_update(row):
random_shift = np.random.randint(-max_shift, max_shift + 1)
row.random_shift = random_shift
updated_steer = row.steering + (random_shift / max_shift) * max_ang
if abs(updated_steer) > 1:
updated_steer = -1 if (updated_steer < 0) else 1
row.steering = updated_steer
return row
new_df = new_df.apply(row_shift_update, axis=1)
return pd.concat([df, new_df])
shifted = shift_img_augmentation(augmented)
plt.figure(figsize=(10,4))
shifted.steering.hist(bins=100, color='r')
plt.xlabel('steering angle bins')
plt.ylabel('counts')
plt.show()
print("Current Dataset Size: ", len(shifted.steering))
shifted.tail(1)
plt.show()
def process_driver_log(driver_log):
update_log = update_left_right_steering_correction(driver_log)
update_log = flip_images_augmentation(update_log)
update_log = shift_img_augmentation(update_log)
#update_log = change_brightness_augmentation(update_log)
#reset index since we it's no longer good.
update_log = update_log.reset_index(drop=True)
#drop outbound steering examples to be between [-1,1] !
outbound_steering = update_log[abs(update_log.steering) > 1]
update_log = update_log.drop(outbound_steering.index)
return update_log
processed_log = process_driver_log(drive_log_df)
plt.figure(figsize=(10,4))
processed_log.steering.hist(bins=100, color='r')
plt.xlabel('steering angle bins')
plt.ylabel('counts')
plt.show()
print("Current Dataset Size: ", len(processed_log.steering))
hist, counts = np.histogram(processed_log.steering, bins=100)
upper_limit = 400
over = [(i, v) for i, v in enumerate(hist) if v > upper_limit ]
over_ranges = [(counts[i],counts[i+1]) for i,_ in over]
#loop through ranges and create a mask for each bin
masks = ["processed_log[(processed_log.steering >= {0}) & (processed_log.steering < {1})]".format(l,r) for l,r in over_ranges]
for mask in masks:
selected = eval(mask)
selected_length = len(selected)
frac_to_drop = (selected_length-upper_limit)/selected_length
samples_to_drop = selected.sample(frac=frac_to_drop)
processed_log = processed_log.drop(samples_to_drop.index)
plt.figure(figsize=(10,4))
processed_log.steering.hist(bins=100, color='g')
plt.xlabel('steering angle bins')
plt.ylabel('counts')
plt.show()
print("Current Dataset Size: ", len(processed_log.steering))
processed_log.to_csv('preprocessed_driver_log.csv')
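# Keep only the road area (image rows 70-140), resize it to 64x64 and use just the HSV saturation channel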
def crop_top_and_bottom(image):
resized = cv2.resize(image[70:140], (64,64), cv2.INTER_AREA)
return cv2.cvtColor(resized, cv2.COLOR_RGB2HSV)[:,:,1]
plt.figure(figsize=(10,4))
random_image = drive_log_df.iloc[0]
image = cv2.imread("./data/{0}".format(random_image.center))
print(image.shape)
plt.figure(1)
plt.imshow(image)
plt.show()
cropped_image = crop_top_and_bottom(image)
print(cropped_image.shape)
plt.figure(2)
plt.imshow(cropped_image)
plt.show()
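# Translate an image horizontally by 'random_shift' pixels using an affine warp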
def shift_img(image, random_shift):
rows, cols = image.shape
mat = np.float32([[1, 0, random_shift], [0, 1, 0]])
return cv2.warpAffine(image, mat, (cols, rows))
def load_image(row):
image = cv2.imread("./data/{0}".format(row.image.strip()))
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = crop_top_and_bottom(image)
if(row.is_flipped):
image = cv2.flip(image,1)
if(row.is_shift):
image = shift_img(image, row.random_shift)
return image
def load_all_features_and_labels(df):
images = [load_image(row) for _, row in df.iterrows()]
return np.array(images).reshape((len(images), 64, 64, 1)), df.steering
features, labels = load_all_features_and_labels(processed_log)
print(features.shape, labels.shape)
def fire_module(x, fire_id, squeeze=16, expand=64):
    """This is a modified version of: https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py#L14
    Changes made:
    * Uses ELU activation
    * Only supports tf
    """
s_id = 'fire' + str(fire_id) + '/'
c_axis = 3
sq1x1 = "squeeze1x1"
exp1x1 = "expand1x1"
exp3x3 = "expand3x3"
elu = "elu_"
x = Convolution2D(squeeze, 1, 1, border_mode='valid', name=s_id + sq1x1)(x)
x = Activation('elu', name=s_id + elu + sq1x1)(x)
left = Convolution2D(expand, 1, 1, border_mode='valid', name=s_id + exp1x1)(x)
left = Activation('elu', name=s_id + elu + exp1x1)(left)
right = Convolution2D(expand, 3, 3, border_mode='same', name=s_id + exp3x3)(x)
right = Activation('elu', name=s_id + elu + exp3x3)(right)
x = merge([left, right], mode='concat', concat_axis=c_axis, name=s_id + 'concat')
return x
def squeeze_model_10000():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(2, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=2, expand=6)
x = fire_module(x, fire_id=3, squeeze=16, expand=64)
x = Dropout(0.2, name='drop9')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet10k.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-3), loss='mse')
return model
def squeeze_model_1005():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(2, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=2, expand=6)
x = fire_module(x, fire_id=3, squeeze=6, expand=12)
x = Dropout(0.2, name='drop9')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet1005.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-1), loss='mse')
return model
def squeeze_model_329():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(2, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=2, expand=6)
x = fire_module(x, fire_id=3, squeeze=2, expand=6)
x = Dropout(0.2, name='drop9')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet329.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-3), loss='mse')
return model
def squeeze_model_159():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(1, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=2, expand=6)
x = Dropout(0.2, name='drop9')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet159.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-1), loss='mse')
return model
def squeeze_model_63():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(1, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=2, expand=2)
x = Dropout(0.2, name='drop9')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet63.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-1), loss='mse')
return model
def squeeze_model_52():
    """This model is a modification from the reference:
    https://github.com/rcmalli/keras-squeezenet/blob/master/squeezenet.py
    Normalizing will be done in the model directly for GPU speedup.
    """
input_shape=(64, 64, 1)
input_img = Input(shape=input_shape)
x = Lambda(lambda x: x/127.5 - 1.,input_shape=input_shape)(input_img)
x = Convolution2D(2, 3, 3, subsample=(2, 2), border_mode='valid', name='conv1')(x)
x = Activation('elu', name='elu_conv1')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), name='pool1')(x)
x = fire_module(x, fire_id=2, squeeze=1, expand=2)
x = Dropout(0.2, name='drop3')(x)
x = GlobalAveragePooling2D()(x)
out = Dense(1, name='loss')(x)
model = Model(input=input_img, output=[out])
plot(model, to_file='SqueezeNet52.png', show_shapes=True)
model.compile(optimizer=Adam(lr=1e-1), loss='mse')
return model
model = squeeze_model_52()
model.summary()
class CustomEarlyStop(Callback):
    """Custom callback that stops training at the end of an epoch once val_loss reaches 0.039 or below.
    This callback assumes the validation loss is being logged.
    """
def __init__(self, monitor='val_loss'):
super(CustomEarlyStop, self).__init__()
self.monitor = monitor
def on_epoch_end(self, epoch, logs=None):
val_loss = logs.get(self.monitor)
if val_loss <= 0.039:
print("\nEarly Stop on Epoch {0} with Val_loss {1}\n".format(epoch,val_loss))
self.model.stop_training = True
#This is a callback that works with https://github.com/fchollet/hualos and gives you simple vis of the training loss
remote = RemoteMonitor(headers=None)
early_stop = CustomEarlyStop(monitor='val_loss')
h = model.fit(x=features,y=labels,verbose=1,batch_size=128,nb_epoch=50,validation_split=0.3, callbacks=[remote, early_stop])
filename = 'model'#"squeezemodel{0}".format(squeezemodel.count_params())
model.save_weights(filename+".h5", True)
with open(filename+'.json', 'w') as outfile:
json.dump(model.to_json(), outfile)
print("Saved model weights and configuration")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Each row is an observed flower. These rows are called samples and the number of rows is called n_samples.
Step2: Let's split the data into a features matrix X and a target vector y, following the Scikit-Learn convention
Step3: To summarize, in order to use Scikit-Learn, the data layout should look like this
Step4: 1. Choose a class of model
Step5: 2. Model instantiation with hyperparameters
Step6: Other models have different parameters. Refer to the documentation.
Step7: 4. Fit the model to your data (i.e. learning)
Step8: This fit() command causes a number of model-dependent internal computations to take place.
Step9: Comparing to the data definition, we see that they are very close to the input slope of 2 and intercept of -1.
Step10: Again, we have to coerce our data into a [n_samples, n_features] feature matrix
Step11: Finally, let's visualize the results by plotting first the raw data, and then this model fit
Step12: Training and Test Set
|
<ASSISTANT_TASK:>
Python Code:
import seaborn as sns
iris = sns.load_dataset('iris')
iris.head()
%matplotlib inline
import seaborn as sns; sns.set()
sns.pairplot(iris, hue='species', size=1.5);
X_iris, y_iris = iris.drop('species', axis=1), iris['species']
X_iris.shape, y_iris.shape
import matplotlib.pyplot as plt
import numpy as np
rng = np.random.RandomState(42)
x = 10 * rng.rand(50)
y = 2 * x - 1 + rng.randn(50)
plt.scatter(x, y);
from sklearn.linear_model import LinearRegression
model = LinearRegression(fit_intercept=True)
model
X = x[:, np.newaxis]
X.shape
model.fit(X, y)
model.coef_
model.intercept_
xfit = np.linspace(-1, 11)
Xfit = xfit[:, np.newaxis]
yfit = model.predict(Xfit)
plt.scatter(x, y)
plt.plot(xfit, yfit);
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, random_state=1)
Xtrain.shape, Xtest.shape
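# Hedged illustration (an assumption, not in the original steps): the same
# choose-model / instantiate / fit / predict pattern applied to the train/test split.
# GaussianNB is just one possible model choice here.
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score
clf = GaussianNB()                      # choose and instantiate a model
clf.fit(Xtrain, ytrain)                 # fit on the training split only
y_pred = clf.predict(Xtest)             # predict labels for the held-out split
print(accuracy_score(ytest, y_pred))    # fraction of correctly classified test flowers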
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Forward modelling
|
<ASSISTANT_TASK:>
Python Code:
# Assumed setup (reconstructed): the original cell relies on sympy being star-imported and
# on the symbols/functions below being defined; numpy is needed further down.
from sympy import *
import numpy as np
x, y, z, t = symbols('x y z t')
m, s, h = symbols('m s h')
p = Function('p')
M, Q, D, E = Function('M'), Function('Q'), Function('D'), Function('E')
m = M(x,y,z)
q = Q(x,y,z,t)
d = D(x,y,z,t)
e = E(x,y,z)
# Choose dimension (2 or 3)
dim = 3
# Choose order
time_order = 2
space_order = 2
# half width for indexes, goes from -half to half
width_t = int(time_order/2)
width_h = int(space_order/2)
solvep = p(x,y,z,t+width_t*s)
solvepa = p(x,y,z,t-width_t*s)
# Indexes for finite differences
indx = []
indy = []
indz = []
indt = []
for i in range(-width_h,width_h+1):
indx.append(x + i * h)
indy.append(y + i * h)
indz.append(z + i* h)
for i in range(-width_t,width_t+1):
indt.append(t + i * s)
# Finite differences
dtt=as_finite_diff(p(x,y,z,t).diff(t,t),indt)
dxx=as_finite_diff(p(x,y,z,t).diff(x,x), indx)
dyy=as_finite_diff(p(x,y,z,t).diff(y,y), indy)
dzz=as_finite_diff(p(x,y,z,t).diff(z,z), indz)
dt=as_finite_diff(p(x,y,z,t).diff(t), indt)
lap = dxx + dyy + dzz
arglamb=[]
arglamba=[]
for i in range(-width_t,width_t):
arglamb.append( p(x,y,z,indt[i+width_t]))
arglamba.append( p(x,y,z,indt[i+width_t+1]))
for i in range(-width_h,width_h+1):
arglamb.append( p(indx[i+width_h],y,z,t))
arglamba.append( p(indx[i+width_h],y,z,t))
for i in range(-width_h,width_h+1):
arglamb.append( p(x,indy[i+width_h],z,t))
arglamba.append( p(x,indy[i+width_h],z,t))
for i in range(-width_h,width_h+1):
arglamb.append( p(x,y,indz[i+width_h],t))
arglamba.append( p(x,y,indz[i+width_h],t))
arglamb.extend((q , m, s, h, e))
arglamb=tuple(arglamb)
arglamba.extend((q , m, s, h, e))
arglamba=tuple(arglamba)
arglamb=[ii for n,ii in enumerate(arglamb) if ii not in arglamb[:n]]
arglamb
# Forward wave equation
wave_equation = m*dtt- lap - q + e*dt
stencil = solve(wave_equation,solvep)[0]
ts=lambdify(arglamb,stencil,"numpy")
stencil
# Adjoint wave equation
wave_equationA = m*dtt- lap - d - e*dt
stencilA = solve(wave_equationA,solvepa)[0]
tsA=lambdify(arglamba,stencilA,"numpy")
stencilA
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=300 #simulate until
xmin=-500.0 #left bound
xmax=500.0 #right bound...assume packet never reaches boundary
ymin=-600.0 #left bound
ymax=600.0 #right bound...assume packet never reaches boundary
zmin=-250.0 #left bound
zmax=400.0 #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on y grid
nz = int((zmax-zmin)/hstep) + 1 #number of points on z grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=0.0
ysrc=0.0
zsrc=25.0
#set source as Ricker wavelet for f0
def source(x,y,z,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2 and abs(z-zsrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
def dampz(z):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if z<nbpml:
return dampcoeff*((nbpml-z)/nbpml)**2
elif z>nz-nbpml-1:
return dampcoeff*((z-nz+nbpml)/nbpml)**2
else:
return 0.0
# True velocity
vel=np.ones((nx,ny,nz)) + 2.0
vel[:,:,int(nz/2):nz]=4.5
mt=vel**-2
def Forward(nt,nx,ny,nz,m):
u=np.zeros((nt+2,nx,ny,nz))
for ti in range(2,nt+2):
for a in range(1,nx-1):
for b in range(1,ny-1):
for c in range(1,nz-1):
src = source(xmin+a*hstep,ymin+b*hstep,zmin+c*hstep,tstep*ti)
damp=dampx(a)+dampy(b)+dampz(c)
u[ti,a,b,c]=ts(u[ti-2,a,b,c],
u[ti-1,a,b,c],
u[ti-1,a-1,b,c],
u[ti-1,a+1,b,c],
u[ti-1,a,b-1,c],
u[ti-1,a,b+1,c],
u[ti-1,a,b,c-1],
u[ti-1,a,b,c+1],
src , m[a,b,c], tstep, hstep, damp)
return u
u = Forward(nt,nx,ny,nz,mt)
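# Hedged visualization sketch (assumption, not in the original notebook): show a horizontal
# slice of the final wavefield snapshot with the matplotlib import made above.
plt.figure()
plt.imshow(np.transpose(u[-1, :, :, int(nz/2)]), cmap="seismic", aspect="auto")
plt.colorbar(label="pressure")
plt.xlabel("x grid index")
plt.ylabel("y grid index")
plt.title("Wavefield slice at z = nz/2, final time step")
plt.show()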
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position Receiver 1 (in grid points)
xrec2=800 # Position Receiver 2 (in grid points)
xrec3=1800 # Position Receiver 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
# Calculate first Lame parameter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
i_dx3=1.0/(dx**3)
c9=dt**3/24.0
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Update velocity
for kx in range(5,nx-4):
# Calculating spatial derivative
p_x=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
p_xxx=i_dx3*(-3.0)*(p[kx+1]-p[kx])+i_dx3*(1)*(p[kx+2]-p[kx-1])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*p_x-l[kx]*c9*1/(rho[kx]**2.0)*(p_xxx)
# Update pressure
for kx in range(5,nx-4):
# Calculating spatial derivative
vx_x= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2])
vx_xxx=i_dx3*(-3.0)*(vx[kx]-vx[kx-1])+i_dx3*(1)*(vx[kx+1]-vx[kx-2])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(vx_x)-l[kx]**2*c9*1/(rho[kx])*(vx_xxx)
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
## Save seismograms
np.save("Seismograms/FD_1D_DX4_DT4_LW",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So far, we have considered parametric methods that reduce inference
Step2: The code uses the histogram function from Numpy.
Step3: The train_test_split function makes it easy to split and
Step4: Programming Tip.
Step5: Programming Tip.
Step6: Programming Tip.
Step7: <!-- dom
Step8: This is tantamount to reorganizing the above previous code
Step9: The grid search iterates over all the elements in the params
Step10: Programming Tip.
Step11: We can use this data to construct a simple nearest neighbor
Step12: Programming Tip.
Step13: <!-- dom
Step14: which produces the following corresponding Figure.
Step15: The LeaveOneOut object is an iterable that produces a set of
Step16: The next block loops over the disjoint sets of training and test
Step17: The last line in the code above reports leave-one-out's estimated
Step18: The todense part reformats the sparse matrix that is
Step19: The sub-blocks show the windows of the the y data that are being
Step20: Or, more concisely checking all entries for approximate equality,
Step21: which shows that the results from the nearest neighbor
Step22: This code makes it possible to internally optimize over the bandwidth
Step23: Figure shows the kernel estimator (heavy
Step24: Curse of Dimensionality
Step25: <!-- # #ifdef SINGLE -->
Step26: <!-- dom
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
%pylab inline
from pprint import pprint
import textwrap
import sys, re
old_displayhook = sys.displayhook
def displ(x):
if x is None: return
print "\n".join(textwrap.wrap(repr(x).replace(' ',''),width=80))
sys.displayhook=displ
def generate_samples(n,ntrials=500):
phat = np.zeros((nbins,ntrials))
for k in range(ntrials):
d = rv.rvs(n)
phat[:,k],_=histogram(d,bins,density=True)
return phat
from sklearn.cross_validation import train_test_split
from sklearn.neighbors.kde import KernelDensity
import numpy as np
np.random.seed(123456)
from scipy.integrate import quad
from scipy import stats
rv= stats.beta(2,2)
n=100 # number of samples to generate
d = rv.rvs(n)[:,None] # generate samples as column-vector
train,test,_,_=train_test_split(d,d,test_size=0.5)
kdes=[KernelDensity(bandwidth=i).fit(train)
for i in [.05,0.1,0.2,0.3]]
import numpy as np
for i in kdes:
f = lambda x: np.exp(i.score_samples(x))
f2 = lambda x: f(x)**2
print 'h=%3.2f\t %3.4f'%(i.bandwidth,quad(f2,0,1)[0]
-2*np.mean(f(test)))
%matplotlib inline
from __future__ import division
from matplotlib.pylab import subplots
fig,ax=subplots()
xi = np.linspace(0,1,100)[:,None]
for i in kdes:
f=lambda x: np.exp(i.score_samples(x))
f2 = lambda x: f(x)**2
_=ax.plot(xi,f(xi),label='$h$='+str(i.bandwidth))
_=ax.set_xlabel('$x$',fontsize=28)
_=ax.set_ylabel('$y$',fontsize=28)
_=ax.plot(xi,rv.pdf(xi),'k:',lw=3,label='true')
_=ax.legend(loc=0)
ax2 = ax.twinx()
_=ax2.hist(d,20,alpha=.3,color='gray')
_=ax2.axis(ymax=50)
_=ax2.set_ylabel('count',fontsize=28)
fig.tight_layout()
#fig.savefig('fig-statistics/nonparametric_003.png')
class KernelDensityWrapper(KernelDensity):
def predict(self,x):
return np.exp(self.score_samples(x))
def score(self,test):
f = lambda x: self.predict(x)
f2 = lambda x: f(x)**2
return -(quad(f2,0,1)[0]-2*np.mean(f(test)))
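# Quick usage check of the wrapper (assuming the train/test split created above): score()
# returns the negated integrated-square-error estimate, so larger is better for GridSearchCV.
kdw = KernelDensityWrapper(bandwidth=0.2)
kdw.fit(train)
print 'wrapper score at h=0.20: ', kdw.score(test)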
from sklearn.grid_search import GridSearchCV
params = {'bandwidth':np.linspace(0.01,0.5,10)}
clf = GridSearchCV(KernelDensityWrapper(), param_grid=params,cv=2)
clf.fit(d)
print clf.best_params_
from pprint import pprint
pprint(clf.grid_scores_)
import numpy as np
from numpy import cos, pi
xi = np.linspace(0,1,100)[:,None]
xin = np.linspace(0,1,12)[:,None]
f0 = 1 # init frequency
BW = 5
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2))
from sklearn.neighbors import KNeighborsRegressor
knr=KNeighborsRegressor(2)
knr.fit(xin,y)
from matplotlib.pylab import subplots
fig,ax=subplots()
yi = cos(2*pi*(f0*xi+(BW/2.0)*xi**2))
_=ax.plot(xi,yi,'k--',lw=2,label=r'$y(x)$')
_=ax.plot(xin,y,'ko',lw=2,ms=11,color='gray',alpha=.8,label='$y(x_i)$')
_=ax.fill_between(xi.flat,yi.flat,knr.predict(xi).flat,color='gray',alpha=.3)
_=ax.plot(xi,knr.predict(xi),'k-',lw=2,label='$\hat{y}(x)$')
_=ax.set_aspect(1/4.)
_=ax.axis(ymax=1.05,ymin=-1.05)
_=ax.set_xlabel(r'$x$',fontsize=24)
_=ax.legend(loc=0)
fig.set_tight_layout(True)
#fig.savefig('fig-statistics/nonparametric_004.png')
knr=KNeighborsRegressor(3)
knr.fit(xin,y)
fig,ax=subplots()
_=ax.plot(xi,yi,'k--',lw=2,label=r'$y(x)$')
_=ax.plot(xin,y,'ko',lw=2,ms=11,color='gray',alpha=.8,label='$y(x_i)$')
_=ax.fill_between(xi.flat,yi.flat,knr.predict(xi).flat,color='gray',alpha=.3)
_=ax.plot(xi,knr.predict(xi),'k-',lw=2,label='$\hat{y}(x)$')
_=ax.set_aspect(1/4.)
_=ax.axis(ymax=1.05,ymin=-1.05)
_=ax.set_xlabel(r'$x$',fontsize=24)
_=ax.legend(loc=0)
fig.set_tight_layout(True)
#fig.savefig('fig-statistics/nonparametric_005.png')
from sklearn.cross_validation import LeaveOneOut
loo=LeaveOneOut(len(xin))
pprint(list(LeaveOneOut(3)))
out=[]
for train_index, test_index in loo:
_=knr.fit(xin[train_index],y[train_index])
out.append((knr.predict(xi[test_index])-y[test_index])**2)
print 'Leave-one-out Estimated Risk: ',np.mean(out),
_= knr.fit(xin,y) # fit on all data
S=(knr.kneighbors_graph(xin)).todense()/float(knr.n_neighbors)
print S[:5,:5]
print np.hstack([knr.predict(xin[:5]),(S*y)[:5]])#columns match
print np.allclose(knr.predict(xin),S*y)
xin = np.linspace(0,1,20)[:,None]
y = cos(2*pi*(f0*xin+(BW/2.0)*xin**2)).flatten()
from kernel_regression import KernelRegression
kr = KernelRegression(gamma=np.linspace(6e3,7e3,500))
kr.fit(xin,y)
fig,ax=subplots()
#fig.set_size_inches((12,4))
_=ax.plot(xi,kr.predict(xi),'k-',label='kernel',lw=3)
_=ax.plot(xin,y,'o',lw=3,color='gray',ms=12)
_=ax.plot(xi,yi,'--',color='gray',label='chirp')
_=ax.plot(xi,knr.predict(xi),'k-',label='nearest')
_=ax.axis(ymax=1.1,ymin=-1.1)
_=ax.set_aspect(1/4.)
_=ax.axis(ymax=1.05,ymin=-1.05)
_=ax.set_xlabel(r'$x$',fontsize=24)
_=ax.set_ylabel(r'$y$',fontsize=24)
_=ax.legend(loc=0)
#fig.savefig('fig-statistics/nonparametric_006.png')
sys.displayhook= old_displayhook
import numpy as np
v=np.random.rand(1000,2)-1/2.
from matplotlib.patches import Circle
from matplotlib.pylab import subplots
fig,ax=subplots()
fig.set_size_inches((5,5))
_=ax.set_aspect(1)
_=ax.scatter(v[:,0],v[:,1],color='gray',alpha=.3)
_=ax.add_patch(Circle((0,0),0.5,alpha=.8,lw=3.,fill=False))
#fig.savefig('fig-statistics/curse_of_dimensionality_001.pdf')
for d in [2,3,5,10,20,50]:
v=np.random.rand(5000,d)-1/2.
hist([np.linalg.norm(i) for i in v])
siz = [2,3,5,10,20,50]
fig,axs=subplots(3,2,sharex=True)
fig.set_size_inches((10,6))
#fig.set_size_inches((10,8))
for ax,k in zip(axs.flatten(),siz):
v=np.random.rand(5000,k)-1/2.
_=ax.hist([np.linalg.norm(i) for i in v],color='gray',normed=True);
_=ax.vlines(0.5,0,ax.axis()[-1]*1.1,lw=3)
_=ax.set_title('$d=%d$'%k,fontsize=20)
_=ax.tick_params(labelsize='small',top=False,right=False)
_=ax.spines['top'].set_visible(False)
_=ax.spines['right'].set_visible(False)
_=ax.spines['left'].set_visible(False)
_=ax.yaxis.set_visible(False)
_=ax.axis(ymax=3.5)
fig.set_tight_layout(True)
#fig.savefig('fig-statistics/curse_of_dimensionality_002.pdf')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first operation is always to initialize the lucene backend. This only needs to be done once for each running Python process
Step2: Tests
Step3: Stemming
Step4: Tokenizer
Step5: Analyzer
|
<ASSISTANT_TASK:>
Python Code:
import lucene
print(lucene.VERSION)
# We can check all the Lucene packages included in this distribution of Pylucene
for p in sorted(lucene.CLASSPATH.split(':')):
print(p)
# Init
if not lucene.getVMEnv():
lucene.initVM(vmargs=['-Djava.awt.headless=true'])
test_strings = (
'La lluvia en Sevilla es una pura maravilla',
'En un lugar de La Mancha, de cuyo nombre no quiero acordarme',
u'Con diez cañones por banda, viento en popa a toda vela' )
# An auxiliary function used in the tokenizer and analyzer examples
from org.apache.lucene.analysis.tokenattributes import CharTermAttribute
def fetch_terms(obj):
'''fetch all terms from a token list object, as strings'''
termAtt = obj.getAttribute(CharTermAttribute.class_)
try:
obj.clearAttributes()
obj.reset()
while obj.incrementToken():
yield termAtt.toString()
finally:
obj.end()
obj.close()
from lucene import JArray_char, JArray
from org.tartarus.snowball.ext import SpanishStemmer, EnglishStemmer
def stem(stemmer, word):
# Add the word
stemmer.setCurrent(JArray_char(word), len(word))
# Fire stemming
stemmer.stem()
# Fetch the output (buffer & size)
result = stemmer.getCurrentBuffer()
l = stemmer.getCurrentBufferLength()
return ''.join(result)[0:l]
st = SpanishStemmer()
for w in (u'haciendo', u'lunes', u'vino', u'lápiz'):
print( w, '->', stem(st, w))
st = EnglishStemmer()
for w in (u'making', u'Monday', u'came', u'pencil'):
print( w, '->', stem(st, w))
from java.io import StringReader
def tokenize( tk, data ):
'''Send a string to a tokenizer and get back the token list'''
tk.setReader( StringReader(data) )
return list(fetch_terms(tk))
from org.apache.lucene.analysis.standard import StandardTokenizer
from org.apache.lucene.analysis.core import LetterTokenizer
from org.apache.lucene.analysis.ngram import NGramTokenizer
tokenizers = (StandardTokenizer(), LetterTokenizer(), NGramTokenizer(4, 4))
for n, t in enumerate(tokenizers):
print( "\n{} -----------".format(n+1), str(t) )
for s in test_strings:
print( "\n", tokenize(t,s) )
from java.io import StringReader
def analyze(anal, data):
'''Send a string to an analizer and get back the analyzed term list'''
ts = anal.tokenStream( "dummy", StringReader(data) )
return list(fetch_terms(ts))
from org.apache.lucene.analysis.core import KeywordAnalyzer, SimpleAnalyzer
from org.apache.lucene.analysis.standard import StandardAnalyzer
from org.apache.lucene.analysis.es import SpanishAnalyzer
from org.apache.lucene.analysis.shingle import ShingleAnalyzerWrapper
analyzers = ( KeywordAnalyzer(),
SimpleAnalyzer(),
SpanishAnalyzer(),
ShingleAnalyzerWrapper( SimpleAnalyzer(), 2, 3 ),
ShingleAnalyzerWrapper( SpanishAnalyzer(), 2, 3 ),
)
for n, a in enumerate(analyzers):
print( "\n {} ----------- {}".format(n+1, a) )
for s in test_strings:
print( "\n", analyze(a,s) )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Set atlas, project and output directories from your nersc home directory
Step2: 3. Select groups and get QC files
Step3: 4. Get template QC atlas from database
Step4: 4b. Uncomment the block below to adjust RT window
Step5: 5. Create metatlas dataset from QC files and QC atlas
Step6: 5b Optional
Step7: 6. Summarize RT peak across files and make data frame
Step8: 7. Create Compound atlas RTs plot and choose file for prediction
Step9: 8. Create RT adjustment model - Linear & Polynomial Regression
Step10: 8. Plot actual vs predict RT values and fit a median coeff+intercept line
Step11: 9. Choose your model
Step12: 10. Save RT model (optional)
Step13: 11. Auto RT adjust Template atlases
Step14: OPTIONAL BLOCK FOR RT PREDICTION OF CUSTOM ATLAS
|
<ASSISTANT_TASK:>
Python Code:
from IPython.core.display import Markdown, display, clear_output, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib notebook
%matplotlib inline
%env HDF5_USE_FILE_LOCKING=FALSE
import sys, os
#### add a path to your private code if not using production code ####
#print ('point path to metatlas repo')
sys.path.insert(0,"/global/homes/v/vrsingan/repos/metatlas") #where your private code is
######################################################################
from metatlas.plots import dill2plots as dp
from metatlas.io import metatlas_get_data_helper_fun as ma_data
from metatlas.plots import chromatograms_mp_plots as cp
from metatlas.plots import chromplotplus as cpp
from metatlas.datastructures import metatlas_objects as metob
import time
import numpy as np
import multiprocessing as mp
import pandas as pd
import operator
import matplotlib.pyplot as plt
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', 100)
def printmd(string):
display(Markdown(string))
project_directory='/global/homes/FIRST-INITIAL-OF-USERNAME/USERNAME/PROJECTDIRECTORY/' # <- edit this line, do not copy the path directly from NERSC (ex. the u1, or u2 directories)
output_subfolder='HILIC_POS_20190830/' # <- edit this as 'chromatography_polarity_yyyymmdd/'
output_dir = os.path.join(project_directory,output_subfolder)
output_data_qc = os.path.join(output_dir,'data_QC')
if not os.path.exists(project_directory):
os.makedirs(project_directory)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(output_data_qc):
os.makedirs(output_data_qc)
groups = dp.select_groups_for_analysis(name = '%20201106%505892%HILIC%KLv1%',
most_recent = True,
remove_empty = True,
include_list = ['QC'], exclude_list = ['NEG']) #['QC','Blank']
groups = sorted(groups, key=operator.attrgetter('name'))
file_df = pd.DataFrame(columns=['file','time','group'])
for g in groups:
for f in g.items:
if hasattr(f, 'acquisition_time'):
file_df = file_df.append({'file':f, 'time':f.acquisition_time,'group':g}, ignore_index=True)
else:
file_df = file_df.append({'file':f, 'time':0,'group':g}, ignore_index=True)
file_df = file_df.sort_values(by=['time'])
for file_data in file_df.iterrows():
print(file_data[1].file.name)
# DO NOT EDIT THIS BLOCK
pos_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_POS',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_POS',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_POS',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_POS']
neg_templates = ['HILICz150_ANT20190824_TPL_EMA_Unlab_NEG',
'HILICz150_ANT20190824_TPL_QCv3_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_Unlab_NEG',
'HILICz150_ANT20190824_TPL_ISv5_13C15N_NEG',
'HILICz150_ANT20190824_TPL_IS_LabUnlab2_NEG']
#Atlas File Name
QC_template_filename = pos_templates[1]
atlases = metob.retrieve('Atlas',name=QC_template_filename,
username='vrsingan')
names = []
for i,a in enumerate(atlases):
print(i,a.name,pd.to_datetime(a.last_modified,unit='s'),len(a.compound_identifications))
# #Alternatively use this block to create QC atlas from spreadsheet
# import datetime
#dp = reload(dp)
# QC_template_filename = " " #<- Give the template filename to be used for storing in Database
#myAtlas = dp.make_atlas_from_spreadsheet('/global/project/projectdirs/metatlas/projects/1_TemplateAtlases/TemplateAtlas_HILICz150mm_Annotation20190824_QCv3_Unlabeled_Positive.csv',
# QC_template_filename,
# filetype='csv',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
#atlases = dp.get_metatlas_atlas(name=QC_template_filename,do_print = True,most_recent=True)
myAtlas = atlases[-1]
atlas_df = ma_data.make_atlas_df(myAtlas)
atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
print(myAtlas.name)
print(myAtlas.username)
# rt_allowance = 1.5
# atlas_df['rt_min'] = atlas_df['rt_peak'].apply(lambda rt: rt-rt_allowance)
# atlas_df['rt_max'] = atlas_df['rt_peak'].apply(lambda rt: rt+rt_allowance)
# for compound in range(len(myAtlas.compound_identifications)):
# rt_peak = myAtlas.compound_identifications[compound].rt_references[0].rt_peak
# myAtlas.compound_identifications[compound].rt_references[0].rt_min = rt_peak - rt_allowance
# myAtlas.compound_identifications[compound].rt_references[0].rt_max = rt_peak + rt_allowance
all_files = []
for file_data in file_df.iterrows():
all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
pool = mp.Pool(processes=min(4, len(all_files)))
t0 = time.time()
metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
pool.close()
pool.terminate()
#If your code crashes here, make sure to terminate any processes left open.
print(time.time() - t0)
# dp = reload(dp)
# num_data_points_passing = 3
# peak_height_passing = 1e4
# atlas_df_passing = dp.filter_atlas(atlas_df=atlas_df, input_dataset=metatlas_dataset, num_data_points_passing = num_data_points_passing, peak_height_passing = peak_height_passing)
# print("# Compounds in Atlas: "+str(len(atlas_df)))
# print("# Compounds passing filter: "+str(len(atlas_df_passing)))
# atlas_passing = myAtlas.name+'_filteredby-datapnts'+str(num_data_points_passing)+'-pkht'+str(peak_height_passing)
# myAtlas_passing = dp.make_atlas_from_spreadsheet(atlas_df_passing,
# atlas_passing,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive',
# store=True,
# mz_tolerance = 20)
# atlases = dp.get_metatlas_atlas(name=atlas_passing,do_print = True, most_recent=True)
# myAtlas = atlases[-1]
# atlas_df = ma_data.make_atlas_df(myAtlas)
# atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# print(myAtlas.name)
# print(myAtlas.username)
# metob.to_dataframe([myAtlas])#
# all_files = []
# for file_data in file_df.iterrows():
# all_files.append((file_data[1].file,file_data[1].group,atlas_df,myAtlas))
# pool = mp.Pool(processes=min(4, len(all_files)))
# t0 = time.time()
# metatlas_dataset = pool.map(ma_data.get_data_for_atlas_df_and_file, all_files)
# pool.close()
# pool.terminate()
# #If your code crashes here, make sure to terminate any processes left open.
# print(time.time() - t0)
from importlib import reload
dp=reload(dp)
rts_df = dp.make_output_dataframe(input_dataset = metatlas_dataset, fieldname='rt_peak', use_labels=True, output_loc = output_data_qc, summarize=True)
rts_df.to_csv(os.path.join(output_data_qc,"QC_Measured_RTs.csv"))
rts_df
import itertools
import math
from __future__ import division
from matplotlib import gridspec
import matplotlib.ticker as mticker
rts_df['atlas RT peak'] = [compound['identification'].rt_references[0].rt_peak for compound in metatlas_dataset[0]]
# number of columns in rts_df that are not values from a specific input file
num_not_files = len(rts_df.columns) - len(metatlas_dataset)
rts_df_plot = rts_df.sort_values(by='standard deviation', ascending=False, na_position='last') \
.drop(['#NaNs'], axis=1) \
.dropna(axis=0, how='all', subset=rts_df.columns[:-num_not_files])
fontsize = 2
pad = 0.1
cols = 8
rows = int(math.ceil((rts_df.shape[0]+1)/8))
fig = plt.figure()
gs = gridspec.GridSpec(rows, cols, figure=fig, wspace=0.2, hspace=0.4)
for i, (index, row) in enumerate(rts_df_plot.iterrows()):
ax = fig.add_subplot(gs[i])
ax.tick_params(direction='in', length=1, pad=pad, width=0.1, labelsize=fontsize)
ax.scatter(range(rts_df_plot.shape[1]-num_not_files),row[:-num_not_files], s=0.2)
ticks_loc = np.arange(0,len(rts_df_plot.columns)-num_not_files , 1.0)
ax.axhline(y=row['atlas RT peak'], color='r', linestyle='-', linewidth=0.2)
ax.set_xlim(-0.5,len(rts_df_plot.columns)-num_not_files+0.5)
ax.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc))
range_columns = list(rts_df_plot.columns[:-num_not_files])+['atlas RT peak']
ax.set_ylim(np.nanmin(row.loc[range_columns])-0.12,
np.nanmax(row.loc[range_columns])+0.12)
[s.set_linewidth(0.1) for s in ax.spines.values()]
# truncate name so it fits above a single subplot
ax.set_title(row.name[:33], pad=pad, fontsize=fontsize)
ax.set_xlabel('Files', labelpad=pad, fontsize=fontsize)
ax.set_ylabel('Actual RTs', labelpad=pad, fontsize=fontsize)
plt.savefig(os.path.join(output_data_qc, 'Compound_Atlas_RTs.pdf'), bbox_inches="tight")
for i,a in enumerate(rts_df.columns):
print(i, a)
selected_column=9
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_absolute_error as mae
actual_rts, pred_rts, polyfit_rts = [],[],[]
current_actual_df = rts_df.loc[:,rts_df.columns[selected_column]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
ransac = RANSACRegressor(random_state=42)
rt_model_linear = ransac.fit(current_pred_df, current_actual_df)
coef_linear = rt_model_linear.estimator_.coef_[0]
intercept_linear = rt_model_linear.estimator_.intercept_
poly_reg = PolynomialFeatures(degree=2)
X_poly = poly_reg.fit_transform(current_pred_df)
rt_model_poly = LinearRegression().fit(X_poly, current_actual_df)
coef_poly = rt_model_poly.coef_
intercept_poly = rt_model_poly.intercept_
for i in range(rts_df.shape[1]-5):
current_actual_df = rts_df.loc[:,rts_df.columns[i]]
bad_qc_compounds = np.where(~np.isnan(current_actual_df))
current_actual_df = current_actual_df.iloc[bad_qc_compounds]
current_pred_df = atlas_df.iloc[bad_qc_compounds][['rt_peak']]
actual_rts.append(current_actual_df.values.tolist())
pred_rts.append(current_pred_df.values.tolist())
#User can change to use particular qc file
import itertools
import math
from __future__ import division
from matplotlib import gridspec
x = list(itertools.chain(*pred_rts))
y = list(itertools.chain(*actual_rts))
rows = int(math.ceil((rts_df.shape[1]+1)/5))
cols = 5
fig = plt.figure(constrained_layout=False)
gs = gridspec.GridSpec(rows, cols, figure=fig)
plt.rc('font', size=6)
plt.rc('axes', labelsize=6)
plt.rc('xtick', labelsize=3)
plt.rc('ytick', labelsize=3)
for i in range(rts_df.shape[1]-5):
x = list(itertools.chain(*pred_rts[i]))
y = actual_rts[i]
ax = fig.add_subplot(gs[i])
ax.scatter(x, y, s=2)
ax.plot(np.linspace(0, max(x),100), coef_linear*np.linspace(0,max(x),100)+intercept_linear, linewidth=0.5,color='red')
ax.plot(np.linspace(0, max(x),100), (coef_poly[1]*np.linspace(0,max(x),100))+(coef_poly[2]*(np.linspace(0,max(x),100)**2))+intercept_poly, linewidth=0.5,color='green')
ax.set_title("File: "+str(i))
ax.set_xlabel('predicted RTs')
ax.set_ylabel('actual RTs')
fig_legend = "FileIndex FileName"
for i in range(rts_df.shape[1]-5):
fig_legend = fig_legend+"\n"+str(i)+" "+rts_df.columns[i]
fig.tight_layout(pad=0.5)
plt.text(0,-0.03*rts_df.shape[1], fig_legend, transform=plt.gcf().transFigure)
plt.savefig(os.path.join(output_data_qc, 'Actual_vs_Predicted_RTs.pdf'), bbox_inches="tight")
qc_df = rts_df[[rts_df.columns[selected_column]]]
qc_df = qc_df.copy()
print("Linear Parameters :", coef_linear, intercept_linear)
print("Polynomial Parameters :", coef_poly,intercept_poly)
qc_df.columns = ['RT Measured']
atlas_df.index = qc_df.index
qc_df['RT Reference'] = atlas_df['rt_peak']
qc_df['RT Linear Pred'] = qc_df['RT Reference'].apply(lambda rt: coef_linear*rt+intercept_linear)
qc_df['RT Polynomial Pred'] = qc_df['RT Reference'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
qc_df['RT Diff Linear'] = qc_df['RT Measured'] - qc_df['RT Linear Pred']
qc_df['RT Diff Polynomial'] = qc_df['RT Measured'] - qc_df['RT Polynomial Pred']
qc_df.to_csv(os.path.join(output_data_qc, "RT_Predicted_Model_Comparison.csv"))
qc_df
# CHOOSE YOUR MODEL HERE (linear / polynomial).
#model = 'linear'
model = 'polynomial'
# Save model
with open(os.path.join(output_data_qc,'rt_model.txt'), 'w') as f:
if model == 'linear':
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_linear,
intercept_linear,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_linear.set_params()))
else:
f.write('coef = {}\nintercept = {}\nqc_actual_rts = {}\nqc_predicted_rts = {}'.format(coef_poly,
intercept_poly,
', '.join([g.name for g in groups]),
myAtlas.name))
f.write('\n'+repr(rt_model_poly.set_params()))
pos_atlas_indices = [0,1,2,3,4]
neg_atlas_indices = [0,1,2,3,4]
free_text = '' # this will be appended to the end of the csv filename exported
save_to_db = False
for ix in pos_atlas_indices:
atlases = metob.retrieve('Atlas',name=pos_templates[ix], username='vrsingan')
prd_atlas_name = pos_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'positive',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
for ix in neg_atlas_indices:
atlases = metob.retrieve('Atlas',name=neg_templates[ix], username='vrsingan')
prd_atlas_name = neg_templates[ix].replace('TPL', 'PRD')
if free_text != '':
prd_atlas_name = prd_atlas_name+"_"+free_text
prd_atlas_filename = prd_atlas_name+'.csv'
myAtlas = atlases[-1]
PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
if model == 'linear':
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
else:
PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
PRD_atlas_df.to_csv(os.path.join(output_data_qc,prd_atlas_filename), index=False)
if save_to_db:
dp.make_atlas_from_spreadsheet(PRD_atlas_df,
prd_atlas_name,
filetype='dataframe',
sheetname='',
polarity = 'negative',
store=True,
mz_tolerance = 12)
print(prd_atlas_name+" Created!")
## Optional for custom template predictions
# atlas_name = '' #atlas name
# save_to_db = False
# atlases = metob.retrieve('Atlas',name=atlas_name, username='*')
# myAtlas = atlases[-1]
# PRD_atlas_df = ma_data.make_atlas_df(myAtlas)
# PRD_atlas_df['label'] = [cid.name for cid in myAtlas.compound_identifications]
# if model == 'linear':
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: coef_linear*rt+intercept_linear)
# else:
# PRD_atlas_df['rt_peak'] = PRD_atlas_df['rt_peak'].apply(lambda rt: (coef_poly[1]*rt)+(coef_poly[2]*(rt**2))+intercept_poly)
# PRD_atlas_df['rt_min'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt-.5)
# PRD_atlas_df['rt_max'] = PRD_atlas_df['rt_peak'].apply(lambda rt: rt+.5)
# PRD_atlas_df.to_csv(os.path.join(output_data_qc, atlas_name.replace('TPL','PRD')+'.csv'), index=False)
# if save_to_db:
# dp.make_atlas_from_spreadsheet(PRD_atlas_df,
# PRD_atlas_name,
# filetype='dataframe',
# sheetname='',
# polarity = 'positive', # NOTE - Please make sure you are choosing the correct polarity
# store=True,
# mz_tolerance = 12)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BASICS OF NETWORKX
Step2: PLOT GRAPH
Step3: PLOT GRAPH WITH SHORTEST PATH
Step4: SHORTEST PATHS
|
<ASSISTANT_TASK:>
Python Code:
import psycopg2
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
conn = psycopg2.connect(database="postgres", user="postgres", password="***", host="127.0.0.1", port="5432")
query = "SELECT fromnode, tonode, distance from edges"
df = pd.read_sql_query(query, conn)
df.shape
# from dataframe to graph
# import as undirected graph
g=nx.from_pandas_dataframe(df, 'fromnode', 'tonode', 'distance')
# from graph to dataframe as a matrix
nx.to_pandas_dataframe(g, weight='distance')
# print nodes and edges
print 'list nodes: ', g.nodes(), '\n'
print 'no. nodes:', len(g) #no. nodes
print 'no. edges:', g.number_of_edges(), '\n'
print 'list edges: ', g.edges(), '\n'
print 'list all edge attributes: ', dict(((a,b,),c['distance']) for a,b,c in g.edges(data=True))
# choose layoutl pos=position for nodes
pos = nx.fruchterman_reingold_layout(g)
# draw network
nx.draw(g, pos, with_labels = True, node_size=800, node_color='pink', edge_color='grey')
# label edges
edge_labels = dict([((u,v,),d['distance']) for u,v,d in g.edges(data=True)])
nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels)
s=2
t=7
print nx.shortest_path(g, source=s,target=t, weight='distance')
plt.figure(figsize=(5, 5))
# choose layout
pos = nx.fruchterman_reingold_layout(g)
# draw network
nx.draw(g, pos, with_labels = True, node_size=800, node_color='pink', edge_color='grey')
# nx.draw_networkx_edges(g, pos, edge_color='grey',width=0.1, alpha=0.5)
# label edges
edge_labels = dict([((u,v,),d['distance']) for u,v,d in g.edges(data=True)])
nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels)
# plot shortest path
path = nx.shortest_path(g, source=s,target=t, weight='distance')
path_edges = zip(path,path[1:])
# nx.draw_networkx_nodes(g,pos,nodelist=path,node_color='black', node_size=1000)
nx.draw_networkx_edges(g,pos,edgelist=path_edges,edge_color='r',width=15)
# shortest path
nx.shortest_path(g, source=1, target=3, weight='distance')
# shortest path
paths = nx.all_shortest_paths(g, source=1, target=7, weight='distance')
for path in paths:
print path
# all simple paths without weight
path = nx.all_simple_paths(g, source=1, target=7)
for i in path:
print i
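# The length of a shortest path can also be queried directly (a small sketch reusing the
# graph g and the nodes s and t defined above): weighted length first, then the plain hop count.
print(nx.shortest_path_length(g, source=s, target=t, weight='distance'))
print(nx.shortest_path_length(g, source=s, target=t))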
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set up an Observer and list of FixedTargets in astroplan.
Step2: Determine which standards are observable tonight.
|
<ASSISTANT_TASK:>
Python Code:
catalog_name = 'Landolt 1992'
observatory_name = 'Apache Point'
from astroquery.vizier import Vizier
from astropy.coordinates import SkyCoord
import astropy.units as u
catalog_list = Vizier.find_catalogs(catalog_name)
catalogs = Vizier.get_catalogs(catalog_list.keys())
Vizier.ROW_LIMIT = -1 # Otherwise would only show first 50 values
catalog_table = catalogs[0] # This is the table with the data
RAs = u.Quantity(catalog_table['_RAJ2000'].data, unit=u.deg)
Decs = u.Quantity(catalog_table['_DEJ2000'].data, unit=u.deg)
names = list(catalog_table['SimbadName'].data)
landolt_standards = SkyCoord(ra=RAs, dec=Decs)
from astroplan import Observer, FixedTarget
obs = Observer.at_site(observatory_name)
target_list = [FixedTarget(coord=coord, name=name)
for coord, name in zip(landolt_standards, names)]
from astroplan import is_observable, observability_table, AltitudeConstraint, AtNightConstraint
from astropy.time import Time
constraints = [AltitudeConstraint(min=25*u.deg),
AtNightConstraint.twilight_astronomical()]
# Figure out when "tonight" is
present_time = Time.now()
if not obs.is_night(present_time):
# If it's currently day time at runtime, find time of sunset and sunrise
tonight_start = obs.sun_set_time(present_time, which='next')
tonight_end = obs.sun_rise_time(present_time, which='next')
else:
# Otherwise find time to next sunrise
tonight_start = present_time
tonight_end = obs.sun_rise_time(present_time, which='next')
table = observability_table(constraints, obs, target_list,
time_range=Time([tonight_start, tonight_end]))
print(table)
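# is_observable (imported above) returns one boolean per target for the same constraints;
# a small sketch counting how many of the standards are ever observable tonight.
observable_mask = is_observable(constraints, obs, target_list,
                                time_range=Time([tonight_start, tonight_end]))
observable_names = [tgt.name for tgt, ok in zip(target_list, observable_mask) if ok]
print(len(observable_names), "of", len(target_list), "standards are observable tonight")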
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First Overview of the Data
Step2: Embedding & Prepping the data
Step3: Metrics and Evaluation
Step4: What's a Recurrent Neural Network (RNN)?
Step5: Let's now compile and train our model, using Sparse Categorical Cross-Entropy Loss as our loss function (you can read more about it <a href="https
Step6: Evaluation in terms of EM and F1 Scores
Step7: Our first Model - when trained on 70,000+ datapoints - obtains an Exact Match Score of 11.0%, and an F1 Score of 23.8% on the Test set. This is not ideal yet... Below are a few ideas on how we could improve from there - by adding complexity while making sure we keep the regularization in check!
Step8: Let's explore our Results!
|
<ASSISTANT_TASK:>
Python Code:
# Basics for Data Manipulation
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Tensorflow and Keras tools
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import (
Layer,
Input,
Dense,
Concatenate,
Masking,
Embedding,
Dropout,
Softmax,
Dot,
Lambda,
SimpleRNN,
GRU,
LSTM,
Bidirectional
)
from tensorflow.keras import regularizers
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import text_to_word_sequence
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import plot_model
from tensorflow.keras.models import Model
from sklearn.model_selection import train_test_split
import string
#Loading the training data i.e. a dataframe of 15,000 ground-truth question-context-answer triples from the SQuAD Dataset
data = pd.read_csv('../assets/qa/squadlite.csv')
data.head(3)
#Loading the test data i.e. a dataframe of 5,000+ questions/answers to test on model on
test = pd.read_csv('../assets/qa/squadtest.csv')
test.head(3)
# Splitting the questions/paragraphs into words and embedding them...
pars = []
ques = []
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim128/2") #NNLM
for text in data.context:
words = np.array(text_to_word_sequence(text))
pars.append(embed(tf.constant(words)))
for text in data.question:
words = np.array(text_to_word_sequence(text))
ques.append(embed(tf.constant(words)))
# Now padding...
padded_pars = pad_sequences(pars, padding="post",dtype='float32')
padded_ques = pad_sequences(ques, padding="post",dtype='float32')
# Key Dimensions
batch_size = np.shape(padded_pars)[0] #Batch Size
max_paragraph_length = np.shape(padded_pars)[1] #Time Steps
max_question_length = np.shape(padded_ques)[1] #Time Steps
emb_dim = np.shape(padded_pars)[2] #Embed Dimension
print("Shape of the Padded Embedded Paragraphs: ", np.shape(padded_pars))
print("Shape of the Padded Embedded Questions: ", np.shape(padded_ques))
print("i.e. (Batch Size, Sequence Length, Embed Dimension)")
# Our y data (i.e the positions of the answer's start and end words)
y_start_word = np.array(data.start_word)
y_end_word = np.array(data.end_word)
print("Shape of the Y Train set for Start Word: ", np.shape(y_start_word))
print("Shape of the Y Train set for End Word: ", np.shape(y_end_word))
# Train & Validation
p_train, p_val, q_train, q_val, ys_train, ys_val, ye_train, ye_val = train_test_split(
padded_pars, padded_ques, y_start_word, y_end_word, test_size=0.1, random_state=30
)
# Let's create helper functions to measure these metrics
# Both exact_match & f1_score take strings as inputs
def exact_match(pred, truth):
truth = str(truth).replace("-", " ")
truth = "".join(l for l in truth if l not in string.punctuation)
return np.sum(str(pred).lower() == str(truth).lower())
def f1_score(pred, truth):
p = text_to_word_sequence(str(pred))
t = text_to_word_sequence(str(truth))
tp = [i for i in p if i in t]
if len(tp) == 0:
f1 = 0
else:
precision = len(tp)/len(p)
recall = len(tp)/len(t)
f1 = 2 * (precision * recall) / (precision + recall)
return f1
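# Quick sanity check of the two metrics on made-up strings (purely illustrative, not SQuAD data):
# the EM score ignores case and punctuation, while F1 rewards partial word overlap.
print("EM:", exact_match("the eiffel tower", "The Eiffel Tower"))
print("F1:", round(f1_score("in the eiffel tower", "the eiffel tower"), 3))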
# First Input = Paragraphs / Straightforward GRU Layer
paragraphs = Input(shape=(max_paragraph_length, emb_dim), name="pars_in")
p = Masking(mask_value=0)(paragraphs)
p = GRU(
256,
return_sequences=True,
name="pars_out",
kernel_regularizer=regularizers.l2(0.002),
kernel_initializer="glorot_normal",
)(p)
# Output is = a 128d vector per word in the paragraph (None, max_paragraph_length, 128).
# Second Input = Questions / Straightforward GRU
questions = Input(shape=(max_question_length, emb_dim), name="ques_in")
q = Masking(mask_value=0)(questions)
q = GRU(
256,
return_sequences=True,
name="ques_gru",
kernel_regularizer=regularizers.l2(0.002),
kernel_initializer="glorot_normal",
)(q)
# Output is = a 256d vector per word in the paragraph (None, max_question_length, 256).
# Weighted Average to obtain the single vector q'
weights = Dense(1, activation="softmax", name="weights")(q)
q = Dot(axes=1, name="ques_out")([weights, q])
# Output is = a single 256d vector per question (None, 256, 1).
# Outputs for Start & End / Quadratic Layers and Softmax
qs = Dense(
256,
activation="linear",
name="s1",
use_bias=False,
kernel_regularizer=regularizers.l2(0.001),
)(q)
outs = Dot(axes=(2, 2), name="s2")([p, qs])
qe = Dense(
256,
activation="linear",
name="e1",
use_bias=False,
kernel_regularizer=regularizers.l2(0.001),
)(q)
oute = Dot(axes=(2, 2), name="e2")([p, qe])
# Output is = a probability vector (None, seq_pars, 1) for each
# Model
model = Model(inputs=[paragraphs, questions], outputs=[outs, oute])
# print(BaseModel.summary())
# Model Chart
plot_model(model, to_file="Baseline.png", show_shapes=True)
# Compiling our model
acc = tf.keras.metrics.SparseCategoricalAccuracy()
opt = tf.keras.optimizers.Adamax()
sce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(
optimizer=opt, loss=[sce, sce], loss_weights=[1, 1], metrics=[[acc], [acc]]
)
# Defining a checkpoint to save our best weights during Training
checkpoint = ModelCheckpoint(
filepath="basemodel",
frequency="epoch",
save_weights_only=True,
save_best_only=True,
verbose=0,
)
# Training (using the predefined checkpoint and our validation set)
history = model.fit(
[p_train, q_train],
[ys_train, ye_train],
validation_data=([p_val, q_val], [ys_val, ye_val]),
epochs=1,
batch_size=64,
callbacks=[checkpoint],
verbose=0,
)
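# Optional: a quick look at the training curves (sketch; 'loss' and 'val_loss' are the standard
# Keras history keys, while the per-output keys depend on the layer names).
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.legend()
plt.show()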
# Using our Best Model i.e. load the saved weights
model.load_weights('basemodel')
# Function to measure overall EM and F1 on the Test Set
def create_masked_matrix(pred_start, pred_end):
    # Creating the masked matrix of possible answers (where start <= end < start + 15)
masked_matrix = tf.matmul(pred_start, tf.transpose(pred_end, [0, 2, 1]))
i, j = np.meshgrid(
*map(np.arange, (masked_matrix.shape[1], masked_matrix.shape[2])), indexing="ij"
)
masked_matrix.mask = (i <= j) & (j < i+15)
masked_matrix = np.where(masked_matrix.mask, masked_matrix, 0)
max_results = np.amax(masked_matrix, axis=(1, 2))
return masked_matrix, max_results
def model_eval(pred):
pred_start = tf.exp(pred[0])
pred_end = tf.exp(pred[1])
masked_matrix, max_results = create_masked_matrix(pred_start, pred_end)
number_of_examples = masked_matrix.shape[0]
em = []
f1 = []
# Find the most probable answer for each question in the test set.
# We compare with the four human answers, and keep the max F1 and EM scores.
for k in range(number_of_examples):
result = np.where(masked_matrix[k] == max_results[k])
if result[1][0] < len(text_to_word_sequence(test.context[k])):
answer = np.array(text_to_word_sequence(test.context[k]))[result[0][0]:result[1][0]+1]
else: answer = ['-']
if result[0][0] != result[1][0] and result[1][0] < len(
text_to_word_sequence(test.context[k])
):
answer = " ".join(answer)
else:
answer = str(answer[0])
em_k = max(
exact_match(answer, test.answer1[k]),
exact_match(answer, test.answer2[k]),
exact_match(answer, test.answer3[k]),
exact_match(answer, test.answer4[k]),
)
f1_k = max(
f1_score(answer, test.answer1[k]),
f1_score(answer, test.answer2[k]),
f1_score(answer, test.answer3[k]),
f1_score(answer, test.answer4[k]),
)
em.append(em_k)
f1.append(f1_k)
print("Exact Match: ", np.round(np.mean(em), 3))
print("F1 Score: ", np.round(np.mean(f1), 3))
return (em, f1)
# Let's embed and pad the Test set too...
pars_test = []
ques_test = []
embed = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim128/2") # NNLM
for text in test.context:
words = np.array(text_to_word_sequence(text))
pars_test.append(embed(tf.constant(words)))
for text in test.question:
words = np.array(text_to_word_sequence(text))
ques_test.append(embed(tf.constant(words)))
p_test = pad_sequences(
pars_test, padding="post", dtype="float32", maxlen=max_paragraph_length
)
q_test = pad_sequences(
ques_test, padding="post", dtype="float32", maxlen=max_question_length
)
# Evaluate the model on the Test set
pred_test = model.predict([p_test, q_test])
print("**Results on Test Set:")
(em_model, f1_model) = model_eval(pred_test)
# First Input = Paragraphs
paragraphs = Input(shape=(max_paragraph_length, emb_dim), name="par0")
p = Masking(mask_value=0)(paragraphs)
# Bidirectional Multi-Layer with Dropout
p = Bidirectional(
GRU(128, return_sequences=True, name="par1", kernel_initializer="glorot_normal"),
merge_mode="concat",
)(p)
p = Dropout(0.15)(p)
p = Bidirectional(
GRU(64, return_sequences=True, name="par2", kernel_initializer="glorot_normal"),
merge_mode="concat",
)(p)
p = Dropout(0.15)(p)
# Output is = a 128d vector per word in the paragraph (None, max_paragraph_length, 128).
# Second Input = Questions
questions = Input(shape=(max_question_length, emb_dim), name="ques0")
q = Masking(mask_value=0)(questions)
q = GRU(256, return_sequences=True, name="ques2")(q)
q = Dropout(0.15)(q)
# Output is = a 256d vector per word in the paragraph (None, max_question_length, 256).
# Weighted Average to obtain the single vector q'
weights = Dense(1, activation="softmax", name="weights")(q)
q = Dot(axes=1, name="ques3")([weights, q])
# Output is = a single 256d vector per question (None, 256, 1).
# Outputs for Start & End / Quadratic Layers and Softmax
qs = Dense(128, activation = 'linear', name = "s1", use_bias=False,
kernel_regularizer=regularizers.l2(0.002))(q)
outs = Dot(axes=(2, 2), name = "s2")([p, qs])
qe = Dense(128, activation = 'linear', name = "e1", use_bias=False,
kernel_regularizer=regularizers.l2(0.002))(q)
oute = Dot(axes=(2, 2), name = "e2")([p, qe])
# Output is = a probability vector (None, seq_pars, 1) for each
# Model
model = Model(inputs=[paragraphs, questions], outputs=[outs, oute])
def create_masked_matrix_for_one(pred_start, pred_end):
    # Creating the masked matrix of possible answers (where start <= end < start + 15)
masked_matrix = tf.matmul(pred_start, tf.transpose(pred_end))
i, j = np.meshgrid(
*map(np.arange, (masked_matrix.shape)), indexing="ij"
)
masked_matrix.mask = (i <= j) & (j < i+15)
masked_matrix = np.where(masked_matrix.mask, masked_matrix, 0)
max_results = np.where(masked_matrix == np.amax(masked_matrix))
return masked_matrix, max_results
# Function to get the result on the kth question
def get_result(k, model=model, verbose=True):
paragraph = tf.expand_dims(p_test[k], 0)
question = tf.expand_dims(q_test[k], 0)
out = model([paragraph, question])
start = tf.exp(out[0][0])
end = tf.exp(out[1][0])
_, result = create_masked_matrix_for_one(start, end)
if result[1][0] < len(text_to_word_sequence(test.context[k])):
answer = np.array(text_to_word_sequence(test.context[k]))[result[0][0]:result[1][0]+1]
else: answer = ['-']
if result[0][0] != result[1][0] and result[1][0] < len(text_to_word_sequence(test.context[k])):
answer = " ".join(answer)
else: answer = str(answer[0])
if verbose:
print("--------------------------------------------------------")
print("Question: ", test.question[k])
print("--------------------------------------------------------")
print("Context: ")
print(test.context[k])
print("--------------------------------------------------------")
print("Model's answer: ", answer)
print("Human answers: ")
print(
test.answer1[k],
" -- ",
test.answer2[k],
" -- ",
test.answer3[k],
" -- ",
test.answer4[k],
)
print("--------------------------------------------------------")
print(
"EM Score: ",
max(
exact_match(answer, test.answer1[k]),
exact_match(answer, test.answer2[k]),
exact_match(answer, test.answer3[k]),
exact_match(answer, test.answer4[k]),
),
)
print(
"F1 Score: ",
np.round(
max(
f1_score(answer, test.answer1[k]),
f1_score(answer, test.answer2[k]),
f1_score(answer, test.answer3[k]),
f1_score(answer, test.answer4[k]),
),
3,
),
)
#Let's try...
get_result(10)
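# Look at a few more random examples from the test set (a quick sketch reusing the helper above).
for k in np.random.choice(len(test), 3, replace=False):
    get_result(k)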
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.Searching and Printing a List of 50 'Lil' Musicians
Step2: 2 Genres Most Represented in the Search Results
Step3: More Spotify - LIL' GRAPHICS
Step4: The Second Highest Popular Artist
Step5: 4. List of Lil's Popular Than Lil' Kim
Step6: 5.Two Favorite Lils and Their Top Tracks
Step7: 6. Average Popularity of My Fav Musicians (Above) for Their explicit songs vs. their non-explicit songs
Step8: 7a. Number of Biggies and Lils
Step9: 7b. Time to Download All Information on Lil and Biggies
Step10: 8. Highest Average Popular Lils and Biggies Out of The Top 50
|
<ASSISTANT_TASK:>
Python Code:
import requests
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
Lil_data.keys()
Lil_data['artists'].keys()
Lil_artists = Lil_data['artists']['items']
#With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50
#that are playable in the USA (or the country of your choice), along with their popularity score.
count =0
for artist in Lil_artists:
count += 1
print(count,".", artist['name'],"has the popularity of", artist['popularity'])
# What genres are most represented in the search results? Edit your previous printout to also display a list of their genres
#in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
#Tip: "how to join a list Python" might be a helpful search
# if len(artist['genres']) == 0 )
# print ("no genres")
# else:
# genres = ", ".join(artist['genres'])
genre_list = []
genre_loop = Lil_data['artists']['items']
for item in genre_loop:
#print(item['genres'])
item_gen = item['genres']
for i in item_gen:
genre_list.append(i)
#print(sorted(genre_list))
#COUNTING the most
genre_counter = {}
for word in genre_list:
if word in genre_counter:
genre_counter[word] += 1
else:
genre_counter[word] = 1
popular_genre = sorted(genre_counter, key = genre_counter.get, reverse = True)
top_genre = popular_genre[:1]
print("The genre most represented is", top_genre)
#COUNTING the most with count to confirm
from collections import Counter
count = Counter(genre_list)
most_count = count.most_common(1)
print("The genre most represented and the count are", most_count)
print("-----------------------------------------------------")
for artist in Lil_artists:
num_genres = 'no genres listed'
if len(artist['genres']) > 0:
num_genres= str.join(',', (artist['genres']))
print(artist['name'],"has the popularity of", artist['popularity'], ", and has", num_genres, "under genres")
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
#Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
#Is it the same artist who has the largest number of followers?
name_highest = ""
name_follow =""
second_high_pop = 0
highest_pop = 0
high_follow = 0
for artist in Lil_artists:
if (highest_pop < artist['popularity']) & (artist['name'] != "Lil Wayne"):
#second_high_pop = highest_pop
#name_second = artist['name']
highest_pop = artist['popularity']
name_highest = artist['name']
if (high_follow < artist['followers']['total']):
high_follow = artist ['followers']['total']
name_follow = artist['name']
#print(artist['followers']['total'])
print(name_highest, "has the second highest popularity, which is", highest_pop)
print(name_follow, "has the highest number of followers:", high_follow)
#print("the second highest popularity is", second_high_pop)
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
#Lil_data
Lil_artists = Lil_data['artists']['items']
#Print a list of Lil's that are more popular than Lil' Kim.
count = 0
for artist in Lil_artists:
if artist['popularity'] > 62:
count+=1
print(count, artist['name'],"has the popularity of", artist['popularity'])
#else:
#print(artist['name'], "is less popular with a score of", artist['popularity'])
response = requests.get("https://api.spotify.com/v1/search?query=Lil&type=artist&limit=2&country=US")
data = response.json()
for artist in Lil_artists:
#print(artist['name'],artist['id'])
if artist['name'] == "Lil Wayne":
wayne = artist['id']
print(artist['name'], "id is",wayne)
if artist['name'] == "Lil Yachty":
yachty = artist['id']
print(artist['name'], "id is", yachty)
#Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
#Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Wayne's top tracks are: ")
for track in tracks:
print("-", track['name'])
print("-----------------------------------------------")
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
print("Lil Yachty 's top tracks are: ")
for track in tracks:
print("-", track['name'])
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
#print(tracks)
#for track in tracks:
#print(track.keys())
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +yachty+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
time_ms = track['duration_ms']
# TA-COMMENT: (-1) If what is true? "if True" will always evaluate to True....
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity'] # TA-COMMENT: What is this supposed to capture?
# It HAPPENS to be the case that all the tracks are explicit, but if that were not true, would this be correct?
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
# TA-COMMENT: example of what happens if you do just "if True" as in the code above.
if True:
print("hello")
# TA-COMMENT: Same comments apply here.
#Get an average popularity for their explicit songs vs. their non-explicit songs.
#How many minutes of explicit songs do they have? Non-explicit?
# How explicit is Lils?
response = requests.get("https://api.spotify.com/v1/artists/" +wayne+ "/top-tracks?country=US")
data = response.json()
tracks = data['tracks']
# counter for tracks for explicit and clean
track_count = 0
clean_count = 0
#counter to find avg popularity
popular_exp = 0
popular_clean = 0
#counter for avg time in minutes are below:
timer = 0
#data_timer = 0
timer_clean = 0
for track in tracks:
print("The track,", track['name'],", with the id",track['id'], "is", track['explicit'],"for explicit content, and has the popularity of", track['popularity'])
track_id = track['id']
    time_ms = track['duration_ms']
if True:
track_count = track_count + 1
popular_exp = popular_exp + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_track = response.json()
print("and has the duration of", data_track['duration_ms'], "milli seconds.")
timer = timer + time_ms
timer_minutes = ((timer / (1000*60)) % 60)
if not track['explicit']:
clean_count = clean_count + 1
popular_clean = popular_clean + track['popularity']
response = requests.get("https://api.spotify.com/v1/tracks/" + track_id)
data_tracks = response.json()
timer_clean = timer_clean + time_ms
timer_minutes_clean = ((data_timer / (1000*60)) % 60)
print(", and has the duration of", timer_minutes_clean, "minutes")
print("------------------------------------")
avg_pop = popular_exp / track_count
print("I have found", track_count, "tracks, and has the average popularity of", avg_pop, "and has the average duration of", timer_minutes,"minutes and", clean_count, "are clean")
#print("Overall, I discovered", track_count, "tracks")
#print("And", clean_count, "were non-explicit")
#print("Which means", , " percent were clean for Lil Wayne")
#How many total "Biggie" artists are there? How many total "Lil"s?
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['total']
print("Total number of Biggie artists are", biggie_artists)
lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US')
lil_data = lil_response.json()
lil_artists = lil_data['artists']['total']
print("Total number of Lil artists are", lil_artists)
#If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
limit_download = 50
biggie_artists = biggie_data['artists']['total']
Lil_artist = Lil_data['artists']['total']
#1n 5 sec = 50
#in 1 sec = 50 / 5 req = 10 no, for 1 no, 1/10 sec
# for 4501 = 4501/10 sec
# for 49 49/ 10 sec
big_count = biggie_artists/10
lil_count = Lil_artist / 10
print("It would take", big_count, "seconds for Biggies, where as it would take", lil_count,"seconds for Lils" )
# TA-COMMENT: (-1) If one request takes 5 seconds, then 50 requests would take (50 * 5) seconds
# (one request for each 'Biggie')
# So, 4510 Lil artists would take 4510 * 5 seconds
#Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
biggie_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&country=US')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
big_count_pop = 0
for artist in biggie_artists:
#count_pop = artist['popularity']
big_count_pop = big_count_pop + artist['popularity']
print("Biggie has a total popularity of ", big_count_pop)
big_pop = big_count_pop / 49
print("Biggie is on an average", big_pop,"popular")
#Lil
Lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&limit=50&country=US')
Lil_data = Lil_response.json()
Lil_artists = Lil_data['artists']['items']
lil_count_pop = 0
for artist in Lil_artists:
count_pop_lil = artist['popularity']
lil_count_pop = lil_count_pop + count_pop_lil
lil_pop = lil_count_pop / 50
print("Lil is on an average", lil_pop,"popular")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When you try to print the variable created, you will not get what you want. It is just an object created to use in later statements.
Step2: We won't get any error because we are creating a new file; the only error we might get is a disk-full error from the system. After we are done manipulating the file, we have to close it with the close() function.
Step3: Reading Files
Step4: I used a while loop to show the logic behind the reading; however, a for loop gives us a more elegant way.
Step5: Writing Files
Step6: The write method does not add a newline character to the output string. Thus, a newline character will be output only if it is part of the string being written. But in the example above, the line variable already comes with \n at the end.
Step7: There are a number of methods specific to strings in addition to the general sequence operations.
Step8: str.isalpha()
Step9: str.isdigit() Returns True if str contains only digits.
Step10: str.islower() and str.isupper()
Step11: str.lower() and str.upper()
Step12: Searching the Contents of a String
Step13: Replacing the Contents of a String
Step14: Removing the Contents of a String
Step15: Splitting a String
|
<ASSISTANT_TASK:>
Python Code:
input_file = open('sample.txt', 'r') # IOError occurred because we did not give the right path.
input_file = open('data/sample.txt', 'r') # Correct path, with the folder included.
print(input_file)
input_file.close()
output_file = open('data/mynewfile.txt', 'w') # I used data/name because I want to save in data folder
output_file.close()
input_file = open('data/sample.txt', 'r')
empty_str = ''
line = input_file.readline()
while line != empty_str:
print(line)
line = input_file.readline()
input_file.close()
input_file = open('data/sample.txt', 'r')
for line in input_file:
print(line)
empty_str= ''
input_file = open('data/sample.txt', 'r')
output_file = open('data/newfile.txt', 'w')
line = input_file.readline()
while line != empty_str:
output_file.write(line)
line = input_file.readline()
output_file.close()
space = ' '
num_spaces = 0
line = input_file.readline()
for k in range(0, len(line)):
if line[k] == space:
num_spaces = num_spaces + 1
num_spaces
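# The same count can be obtained more directly with the built-in count method (a small sketch):
line.count(' ')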
s = 'Hello World!'
s.isalpha() #
s.isdigit()
"1".isdigit()
s.islower()
s
s.isupper()
"HELLO WORLD".isupper()
s
s.upper()
s.lower()
s # Does not change... You have to assign it to a new variable or overwrite
s = s.lower()
s
s
s.find('d')
s.find('x')
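# find returns -1 when the substring is absent; the in operator gives a plain boolean instead (sketch):
'x' in s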
s
s.replace("l", "*")
s
s.strip('!')
s
s.strip('!').split(" ")
s[:-4]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Periodic, so probably not a good representation of the 18 domain
Step2: Look at same stuff at depth 4 light cones
Step3: Look at states for increasing light cone size
|
<ASSISTANT_TASK:>
Python Code:
first_test = epsilon_field(domain.get_spacetime())
first_test.estimate_states(2,2,1)
first_test.filter_data()
print first_test.number_of_states()
for state in first_test.causal_states():
print state.plc_configs()
second_test = epsilon_field(domain.get_spacetime())
second_test.estimate_states(2,2,1,alpha= 0)
second_test.filter_data()
print second_test.number_of_states()
for state in second_test.causal_states():
print state.plc_configs()
big_domain = ECA(18, domain_18(2000))
big_domain.evolve(2000)
third_test = epsilon_field(big_domain.get_spacetime())
third_test.estimate_states(2,2,1,alpha=0)
third_test.filter_data()
print third_test.number_of_states()
for state in third_test.causal_states():
print state.plc_configs()
print 2**5
print np.size(np.unique(third_test.PLCs().all_labels()))
print np.size(np.unique(third_test.FLCs().all_labels()))
print third_test.causal_states()[0].plc_shape(0)
print third_test.causal_states()[1].plc_shape(0), '\n'
print third_test.causal_states()[1].plc_shape(1)
print third_test.causal_states()[2].plc_shape(0)
diagram(domain.get_spacetime(), t_min = 400, t_max = 440, x_min = 400, x_max = 440)
print np.size([0,1]*10)
checkerish = ECA(18, ([0,0,1]+[0,1,0])*200)
checkerish.evolve(1200)
checkerish.diagram()
fourth_test = epsilon_field(checkerish.get_spacetime())
fourth_test.estimate_states(3,3,1)
fourth_test.filter_data()
print fourth_test.number_of_states()
for state in fourth_test.causal_states():
print state.plc_configs()
fifth_test = epsilon_field(domain.get_spacetime())
fifth_test.estimate_states(3,3,1, alpha=0)
fifth_test.filter_data()
print fifth_test.number_of_states()
print np.size(np.unique(fifth_test.PLCs().all_labels()))
for state in fifth_test.causal_states():
print state.index(), state.plc_configs()
for state in fifth_test.causal_states():
print state.index(), state.morph_support_configs(), np.size(state.morph_support_configs()),'\n'
sixth_test = epsilon_field(domain.get_spacetime())
sixth_test.estimate_states(4,4,1,alpha = 0)
sixth_test.filter_data()
print sixth_test.number_of_states()
print np.size(np.unique(sixth_test.PLCs().all_labels()))
for state in sixth_test.causal_states():
print state.index(), state.plc_configs(), '\n'
for state in sixth_test.causal_states():
print state.index(), state.morph_support_configs(), np.size(state.morph_support_configs()), '\n'
diagram(domain.get_spacetime(), t_min = 400, t_max = 440, x_min = 400, x_max = 440)
state_overlay_diagram(domain.get_spacetime(), second_test.get_causal_field(),\
t_min = 400, t_max = 440, x_min = 400, x_max = 440)
state_overlay_diagram(domain.get_spacetime(), fifth_test.get_causal_field(),\
t_min = 400, t_max = 440, x_min = 400, x_max = 440)
state_overlay_diagram(domain.get_spacetime(), sixth_test.get_causal_field(),\
t_min = 400, t_max = 440, x_min = 400, x_max = 440)
seventh_test = epsilon_field(domain.get_spacetime())
seventh_test.estimate_states(5,5,1,alpha=0)
seventh_test.filter_data()
print seventh_test.number_of_states()
state_overlay_diagram(domain.get_spacetime(), seventh_test.get_causal_field(),\
t_min = 400, t_max = 440, x_min = 400, x_max = 440)
domain = ECA(18, domain_18(400))
domain.evolve(400)
domain_states = epsilon_field(domain.get_spacetime())
domain_states.estimate_states(5,3,1)
domain_states.filter_data()
print domain_states.number_of_states()
print domain_states.nonunifilar_transitions()
state_overlay_diagram(domain.get_spacetime(), domain_states.get_causal_field(), t_min = 50, t_max = 100, \
x_min = 50, x_max = 100)
mask_field = np.copy(domain_states.get_causal_field())
mask_field[mask_field == 3] = 1
state_overlay_diagram(domain.get_spacetime(), mask_field, t_min = 50, t_max = 100, \
x_min = 50, x_max = 100)
print mask_field[10:30, 120:140]
print len(domain_states.all_transitions())
np.random.seed(0)
domain_states = epsilon_field(domain.get_spacetime())
domain_states.estimate_states(3,3,1)
domain_states.filter_data()
print domain_states.nonunifilar_transitions()
print len(domain_states.all_transitions())
for transition in domain_states.all_transitions():
print transition
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a DataArray
Step2: If you supply a pandas Series or DataFrame, metadata is copied directly
Step3: Here are the key properties for a DataArray
Step4: Indexing
Step5: Computation
Step6: However, aggregation operations can use dimension names instead of axis numbers
Step7: Arithmetic operations broadcast based on dimension name. This means you don’t need to insert dummy dimensions for alignment
Step8: Another broadcast example
Step9: It also means that in most cases you do not need to worry about the order of dimensions
Step10: Operations also align based on index labels
Step11: GroupBy
Step12: Convert to pandas
Step13: Datasets and NetCDF
Step14: You can do almost everything you can do with DataArray objects with Dataset objects if you prefer to work with multiple variables at once.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import xarray as xr
xr.DataArray(np.random.randn(2, 3))
data = xr.DataArray(np.random.randn(2, 3), [('x', ['a', 'b']), ('y', [-2, 0, 2])])
data
xr.DataArray(pd.Series(range(3), index=list('abc'), name='foo'))
data.values
data.dims
data.coords
len(data.coords)
data.coords['x']
data.attrs
data[[0, 1]]
data.loc['a':'b']
data.loc
data.isel(x=slice(2))
data.sel(x=['a', 'b'])
data
data + 10
np.sin(data)
data.T
data.sum()
data.mean(dim='x')
a = xr.DataArray(np.random.randn(3), [data.coords['y']])
b = xr.DataArray(np.random.randn(4), dims='z')
a
b
a + b
v1 = xr.DataArray(np.random.rand(3, 2, 4), dims=['t', 'y', 'x'])
v2 = xr.DataArray(np.random.rand(2, 4), dims=['y', 'x'])
v1
v2
v1 + v2
data - data.T
data[:-1]
data[:1]
data[:-1] - data[:1]
labels = xr.DataArray(['E', 'F', 'E'], [data.coords['y']], name='labels')
labels
data
data.groupby(labels).mean('y')
data.groupby(labels).apply(lambda x: x - x.min())
data.to_series()
data.to_pandas()
ds = data.to_dataset(name='foo')
ds
ds.to_netcdf('example.nc')
xr.open_dataset('example.nc')
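# Datasets support the same label-based operations as DataArrays (a small sketch using ds from above):
ds.mean(dim='y')
ds['foo'].sel(x='a')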
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DictVectorizer
Step2: CountVectorizer
Step3: Stop Word Filtering
Step4: Stemming and Lemmatization
Step5: As we can see both sentences are having same meaning but their feature vectors have no elements in common. Let's use the lexical analysis on the data
Step6: The Porter stemmer cannot consider the inflected form's part of speech and returns gather for both documents
Step7: Extending bag-of-words with TF-IDF weights
Step8: Data Standardization
|
<ASSISTANT_TASK:>
Python Code:
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.metrics.pairwise import euclidean_distances
from sklearn import preprocessing
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem import PorterStemmer
from nltk import word_tokenize
from nltk import pos_tag
import numpy as np
onehot_encoder = DictVectorizer()
instances = [
{'city': 'New York'},
{'city': 'San Francisco'},
{'city': 'Chapel Hill'} ]
print (onehot_encoder.fit_transform(instances).toarray())
corpus = [
'UNC played Duke in basketball',
'Duke lost the basketball game'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
# adding one more sentence in corpus
corpus = [
'UNC played Duke in basketball',
'Duke lost the basketball game',
'This is Atul Singh'
]
vectorizer = CountVectorizer()
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
# checking the euclidean distance
# converting sentence into CountVectorizer
counts = vectorizer.fit_transform(corpus).todense()
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
vectorizer = CountVectorizer(stop_words='english') # added one option which removes the grammar (stop) words from the corpus
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
print("1 & 2", euclidean_distances(counts[0], counts[1]))
print("2 & 3", euclidean_distances(counts[1], counts[2]))
print("1 & 3", euclidean_distances(counts[0], counts[2]))
corpus = [
'He ate the sandwiches',
'Every sandwich was eaten by him'
]
vectorizer = CountVectorizer(stop_words='english') # added one option which removes the grammar (stop) words from the corpus
print (vectorizer.fit_transform(corpus).todense())
print (vectorizer.vocabulary_)
lemmatizer = WordNetLemmatizer()
print (lemmatizer.lemmatize('gathering', 'v'))
print (lemmatizer.lemmatize('gathering', 'n'))
stemmer = PorterStemmer()
print (stemmer.stem('gathering'))
wordnet_tags = ['n', 'v']
corpus = [
'He ate the sandwiches',
'Every sandwich was eaten by him'
]
stemmer = PorterStemmer()
print ('Stemmed:', [[stemmer.stem(token) for token in word_tokenize(document)] for document in corpus])
def lemmatize(token, tag):
if tag[0].lower() in ['n', 'v']:
return lemmatizer.lemmatize(token, tag[0].lower())
return token
lemmatizer = WordNetLemmatizer()
tagged_corpus = [pos_tag(word_tokenize(document)) for document in corpus]
print ('Lemmatized:', [[lemmatize(token, tag) for token, tag in document] for document in tagged_corpus])
corpus = ['The dog ate a sandwich, the wizard transfigured a sandwich, and I ate a sandwich']
vectorizer = CountVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)
corpus = ['The dog ate a sandwich and I ate a sandwich',
'The wizard transfigured a sandwich']
vectorizer = TfidfVectorizer(stop_words='english')
print (vectorizer.fit_transform(corpus).todense())
print(vectorizer.vocabulary_)
corpus = ['The dog ate a sandwich and I ate a sandwich',
'The wizard transfigured a sandwich']
vectorizer = HashingVectorizer(n_features=6)
print (vectorizer.fit_transform(corpus).todense())
X = [[1,2,3],
[4,5,1],
[3,6,2]
]
print(preprocessing.scale(X))
x1 = preprocessing.StandardScaler()
print(x1)
print(x1.fit_transform(X))
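# Sanity check (sketch): the standardized columns should have (near) zero mean and unit variance.
X_scaled = x1.fit_transform(X)
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))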
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Step12: End of Project 1.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
Step39: Project 4
Step40: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step41: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step42: End of Project 4.
Step46: Project 5
Step47: Run the following cell to recreate the network and train it once again.
Step48: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step49: End of Project 5.
Step53: Project 6
Step54: Run the following cell to train your network with a small polarity cutoff.
Step55: And run the following cell to test its performance. It should be
Step56: Run the following cell to train your network with a much larger polarity cutoff.
Step57: And run the following cell to test its performance.
Step58: End of Project 6.
|
<ASSISTANT_TASK:>
Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
from collections import Counter
import numpy as np
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for label, review in zip(labels, reviews):
words = review.lower().replace(",", " ").replace(".", " ").split(" ")
total_counts.update(words)
if label == "POSITIVE" :
positive_counts.update(words)
else:
negative_counts.update(words)
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word, count in total_counts.most_common():
ratio = positive_counts[word]/(float(negative_counts[word])+1.0)
pos_neg_ratios.update({word:ratio})
if count <100:
break
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
x = np.arange(0,5,0.01)
y = [np.log(x) if x >= 1 else -np.log(1/(x + 0.01)) for x in x]
import matplotlib.pyplot as plt
plt.plot(x,y)
plt.grid(True)
plt.show()
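# A quick numeric check (sketch) of the same transform on a few example ratios,
# showing how values below 1 map to negative numbers and values above 1 to positive ones:
for r in [0.25, 1.0, 4.0]:
    print(r, np.log(r) if r >= 1 else -np.log(1/(r + 0.01)))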
# TODO: Convert ratios to logs
for word, ratio in pos_neg_ratios.most_common():
if ratio >=1:
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log(1/(ratio + 0.01))
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
from IPython.display import Image
Image(filename='sentiment_network_2.png')
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
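# Spot-check a couple of entries (sketch): every vocabulary word maps to a unique column index.
print(word2index['the'], word2index['amazing'])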
def update_input_layer(review):
    """ Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    # use global to avoid creating a new variable that may fill your RAM!
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
words = review.lower().replace(",", " ").replace(".", " ").split(" ")
for word in words:
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
    """ Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
return 1 if label=="POSITIVE" else 0
labels[1]
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
        """ Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.test(reviews[-1000:],labels[-1000:])
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].lower().replace(",", " ").replace(".", " ").split(" "):
review_counter[word] += 1
review_counter.most_common()
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.lower().replace(",", " ").replace(".", " ").split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The input layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The input layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize))
p.scatter(x="x1", y="x2", size=8, source=source,color=colors_list)
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Parsing is hard...
Step3: Exercise I
Step4: Python Regexp
Step5: Achieve more complex splitting using regular expressions.
Step6: Benchmarking in IPython - I
Step8: Parsing
|
<ASSISTANT_TASK:>
Python Code:
import re
import nose
# %timeit
from __future__ import print_function
# Before writing the parser, collect samples of
# the interesting lines. For now just these two:
mail_sent = 'May 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'
mail_delivered = 'May 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'
print("I'm goint to parse the following line", mail_sent, sep="\n\n")
def test_sent():
hour, host, to = parse_line(mail_sent)
assert hour == '08:00:00'
assert to == 'jon@doe.it'
# Play with mail_sent
mail_sent.split()
# You can number fields with enumerate.
# Remember that ipython puts the last returned value in `_`
# in our case: _ = mail_sent.split()
# which is useful in interactive mode!
fields, counting = _, enumerate(_)
print(*counting, sep="\n")
#counting = enumerate(mail_sent.split())
#for it in counting:
# print(it)
# Now we can pick fields individually...
hour, host, dest = fields[2], fields[3], fields[6]
print("Hour: {}, host: {}, dest: {}".format(hour, host, dest))
test_str_1 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'
test_str_2 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'
def test_sent():
hour, host, destination = parse_line(test_str_1)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination == 'to=<jon@doe.it>,'
def test_delivered():
hour, host, destination = parse_line(test_str_2)
print(destination)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination is None
def parse_line(line):
Complete the parse line function.
# Hint: "you can".split()
# Hint: "<you can slice>"[1:-1] or use re.split
pass
test_sent()
test_delivered()
# Python supports regular expressions via
import re
# We start showing a grep-reloaded function
def grep(expr, fpath):
one = re.compile(expr) # ...has two lookup methods...
assert ( one.match # which searches from ^ the beginning
and one.search ) # that searches $\pyver{anywhere}$
with open(fpath) as fp:
return [x for x in fp if one.search(x)]
# The function seems to work as expected ;)
assert not grep(r'^localhost', '/etc/hosts')
# And some more tests
ret = grep('127.0.0.1', '/etc/hosts')
assert ret, "ret should not be empty"
print(*ret)
# Splitting with re.findall
from re import findall # can be misused too;
# eg for adding the ":" to a
mac = "00""24""e8""b4""33""20"
# ...using this
re_hex = "[0-9a-fA-F]{2}"
mac_address = ':'.join(findall(re_hex, mac))
print("The mac address is ", mac_address)
# Actually this does a bit of validation, requiring all chars to be in the 0-F range
# Run the following cell many times.
# Do you always get the same results?
import timeit
test_all_regexps = ("..", "[a-fA-F0-9]{2}")
for re_s in test_all_regexps:
print(timeit.timeit(stmt="':'.join(findall(re_s, mac))",
setup="from re import findall;re_s='{}';mac='{}'".format(re_s, mac)))
# We can even compare compiled vs inline regexp
import re
from time import sleep
for re_s in test_all_regexps:
print(timeit.timeit(stmt="':'.join(re_c.findall(mac))",
setup="from re import findall, compile;re_c=compile('{}');mac='{}'".format(re_s, mac)))
# ...or simple
print(timeit.timeit(stmt="':'.join([mac[i:i+2] for i in range(0,12,2)])",
setup="from re import findall;mac='{}'".format(mac)))
#
# Use this cell for Exercise II
#
test_str_1 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<jon@doe.it>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'
test_str_2 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'
def test_sent():
hour, host, destination = parse_line(test_str_1)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination == 'jon@doe.it'
def test_delivered():
hour, host, destination = parse_line(test_str_2)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination is None
def parse_line(line):
Complete the parse line function.
# Hint: "you can".split()
# Hint: "<you can slice>"[1:-1] or use re.split
pass
test_sent()
test_delivered()
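# A possible solution sketch (one of many) for the parse_line exercise above. It is
# defined under a separate, hypothetical name so the exercise stub stays untouched:
# split on whitespace, take the timestamp and host fields, and pull the address out
# of the "to=<...>," field when present (so "removed" lines yield None).
def parse_line_sketch(line):
    fields = line.split()
    hour, host = fields[2], fields[3]
    destination = None
    for field in fields:
        if field.startswith('to=<'):
            destination = field[4:-2]  # drop the leading 'to=<' and the trailing '>,'
    return hour, host, destination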
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, import the EBTEL results.
Step2: Next, reshape the EBTEL results into something readable by the IonPopSolver code and save them to a file. Set some parameters for reading the data structure.
Step3: Now, print the files.
Step4: We need to modify the XML input file for the IonPopSolver code to make sure it points to the right atomic database (see install instructions in IonPopSolver). We'll also set the cutoff ion fraction to $1\times10^{-6}$ to speed up the calculation.
Step5: Now, we'll run the code in parallel with the subprocess module. First, define the worker function that will run in parallel.
Step6: Now, we need to assemble our list of inputs.
|
<ASSISTANT_TASK:>
Python Code:
import os
import pickle
import multiprocessing
import subprocess
import xml.etree.ElementTree as ET
import numpy as np
with open(__depends__[0],'rb') as f:
varying_tau_results = pickle.load(f)
tau_indices = [0,-1]
prefixes = ['tau20','tau500']
parameter_sets = {'single':('t','T','n'),'electron':('te','Tee','ne'),'ion':('ti','Tie','ni')}
inputs = []
for i,pre in zip(tau_indices,prefixes):
for key in parameter_sets:
inputs.append(os.path.join('../results/','_tmp_%s.%s.ips.txt'%(pre,key)))
np.savetxt(os.path.join('../results/','_tmp_%s.%s.ips.txt'%(pre,key)),
np.transpose([varying_tau_results[i][parameter_sets[key][0]],
varying_tau_results[i][parameter_sets[key][1]],
varying_tau_results[i][parameter_sets[key][2]]]),
header=str(len(varying_tau_results[i][parameter_sets[key][0]])),comments='',fmt='%f\t%e\t%e')
xml_tree = ET.parse(os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.example.cfg.xml'))
root = xml_tree.getroot()
node1 = root.find('atomicDB')
node1.text = os.path.join(os.environ['EXP_DIR'],'apolloDB') + '/'
node2 = root.find('cutoff_ion_fraction')
node2.text = '1e-6'
xml_tree.write(os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.local.cfg.xml'))
def worker(args):
    input_file, output_file = args
print("Running IonPopSolver for input %s"%(input_file))
executable = os.path.join(os.environ['EXP_DIR'],'IonPopSolver/bin/IonPopSolver.run')
static_args = ["-Z","26","-f","9","-t","27","-r",
os.path.join(os.environ['EXP_DIR'],'IonPopSolver/test/radiation.local.cfg.xml')]
var_args = ["-I",os.path.abspath(input_file),"-O",os.path.abspath(output_file)]
subprocess.call([executable]+static_args+var_args)
print("Finished IonPopSolver for input %s"%(input_file))
p = multiprocessing.Pool()
p.map(worker,zip(sorted(inputs),__dest__))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The second step consists of using a binomial distribution as the likelihood instead of a Gaussian. In this way the model is expressed as follows
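A sketch of the model this step refers to, reconstructed from the modelo_0 code further down (the Normal(0, 10) priors match that code):
$\theta = \text{logistic}(\alpha + x\beta), \qquad y \sim \text{Bern}(\theta), \qquad \alpha \sim N(0, 10), \quad \beta \sim N(0, 10)$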
Step2: Now we will plot the 3 species versus the sepal length using seaborn's stripplot function
Step3: Note in figure 4.2 that the y axis represents a continuous variable while the x axis is categorical. The dispersion (or jitter) of the points along the x axis has no meaning and is just a trick to avoid all the points collapsing onto a single line (you can try passing the argument jitter=False). Therefore, the only thing that matters when reading the x axis is the membership of the points in the setosa, versicolor, or virginica classes.
Step4: Before continuing, take some time to study the previous plots and to get familiar with the dataset and with how the dependent and independent variables relate to each other.
Step5: As with other linear models, centering the data can help with sampling. Now that we have the data in the proper format, we can finally build the model with PyMC3.
Step6: As usual, we also show the summary of the posterior. Later on, we will compare the value we obtain for the decision boundary with a value computed using another method.
Step7: Now we are going to plot the data together with the fitted sigmoid curve
Step8: Figure 4.4 shows the sepal length for the species (setosa = 0, versicolor = 1). To mitigate the overlap of the data, we have added noise (jitter) to the binary response variables. An S-shaped green line represents the mean value of $\theta$. This line can be interpreted as the probability that a flower is versicolor given the value of the sepal length. The semi-transparent green band is the 94% HPD interval. In this way we can interpret logistic regression as a way of combining variables linearly in order to obtain a probability for binary variables.
Step9: The decision boundary
Step10: As we did for a single predictor variable, we are going to plot the data and the decision boundary.
Step11: The decision boundary is a straight line, as we have already seen. Don't be confused by the curved look of the 94% HPD band. The apparent curvature is the result of having multiple lines pivoting around a central region (approximately around the mean of x and the mean of y). A short derivation of the boundary line is sketched below.
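For reference, the boundary line mentioned in Step11 follows from setting $\theta = 0.5$; it is the same expression used for bd in the modelo_1 code:
$0.5 = \text{logistic}(\alpha + \beta_1 x_1 + \beta_2 x_2) \;\Leftrightarrow\; \alpha + \beta_1 x_1 + \beta_2 x_2 = 0 \;\Leftrightarrow\; x_2 = -\frac{\alpha}{\beta_2} - \frac{\beta_1}{\beta_2} x_1$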
Step12: Therefore, the coefficient values reported by summary are on the log-odds scale.
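As a reminder of what the log-odds scale means (standard definitions, matching the odds/log-odds plot generated in the code):
$\text{odds} = \frac{\theta}{1-\theta}, \qquad \text{logit}(\theta) = \log\left(\frac{\theta}{1-\theta}\right) = \alpha + \beta_1 x_1 + \beta_2 x_2$
so each $\beta$ coefficient is the change in log-odds per unit change of its predictor.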
Step13: A very empirical way of understanding models is to change the parameters and see what happens. In the following code block, we compute the log-odds in favor of versicolor as $\text{log\_odds\_versicolor}_i = \alpha + \beta_1 x_1 + \beta_2 x_2$, and then the probability of versicolor with the logistic function. Then we repeat the computation keeping $x_2$ fixed and increasing $x_1$ by 1.
Step14: If you run the code, you will find that the increase in the log-odds is $\approx 4.7$, which is exactly the value of $\beta_0$ (check the summary for trace_1). This is in line with our earlier finding that the $\beta$ coefficients indicate the increase in log-odds units per unit increase of the $x$ variable. The increase in probability is $\approx 0.70$.
Step15: To generate figure 4.7, we have used a mask that removes the upper triangle and the diagonal elements of the heatmap, since these are uninformative or redundant. Also note that we have plotted the absolute value of the correlation, since at this point we do not care about the sign of the correlation between the variables, only about its strength.
Step16: And now we run a multiple logistic regression, just as we did before.
Step17: The decision boundary shifts towards the less abundant class and the uncertainty is larger than before. This is the typical behavior of a logistic model for unbalanced data. But wait a minute! You could well argue that I am fooling you, since the larger uncertainty is actually the product of having less data, and not just fewer setosas than versicolors. That is a perfectly valid point, but if you do exercise 2 you will be able to verify that what explains this plot is the unbalanced data.
Step18: What should we do if we run into unbalanced data? Well, the obvious solution is to get a dataset with approximately the same number of instances per class. This is something to keep in mind when collecting or generating data. If you have no control over the dataset, be careful when interpreting the results for unbalanced data. Check the uncertainty of the model and run some posterior predictive checks to see whether the results are useful to you. Another option is to use more informative priors and/or to run an alternative model, as explained later in this chapter.
Step19: The PyMC3 code reflects the few changes between the logistic model and the softmax model (the softmax function is recalled below). Pay attention to the shape values for the coefficients $\alpha$ and $\beta$. In the following code we use Theano's softmax function. We have used the expression import theano.tensor as tt, which is the convention used by the PyMC3 developers
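The softmax function used here is the multiclass generalization of the logistic (standard definition, which is what tt.nnet.softmax computes):
$\text{softmax}_i(\mu) = \frac{e^{\mu_i}}{\sum_k e^{\mu_k}}$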
Step20: How well does our model work? Let's find out how many cases we can predict correctly. In the following code, we only use the mean of the parameters to compute the probability that each data point belongs to each of the three classes, then we assign the class using the argmax function. And we compare the result with the observed values
Step21: The result is that we correctly classify $\approx 98\%$ of the data, that is, we misclassify only three cases. That is really a very good job. Nevertheless, a true test of the performance of our model would be to check it on a dataset not used to fit the model. Otherwise, we may be overestimating the model's real ability to generalize to other data.
Step22: Linear discriminant analysis (LDA)
Step23: Now we are going to generate a figure that shows the two classes (setosa = 0 and versicolor = 1) against the sepal length values, and also the decision boundary as an orange line and the 94% HPD interval as a semi-transparent orange band.
Step24: As you may have noticed, figure 4.9 is quite similar to figure 4.4. Also check the values of the decision boundary in the following summary
Step25: Both the LDA model and logistic regression give similar results. The linear discriminant model can be extended to more than one feature by modeling the classes as multivariate Gaussians. It is also possible to relax the assumption that the classes share a common variance (or covariance). This leads to a model known as quadratic discriminant analysis (QDA).
Step26: It is important to note that $\mu$ can be a float, but the distribution models the probability of a discrete number of events. In figure 4.10, the points represent the values of the distribution, while the continuous lines are a visual aid to help us easily grasp the shape of the distribution. Remember, the Poisson distribution is a discrete distribution (its probability mass function is recalled below).
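For reference, the Poisson probability mass function plotted in figure 4.10 has the standard form
$f(x \mid \mu) = \frac{e^{-\mu}\mu^x}{x!}, \quad x = 0, 1, 2, \ldots$
with mean and variance both equal to $\mu$.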
Step27: Poisson regression and ZIP regression
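The ZIP likelihood used in this section mixes extra zeros with a Poisson. Written with $\psi$ as the probability of drawing from the Poisson component (the same convention as the simulation and pm.ZeroInflatedPoisson code below), it reads
$p(y = 0) = (1-\psi) + \psi\, e^{-\mu}, \qquad p(y = k) = \psi\, \frac{\mu^k e^{-\mu}}{k!} \quad (k > 0)$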
Step28: I leave it as an exercise for you to explore the dataset using plots and/or a Pandas function such as describe(). For now we will continue by translating the previous Kruschke diagram into PyMC3
Step29: To better understand the results of our inference, let's make a plot.
Step30: Robust logistic regression
Step31: Here we have some versicolors (1s) with an unusually short sepal length. We can fix this with a mixture model. We will say that the output variable comes with probability $\pi$ from random guessing, or with probability $1-\pi$ from a logistic regression model. Mathematically, we have
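A reconstruction of the formula this step announces, consistent with the mixture just described ($\pi$ is the probability of random guessing, which assigns 0.5 to either class):
$p = \pi\, 0.5 + (1 - \pi)\, \text{logistic}(\alpha + X\beta)$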
|
<ASSISTANT_TASK:>
Python Code:
z = np.linspace(-6, 6)
logística = 1 / (1 + np.exp(-z))
plt.plot(z, logística)
plt.xlabel('z')
plt.ylabel('logística(z)')
plt.title('Figure 4.1');
iris = pd.read_csv('datos/iris.csv')
iris.head()
sns.stripplot(x="species", y="sepal_length", data=iris, jitter=True)
plt.title('Figure 4.2');
sns.pairplot(iris, hue='species')
plt.title('Figure 4.3');
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
x_c = x_0 - x_0.mean()
with pm.Model() as modelo_0:
α = pm.Normal('α', mu=0, sd=10)
β = pm.Normal('β', mu=0, sd=10)
μ = α + pm.math.dot(x_c, β)
θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
bd = pm.Deterministic('bd', -α/β)
yl = pm.Bernoulli('yl', p=θ, observed=y_0)
trace_0 = pm.sample(1000)
varnames = ['α', 'β', 'bd']
az.plot_trace(trace_0, varnames);
az.summary(trace_0, varnames)
theta = trace_0['θ'].mean(axis=0)
idx = np.argsort(x_c)
plt.figure(figsize=(10, 6))
plt.plot(x_c[idx], theta[idx], color='C2', lw=3);
plt.vlines(trace_0['bd'].mean(), 0, 1, color='k')
bd_hpd = az.hpd(trace_0['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='k', alpha=0.5)
plt.scatter(x_c, np.random.normal(y_0, 0.02), marker='.', color=[f'C{x}' for x in y_0])
theta_hpd = az.hpd(trace_0['θ'])[idx]
plt.fill_between(x_c[idx], theta_hpd[:,0], theta_hpd[:,1], color='C2', alpha=0.5)
plt.xlabel(x_n)
plt.ylabel('θ', rotation=0, labelpad=20)
# use original scale for xticks
locs, _ = plt.xticks()
plt.xticks(locs, np.round(locs + x_0.mean(), 1))
plt.title('Figure 4.4');
df = iris.query("species == ('setosa', 'versicolor')")
y_1 = pd.Categorical(df['species']).codes
x_n = ['sepal_length', 'sepal_width']
#x_n = ['petal_length', 'petal_width']
x_1 = df[x_n].values
with pm.Model() as modelo_1:
α = pm.Normal('α', mu=0, sd=10)
β = pm.Normal('β', mu=0, sd=2, shape=len(x_n))
μ = α + pm.math.dot(x_1, β)
θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
bd = pm.Deterministic('bd', -α/β[1] - β[0]/β[1] * x_1[:,0])
yl = pm.Bernoulli('yl', p=θ, observed=y_1)
trace_1 = pm.sample(2000)
varnames = ['α', 'β']
az.plot_forest(trace_1, var_names=varnames);
idx = np.argsort(x_1[:,0])
bd = trace_1['bd'].mean(0)[idx]
plt.scatter(x_1[:,0], x_1[:,1], c=[f'C{x}' for x in y_0])
plt.plot(x_1[:,0][idx], bd, color='k');
bd_hpd = az.hpd(trace_1['bd'])[idx]
plt.fill_between(x_1[:,0][idx], bd_hpd[:,0], bd_hpd[:,1], color='k', alpha=0.5);
plt.xlabel(x_n[0])
plt.ylabel(x_n[1])
plt.title('Figure 4.5');
probability = np.linspace(0.01, 1, 100)
odds = probability / (1 - probability)
_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(probability, odds, 'C0')
ax2.plot(probability, np.log(odds), 'C1')
ax1.set_xlabel('probability')
ax1.set_ylabel('odds', color='C0')
ax2.set_ylabel('log-odds', color='C1')
ax1.grid(False)
ax2.grid(False)
plt.title('Figure 4.6');
df = az.summary(trace_1, varnames)
df
x_1 = 4.5 # sepal_length
x_2 = 3 # sepal_width
# helper: logistic (inverse-logit) function, defined here in case it is not already available
def logistic(x):
    return 1 / (1 + np.exp(-x))
log_odds_versicolor_i = (df['mean'] * [1, x_1, x_2]).sum()
probability_versicolor_i = logistic(log_odds_versicolor_i)
log_odds_versicolor_f = (df['mean'] * [1, x_1 + 1, x_2]).sum()
probability_versicolor_f = logistic(log_odds_versicolor_f)
(f'{log_odds_versicolor_f - log_odds_versicolor_i:.2f}',
f'{probability_versicolor_f - probability_versicolor_i:.2f}')
corr = iris[iris['species'] != 'virginica'].corr()
mask = np.tri(*corr.shape).T
sns.heatmap(corr.abs(), mask=mask, annot=True, cmap='viridis')
plt.title('Figure 4.7');
df = iris.query("species == ('setosa', 'versicolor')")
df = df[45:]
y_3 = pd.Categorical(df['species']).codes
x_n = ['sepal_length', 'sepal_width']
x_3 = df[x_n].values
with pm.Model() as modelo_3:
α = pm.Normal('α', mu=0, sd=10)
β = pm.Normal('β', mu=0, sd=2, shape=len(x_n))
μ = α + pm.math.dot(x_3, β)
θ = pm.math.sigmoid(μ)
bd = pm.Deterministic('bd', -α/β[1] - β[0]/β[1] * x_3[:,0])
yl = pm.Bernoulli('yl', p=θ, observed=y_3)
trace_3 = pm.sample(1000)
idx = np.argsort(x_3[:,0])
bd = trace_3['bd'].mean(0)[idx]
plt.scatter(x_3[:,0], x_3[:,1], c= [f'C{x}' for x in y_3])
plt.plot(x_3[:,0][idx], bd, color='k');
bd_hpd = pm.hpd(trace_3['bd'])[idx]
plt.fill_between(x_3[:,0][idx], bd_hpd[:,0], bd_hpd[:,1], color='k', alpha=0.5);
plt.xlabel(x_n[0])
plt.ylabel(x_n[1]);
plt.title('Figure 4.8')
iris = sns.load_dataset('iris')
y_s = pd.Categorical(iris['species']).codes
x_n = iris.columns[:-1]
x_s = iris[x_n].values
x_s = (x_s - x_s.mean(axis=0)) / x_s.std(axis=0)
with pm.Model() as modelo_s:
α = pm.Normal('α', mu=0, sd=5, shape=3)
β = pm.Normal('β', mu=0, sd=5, shape=(4,3))
μ = pm.Deterministic('μ', α + pm.math.dot(x_s, β))
θ = tt.nnet.softmax(μ)
yl = pm.Categorical('yl', p=θ, observed=y_s)
trace_s = pm.sample(2000)
az.plot_forest(trace_s, var_names=['α', 'β']);
data_pred = trace_s['μ'].mean(0)
y_pred = [np.exp(point)/np.sum(np.exp(point), axis=0) for point in data_pred]
f'{np.sum(y_s == np.argmax(y_pred, axis=1)) / len(y_s):.2f}'
with pm.Model() as modelo_sf:
α = pm.Normal('α', mu=0, sd=2, shape=2)
β = pm.Normal('β', mu=0, sd=2, shape=(4,2))
α_f = tt.concatenate([[0] ,α])
β_f = tt.concatenate([np.zeros((4,1)) , β], axis=1)
μ = α_f + pm.math.dot(x_s, β_f)
θ = tt.nnet.softmax(μ)
yl = pm.Categorical('yl', p=θ, observed=y_s)
trace_sf = pm.sample(1000)
az.plot_forest(trace_sf, var_names=['α', 'β']);
with pm.Model() as modelo_lda:
μ = pm.Normal('μ', mu=0, sd=10, shape=2)
σ = pm.HalfNormal('σ', 10)
setosa = pm.Normal('setosa', mu=μ[0], sd=σ, observed=x_0[:50])
versicolor = pm.Normal('versicolor', mu=μ[1], sd=σ, observed=x_0[50:])
bd = pm.Deterministic('bd', (μ[0] + μ[1]) / 2)
trace_lda = pm.sample(1000)
plt.axvline(trace_lda['bd'].mean(), ymax=1, color='C1')
bd_hpd = pm.hpd(trace_lda['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='C1', alpha=0.5)
plt.plot(x_0, np.random.normal(y_0, 0.02), '.', color='k')
plt.ylabel('θ', rotation=0)
plt.xlabel('sepal_length')
plt.title('Figure 4.9');
az.summary(trace_lda)
mu_params = [0.5, 1.5, 3, 8]
x = np.arange(0, max(mu_params) * 3)
for mu in mu_params:
y = stats.poisson(mu).pmf(x)
plt.plot(x, y, 'o-', label=f'μ = {mu:3.1f}')
plt.legend()
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Figure 4.10')
plt.savefig('B11197_04_10.png', dpi=300);
#np.random.seed(42)
n = 100
θ_real = 2.5
ψ = 0.1
# Simulate some data
counts = np.array([(np.random.random() > (1-ψ)) * np.random.poisson(θ_real)
for i in range(n)])
with pm.Model() as ZIP:
ψ = pm.Beta('ψ', 1., 1.)
θ = pm.Gamma('θ', 2., 0.1)
y = pm.ZeroInflatedPoisson('y', ψ, θ, observed=counts)
trace = pm.sample(1000)
az.plot_trace(trace)
plt.title('Figure 4.11');
az.summary(trace)
fish_data = pd.read_csv('datos/fish.csv')
with pm.Model() as ZIP_reg:
ψ = pm.Beta('ψ', 1, 1)
α = pm.Normal('α', 0, 10)
β = pm.Normal('β', 0, 10, shape=2)
θ = pm.math.exp(α + β[0] * fish_data['child'] + β[1] * fish_data['camper'])
yl = pm.ZeroInflatedPoisson('yl', ψ, θ, observed=fish_data['count'])
trace_ZIP_reg = pm.sample(1000)
az.plot_trace(trace_ZIP_reg);
children = [0, 1, 2, 3, 4]
fish_count_pred_0 = []
fish_count_pred_1 = []
for n in children:
without_camper = trace_ZIP_reg['α'] + trace_ZIP_reg['β'][:,0] * n
with_camper = without_camper + trace_ZIP_reg['β'][:,1]
fish_count_pred_0.append(np.exp(without_camper))
fish_count_pred_1.append(np.exp(with_camper))
plt.plot(children, fish_count_pred_0, 'C0.', alpha=0.01)
plt.plot(children, fish_count_pred_1, 'C1.', alpha=0.01)
plt.xticks(children);
plt.xlabel('Number of children')
plt.ylabel('Fish caught')
plt.plot([], 'C0o', label='without camper')
plt.plot([], 'C1o', label='with camper')
plt.legend();
plt.title('Figure 4.12');
iris = sns.load_dataset("iris")
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
y_0 = np.concatenate((y_0, np.ones(6, dtype=int)))
x_0 = np.concatenate((x_0, [4.2, 4.5, 4.0, 4.3, 4.2, 4.4]))
x_c = x_0 - x_0.mean()
plt.plot(x_c, y_0, 'o', color='k');
with pm.Model() as modelo_rlg:
α = pm.Normal('α', mu=0, sd=10)
β = pm.Normal('β', mu=0, sd=10)
μ = α + x_c * β
θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
bd = pm.Deterministic('bd', -α/β)
π = pm.Beta('π', 1, 1)
p = π * 0.5 + (1 - π) * θ
yl = pm.Bernoulli('yl', p=p, observed=y_0)
trace_rlg = pm.sample(1000)
az.summary(trace_rlg, varnames)
theta = trace_rlg['θ'].mean(axis=0)
idx = np.argsort(x_c)
plt.plot(x_c[idx], theta[idx], color='C2', lw=3);
plt.vlines(trace_rlg['bd'].mean(), 0, 1, color='k')
bd_hpd = pm.hpd(trace_rlg['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='k', alpha=0.5)
plt.scatter(x_c, np.random.normal(y_0, 0.02), marker='.', color=[f'C{x}' for x in y_0])
theta_hpd = pm.hpd(trace_rlg['θ'])[idx]
plt.fill_between(x_c[idx], theta_hpd[:,0], theta_hpd[:,1], color='C2', alpha=0.5)
plt.xlabel(x_n)
plt.ylabel('θ', rotation=0)
# use original scale for xticks
locs, _ = plt.xticks()
plt.xticks(locs, np.round(locs + x_0.mean(), 1))
plt.title('Figure 4.13');
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Violations of graphical excellence and integrity
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
# Add your filename and uncomment the following line:
# Image(filename='yourfile.png')
Image('netfli.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's coarse grain an atomic PDB structure to the amino acid level
Step2: Create Input and run MC simulation
Step3: Analysis
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division, unicode_literals, print_function
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np, pandas as pd
import os.path, os, sys, json, filecmp, copy
plt.rcParams.update({'font.size': 16, 'figure.figsize': [8.0, 6.0]})
try:
workdir
except NameError:
workdir=%pwd
else:
%cd $workdir
print(workdir)
%%bash -s "$workdir"
cd $1
if [ ! -d "faunus/" ]; then
git clone https://github.com/mlund/faunus.git
cd faunus
git checkout 86a1f74
else
cd faunus
fi
# if different, copy custom gctit.cpp into faunus
if ! cmp ../titrate.cpp src/examples/gctit.cpp
then
cp ../titrate.cpp src/examples/gctit.cpp
fi
pwd
cmake . -DCMAKE_BUILD_TYPE=Release -DENABLE_APPROXMATH=on &>/dev/null
make example_gctit -j4
%cd $workdir
import mdtraj as md
traj = md.load_pdb('1BPI.pdb')
for chain in traj.topology.chains:
print('chain: ', chain.index)
# filter pdb to only select protein(s)
sel = chain.topology.select('protein')
top = chain.topology.subset(sel)
f = open('chain'+str(chain.index)+'.aam','w')
f.write(str(top.n_residues)+'\n')
# locate disulfide bonds (not used for anything yet)
for b in top.bonds:
i = b[0].residue.index
j = b[1].residue.index
if j>i+1:
if (b[0].residue.name == 'CYS'):
if (b[1].residue.name == 'CYS'):
print('SS-bond between residues', i, j)
# loop over residues and calculate residue mass centers, radius, and weight
top.create_disulfide_bonds( traj.xyz[0] )
for res in top.residues:
if res.is_protein:
cm = [0,0,0] # residue mass center
mw = 0 # residue weight
for a in res.atoms:
cm = cm + a.element.mass * traj.xyz[0][a.index]
mw = mw + a.element.mass
cm = cm/mw*10
radius = ( 3./(4*np.pi)*mw/1.0 )**(1/3.)
f.write('{0:4} {1:5} {2:8.3f} {3:8.3f} {4:8.3f} {5:6.3f} {6:6.2f} {7:6.2f}\n'\
.format(res.name,res.index,cm[0],cm[1],cm[2],0,mw,radius))
f.close()
pH_range = [7.0]
salt_range = [0.03] # mol/l
%cd $workdir'/'
def mkinput():
js = {
"energy": {
"eqstate": { "processfile": "titrate.json" },
"nonbonded": {
"coulomb": { "epsr": 80 }
}
},
"system": {
"temperature": 298.15,
"sphere" : { "radius" : 90 },
"mcloop": { "macro": 10, "micro": micro }
},
"moves": {
"gctit" : { "molecule": "salt", "prob": 0.5 },
"atomtranslate" : {
"salt": { "prob": 0.5 }
}
},
"moleculelist": {
"protein": { "structure":"../chain0.aam", "Ninit":1, "insdir":"0 0 0"},
"salt": {"atoms":"Na Cl", "Ninit":60, "atomic":True }
},
"atomlist" : {
"Na" : { "q": 1, "r":1.9, "eps":0.005, "mw":22.99, "dp":100, "activity":salt },
"Cl" : { "q":-1, "r":1.7, "eps":0.005, "mw":35.45, "dp":100, "activity":salt },
"ASP" : { "q":-1, "r":3.6, "eps":0.05, "mw":110 },
"HASP" : { "q":0, "r":3.6, "eps":0.05, "mw":110 },
"LASP" : { "q":2, "r":3.6, "eps":0.05, "mw":110 },
"CTR" : { "q":-1, "r":2.0, "eps":0.05, "mw":16 },
"HCTR" : { "q":0, "r":2.0, "eps":0.05, "mw":16 },
"GLU" : { "q":-1, "r":3.8, "eps":0.05, "mw":122 },
"HGLU" : { "q":0, "r":3.8, "eps":0.05, "mw":122 },
"LGLU" : { "q":2, "r":3.8, "eps":0.05, "mw":122 },
"HIS" : { "q":0, "r":3.9, "eps":0.05, "mw":130 },
"HHIS" : { "q":1, "r":3.9, "eps":0.05, "mw":130 },
"NTR" : { "q":0, "r":2.0, "eps":0.05, "mw":14 },
"HNTR" : { "q":1, "r":2.0, "eps":0.05, "mw":14 },
"TYR" : { "q":-1, "r":4.1, "eps":0.05, "mw":154 },
"HTYR" : { "q":0, "r":4.1, "eps":0.05, "mw":154 },
"LYS" : { "q":0, "r":3.7, "eps":0.05, "mw":116 },
"HLYS" : { "q":1, "r":3.7, "eps":0.05, "mw":116 },
"CYb" : { "q":0, "r":3.6, "eps":0.05, "mw":103 },
"CYS" : { "q":-1, "r":3.6, "eps":0.05, "mw":103 },
"HCYS" : { "q":0, "r":3.6, "eps":0.05, "mw":103 },
"ARG" : { "q":0, "r":4.0, "eps":0.05, "mw":144 },
"HARG" : { "q":1, "r":4.0, "eps":0.05, "mw":144 },
"ALA" : { "q":0, "r":3.1, "eps":0.05, "mw":66 },
"ILE" : { "q":0, "r":3.6, "eps":0.05, "mw":102 },
"LEU" : { "q":0, "r":3.6, "eps":0.05, "mw":102 },
"MET" : { "q":0, "r":3.8, "eps":0.05, "mw":122 },
"PHE" : { "q":0, "r":3.9, "eps":0.05, "mw":138 },
"PRO" : { "q":0, "r":3.4, "eps":0.05, "mw":90 },
"TRP" : { "q":0, "r":4.3, "eps":0.05, "mw":176 },
"VAL" : { "q":0, "r":3.4, "eps":0.05, "mw":90 },
"SER" : { "q":0, "r":3.3, "eps":0.05, "mw":82 },
"THR" : { "q":0, "r":3.5, "eps":0.05, "mw":94 },
"ASN" : { "q":0, "r":3.6, "eps":0.05, "mw":108 },
"GLN" : { "q":0, "r":3.8, "eps":0.05, "mw":120 },
"GLY" : { "q":0, "r":2.9, "eps":0.05, "mw":54 }
},
"processes" : {
"H-Asp" : { "bound":"HASP" , "free":"ASP" , "pKd":4.0 , "pX":pH },
"H-Ctr" : { "bound":"HCTR" , "free":"CTR" , "pKd":2.6 , "pX":pH },
"H-Glu" : { "bound":"HGLU" , "free":"GLU" , "pKd":4.4 , "pX":pH },
"H-His" : { "bound":"HHIS" , "free":"HIS" , "pKd":6.3 , "pX":pH },
"H-Arg" : { "bound":"HARG" , "free":"ARG" , "pKd":12.0 , "pX":pH },
"H-Ntr" : { "bound":"HNTR" , "free":"NTR" , "pKd":7.5 , "pX":pH },
"H-Cys" : { "bound":"HCYS" , "free":"CYS" , "pKd":10.8 , "pX":pH },
"H-Tyr" : { "bound":"HTYR" , "free":"TYR" , "pKd":9.6 , "pX":pH },
"H-Lys" : { "bound":"HLYS" , "free":"LYS" , "pKd":10.4 , "pX":pH }
}
}
with open('titrate.json', 'w+') as f:
f.write(json.dumps(js, indent=4))
for pH in pH_range:
for salt in salt_range:
pfx='pH'+str(pH)+'-I'+str(salt)
if not os.path.isdir(pfx):
%mkdir -p $pfx
%cd $pfx
# equilibration run (no translation)
!rm -fR state
micro=100
mkinput()
!../faunus/src/examples/gctit > eq
# production run
micro=1000
mkinput()
%time !../faunus/src/examples/gctit > out
%cd ..
%cd ..
print('done.')
%cd $workdir'/'
import json
for pH in pH_range:
for salt in salt_range:
pfx='pH'+str(pH)+'-I'+str(salt)
if os.path.isdir(pfx):
%cd $pfx
js = json.load( open('gctit-output.json') )
charge = js['protein']['charge']
index = js['protein']['index']
resname = js['protein']['resname']
plt.plot(index,charge, 'ro')
%cd ..
for i in range(0,len(index)):
label = resname[i]+' '+str(index[i]+1)
plt.annotate(label, xy=(index[i], charge[i]), fontsize=8, rotation=70)
plt.title('Protonation States of All Residues')
plt.legend(loc=0, frameon=False)
plt.xlabel(r'residue number')
plt.ylabel(r'average charge, $z$')
plt.ylim((-1.1, 1.1))
#plt.xticks(i, resname, rotation=70, fontsize='small')
plt.savefig('fig.pdf', bbox_inches='tight')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: b. Confidence Intervals.
Step5: Exercise 4
|
<ASSISTANT_TASK:>
Python Code:
# Useful Functions
class DiscreteRandomVariable:
def __init__(self, a=0, b=1):
self.variableType = ""
self.low = a
self.high = b
return
def draw(self, numberOfSamples):
samples = np.random.randint(self.low, self.high, numberOfSamples)
return samples
class BinomialRandomVariable(DiscreteRandomVariable):
def __init__(self, numberOfTrials = 10, probabilityOfSuccess = 0.5):
self.variableType = "Binomial"
self.numberOfTrials = numberOfTrials
self.probabilityOfSuccess = probabilityOfSuccess
return
def draw(self, numberOfSamples):
samples = np.random.binomial(self.numberOfTrials, self.probabilityOfSuccess, numberOfSamples)
return samples
def factorial(n):return reduce(lambda x,y:x*y,[1]+range(1,n+1))
# Useful Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.stats as stats
from statsmodels.stats import stattools
from __future__ import division
# Histograms with 10 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(10), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Coin Tosses']);
# Histograms with 1000000 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(1000000), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Coin Tosses']);
# Binomial distribution with p=0.25 and n=20
binomialdistribution = BinomialRandomVariable(20, 0.25)
bins = np.arange(0,21,1)
n, bins, patches = plt.hist(binomialdistribution.draw(1000000), bins=bins)
plt.title('Binomial Distribution with p=0.25 and n=20')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Binomial Draws']);
# Finding x which occurs most often
elem = np.argmax(n)
print 'Maximum occurrence for x =', elem
# Calculating the probability of finding x.
n = 20
p = 0.5
x = elem
n_factorial = factorial(n)
x_factorial = factorial(x)
n_x_factorial = factorial(n-x)
fact = n_factorial / (n_x_factorial * x_factorial)
probability = fact * (p**x) * ((1-p)**(n-x))
print 'probability of x = %d' % x, probability
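# Cross-check of the manual binomial computation above (this snippet is an addition
# for clarity, not part of the original exercise, and assumes scipy is available):
import scipy.stats
print 'scipy cross-check:', scipy.stats.binom(n, p).pmf(x)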
# Graphing a normal distribution pdf.
mu = 0
sigma = 5
x = np.linspace(-30, 30, 200)
y = (1/(sigma * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu)*(x - mu) / (2 * sigma * sigma))
plt.plot(x, y)
plt.title('Graph of PDF with mu = 0 and sigma = 5')
plt.xlabel('Value')
plt.ylabel('Probability');
# finding the 1st, 2nd, and third confidence intervals.
first_ci = (-sigma, sigma)
second_ci = (-2*sigma, 2*sigma)
third_ci = (-3*sigma, 3*sigma)
print '1-sigma -> mu +/-', sigma
print '2-sigma -> mu +/-', second_ci[1]
print '3-sigma -> mu +/-', third_ci[1]
plt.axvline(first_ci[0], linestyle='dashdot', label='68% of observations', color = 'blue')
plt.axvline(first_ci[1], linestyle='dashdot', label='68% of observations', color = 'blue')
plt.axvline(second_ci[0], linestyle='dashdot', label='95% of observations', color = 'red')
plt.axvline(second_ci[1],linestyle='dashdot', color = 'red')
plt.axvline(third_ci[0], linestyle='dashdot', label='99% of observations', color = 'green')
plt.axvline(third_ci[1], linestyle='dashdot', color = 'green')
plt.plot(x,y)
plt.title('Graph of PDF with 3 confidence intervals.')
plt.legend();
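# The 68/95/99 percentages referenced in the legend labels can be checked numerically;
# this extra snippet is not part of the original notebook and assumes scipy is installed.
import scipy.stats
for k in [1, 2, 3]:
    coverage = scipy.stats.norm.cdf(k) - scipy.stats.norm.cdf(-k)
    print '%d-sigma coverage of a normal distribution: %.4f' % (k, coverage)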
# Collect prices and returns.
prices = get_pricing('SPY', start_date = '2016-01-01', end_date='2016-05-01',
fields = 'price')
returns = prices.pct_change()[1:]
# Calculating the mean and standard deviation.
sample_mean = np.mean(returns)
sample_std_dev = np.std(returns)
x = np.linspace(-(sample_mean + 4 * sample_std_dev), (sample_mean + 4 * sample_std_dev), len(returns))
sample_distribution = ((1/(sample_std_dev * np.sqrt(2 * np.pi))) *
                       np.exp(-(x - sample_mean)*(x - sample_mean) / (2 * sample_std_dev * sample_std_dev)))
# Plotting histograms and confidence intervals.
plt.hist(returns, range=(returns.min(), returns.max()), normed = True);
plt.plot(x, sample_distribution)
plt.axvline(sample_std_dev, linestyle='dashed', color='red', label='1st Confidence Interval')
plt.axvline(-sample_std_dev, linestyle='dashed', color='red')
plt.axvline(2*sample_std_dev, linestyle='dashed', color='k', label='2nd Confidence Interval')
plt.axvline(-2*sample_std_dev, linestyle='dashed', color='k')
plt.axvline(3*sample_std_dev, linestyle='dashed', color='green', label='3rd Confidence Interval')
plt.axvline(-3*sample_std_dev, linestyle='dashed', color='green')
plt.legend();
# Run the JB test for normality.
cutoff = 0.01
_, p_value, skewness, kurtosis = stattools.jarque_bera(returns)
print "The JB test p-value is: ", p_value
print "We reject the hypothesis that the data are normally distributed ", p_value < cutoff
print "The skewness of the returns is: ", skewness
print "The kurtosis of the returns is: ", kurtosis
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is how the DEM and flow-direction maps are read for delineating basins and streams
Step2: Stream delineation
Step3: The profile of a stream can be used as a reference when searching for trace (outlet) points
Step4: Basin delineation
Step5: The last basin has a cell count of 1, which means nothing was delineated and no other cell drains through it; it is therefore not a basin and must not be used for any kind of calculation. In the next line this element is deleted
Step6: Water balance over basins
Step7: The figure shows the mean discharge estimated for every element of the basin, including cells where no drainage network is considered to be present.
Step8: The previous figure was saved to disk with the command ruta = 'Caldas_ETR.png', in this case into the current working directory; if that directory is changed, the save location changes as well.
Step9: Each entry in Qmax and Qmin corresponds to one of the return periods Tr [2.33, 5, 10, 25, 50, 100]; these can be changed by setting the Tr property when the function is invoked.
Step10: Saving to shp
Step11: Geomorphology
|
<ASSISTANT_TASK:>
Python Code:
# Watershed Modelling Framework (WMF) package for working with basins.
from wmf import wmf
import numpy as np
# Read the DEM
DEM = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dem_corr.tif',isDEMorDIR=True, dxp=30.0)
DIR = wmf.read_map_raster('/media/nicolas/discoGrande/raster/dirAMVA.tif',isDEMorDIR=True, dxp= 30.0)
wmf.cu.nodata=-9999.0; wmf.cu.dxp=30.0
DIR[DIR<=0]=wmf.cu.nodata.astype(int)
DIR=wmf.cu.dir_reclass(DIR,wmf.cu.ncols,wmf.cu.nrows)
st = wmf.Stream(-75.618,6.00,DEM=DEM,DIR=DIR,name ='Rio Medellin')
st.structure
st.Plot_Profile()
# With the search command we look for the coordinates that are located at a
# distance from the outlet between 10000 and 10100 meters.
np.where((st.structure[3]>10000) & (st.structure[3]<10100))
# The coordinates at entry 289 are:
print st.structure[0,289]
print st.structure[1,289]
# The basin can be traced using the coordinates implicitly (as in this example), or
# explicitly, as done in the second line of code.
cuenca = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo',stream=st)
# In this second line we trace a basin with coordinates that are not exact and may not lie on the
# stream; this problem is corrected by passing the stream to the tracer via the stream keyword,
# which takes the previously obtained stream object as input.
cuenca2 = wmf.Basin(-75.6422,6.082,DEM,DIR,name='ejemplo',stream=st)
# Error basin: in this case the stream argument is not passed, so the basin is traced from the
# given coordinates, which will probably produce an error.
cuenca3 = wmf.Basin(-75.6364,6.11051,DEM,DIR,name='ejemplo')
# Print the number of cells that make up each of the traced basins, to show that there is indeed
# a difference between them caused by the difference in coordinates.
print cuenca.ncells
print cuenca2.ncells
print cuenca3.ncells
del(cuenca3)
# Balance in a basin assuming an annual precipitation of 2100 mm/yr over the whole basin
cuenca.GetQ_Balance(2100)
# The long-term balance variable is computed for every cell of the basin and stored in cuenca.CellQmed
cuenca.Plot_basin(cuenca.CellQmed)
# Plot of the actual evapotranspiration (ETR) over the Caldas basin
cuenca.Plot_basin(cuenca.CellETR, extra_lat= 0.001, extra_long= 0.001, lines_spaces= 0.02,
ruta = 'Caldas_ETR.png')
# Estimation of maximum discharges; by default it uses Gumbel, it can also use lognormal
Qmax = cuenca.GetQ_Max(cuenca.CellQmed)
Qmax2 = cuenca.GetQ_Max(cuenca.CellQmed, Tr= [3, 15])
# Estimation of minimum discharges; by default it uses Gumbel, it can also use lognormal
Qmin = cuenca.GetQ_Min(cuenca.CellQmed)
Qmin[Qmin<0]=0
# Plot of the maximum discharge for a return period of 2.33 years
cuenca.Plot_basin(Qmax[0])
# Plot of the maximum discharge for a return period of 100 years
cuenca.Plot_basin(Qmax[5])
cuenca.Save_Basin2Map('Cuenca.kml',DriverFormat='kml')
cuenca.Save_Net2Map('Red.kml',DriverFormat='kml',qmed=cuenca.CellQmed)
# Compute channel geomorphology
cuenca.GetGeo_Cell_Basics()
# Generic geomorphology report; results are stored in cuenca.GeoParameters and cuenca.Tc
cuenca.GetGeo_Parameters()
cuenca.GeoParameters
# Times of concentration
cuenca.Tc
cuenca.Plot_Tc()
cuenca.GetGeo_IsoChrones(1.34)
cuenca.Plot_basin(cuenca.CellTravelTime)
cuenca.Plot_Travell_Hist()
cuenca.GetGeo_Ppal_Hipsometric()
cuenca.PlotPpalStream()
cuenca.Plot_Hipsometric()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import pyfabm - the python module that contains the Fortran based FABM
Step2: Configuration
Step3: Model increment
Step4: Time axis and model integration
Step5: Plot the results
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import scipy.integrate
import pyfabm
#pyfabm.get_version()
yaml_file = 'fabm-bb-lorenz63.yaml'
model = pyfabm.Model(yaml_file)
model.findDependency('bottom_depth').value = 1.
model.checkReady(stop=True)
def dy(y,t0):
model.state[:] = y
return model.getRates()
t = numpy.arange(0.0, 40.0, 0.01)
y = scipy.integrate.odeint(dy,model.state,t)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot(y[:,0], y[:,1], y[:,2])
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.1 Required Module
Step7: 2. Classification Model
Step8: 3. Training
Step11: 3.2 Initialize model parameters
Step15: Assignment 1
Step17: 3.3 Repeat over a certain number of epochs
Step20: 3.3.2 & 3.3.3 compute gradient of loss over parameters & update parameters with gradient descent
Step23: Define function fit and use train_epoch and test_epoch
Step24: Assignment 2
Step25: The model has not been trained to the point of overfitting; looking at the training results above, the test-set accuracy does not drop as the number of epochs increases.
Step26: Assignment 3
Step27: Looking at the results, there is actually still no overfitting after 10 epochs.
Step28: 3.4 save model
Step29: Assignment 4
Step30: 4. Training Advanced
Step31: Assignment 5
Step32: 4.2 dropout
Step33: 4.3 batch_normalization
Step34: 4.4 data augmentation
Step35: Assignment 6
Step36: 5. Visualization of training and validation phase
Step37: 6.1.1 Gradient Vanishing
Step38: 6.1.2 Gradient Explosion
Step40: 6.1.2.2 normalization for input data
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
# Load all necessary modules here, for clearness
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# from torchvision.datasets import MNIST
import torchvision
from torchvision import transforms
from torch.optim import lr_scheduler
# from tensorboardX import SummaryWriter
from collections import OrderedDict
import matplotlib.pyplot as plt
# from tqdm import tqdm
# Whether to put data in GPU according to GPU is available or not
# cuda = torch.cuda.is_available()
# In case the default gpu does not have enough space, you can choose which device to use
# torch.cuda.set_device(device) # device: id
# Since gpu in lab is not enough for your guys, we prefer to cpu computation
cuda = torch.device('cpu')
class FeedForwardNeuralNetwork(nn.Module):
Inputs Linear/Function Output
[128, 1, 28, 28] -> Linear(28*28, 100) -> [128, 100] # first hidden layer
-> ReLU -> [128, 100] # relu activation function, or sigmoid
-> Linear(100, 100) -> [128, 100] # second hidden layer
-> ReLU -> [128, 100] # relu activation function, or sigmoid
-> Linear(100, 100) -> [128, 100] # third hidden layer
-> ReLU -> [128, 100] # relu activation function, or sigmoid
-> Linear(100, 10) -> [128, 10] # Classification Layer
def __init__(self, input_size, hidden_size, output_size, activation_function='RELU'):
super(FeedForwardNeuralNetwork, self).__init__()
self.use_dropout = False
self.use_bn = False
self.hidden1 = nn.Linear(input_size, hidden_size) # Linear function 1: 784 --> 100
self.hidden2 = nn.Linear(hidden_size, hidden_size) # Linear function 2: 100 --> 100
self.hidden3 = nn.Linear(hidden_size, hidden_size) # Linear function 3: 100 --> 100
# Linear function 4 (readout): 100 --> 10
self.classification_layer = nn.Linear(hidden_size, output_size)
self.dropout = nn.Dropout(p=0.5) # Drop out with prob = 0.5
self.hidden1_bn = nn.BatchNorm1d(hidden_size) # Batch Normalization
self.hidden2_bn = nn.BatchNorm1d(hidden_size)
self.hidden3_bn = nn.BatchNorm1d(hidden_size)
# Non-linearity
if activation_function == 'SIGMOID':
self.activation_function1 = nn.Sigmoid()
self.activation_function2 = nn.Sigmoid()
self.activation_function3 = nn.Sigmoid()
elif activation_function == 'RELU':
self.activation_function1 = nn.ReLU()
self.activation_function2 = nn.ReLU()
self.activation_function3 = nn.ReLU()
def forward(self, x):
Defines the computation performed at every call.
Should be overridden by all subclasses.
Args:
x: [batch_size, channel, height, width], input for network
Returns:
out: [batch_size, n_classes], output from network
x = x.view(x.size(0), -1) # flatten x in [128, 784]
out = self.hidden1(x)
out = self.activation_function1(out) # Non-linearity 1
if self.use_bn == True:
out = self.hidden1_bn(out)
out = self.hidden2(out)
out = self.activation_function2(out)
if self.use_bn == True:
out = self.hidden2_bn(out)
out = self.hidden3(out)
if self.use_bn == True:
out = self.hidden3_bn(out)
out = self.activation_function3(out)
if self.use_dropout == True:
out = self.dropout(out)
out = self.classification_layer(out)
return out
def set_use_dropout(self, use_dropout):
Whether to use dropout. Auxiliary function for our exp, not necessary.
Args:
use_dropout: True, False
self.use_dropout = use_dropout
def set_use_bn(self, use_bn):
Whether to use batch normalization. Auxiliary function for our exp, not necessary.
Args:
use_bn: True, False
self.use_bn = use_bn
def get_grad(self):
Return average grad for hidden2, hidden3. Auxiliary function for our exp, not necessary.
hidden2_average_grad = np.mean(np.sqrt(np.square(self.hidden2.weight.grad.detach().numpy())))
hidden3_average_grad = np.mean(np.sqrt(np.square(self.hidden3.weight.grad.detach().numpy())))
return hidden2_average_grad, hidden3_average_grad
### Hyper parameters
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.01 # learning rate is 0.01
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # hidden neurons is 100 for each layer
output_size = 10 # classes of prediction
l2_norm = 0 # not to use l2 penalty
dropout = False # not to use
get_grad = False # not to obtain grad
# create a model object
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
def show_weight_bias(model):
Show some weights and bias distribution every layers in model.
!!YOU CAN READ THIS CODE LATER!!
# Create a figure and a set of subplots
fig, axs = plt.subplots(2,3, sharey=False, tight_layout=True)
# weight and bias for every hidden layer
h1_w = model.hidden1.weight.detach().numpy().flatten()
h1_b = model.hidden1.bias.detach().numpy().flatten()
h2_w = model.hidden2.weight.detach().numpy().flatten()
h2_b = model.hidden2.bias.detach().numpy().flatten()
h3_w = model.hidden3.weight.detach().numpy().flatten()
h3_b = model.hidden3.bias.detach().numpy().flatten()
axs[0,0].hist(h1_w)
axs[0,1].hist(h2_w)
axs[0,2].hist(h3_w)
axs[1,0].hist(h1_b)
axs[1,1].hist(h2_b)
axs[1,2].hist(h3_b)
# set title for every sub plots
axs[0,0].set_title('hidden1_weight')
axs[0,1].set_title('hidden2_weight')
axs[0,2].set_title('hidden3_weight')
axs[1,0].set_title('hidden1_bias')
axs[1,1].set_title('hidden2_bias')
axs[1,2].set_title('hidden3_bias')
# Show default initialization for every hidden layer by pytorch
# it's uniform distribution
show_weight_bias(model)
# If you want to use other initialization methods, you can use the code below
# and define your initialization below
def weight_bias_reset(model):
Custom initialization, you can use your favorable initialization method.
for m in model.modules():
if isinstance(m, nn.Linear):
# initialize linear layer with mean and std
mean, std = 0, 0.1
# Initialization method
torch.nn.init.normal_(m.weight, mean, std)
torch.nn.init.normal_(m.bias, mean, std)
# Another way to initialize
# m.weight.data.normal_(mean, std)
# m.bias.data.normal_(mean, std)
weight_bias_reset(model) # reset parameters for each hidden layer
show_weight_bias(model) # show weight and bias distribution, normal distribution now.
def weight_bias_reset_constant(model):
Constant initialization
for m in model.modules():
if isinstance(m, nn.Linear):
val = 0.1
torch.nn.init.constant_(m.weight, val)
torch.nn.init.constant_(m.bias, val)
weight_bias_reset_constant(model)
show_weight_bias(model)
def weight_bias_reset_xavier_uniform(model):
xavier_uniform, gain=1
for m in model.modules():
if isinstance(m, nn.Linear):
gain = 1
torch.nn.init.xavier_uniform_(m.weight, gain)
# torch.nn.init.xavier_uniform_(m.bias, gain)
weight_bias_reset_xavier_uniform(model)
show_weight_bias(model)
def weight_bias_reset_kaiming_uniform(model):
kaiming_uniform, a=0, mode='fan_in', non_linearity='relu'
for m in model.modules():
if isinstance(m, nn.Linear):
a = 0
torch.nn.init.kaiming_uniform_(m.weight, a=a, mode='fan_in', nonlinearity='relu')
# torch.nn.init.kaiming_uniform_(m.bias, a=a, mode='fan_in', nonlinearity='relu')
weight_bias_reset_kaiming_uniform(model)
show_weight_bias(model)
# define method of preprocessing data for evaluating
train_transform = transforms.Compose([
transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor.
# Normalize a tensor image with mean 0.1307 and standard deviation 0.3081
transforms.Normalize((0.1307,), (0.3081,))
])
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
# use MNIST provided by torchvision
# torchvision.datasets provide MNIST dataset for classification
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=train_transform,
download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=test_transform,
download=False)
# pay attention to this, train_dataset doesn't load any data
# It just defined some method and store some message to preprocess data
train_dataset
# Data loader.
# Combines a dataset and a sampler,
# and provides single- or multi-process iterators over the dataset.
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# functions to show an image
def imshow(img):
show some imgs in datasets
!!YOU CAN READ THIS CODE LATER!!
npimg = img.numpy() # convert tensor to numpy
plt.imshow(np.transpose(npimg, (1, 2, 0))) # [channel, height, width] -> [height, width, channel]
plt.show()
# get some random training images by batch
dataiter = iter(train_loader)
images, labels = dataiter.next() # get a batch of images
# show images
imshow(torchvision.utils.make_grid(images))
def train(train_loader, model, loss_fn, optimizer, get_grad=False):
train model using loss_fn and optimizer. When this function is called, model trains for one epoch.
Args:
train_loader: train data
model: prediction model
loss_fn: loss function to judge the distance between target and outputs
optimizer: optimize the loss function
get_grad: True, False
Returns:
total_loss: loss
average_grad2: average grad for hidden 2 in this epoch
average_grad3: average grad for hidden 3 in this epoch
# set the module in training model, affecting module e.g., Dropout, BatchNorm, etc.
model.train()
total_loss = 0
grad_2 = 0.0 # store sum(grad) for hidden 2 layer
grad_3 = 0.0 # store sum(grad) for hidden 3 layer
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad() # clear gradients of all optimized torch.Tensors'
outputs = model(data) # make predictions
loss = loss_fn(outputs, target) # compute loss
total_loss += loss.item() # accumulate every batch loss in a epoch
loss.backward() # compute gradient of loss over parameters
if get_grad == True:
g2, g3 = model.get_grad() # get grad for hiddern 2 and 3 layer in this batch
grad_2 += g2 # accumulate grad for hidden 2
grad_3 += g3 # accumulate grad for hidden 3
optimizer.step() # update parameters with gradient descent
average_loss = total_loss / batch_idx # average loss in this epoch
average_grad2 = grad_2 / batch_idx # average grad for hidden 2 in this epoch
average_grad3 = grad_3 / batch_idx # average grad for hidden 3 in this epoch
return average_loss, average_grad2, average_grad3
def evaluate(loader, model, loss_fn):
test model's prediction performance on loader.
When this function is called, model is evaluated.
Args:
loader: data for evaluation
model: prediction model
loss_fn: loss function to judge the distance between target and outputs
Returns:
total_loss
accuracy
# context-manager that disabled gradient computation
with torch.no_grad():
# set the module in evaluation mode
model.eval()
correct = 0.0 # account correct amount of data
total_loss = 0 # account loss
for batch_idx, (data, target) in enumerate(loader):
outputs = model(data) # make predictions
# return the maximum value of each row of the input tensor in the
# given dimension dim, the second return vale is the index location
# of each maxium value found(argmax)
_, predicted = torch.max(outputs, 1)
# Detach: Returns a new Tensor, detached from the current graph.
#The result will never require gradient.
correct += (predicted == target).sum().detach().numpy()
loss = loss_fn(outputs, target) # compute loss
total_loss += loss.item() # accumulate every batch loss in a epoch
accuracy = correct*100.0 / len(loader.dataset) # accuracy in a epoch
return total_loss, accuracy
def fit(train_loader, val_loader, model, loss_fn, optimizer, n_epochs, get_grad=False):
train and val model here, we use train_epoch to train model and
val_epoch to val model prediction performance
Args:
train_loader: train data
val_loader: validation data
model: prediction model
loss_fn: loss function to judge the distance between target and outputs
optimizer: optimize the loss function
n_epochs: training epochs
get_grad: Whether to get grad of hidden2 layer and hidden3 layer
Returns:
train_accs: accuracy of train n_epochs, a list
train_losses: loss of n_epochs, a list
grad_2 = [] # save grad for hidden 2 every epoch
grad_3 = [] # save grad for hidden 3 every epoch
train_accs = [] # save train accuracy every epoch
train_losses = [] # save train loss every epoch
for epoch in range(n_epochs): # train for n_epochs
# train model on training datasets, optimize loss function and update model parameters
train_loss, average_grad2, average_grad3 = train(train_loader, model, loss_fn, optimizer, get_grad)
# evaluate model performance on train dataset
_, train_accuracy = evaluate(train_loader, model, loss_fn)
message = 'Epoch: {}/{}. Train set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(epoch+1, \
n_epochs, train_loss, train_accuracy)
print(message)
# save loss, accuracy, grad
train_accs.append(train_accuracy)
train_losses.append(train_loss)
grad_2.append(average_grad2)
grad_3.append(average_grad3)
# evaluate model performance on val dataset
val_loss, val_accuracy = evaluate(val_loader, model, loss_fn)
message = 'Epoch: {}/{}. Validation set: Average loss: {:.4f}, Accuracy: {:.4f}'.format(epoch+1, \
n_epochs, val_loss, val_accuracy)
print(message)
# Whether to get grad for showing
if get_grad == True:
fig, ax = plt.subplots() # add a set of subplots to this figure
ax.plot(grad_2, label='Gradient for Hidden 2 Layer') # plot grad 2
ax.plot(grad_3, label='Gradient for Hidden 3 Layer') # plot grad 3
plt.ylim(top=0.004)
# place a legend on axes
legend = ax.legend(loc='best', shadow=True, fontsize='x-large')
return train_accs, train_losses
def show_curve(ys, title):
plot curve for Loss and Accuracy
!!YOU CAN READ THIS LATER, if you are interested
Args:
ys: loss or acc list
title: Loss or Accuracy
x = np.array(range(len(ys)))
y = np.array(ys)
plt.plot(x, y, c='b')
plt.axis()
plt.title('{} Curve:'.format(title))
plt.xlabel('Epoch')
plt.ylabel('{} Value'.format(title))
plt.show()
### Hyper parameters
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.01 # learning rate is 0.01
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # hidden neurons is 100 for each layer
output_size = 10 # classes of prediction
l2_norm = 0 # not to use l2 penalty
dropout = False # not to use
get_grad = False # not to obtain grad
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')
### Hyper parameters
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.01 # learning rate is 0.01
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # hidden neurons is 100 for each layer
output_size = 10 # classes of prediction
l2_norm = 0 # not to use l2 penalty
dropout = False # not to use
get_grad = False # not to obtain grad
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
# 3.1 Train
n_epochs = 10
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# 3.1 show_curve
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')
# 3.2 Train
batch_size = 128 # batch size is 128
n_epochs = 5 # train for 5 epochs
learning_rate = 0.7
input_size = 28*28 # input image has size 28x28
hidden_size = 100 # hidden neurons is 100 for each layer
output_size = 10 # classes of prediction
l2_norm = 0 # not to use l2 penalty
dropout = False # not to use
get_grad = False # not to obtain grad
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# 3.2 show_curve
show_curve(train_accs, 'accuracy')
show_curve(train_losses, 'loss')
# show parameters in model
# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
print(param_tensor, "\t", model.state_dict()[param_tensor].size())
# Print optimizer's state_dict
print("\nOptimizer's state_dict:")
for var_name in optimizer.state_dict():
print(var_name, "\t", optimizer.state_dict()[var_name])
# save model
save_path = './model.pt'
torch.save(model.state_dict(), save_path)
# load parameters from files
saved_parametes = torch.load(save_path)
print(saved_parametes)
# initialize model by saved parameters
new_model = FeedForwardNeuralNetwork(input_size, hidden_size, output_size)
new_model.load_state_dict(saved_parametes)
# test your model prediction performance
new_test_loss, new_test_accuracy = evaluate(test_loader, new_model, loss_fn)
message = 'Average loss: {:.4f}, Accuracy: {:.4f}'.format(new_test_loss, new_test_accuracy)
print(message)
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0.01 # use l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 1 # use l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
# Set dropout to True and probability = 0.5
model.set_use_dropout(True)
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
model.set_use_bn(True)
model.use_bn
train_accs, train_losses = fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# only add random horizontal flip
train_transform_1 = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor.
# Normalize a tensor image with mean and standard deviation
transforms.Normalize((0.1307,), (0.3081,))
])
# only add random crop
train_transform_2 = transforms.Compose([
transforms.RandomCrop(size=[28,28], padding=4),
transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor.
# Normalize a tensor image with mean and standard deviation
transforms.Normalize((0.1307,), (0.3081,))
])
# add random horizontal flip and random crop
train_transform_3 = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=[28,28], padding=4),
transforms.ToTensor(), # Convert a PIL Image or numpy.ndarray to tensor.
# Normalize a tensor image with mean and standard deviation
transforms.Normalize((0.1307,), (0.3081,))
])
# reload train_loader using trans
train_dataset_1 = torchvision.datasets.MNIST(root='./data',
train=True,
transform=train_transform_1,
download=False)
train_loader_1 = torch.utils.data.DataLoader(dataset=train_dataset_1,
batch_size=batch_size,
shuffle=True)
print(train_dataset_1)
### Hyper parameters
batch_size = 128
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader_1, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# train_transform_2
batch_size = 128
train_dataset_2 = torchvision.datasets.MNIST(root='./data',
train=True,
transform=train_transform_2,
download=False)
train_loader_2 = torch.utils.data.DataLoader(dataset=train_dataset_2,
batch_size=batch_size,
shuffle=True)
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader_2, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
# train_transform_3
batch_size = 128
train_dataset_3 = torchvision.datasets.MNIST(root='./data',
train=True,
transform=train_transform_3,
download=False)
train_loader_3 = torch.utils.data.DataLoader(dataset=train_dataset_3,
batch_size=batch_size,
shuffle=True)
n_epochs = 5
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # without using l2 penalty
get_grad = False
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
train_accs, train_losses = fit(train_loader_3, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 0.01
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # use l2 penalty
get_grad = True
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad)
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 1e-10
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # use l2 penalty
get_grad = True
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=get_grad)
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 10
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # not to use l2 penalty
get_grad = True
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=True)
### Hyper parameters
batch_size = 128
n_epochs = 15
learning_rate = 1
input_size = 28*28
hidden_size = 100
output_size = 10
l2_norm = 0 # not to use l2 penalty
get_grad = True
# declare a model
model = FeedForwardNeuralNetwork(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Cross entropy
loss_fn = torch.nn.CrossEntropyLoss()
# l2_norm can be done in SGD
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, weight_decay=l2_norm)
# reset parameters as 10
def wrong_weight_bias_reset(model):
Using normalization with mean=0, std=1 to initialize model's parameter
for m in model.modules():
if isinstance(m, nn.Linear):
# initialize linear layer with mean and std
mean, std = 0, 1
# Initialization method
torch.nn.init.normal_(m.weight, mean, std)
torch.nn.init.normal_(m.bias, mean, std)
wrong_weight_bias_reset(model)
show_weight_bias(model)
fit(train_loader, test_loader, model, loss_fn, optimizer, n_epochs, get_grad=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The module re provides support for regular expressions.
Step2: The function $\texttt{tokenize}(s)$ takes a string $s$ representing an arithmetic expression and splits this string into a list of tokens.
Step3: Given an operator $o$, the expression $\texttt{precedence}(o)$ returns the precedence of the operator $o$.
Step4: The expression $\texttt{isLeftAssociative}(o)$ is True iff the operator $o$ associates to the left.
Step5: The function evalBefore(o1, o2) receives two strings representing arithmetical operators. It returns True if the operator $o_1$ should be evaluated before the operator $o_2$ in an arithmetical expression of the form $a \;\texttt{o}_1\; b \;\texttt{o}_2\; c$. In order to determine whether $o_1$ should be evaluated before $o_2$ it uses the precedence and the associativity of the operators.
Step6: The class Calculator supports three member variables: the token stack mTokens, the operator stack mOperators, and the argument stack mArguments.
Step7: The method __str__ is used to convert an object of class Calculator to a string.
Step8: The function $\texttt{evaluate}(\texttt{self})$ evaluates the expression that is given by the tokens on the mTokenStack.
Step9: The method $\texttt{popAndEvaluate}(\texttt{self})$ removes the two topmost numbers $\texttt{rhs}$ and $\texttt{lhs}$ from the argument stack, removes the topmost operator from the operator stack, applies this operator to $\texttt{lhs}$ and $\texttt{rhs}$, and pushes the result back onto the argument stack.
|
<ASSISTANT_TASK:>
Python Code:
def toInt(s):
try:
return int(s)
except ValueError:
return s
toInt('123')
toInt('**')
import re
def tokenize(s):
regExp = r'[0-9]+|\*\*|[()+\-*%/]'
L = [ toInt(t) for t in re.findall(regExp, s) ]
return list(reversed(L))
re.findall(r'[0-9]+|\*\*|[()+*%/-]', '11 * 22 * 33**45')
tokenize('12 * 23 * 34**45')
def precedence(o):
Precedence = { '+': 1, '-': 1, '*': 2, '/': 2, '%': 2, '**' : 3 }
return Precedence[o]
def isLeftAssociative(o):
if o in { '+', '-', '*', '/', '%' }:
return True
if o in { '**' }:
return False
assert False, f'unknown operator {o}'
def evalBefore(stackOp, nextOp):
if precedence(stackOp) > precedence(nextOp):
return True
if stackOp == nextOp:
return isLeftAssociative(stackOp)
if precedence(stackOp) == precedence(nextOp) and stackOp != nextOp:
return True
if precedence(stackOp) < precedence(nextOp):
return False
assert False, f'incomplete case distinction in evalBefore({stackOp}, {nextOp})'
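# A few illustrative checks of the precedence rules defined above (these lines are an
# addition for clarity, not part of the original notebook):
assert evalBefore('*', '+')        # '*' binds tighter than '+'
assert not evalBefore('+', '*')    # '+' has lower precedence, so it waits
assert not evalBefore('**', '**')  # '**' is right associative
assert evalBefore('+', '-')        # equal precedence, left-associative chain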
%%capture
%run Stack.ipynb
class Calculator:
def __init__(self, s):
self.mTokens = createStack(tokenize(s))
self.mOperators = Stack()
self.mArguments = Stack()
def toString(self):
return '\n'.join(['_'*50,
'Tokens: ', str(self.mTokens),
'Arguments: ', str(self.mArguments),
'Operators: ', str(self.mOperators),
'_'*50])
Calculator.__str__ = toString
del toString
Calculator.__repr__ = Calculator.__str__
def evaluate(self):
while not self.mTokens.isEmpty():
print(self) # only for debugging
nextOp = self.mTokens.top(); self.mTokens.pop()
if isinstance(nextOp, int):
self.mArguments.push(nextOp)
continue
if self.mOperators.isEmpty():
self.mOperators.push(nextOp)
continue
if nextOp == "(":
self.mOperators.push(nextOp)
continue
stackOp = self.mOperators.top()
if stackOp == "(" and nextOp == ")":
self.mOperators.pop()
continue
if nextOp == ")":
self.popAndEvaluate()
self.mTokens.push(nextOp)
continue
if stackOp == '(':
self.mOperators.push(nextOp)
continue
if evalBefore(stackOp, nextOp):
self.popAndEvaluate()
self.mTokens.push(nextOp)
else:
self.mOperators.push(nextOp)
while not self.mOperators.isEmpty():
print(self) # only for debugging
self.popAndEvaluate()
print(self)
return self.mArguments.top()
Calculator.evaluate = evaluate
del evaluate
def popAndEvaluate(self):
rhs = self.mArguments.top(); self.mArguments.pop()
lhs = self.mArguments.top(); self.mArguments.pop()
op = self.mOperators.top(); self.mOperators.pop()
result = None
if op == '+':
result = lhs + rhs
if op == '-':
result = lhs - rhs
if op == '*':
result = lhs * rhs
if op == '/':
result = lhs // rhs
if op == '%':
result = lhs % rhs
if op == '**':
result = lhs ** rhs
assert result != None, f'ERROR: *** Unknown Operator *** "{op}"'
self.mArguments.push(result)
Calculator.popAndEvaluate = popAndEvaluate
del popAndEvaluate
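# Design note (an illustrative sketch, not used by the Calculator class above): the chain of
# if-statements in popAndEvaluate could also be written as a lookup table of binary functions.
import operator
DISPATCH = { '+': operator.add, '-': operator.sub, '*': operator.mul,
             '/': operator.floordiv, '%': operator.mod, '**': operator.pow }
print(DISPATCH['**'](2, 10))  # prints 1024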
C = Calculator('(1)*(3*(2+1)-4)**2')
C.evaluate()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The
Step2: Notice that the reader function returned a list of evoked instances. This is
Step3: If you're gone through the tutorials of raw and epochs datasets, you're
Step4: The evoked data structure also contains some new attributes easily
Step5: The data is also easily accessible. Since the evoked data arrays are usually
Step6: The data is arranged in an array of shape (n_channels, n_times). Notice
Step7: If you want to import evoked data from some other system and you have it in a
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evokeds = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evokeds)
evoked = mne.read_evokeds(fname, condition='Left Auditory')
evoked.apply_baseline((None, 0)).apply_proj()
print(evoked)
print(evoked.info)
print(evoked.times)
print(evoked.nave) # Number of averaged epochs.
print(evoked.first) # First time sample.
print(evoked.last) # Last time sample.
print(evoked.comment) # Comment on dataset. Usually the condition.
print(evoked.kind) # Type of data, either average or standard_error.
data = evoked.data
print(data.shape)
print('Data from channel {0}:'.format(evoked.ch_names[10]))
print(data[10])
evoked = mne.EvokedArray(data, evoked.info, tmin=evoked.times[0])
evoked.plot()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data, in this case sample data.
Step2: Make a single DataFrame containing all the users
Step3: Drop a useless column (test_users don't have it)
Step4: Age
Step5: Set limits to age
Step6: Fill NaNs with -1 to make them more noticeable
Step7: The age is really fine-grained. We are going to make bins and fit each user into the proper age group
Step8: NaNs
Step9: Date
Step10: Convert to DatetimeIndex
Step11: Split dates into day, week, month, year
Step12: Get the difference (time lag) between the date on which the account was created and when it was first active
Step13: Drop duplicated columns
Step14: Session Information
Step15: Frequency Count
Step16: Elapsed Seconds Stats
Step17: Encode categorical features
Step18: Persistence
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
train_users = pd.read_csv('../data/train_users_sample.csv')
test_users = pd.read_csv('../data/test_users_sample.csv')
sessions = pd.read_csv('../data/sessions_sample.csv')
users = pd.concat([train_users, test_users], axis=0, ignore_index=True)
users.drop('date_first_booking', axis=1, inplace=True)
user_with_year_age_mask = users['age'] > 1000
users.loc[user_with_year_age_mask, 'age'] = 2015 - users.loc[user_with_year_age_mask, 'age']
users.loc[(users['age'] > 100) | (users['age'] < 18), 'age'] = -1
users['age'].fillna(-1, inplace=True)
bins = [-1, 20, 25, 30, 40, 50, 60, 75, 100]
users['age_group'] = np.digitize(users['age'], bins, right=True)
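# Quick illustration of how np.digitize assigns group indices; the ages below are
# made-up example values, not taken from the dataset:
print(np.digitize([-1, 19, 33, 70], bins, right=True))  # e.g. -1 falls in group 0, 19 in group 1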
users['nans'] = np.sum([
(users['age'] == -1),
(users['gender'] == '-unknown-'),
(users['language'] == '-unknown-'),
(users['first_affiliate_tracked'] == 'untracked'),
(users['first_browser'] == '-unknown-')
], axis=0)
users['date_account_created'] = pd.to_datetime(users['date_account_created'], errors='ignore')
users['date_first_active'] = pd.to_datetime(users['timestamp_first_active'], format='%Y%m%d%H%M%S')
date_account_created = pd.DatetimeIndex(users['date_account_created'])
date_first_active = pd.DatetimeIndex(users['date_first_active'])
users['day_account_created'] = date_account_created.day
users['weekday_account_created'] = date_account_created.weekday
users['week_account_created'] = date_account_created.week
users['month_account_created'] = date_account_created.month
users['year_account_created'] = date_account_created.year
users['day_first_active'] = date_first_active.day
users['weekday_first_active'] = date_first_active.weekday
users['week_first_active'] = date_first_active.week
users['month_first_active'] = date_first_active.month
users['year_first_active'] = date_first_active.year
users['time_lag'] = (date_account_created.values - date_first_active.values).astype(int)
drop_list = [
'date_account_created',
'date_first_active',
'timestamp_first_active'
]
users.drop(drop_list, axis=1, inplace=True)
sessions.rename(columns = {'user_id': 'id'}, inplace=True)
action_count = sessions.groupby(['id', 'action'])['secs_elapsed'].agg(len).unstack()
action_type_count = sessions.groupby(['id', 'action_type'])['secs_elapsed'].agg(len).unstack()
action_detail_count = sessions.groupby(['id', 'action_detail'])['secs_elapsed'].agg(len).unstack()
device_type_sum = sessions.groupby(['id', 'device_type'])['secs_elapsed'].agg(sum).unstack()
sessions_data = pd.concat([action_count, action_type_count, action_detail_count, device_type_sum],axis=1)
sessions_data.columns = sessions_data.columns.map(lambda x: str(x) + '_count')
# Most used device
sessions_data['most_used_device'] = sessions.groupby('id')['device_type'].max()
users = users.join(sessions_data, on='id')
secs_elapsed = sessions.groupby('id')['secs_elapsed']
secs_elapsed = secs_elapsed.agg(
{
'secs_elapsed_sum': np.sum,
'secs_elapsed_mean': np.mean,
'secs_elapsed_min': np.min,
'secs_elapsed_max': np.max,
'secs_elapsed_median': np.median,
'secs_elapsed_std': np.std,
'secs_elapsed_var': np.var,
'day_pauses': lambda x: (x > 86400).sum(),
'long_pauses': lambda x: (x > 300000).sum(),
'short_pauses': lambda x: (x < 3600).sum(),
'session_length' : np.count_nonzero
}
)
users = users.join(secs_elapsed, on='id')
categorical_features = [
'gender', 'signup_method', 'signup_flow', 'language',
'affiliate_channel', 'affiliate_provider', 'first_affiliate_tracked',
'signup_app', 'first_device_type', 'first_browser', 'most_used_device'
]
users = pd.get_dummies(users, columns=categorical_features)
users.set_index('id', inplace=True)
users.loc[train_users['id']].to_csv('../cache/train_users.csv')
users.loc[test_users['id']].drop('country_destination', axis=1).to_csv('../cache/test_users.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Prepare the data
Step2: Build the model
Step3: Prepare the text sampling function
Step4: Train the model
|
<ASSISTANT_TASK:>
Python Code:
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np
import random
import io
path = keras.utils.get_file(
"nietzsche.txt", origin="https://s3.amazonaws.com/text-datasets/nietzsche.txt"
)
with io.open(path, encoding="utf-8") as f:
text = f.read().lower()
text = text.replace("\n", " ") # We remove newlines chars for nicer display
print("Corpus length:", len(text))
chars = sorted(list(set(text)))
print("Total chars:", len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i : i + maxlen])
next_chars.append(text[i + maxlen])
print("Number of sequences:", len(sentences))
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
model = keras.Sequential(
[
keras.Input(shape=(maxlen, len(chars))),
layers.LSTM(128),
layers.Dense(len(chars), activation="softmax"),
]
)
optimizer = keras.optimizers.RMSprop(learning_rate=0.01)
model.compile(loss="categorical_crossentropy", optimizer=optimizer)
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype("float64")
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
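# Small illustration (assumed addition, not part of the original cell): how the
# temperature parameter reshapes a toy 3-class probability vector when drawing
# 1000 samples with the helper above. Lower temperature concentrates the draws
# on the most likely class; higher temperature flattens the distribution.
probs = np.array([0.7, 0.2, 0.1])
for T in (0.2, 1.0, 2.0):
    counts = np.bincount([sample(probs, T) for _ in range(1000)], minlength=3)
    print("temperature", T, "->", counts)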
epochs = 40
batch_size = 128
for epoch in range(epochs):
model.fit(x, y, batch_size=batch_size, epochs=1)
print()
print("Generating text after epoch: %d" % epoch)
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print("...Diversity:", diversity)
generated = ""
sentence = text[start_index : start_index + maxlen]
print('...Generating with seed: "' + sentence + '"')
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.0
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
sentence = sentence[1:] + next_char
generated += next_char
print("...Generated: ", generated)
print()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data and have initial look
Step2: Look at class distribution
Step3: Data in the table is organized the following way
Step4: Split the data in training and test set
Step5: Train the model
Step6: Predict y (class) on test set and probabilities that sample belongs to each of two classes.
Step7: Calculate confusion matrix
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score, auc, recall_score, accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
df = pd.read_csv('../data/pima-indians-diabetes-data.csv', index_col=[0])
df.head()
df.describe()
len(df[df['class'] == 1]), len(df[df['class'] == 0])
X = df.drop('class', axis=1).values
y = df['class'].values
X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_pred_proba = clf.predict_proba(X_test)
print("AUC: %.3f" % roc_auc_score(y_test, y_pred_proba.T[1]))
confusion_matrix(y_test, y_pred, labels=[1,0])
recall_score(y_test, y_pred, pos_label=1) # Low-moderate sensitivity
recall_score(y_test, y_pred, pos_label=0) # High specificity
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load capacity curves
Step2: Load ground motion records
Step3: Load damage state thresholds
Step4: Obtain the damage probability matrix
Step5: Fit lognormal CDF fragility curves
Step6: Plot fragility functions
Step7: Save fragility functions
Step8: Obtain vulnerability function
Step9: Plot vulnerability function
Step10: Save vulnerability function
|
<ASSISTANT_TASK:>
Python Code:
import capacitySpectrumMethod
from rmtk.vulnerability.common import utils
%matplotlib inline
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
gmrs_folder = "../../../../../../rmtk_data/GMRs"
minT, maxT = 0.1, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
damage_model_file = "../../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
damping_model = "Iwan_1980"
damping_ratio = 0.05
PDM, Sds = capacitySpectrumMethod.calculate_fragility(capacity_curves, gmrs, damage_model,
damping_model, damping_ratio)
IMT = "Sa"
period = 0.3
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
# utils.plot_fragility_stats(fragility_statistics,minIML,maxIML)
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define a function that returns a slice as a Plotly Surface
Step2: Let us plot the slices z=0 and y=-0.5 in the volume defined by
Step3: In order to be able to compare the two slices, we choose a unique interval of values to be mapped to the colorscale
Step4: Oblique slice in volumetric data
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import plotly.graph_objects as go
from IPython.display import IFrame
def get_the_slice(x,y,z, surfacecolor):
return go.Surface(x=x,
y=y,
z=z,
surfacecolor=surfacecolor,
coloraxis='coloraxis')
def get_lims_colors(surfacecolor):# color limits for a slice
return np.min(surfacecolor), np.max(surfacecolor)
scalar_f = lambda x,y,z: x*np.exp(-x**2-y**2-z**2)
x = np.linspace(-2,2, 50)
y = np.linspace(-2,2, 50)
x, y = np.meshgrid(x,y)
z = np.zeros(x.shape)
surfcolor_z = scalar_f(x,y,z)
sminz, smaxz = get_lims_colors(surfcolor_z)
slice_z = get_the_slice(x, y, z, surfcolor_z)
x = np.linspace(-2,2, 50)
z = np.linspace(-2,2, 50)
x, z = np.meshgrid(x, z)
y = -0.5 * np.ones(x.shape)
surfcolor_y = scalar_f(x,y,z)
sminy, smaxy = get_lims_colors(surfcolor_y)
vmin = min([sminz, sminy])
vmax = max([smaxz, smaxy])
slice_y = get_the_slice(x, y, z, surfcolor_y)
def colorax(vmin, vmax):
return dict(cmin=vmin,
cmax=vmax)
fig1 = go.Figure(data=[slice_z, slice_y])
fig1.update_layout(
title_text='Slices in volumetric data',
title_x=0.5,
width=700,
height=700,
scene_zaxis_range=[-2,2],
coloraxis=dict(colorscale='BrBG',
colorbar_thickness=25,
colorbar_len=0.75,
**colorax(vmin, vmax)))
#fig1.show()
from IPython.display import IFrame
IFrame('https://chart-studio.plotly.com/~empet/13862', width=700, height=700)
alpha = np.pi/4
x = np.linspace(-2, 2, 50)
y = np.linspace(-2, 2, 50)
x, y = np.meshgrid(x,y)
z = -x * np.tan(alpha)
surfcolor_obl = scalar_f(x,y,z)
smino, smaxo = get_lims_colors(surfcolor_obl)
vmin = min([sminz, smino])
vmax = max([smaxz, smaxo])
slice_obl = get_the_slice(x,y,z, surfcolor_obl)
fig2 = go.Figure(data=[slice_z, slice_obl], layout=fig1.layout)
fig2.update_layout( coloraxis=colorax(vmin, vmax))
#fig2.show()
IFrame('https://chart-studio.plotly.com/~empet/13864', width=700, height=700)
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-3', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load pickled coordinates for the first Hungriger Wolf camera
Step2: Set the paramters for the image clustering
Step3: Load image and preprocess it
Step4: Predict the labels with the trained model and convert it into a mask store
Step5: Denoise the cloud mask and label the clouds
Step6: Load the second image and cutted
Step7: Move the cloud around to find the best matching point
Step8: Example for SpatialCloud
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import logging
import pyclamster
import pickle
import scipy
import scipy.misc
from skimage.feature import match_template
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
filename = "../examples/calibration/wolf-3-calibration.pk"
calibration = pickle.load(open(filename, 'rb'))
cal_coords = calibration.create_coordinates()
cal_coords.z = 2000
plt.subplot(221)
plt.title("elevation on the image [deg]")
plt.imshow(cal_coords.elevation*360/(2*np.pi))
plt.colorbar()
plt.subplot(222)
plt.title("azimuth on the image [deg]")
plt.imshow(cal_coords.azimuth*360/(2*np.pi))
plt.colorbar()
plt.subplot(223)
plt.title("[z=2000 plane]\nreal-world x on the image [m]")
plt.imshow(cal_coords.x)
plt.colorbar()
plt.subplot(224)
plt.title("[z=2000 plane]\nreal-world y on the image [m]")
plt.imshow(cal_coords.y)
plt.colorbar()
plt.tight_layout()
base_folder = "../"
image_directory = os.path.join(base_folder, "examples", "images", "wolf")
trained_models = os.path.join(base_folder, "trained_models")
good_angle = 45
center = int(1920/2)
good_angle_dpi = int(np.round(1920 / 180 * good_angle))
denoising_ratio = 10
#all_images = glob.glob(os.path.join(image_directory, "Image_20160531_114000_UTCp1_*.jpg"))
#print(all_images)
all_images = [
os.path.join(image_directory, "Image_20160531_114100_UTCp1_3.jpg"),
os.path.join(image_directory, "Image_20160531_114100_UTCp1_4.jpg")]
kmeans = pickle.load(open(os.path.join(trained_models, "kmeans.pk"), "rb"))
image = pyclamster.Image(all_images[0])
image.coordinates = cal_coords
cutted_image = image.cut([960, 960, 1460, 1460])
plt.title("The raw cutted image")
plt.imshow(cutted_image)
plt.axis('off')
image.data = pyclamster.clustering.preprocess.LCN(size=(25,25,3), scale=False).fit_transform(image.data)
image = image.cut([960, 960, 1460, 1460])
w, h, _ = original_shape = image.data.shape
raw_image = pyclamster.clustering.functions.rbDetection(image.data).reshape((w*h, -1))
label = kmeans.predict(raw_image)
label.reshape((w, h), replace=True)
plt.title("The masked clouds")
plt.imshow(label.labels, cmap='gray')
plt.axis('off')
masks = label.getMaskStore()
masks.denoise([0], 1000)
cloud_labels, _ = masks.labelMask([0,])
plt.title("The labeled clouds")
plt.imshow(cloud_labels.labels, cmap='gray')
plt.axis('off')
cloud_store = cloud_labels.getMaskStore()
clouds = [cloud_store.getCloud(cutted_image, [k,]) for k in cloud_store.masks.keys()]
cloud1 = cloud_store.cutMask(cutted_image, [1,])
print(cloud1.data.shape)
image = pyclamster.Image(all_images[1])
image = image.cut([850, 850, 1460, 1460])
plt.title("The raw cutted image")
plt.imshow(image)
plt.axis('off')
result = match_template(image.data, cloud1.data, pad_input=True, mode='reflect', constant_values=0)
plt.title("The matching result")
plt.imshow(result, cmap='gray')
plt.colorbar()
plt.axis('off')
#print(np.unravel_index(np.argmax(result), result.shape))
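# Extracting the best matching position (assumed addition, mirroring the
# commented print above; result is indexed as (row, column, ...)):
best_idx = np.unravel_index(np.argmax(result), result.shape)
print('best match at (row, col):', best_idx[:2])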
image = pyclamster.Image(all_images[1])
image.coordinates = cal_coords # Fake the second coordinate
image.crop([931, 981, 1430, 1480])
cloud2 = pyclamster.matching.Cloud(image)
sCloud = pyclamster.matching.SpatialCloud(pyclamster.matching.Cloud(cloud1), cloud2)
position = sCloud._calc_position()
print(position[2])
plt.title("A faked height map")
plt.imshow(position[2], cmap='gray')
plt.colorbar()
plt.axis('off')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data conversion
Step2: Standard In, Standard Out and Standard Error
|
<ASSISTANT_TASK:>
Python Code:
fd = open('README.md', 'r')
print(fd.readline(), end='') # \n is included in input string
for s in fd: # file object(descriptor) is iterable, and can be used in for loop
print(s.strip()) # strip() removes extra space and \n
# print(s.split()) # convert string to List
s = '100'
print(int(s)+1)
s = '1 2 3'
for i in map(int, s.split()):
print(i)
s = '1 2 3 4'
x = list(map(int, s.split()))
print(x)
y = list()
for i in s.split(): # ['1', '2', '3', '4']
y.append(int(i))
print(y)
# need to import sys module to use standard I/O file descriptor
import sys
print('input something: ')
# jupyter does not handle stdin ?
# sys.stdin is used as a file-descriptor
s = sys.stdin.readline()
print(s.split()) # convert input to List
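# The section heading also mentions standard out and standard error; a minimal
# sketch (assumed addition) writing to both through the sys file descriptors:
sys.stdout.write('this goes to standard output\n')
sys.stderr.write('this goes to standard error\n')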
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 2
Step2: Error in approximation at q = 2
Step3: q = 4
Step4: Error in approximation at q = 4
Step5: q = 8
Step6: Error in approximation at q = 8
Step7: q= 16
Step8: Error in approximation at q = 16
Step9: B.2 Studying a function for different parameter values
Step10: How large n needs to be in order for the difference between the max of the function $f(x) = sin\dfrac{1}{x+eps}$ in [0,1] using n nodes and n+10 nodes to be less than 0.1
Step11: n = 200 * eps (for an epsilon of 0.2). Increasing n further does not change the plot so that it is visible on the screen.
Step12: B.6 Using the trapezoid method to approximate integrals
Step13: The plot is symmetric over the y-axis. Therefore the integral of the function from -$\infty$ to $\infty$ will be the same as twice the integral of the function from 0 to $\infty$
Step14: $$\int_{-L}^{L} {e}^{-x^2} dx = 2\int_{0}^{L} {e}^{-x^2} dx$$
Step15: The error decreases as n increases and as L increases
Step16: $$g'(x) = -10\,e^{10x}\sin(e^{10x})$$
Step17: $$h'(x)=(ln x)x^x+xx^{x-1}$$
|
<ASSISTANT_TASK:>
Python Code:
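# Note: p1-p4 below are assumed to be local problem modules (e.g. p1.py ... p4.py)
# containing the functions called in this cell; their import is not shown in the
# original cell, so the line below is an assumption added for completeness.
import p1, p2, p3, p4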
p1.f(-.45)
p1.S_k(-1, 1, -.45, 10)
p1.f(-.45)
p1.S_k(-1, 1, -.45, 2)
p1.error(-1, 1, -.45, 2)
p1.S_k(-1, 1, -.45, 4)
p1.error(-1, 1, -.45, 4)
p1.S_k(-1, 1, -.45, 8)
p1.error(-1, 1, -.45, 8)
p1.S_k(-1, 1, -.45, 16)
p1.error(-1, 1, -.45, 16)
p2.sin_graph(0.2, 10)
p2.multigraph(0.2, 10)
p2.choose_n(0.2)
p2.choose_n(0.1)
p2.choose_n(0.05)
p2.multigraph(0.2, 40)
p3.graph()
p3.T(100, 10)
p3.trap(100, 10) * 2
p3.table()
p3.error()
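# For readers without the p3 helper module, a minimal stand-alone sketch of the
# composite trapezoid rule applied to f(x) = exp(-x**2) on [0, L] (assumed to be
# close in spirit to what p3.trap(n, L) computes; n = number of subintervals):
import numpy as np
def trap_exp(n, L):
    x = np.linspace(0, L, n + 1)
    y = np.exp(-x**2)
    h = L / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)
print(2 * trap_exp(100, 10))  # ~ sqrt(pi), using the symmetry argued above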
p4.graph(p4.diff(p4.f, 1/1000, 1, 100))
p4.multigraph(p4.diff(p4.f, 1/1000, 1, 100), p4.fprime)
p4.graph(p4.diff(p4.g, 1/1000, 1, 100))
p4.multigraph(p4.diff(p4.g, 1/1000, 1, 100), p4.gprime)
p4.graph(p4.diff(p4.h, 1/1000, 1, 100))
p4.multigraph(p4.diff(p4.h, 1/1000, 1, 100), p4.hprime)
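# A stand-alone sketch of a central finite-difference derivative (assumed to be
# similar in spirit to p4.diff, whose exact signature is not shown here), checked
# against a function whose derivative is known:
import numpy as np
def central_diff(f, x, h=1e-3):
    return (f(x + h) - f(x - h)) / (2 * h)
xs = np.linspace(0.1, 1.0, 100)
print(np.abs(central_diff(np.sin, xs) - np.cos(xs)).max())  # error is O(h**2)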
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Table
Step2: View Table
Step3: Create New Empty Table
Step4: Copy Contents Of First Table Into Empty Table
Step5: View Previously Empty Table
|
<ASSISTANT_TASK:>
Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
%%sql
-- Create a table of criminals_1
CREATE TABLE criminals_1 (pid, name, age, sex, city, minor);
INSERT INTO criminals_1 VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals_1 VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals_1 VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals_1 VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
%%sql
-- Select all
SELECT *
-- From the table 'criminals_1'
FROM criminals_1
%%sql
-- Create a table called criminals_2
CREATE TABLE criminals_2 (pid, name, age, sex, city, minor);
%%sql
-- Insert into the empty table
INSERT INTO criminals_2
-- Everything
SELECT *
-- From the first table
FROM criminals_1;
%%sql
-- Select everything
SELECT *
-- From the previously empty table
FROM criminals_2
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Photo-count detection
Step2: Solve using stochastic master equation
Step3: Homodyne detection
Step4: Theory
Step5: $$
Step6: Solve problem again, this time with a specified noise (from previous run)
Step7: Using QuTiP built-in homodyne detection functions for $D_1$ and $D_2$
Step8: Solve problem again, this time with a specified noise (from previous run)
Step9: Heterodyne detection
Step10: $$D_{2}^{(1)}[c, |\psi(t)\rangle] = \sqrt{1/2} (c - \langle c + c^\dagger \rangle / 2) \psi$$
Step11: Using QuTiP built-in heterodyne detection functions for $D_1$ and $D_2$
Step12: Solve problem again, this time with a specified noise (from previous run)
Step13: Software version
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from qutip import *
import numpy as np
N = 10
w0 = 0.5 * 2 * np.pi
times = np.linspace(0, 15, 150)
dt = times[1] - times[0]
gamma = 0.25
A = 2.5
ntraj = 50
nsubsteps = 100
a = destroy(N)
x = a + a.dag()
H = w0 * a.dag() * a
psi0 = fock(N, 5)
sc_ops = [np.sqrt(gamma) * a]
e_ops = [a.dag() * a, a + a.dag(), (-1j)*(a - a.dag())]
result_ref = mesolve(H, psi0, times, sc_ops, e_ops)
plot_expectation_values(result_ref);
result = photocurrent_sesolve(H, psi0, times, sc_ops, e_ops,
ntraj=ntraj*5, nsubsteps=nsubsteps, store_measurement=True, normalize=0)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.step(times, dt * m.real)
ntraj = 50
nsubsteps = 200
H = w0 * a.dag() * a + A * (a + a.dag())
result_ref = mesolve(H, psi0, times, sc_ops, e_ops)
op = sc_ops[0]
opp = (op + op.dag()).data
opn = (op.dag()*op).data
op = op.data
Hd = H.data * -1j
def d1_psi_func(t, psi):
e_x = cy.cy_expect_psi(opp, psi, 0)
return cy.spmv(Hd, psi) + 0.5 * (e_x * cy.spmv(op, psi) - cy.spmv(opn, psi) - 0.25 * e_x ** 2 * psi)
def d2_psi_func(t, psi):
out = np.zeros((1,len(psi)), dtype=complex)
e_x = cy.cy_expect_psi(opp, psi, 0)
out[0,:] = cy.spmv(op,psi)
out -= 0.5 * e_x * psi
return out
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj,
m_ops=[a + a.dag()], dW_factors=[1/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj, noise=result.noise,
m_ops=[a + a.dag()], dW_factors=[1/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='homodyne', store_measurement=True, dW_factors=[1])
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.plot(times, result.expect[1], 'b', lw=2)
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='homodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'b', alpha=0.1)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.ylim(-15, 15);
op = sc_ops[0]
opd = (op.dag()).data
opp = (op + op.dag()).data
opm = (op + op.dag()).data
opn = (op.dag()*op).data
op = op.data
Hd = H.data * -1j
def d1_psi_func(t, psi):
e_xd = cy.cy_expect_psi(opd, psi, 0)
e_x = cy.cy_expect_psi(op, psi, 0)
return cy.spmv(Hd, psi) - 0.5 * (cy.spmv(opn, psi) - e_xd * cy.spmv(op, psi) + 0.5 * e_x * e_xd * psi)
sqrt2 = np.sqrt(0.5)
def d2_psi_func(t, psi):
out = np.zeros((2,len(psi)), dtype=complex)
e_p = cy.cy_expect_psi(opp, psi, 0)
e_m = cy.cy_expect_psi(opm, psi, 0)
out[0,:] = (cy.spmv(op,psi) - e_p * 0.5 * psi)*sqrt2
out[1,:] = (cy.spmv(op,psi) - e_m * 0.5 * psi)*sqrt2*-1j
return out
result = general_stochastic(psi0, times, d1=d1_psi_func, d2=d2_psi_func,
e_ops=e_ops, ntraj=ntraj, len_d2=2,
m_ops=[(a + a.dag()), (-1j)*(a - a.dag())], dW_factors=[2/np.sqrt(gamma), 2/np.sqrt(gamma)],
nsubsteps=nsubsteps, store_measurement=True)
plot_expectation_values([result, result_ref]);
#fig, ax = subplots()
for m in result.measurement:
plt.plot(times, m[:, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
plt.ylim(-10, 10)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,1].real, 'b', lw=2);
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='heterodyne', store_measurement=True)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2)
plt.plot(times, result_ref.expect[2], 'k', lw=2)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real/np.sqrt(gamma), 'r', lw=2)
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real/np.sqrt(gamma), 'b', lw=2)
result = ssesolve(H, psi0, times, sc_ops, e_ops, ntraj=ntraj, nsubsteps=nsubsteps,
method='heterodyne', store_measurement=True, noise=result.noise)
plot_expectation_values([result, result_ref]);
for m in result.measurement:
plt.plot(times, m[:, 0, 0].real, 'r', alpha=0.025)
plt.plot(times, m[:, 0, 1].real, 'b', alpha=0.025)
plt.plot(times, result_ref.expect[1], 'k', lw=2);
plt.plot(times, result_ref.expect[2], 'k', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,0].real/np.sqrt(gamma), 'r', lw=2);
plt.plot(times, np.array(result.measurement).mean(axis=0)[:,0,1].real/np.sqrt(gamma), 'b', lw=2);
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: CIFAR-10 Data Loading and Preprocessing
Step2: SVM Classifier
Step3: The grad returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
Step4: Inline Question 1
Step5: Stochastic Gradient Descent
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the
# notebook rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, and test sets. In addition we will
# create a small development set as a subset of the training data;
# we can use this for development so our code runs faster.
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
# Our validation set will be num_validation points from the original
# training set.
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# Our training set will be the first num_train points from the original
# training set.
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
# We will also make a development set, which is a small subset of
# the training set.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# We use the first num_test points of the original test set as our
# test set.
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# As a sanity check, print out the shapes of the data
print 'Training data shape: ', X_train.shape
print 'Validation data shape: ', X_val.shape
print 'Test data shape: ', X_test.shape
print 'dev data shape: ', X_dev.shape
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print mean_image[:10] # print a few of the elements
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8')) # visualize the mean image
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
# Evaluate the naive implementation of the loss we provided for you:
from cs231n.classifiers.linear_svm import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.00001)
print 'loss: %f' % (loss, )
# Once you've implemented the gradient, recompute it with the code below
# and gradient check it with the function we provided for you
# Compute the loss and its gradient at W.
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# Numerically compute the gradient along several randomly chosen dimensions, and
# compare them with your analytically computed gradient. The numbers should match
# almost exactly along all dimensions.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# do the gradient check once again with regularization turned on
# you didn't forget the regularization gradient did you?
loss, grad = svm_loss_naive(W, X_dev, y_dev, 1e2)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 1e2)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# Next implement the function svm_loss_vectorized; for now only compute the loss;
# we will implement the gradient in a moment.
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss: %e computed in %fs' % (loss_naive, toc - tic)
from cs231n.classifiers.linear_svm import svm_loss_vectorized
tic = time.time()
loss_vectorized, _ = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)
# The losses should match but your vectorized implementation should be much faster.
print 'difference: %f' % (loss_naive - loss_vectorized)
# Complete the implementation of svm_loss_vectorized, and compute the gradient
# of the loss function in a vectorized way.
# The naive implementation and the vectorized implementation should match, but
# the vectorized version should still be much faster.
tic = time.time()
_, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Naive loss and gradient: computed in %fs' % (toc - tic)
tic = time.time()
_, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.00001)
toc = time.time()
print 'Vectorized loss and gradient: computed in %fs' % (toc - tic)
# The loss is a single number, so it is easy to compare the values computed
# by the two implementations. The gradient on the other hand is a matrix, so
# we use the Frobenius norm to compare them.
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print 'difference: %f' % difference
# In the file linear_classifier.py, implement SGD in the function
# LinearClassifier.train() and then run it with the code below.
from cs231n.classifiers import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=5e4,
num_iters=1500, verbose=True)
toc = time.time()
print 'That took %fs' % (toc - tic)
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
# Write the LinearSVM.predict function and evaluate the performance on both the
# training and validation set
y_train_pred = svm.predict(X_train)
print 'training accuracy: %f' % (np.mean(y_train == y_train_pred), )
y_val_pred = svm.predict(X_val)
print 'validation accuracy: %f' % (np.mean(y_val == y_val_pred), )
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]
# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_svm = None # The LinearSVM object that achieved the highest validation rate.
################################################################################
# TODO: #
# Write code that chooses the best hyperparameters by tuning on the validation #
# set. For each combination of hyperparameters, train a linear SVM on the #
# training set, compute its accuracy on the training and validation sets, and #
# store these numbers in the results dictionary. In addition, store the best #
# validation accuracy in best_val and the LinearSVM object that achieves this #
# accuracy in best_svm. #
# #
# Hint: You should use a small value for num_iters as you develop your #
# validation code so that the SVMs don't take much time to train; once you are #
# confident that your validation code works, you should rerun the validation #
# code with a larger value for num_iters. #
################################################################################
# One possible implementation of the search described above (kept simple; widen
# the ranges and raise num_iters once the validation code is known to work):
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_accuracy = np.mean(y_train == svm.predict(X_train))
        val_accuracy = np.mean(y_val == svm.predict(X_val))
        results[(lr, reg)] = (train_accuracy, val_accuracy)
        if val_accuracy > best_val:
            best_val = val_accuracy
            best_svm = svm
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results] # default size of markers is 20
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print 'linear SVM on raw pixels final test set accuracy: %f' % test_accuracy
# Visualize the learned weights for each class.
# Depending on your choice of learning rate and regularization strength, these may
# or may not be nice to look at.
w = best_svm.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in xrange(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fetch the dataset
Step2: Prepare dataset
Step3: Parse the baseline files for "eyes open"
Step4: Parse the baseline files for "eyes closed"
Step5: Dump arrays
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pyedflib # please check the "requirements.txt" file
import tqdm
import pathlib
import os
curr_dir = pathlib.Path("./")
edf_dir = (curr_dir / "raw_data/").resolve()
if not edf_dir.exists():
try:
edf_dir.mkdir()
    except Exception as err:
print(err)
else:
print(f"\"{edf_dir}\" already exists.")
# Skip fetching the data if the notebook run on Binder.
host = os.environ.get("BINDER_LAUNCH_HOST", None)
if host is None or host != "https://mybinder.org/":
!wget -P "$edf_dir" -c https://physionet.org/static/published-projects/eegmmidb/eeg-motor-movementimagery-dataset-1.0.0.zip
!unzip "$edf_dir"/eeg-motor-movementimagery-dataset-1.0.0.zip -d "$edf_dir/eeg-motor-movementimagery-dataset/"
dataset_root = f"{edf_dir}/eeg-motor-movementimagery-dataset/files"
n_subjects = 106
n_rois = 64
n_samples = 9600
eyes_open = np.zeros((n_subjects, n_rois, n_samples))
eyes_closed = np.zeros((n_subjects, n_rois, n_samples))
for sub_id in tqdm.tqdm(range(n_subjects)):
subj_prefix = f"S{sub_id + 1:03}"
subj_dir = f"{dataset_root}/{subj_prefix}"
baseline_eyes_open = f"{subj_dir}/{subj_prefix}R01"
edf = pyedflib.EdfReader(baseline_eyes_open + ".edf")
annot = edf.read_annotation()
n_signals = edf.signals_in_file
signal_labels = edf.getSignalLabels()
for chan in np.arange(n_signals):
eyes_open[sub_id, chan, :] = edf.readSignal(chan)[0:9600]
for sub_id in tqdm.tqdm(range(n_subjects)):
subj_prefix = f"S{sub_id + 1:03}"
subj_dir = f"{dataset_root}/{subj_prefix}"
    baseline_eyes_closed = f"{subj_dir}/{subj_prefix}R02"
    edf = pyedflib.EdfReader(baseline_eyes_closed + ".edf")
annot = edf.read_annotation()
n_signals = edf.signals_in_file
signal_labels = edf.getSignalLabels()
for chan in np.arange(n_signals):
eyes_closed[sub_id, chan, :] = edf.readSignal(chan)[0:9600]
store_dir = (curr_dir / "data/").resolve()
if not store_dir.exists():
try:
store_dir.mkdir()
    except Exception as err:
print(err)
else:
print(f"\"{store_dir}\" already exists.")
np.save(f'{store_dir}/eeg_eyes_opened.npy', eyes_open)
np.save(f'{store_dir}/eeg_eyes_closed.npy', eyes_closed)
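# Quick sanity check (assumed addition): reload one of the dumps and confirm its shape
check = np.load(f'{store_dir}/eeg_eyes_opened.npy')
print(check.shape)  # expected: (106, 64, 9600)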
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Getting the data
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Step6: Generator
Step7: Discriminator
Step9: Model Loss
Step11: Optimizers
Step12: Building the model
Step13: Here is a function for displaying generated images.
Step14: And here is another function we can use to train our network. Notice that when we call generator to create the samples to display, we set training to False, so that the batch normalization layers use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
# Reshape it to start the convolutional stack
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# 8x8x256 now
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# 16x16x128 now
# Output layer
logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
# 32x32x3 now
out = tf.tanh(logits)
return out
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha * x1, x1)
# 16x16x64
x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')
bn2 = tf.layers.batch_normalization(x2, training=True)
relu2 = tf.maximum(alpha * bn2, bn2)
# 8x8x128
x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')
bn3 = tf.layers.batch_normalization(x3, training=True)
relu3 = tf.maximum(alpha * bn3, bn3)
# 4x4x256
# Flatten it
flat = tf.reshape(relu3, (-1, 4*4*256))
logits = tf.layers.dense(flat, 1)
out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
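# Fixed latent vectors, reused for every sample grid so generator progress is comparable across checkpoints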
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# Every print_every steps, get the current losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that the example_num argument is passed in but not used in the monty_hall function. Although it's unneeded for the function, it is easier to use interact to call functions when they have arguments to manipulate
Step2: By interacting with the function above, we are able to informally verify that the function never allows the host to open a door with a car behind it. Even though the function is random we are able to use interaction to examine its long-term behavior!
Step3: Again, a bit of interaction lets us quickly examine the behavior of winner. We can see that switch appears more often than stay.
Step4: To make an interactive chart, pass a response function in place of one or both of bar's arguments.
Step5: Visualizing the Winners
Step6: Note that by default the plot will adjust its y-axis to match the limits of the data. We can manually set the y-axis limits to better visualize this plot being built up. We will also add labels to our plot
Step7: We can get even fancier and use the Play widget from ipywidgets to animate the plot.
Step8: Now we have an interactive, animated bar plot showing the distribution of wins over time for both Monty Hall strategies. This is a convincing argument that switching is better than staying. In fact, the bar plot above suggests that switching is about twice as likely to win as staying!
Step9: We can then define a function to play sets of games and generate a list of win proportions for each set
Step10: Interacting with generate_proportions shows the relationship between its arguments sample_size and repetitions more quickly than reading the function itself!
Step11: As with last time, it's illustrative to specify the limits of the axes
Step12: We can see that the distribution of wins is centered at roughly 0.66 but the distribution almost spans the entire x-axis. Will increasing the sample size make our distribution more narrow? Will increasing repetitions do the trick? Or both? We can find out through simulation and interaction.
Step13: So increasing the sample size makes the distribution narrower. We can now see more clearly that the distribution is centered at 0.66.
|
<ASSISTANT_TASK:>
Python Code:
from ipywidgets import interact
import numpy as np
import random
PRIZES = ['Car', 'Goat 1', 'Goat 2']
def monty_hall(example_num=0):
'''
Simulates one round of the Monty Hall Problem. Outputs a tuple of
(result if stay, result if switch, result behind opened door) where
each results is one of PRIZES.
'''
pick = random.choice(PRIZES)
opened = random.choice(
[p for p in PRIZES if p != pick and p != 'Car']
)
remainder = next(p for p in PRIZES if p != pick and p != opened)
return (pick, remainder, opened)
interact(monty_hall, example_num=(0, 100));
def winner(example_num=0):
'''
Plays a game of Monty Hall. If staying with the original door wins
a car, return 'stay'. Otherwise, the remaining door contains the car
so 'switch' would have won.
'''
picked, _, _ = monty_hall()
return 'stay' if picked == 'Car' else 'switch'
interact(winner, example_num=(0, 100));
import nbinteract as nbi
nbi.bar(['a', 'b'], [4, 6])
# This function generates the x-values
def categories(n):
return list('abcdefg')[:n]
# This function generates the y-values (heights of bars)
# The y response function always takes in the x-values as its
# first argument
def offset_y(xs, offset):
num_categories = len(xs)
return np.arange(num_categories) + offset
# Each argument of the response functions is passed in as a keyword
# argument to `nbi.bar` in the same format as `interact`
nbi.bar(categories, offset_y, n=(1, 7), offset=(0, 10))
categories = ['stay', 'switch']
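# Pre-simulate 1000 games up front so the interactive chart only needs to slice this list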
winners = [winner() for _ in range(1000)]
# Note that the the first argument to the y response function
# will be the x-values which we don't need
def won(_, num_games):
'''
Outputs a 2-tuple of the number of times each strategy won
after num_games games.
'''
return (winners[:num_games].count('stay'),
winners[:num_games].count('switch'))
nbi.bar(categories, won, num_games=(1, 1000))
options = {
'title': 'Number of times each strategy wins',
'xlabel': 'Strategy',
'ylabel': 'Number of wins',
'ylim': (0, 700),
}
nbi.bar(categories, won, options=options, num_games=(1, 1000))
from ipywidgets import Play
nbi.bar(categories, won, options=options,
num_games=Play(min=0, max=1000, step=10, value=0, interval=17))
def prop_wins(sample_size):
'''Returns proportion of times switching wins after sample_size games.'''
return sum(winner() == 'switch' for _ in range(sample_size)) / sample_size
interact(prop_wins, sample_size=(10, 100));
def generate_proportions(sample_size, repetitions):
'''
Returns an array of length reptitions. Each element in the list is the
proportion of times switching won in sample_size games.
'''
return np.array([prop_wins(sample_size) for _ in range(repetitions)])
interact(generate_proportions, sample_size=(10, 100), repetitions=(10, 100));
# Play the game 10 times, recording the proportion of times switching wins.
# Repeat 100 times to record 100 proportions
proportions = generate_proportions(sample_size=10, repetitions=100)
def props_up_to(num_sets):
return proportions[:num_sets]
nbi.hist(props_up_to, num_sets=Play(min=0, max=100, value=0, interval=50))
options = {
'title': 'Distribution of win proportion over 100 sets of 10 games when switching',
'xlabel': 'Proportions',
'ylabel': 'Percent per area',
'xlim': (0.3, 1),
'ylim': (0, 3),
'bins': 7,
}
nbi.hist(props_up_to, options=options, num_sets=Play(min=0, max=100, value=0, interval=50))
varying_sample_size = [generate_proportions(sample_size, repetitions=100)
for sample_size in range(10, 101)]
def props_for_sample_size(sample_size):
return varying_sample_size[sample_size - 10]
changed_options = {
'title': 'Distribution of win proportions as sample size increases',
'ylim': (0, 6),
'bins': 20,
}
nbi.hist(props_for_sample_size,
options={**options, **changed_options},
sample_size=Play(min=10, max=100, value=10, interval=50))
varying_reps = [generate_proportions(sample_size=10, repetitions=reps) for reps in range(10, 101)]
def props_for_reps(reps):
return varying_reps[reps - 10]
changed_options = {
'title': 'Distribution of win proportions as repetitions increase',
'ylim': (0, 5),
}
nbi.hist(props_for_reps,
options={**options, **changed_options},
reps=Play(min=10, max=100, value=10, interval=50))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Let's first implement sequential updates for GLMs.
Step5: Let's make a function which can sample from a multivariate normal distribution.
Step9: Let's make some classes to represent different kinds of knobs - we'll just implement fixed and categorical (including binary) knobs here, but of course you can implement other ones.
Step11: And now, to sample and optimize these knobs...
Step13: Now we have all the pieces that we need to do contextual bandit. Let's run a binomial contextual bandit with a bunch of binary knobs, each of which can have a modest effect on the reward - in the range of 10%. We'll run 10 trials per batch, and run this for a number of batches.
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
def logistic(x):
"""Returns the logistic of the numeric argument to the function."""
return 1 / (1 + np.exp(-x))
def estimate_glm_params(X,
y,
prior_mean,
prior_precision,
model = 'binomial',
niters = 1):
"""
Estimate model parameters for a GLM.
Find MAP estimate of a GLM with a normal prior on the model parameters.
Args:
X: an NxM matrix - the design matrix
y: an N length vector - the measured outcome
prior_mean: an M length vector - the prior mean
prior_precision: an MxM matrix - the prior precision
model: a string - accepts normal, binomial, poisson.
Uses canonical links. Gaussian assumes observation noise has variance 1.
niters: the number of Newton iterations to do.
Returns:
(w_MAP, precision_MAP): the MAP parameter and its precision
( == the Hessian at the MAP)
"""
w = prior_mean
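# Newton-style updates: gradient g and Hessian H (with the prior precision folded in), then step w <- w - H^{-1} g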
for i in range(niters):
eta = X.dot(w)
if model == 'normal':
mu = eta
H = X.T.dot(X) + prior_precision
elif model == 'binomial':
mu = logistic(eta)
H = X.T.dot(((1 - mu) *mu).reshape((-1,1)) * X) + prior_precision
elif model == 'poisson':
mu = np.exp(eta)
H = X.T.dot(mu.reshape((-1,1)) * X) + prior_precision
else:
raise ValueError('Model should be one of normal, binomial, poisson')
g = X.T.dot(mu - y)
Hg, _, _, _ = np.linalg.lstsq(H, g)
w = w - Hg
return w, H
# Check that one-shot estimation works.
ndata_points = 10000
ndims = 10
X = np.random.randn(ndata_points, ndims)
prior_precision = 100*np.eye(10)
w = np.random.randn((ndims))*.1
threshold = .95
for family in ['normal', 'binomial', 'poisson']:
w_mean = np.zeros((ndims))
if family == 'normal':
mu = X.dot(w)
y = mu + np.random.randn(mu.size)
elif family == 'binomial':
mu = logistic(X.dot(w))
y = np.random.binomial(1, mu)
elif family == 'poisson':
mu = np.exp(X.dot(w))
y = np.random.poisson(mu)
w_est, H_est = estimate_glm_params(X, y, w_mean, prior_precision, family, niters= 10)
assert np.corrcoef(w_est, w)[0, 1] > threshold
w_est0 = w_est.copy()
# Check that sequential estimation works
nbatches = 100
w_est = w_mean.copy()
prior_precision_est = prior_precision.copy()
for n in range(nbatches):
rg = slice( int(n / nbatches), int((n+1)*ndata_points / nbatches))
w_est, prior_precision_est = estimate_glm_params(X[rg,:], y[rg], w_est, prior_precision_est, family)
assert np.corrcoef(w_est0, w)[0, 1] > threshold
assert np.corrcoef(w_est0, w_est)[0, 1] > threshold
print "Sequential estimation in GLMs is working."
def sample_normal_mean_precision(mean, precision, N_samples = 1):
"""
Samples from a normal distribution with a mean and precision.
Uses eigenvalue decomposition to sample from the right distribution.
Reference:
https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Drawing_values_from_the_distribution
Args:
mean: an M-length vector, the mean of the normal.
precision: an MxM matrix, the precision of the normal.
N_samples: the number of samples.
Returns:
An MxN sample matrix.
"""
S, U = np.linalg.eig(precision)
noise_vector = np.random.randn(precision.shape[1], N_samples)
projection_matrix = (U * (S ** (-1/2)).reshape((1, -1)))
sample = mean.reshape((-1, 1)) + projection_matrix.dot(noise_vector)
return sample
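# Sanity check: build a random positive-definite covariance and verify the sampler reproduces it empirically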
X = np.random.randn(100, 10)
S, U = np.linalg.eig(X.T.dot(X))
S = S ** 3
cov = U.dot(S.reshape((-1, 1)) * U.T)
precision = np.linalg.inv(cov)
samples = sample_normal_mean_precision(np.zeros(precision.shape[0]), precision, 1000)
cov_est = samples.dot(samples.T) / samples.shape[1]
assert abs((cov_est - cov) / cov.max()).max() < .1
class FixedKnob(object):
def __init__(self):
"""Defines a fixed knob."""
self.dim = 1
def optimal_design(self, knob_values):
"""Returns the optimal design contingent on the knob values."""
return np.ones_like(knob_values)
class CategoricalKnob(object):
def __init__(self, nclasses = 2):
"""Defines a categorical knob. With nclasses = 2, this becomes a
binary knob."""
self.dim = nclasses - 1
def optimal_design(self, knob_values):
if self.dim == 1:
return (1 * (knob_values > 0)).reshape((-1, 1))
else:
max_vals = 1 * (knob_values == knob_values.max(axis = 1).reshape((-1, 1)))
# De-dup in case of ties
max_vals = max_vals * (np.cumsum(max_vals, axis = 1) == 1)
return 1 * (knob_values > 0) * max_vals
# Check that de-duping works
knob = CategoricalKnob(3)
optimal_design = knob.optimal_design(np.array([.5, .5]).reshape((1, -1)))
assert np.allclose(optimal_design, np.array([1, 0]))
# Check that it selects the default category when all the parameters
# are negative.
optimal_design = knob.optimal_design(np.array([-.5, -.5]).reshape((1, -1)))
assert np.allclose(optimal_design, np.array([0, 0]))
def thompson_sampling(knobs, prior_mean, prior_precision, N_samples):
"""
Do Thompson sampling for the posterior distribution of the parameters
of the knobs.
Args:
knobs: a list of knobs
prior_mean: an M-length vector of means
prior_precision: an MxM precision matrix
N_samples: the number of samples to take
Returns:
(sampled_params, optimal_design) the sampled parameters (M x N_samples)
and the optimal design (N_samples x N) corresponding to each draw from
the sampled params.
"""
sampled_params = sample_normal_mean_precision(prior_mean, prior_precision, N_samples)
X = []
start_block = 0
# Sample from each knob in sequence.
for knob in knobs:
rg = slice(start_block, start_block + knob.dim)
X.append(knob.optimal_design(sampled_params[rg,:].T))
start_block += knob.dim
return sampled_params, np.hstack(X)
knobs = [FixedKnob(),
CategoricalKnob(2)]
# All these knobs are good, so we expect a matrix of ones.
w, X = thompson_sampling(knobs,
np.ones(2),
np.eye((2))*100,
10)
assert X.shape[0] == 10
assert X.shape[1] == 2
assert np.allclose(X, np.ones((10, 2)))
knobs = [CategoricalKnob(3)]
# Check that we get roughly the same number of 1's in each column
w, X = thompson_sampling(knobs,
np.ones(2),
np.eye((2))*100,
1000)
assert X.mean(0)[0] > .45 and X.mean(0)[1] < .55
def simulate_binomial_bandit(true_parameters,
knobs,
prior_mean,
prior_precision,
batch_size,
N_batches):
"""Run the binomial contextual bandit with Thompson sampling policy."""
rewards = np.zeros((N_batches,batch_size))
for i in range(N_batches):
# Get a design matrix for this batch
_, X = thompson_sampling(knobs, prior_mean, prior_precision, batch_size)
# Simulate rewards
reward_rate = logistic(X.dot(true_parameters))
batch_rewards = np.random.binomial(1, reward_rate)
# Update the matrix.
prior_mean, prior_precision = estimate_glm_params(X,
batch_rewards,
prior_mean,
prior_precision)
# Store the outcome.
rewards[i, :] = batch_rewards
return rewards, prior_mean
def logit(p):
return np.log(p / (1 - p))
baseline_rate = .5
beta = logit(baseline_rate)
beta_sd = .2
N_knobs = 10
knob_sd = .5
batch_size = 10
N_batches = 50
prior_mean = np.hstack((beta, np.zeros(N_knobs)))
prior_precision = np.diag(np.hstack((1 / beta_sd**2,
np.ones(N_knobs) / knob_sd**2)))
# Pick the parameters from the prior distribution.
true_parameters = sample_normal_mean_precision(prior_mean, prior_precision).squeeze()
knobs = [FixedKnob()]
for i in range(N_knobs):
knobs.append(CategoricalKnob())
rewards, _ = simulate_binomial_bandit(true_parameters,
knobs,
prior_mean,
prior_precision,
batch_size,
N_batches)
reward_sequence = rewards.ravel()
plt.figure(figsize=(13, 5))
# And also plot a smoother version
sigma = 3
rg = np.arange(-int(3*sigma), int(3*sigma) + 1)
thefilt = np.exp(-(rg**2) / 2 / sigma**2)
thefilt = thefilt / thefilt.sum()
smoothed_sequence = scipy.signal.convolve(reward_sequence, thefilt, 'same')
smoothed_sequence /= scipy.signal.convolve(np.ones_like(reward_sequence), thefilt, 'same')
plt.plot(smoothed_sequence)
plt.axis('tight')
plt.box('off')
# And show the optimal average reward
_, opt_design = thompson_sampling(knobs, true_parameters, prior_precision*10000, 1)
opt_reward = logistic(opt_design.dot(true_parameters))
plt.plot([0, N_batches*batch_size], [opt_reward, opt_reward], 'r-')
plt.text(0, opt_reward, 'Best attainable average reward')
plt.xlabel('Trial #')
plt.title('Smoothed reward')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we have a specific point in space and time we wish to test, it can be
Step2: Absent specific hypotheses, we can also conduct an exploratory
Step3: The results of these mass univariate analyses can be visualised by plotting
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import mne
from mne.channels import find_ch_adjacency, make_1020_channel_selections
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# These data are quite smooth, so to speed up processing we'll (unsafely!) just
# decimate them
epochs.decimate(4, verbose='error')
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long_words = epochs[name + " > " + median_value]
short_words = epochs[name + " < " + median_value]
time_windows = ((.2, .25), (.35, .45))
elecs = ["Fz", "Cz", "Pz"]
index = ['condition', 'epoch', 'time']
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame(index=index)[elecs].head())
report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index)
short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index)
for elec in elecs:
# extract data
A = long_df[elec].groupby("condition").mean()
B = short_df[elec].groupby("condition").mean()
# conduct t test
t, p = ttest_ind(A, B)
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
# Calculate adjacency matrix between sensors from their locations
adjacency, _ = find_ch_adjacency(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long_words.get_data().transpose(0, 2, 1),
short_words.get_data().transpose(0, 2, 1)]
tfce = dict(start=.4, step=.4) # ideally start and step would be smaller
# Calculate statistical thresholds
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, adjacency=adjacency,
n_permutations=100) # a more standard number would be 1000+
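# Boolean mask marking the sensor/time points whose TFCE-corrected p-value falls below .05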
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), short_words.average()],
weights=[1, -1]) # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
mask=significant_points, show_names="all", titles=None,
**time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
label="µV")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Step3: Before you begin
Step4: Region
Step5: Timestamp
Step6: Authenticate your Google Cloud account
Step7: Create a Cloud Storage bucket
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Step11: Vertex constants
Step12: Hardware Accelerators
Step13: Container (Docker) image
Step14: Machine Type
Step15: Tutorial
Step16: Train a model
Step17: Prepare your disk specification
Step18: Define the worker pool specification
Step19: Assemble a job specification
Step20: Examine the training package
Step21: Task.py contents
Step22: Store training script on your Cloud Storage bucket
Step23: Train the model
Step24: Now get the unique identifier for the custom job you created.
Step25: Get information on a custom job
Step26: Deployment
Step27: Load the saved model
Step28: Evaluate the model
Step29: Perform the model evaluation
Step30: Upload the model for serving
Step31: Upload the model
Step32: Get Model resource information
Step33: Deploy the Model resource
Step34: Now get the unique identifier for the Endpoint resource you created.
Step35: Compute instance scaling
Step36: Deploy Model resource to the Endpoint resource
Step37: Make a online prediction request
Step38: Send the prediction request
Step39: Undeploy the Model resource
Step40: Cleaning up
|
<ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
! pip3 install -U google-cloud-storage $USER_FLAG
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "us-central1" # @param {type: "string"}
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_NAME
! gsutil ls -al $BUCKET_NAME
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
]
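# A single worker pool that runs the Python training package inside the prebuilt training container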
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_imdb.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: IMDB Movie Reviews text binary classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for IMDB
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=1e-4, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print(device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets():
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
encoder = info.features['text'].encoder
padded_shapes = ([None],())
return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder
train_dataset, encoder = make_datasets()
# Build the Keras model
def build_and_compile_rnn_model(encoder):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(args.lr),
metrics=['accuracy'])
return model
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_rnn_model(encoder)
# Train the model
model.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz
def create_custom_job(custom_job):
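"""Submit the CustomJob to the Vertex Job service and print its key fields."""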
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
encoder = info.features["text"].encoder
BATCH_SIZE = 64
padded_shapes = ([None], ())
test_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)
model.evaluate(test_dataset)
loaded = tf.saved_model.load(model_path_to_deploy)
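# Inspect the default serving signature to find the name of the input tensor the deployed model expects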
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model("imdb-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy)
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
ENDPOINT_NAME = "imdb_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
MIN_NODES = 1
MAX_NODES = 1
DEPLOYED_NAME = "imdb_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
import tensorflow_datasets as tfds
dataset, info = tfds.load("imdb_reviews/subwords8k", with_info=True, as_supervised=True)
test_dataset = dataset["test"]
test_dataset.take(1)
for data in test_dataset:
print(data)
break
test_item = data[0].numpy()
def predict_data(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{serving_input: data.tolist()}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", prediction)
predict_data(test_item, endpoint_id, None)
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Implementation
Step2: Preparing the Data
Step3: For highly-skewed feature distributions such as 'capital-gain' and 'capital-loss', it is common practice to apply a <a href="https
Step4: Normalizing Numerical Features
Step5: Implementation
Step6: Shuffle and Split Data
Step7: Evaluating Model Performance
Step8: Supverised Learning Models
Step9: Implementation
Step10: Improving Results
Step11: Question 5 - Final Model Evaluation
Step12: Question 7 - Extracting Feature Importance
|
<ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
# TODO: Total number of records
n_records = len(data)
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = 0
for entry in data.income:
if entry == '>50K':
n_greater_50k = n_greater_50k+1
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = 0
for entry in data.income:
if entry == '<=50K':
n_at_most_50k = n_at_most_50k + 1
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = (float(n_greater_50k)/n_records)*100
# Print the results
print "Total number of records: {}".format(n_records)
print "Individuals making more than $50,000: {}".format(n_greater_50k)
print "Individuals making at most $50,000: {}".format(n_at_most_50k)
print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent)
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_raw[skewed] = data[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_raw, transformed = True)
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler()
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_raw[numerical] = scaler.fit_transform(data[numerical])
# Show an example of a record with scaling applied
display(features_raw.head(n = 1))
from sklearn.preprocessing import LabelEncoder
import pandas as pd
# TODO: One-hot encode the 'features_raw' data using pandas.get_dummies()
features = pd.get_dummies(features_raw)
le = LabelEncoder()
le.fit(income_raw)
# TODO: Encode the 'income_raw' data to numerical values
income = le.transform(income_raw)
# Print the number of features after one-hot encoding
encoded = list(features.columns)
print "{} total features after one-hot encoding.".format(len(encoded))
# Uncomment the following line to see the encoded feature names
print encoded
# Import train_test_split
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features, income, test_size = 0.2, random_state = 0)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
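# Naive predictor: always predict '>50K', so accuracy equals the positive-class rate (~24.78% of records)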
# TODO: Calculate accuracy
accuracy = 0.2478
# TODO: Calculate F-score using the formula above for beta = 0.5
fscore = (1+0.5**2)*(0.2478*1)/(((0.5**2)*0.2478)+1)
# Print the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size'
start = time() # Get start time
learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end-start
# TODO: Get the predictions on the test set,
# then get predictions on the first 300 training samples
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end-start
# TODO: Compute accuracy on the first 300 training samples
results['acc_train'] = accuracy_score(y_train[:300],predictions_train)
# TODO: Compute accuracy on test set
results['acc_test'] = accuracy_score(y_test,predictions_test)
# TODO: Compute F-score on the the first 300 training samples
results['f_train'] = fbeta_score(y_train[:300],predictions_train, beta = 0.5)
# TODO: Compute F-score on the test set
results['f_test'] = fbeta_score(y_test,predictions_test,beta =0.5)
# Success
print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)
# Return the results
return results
# TODO: Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
# TODO: Initialize the three models
clf_A = GaussianNB()
clf_B = DecisionTreeClassifier(random_state = 1234)
clf_C = SVC(random_state = 1234)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
samples_1 = len(X_train)/100
samples_10 = len(X_train)/10
samples_100 = len(X_train)/1
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
# TODO: Initialize the classifier
clf = DecisionTreeClassifier(random_state = 1234)
# TODO: Create the parameters list you wish to tune
parameters = {'max_depth' : range(1,10)}
# TODO: Make an fbeta_score scoring object
scorer = make_scorer(fbeta_score, beta = 0.5)
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method
grid_obj = GridSearchCV(clf, parameters,scoring = scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_fit = grid_obj.fit(X_train,y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set
model = DecisionTreeClassifier(random_state = 1234)
model.fit(X_train , y_train)
# TODO: Extract the feature importances
importances = model.feature_importances_
# Plot
vs.feature_plot(importances, X_train, y_train)
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print "Final Model trained on full data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
print "\nFinal Model trained on reduced data\n------"
print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Premade Estimators
Step2: The data set
Step3: Next, download and parse the Iris data set using Keras and Pandas. Note that you keep distinct datasets for training and testing.
Step4: You can inspect your data to see that you have four float feature columns and one int32 label.
Step5: For each of the datasets, split out the labels, which the model will be trained to predict.
Step6: Overview of programming with Estimators
Step8: Your input function may generate the features dictionary and label list any
Step9: Define the feature columns
Step10: Feature columns can be far more sophisticated than those shown here. You can read more about Feature Columns in this guide.
Step11: Train, Evaluate, and Predict
Step12: Note that you wrap up your input_fn call in a
Step14: Unlike the call to the train method, you did not pass the steps
Step15: The predict method returns a Python iterable, yielding a dictionary of
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
import pandas as pd
CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
SPECIES = ['Setosa', 'Versicolor', 'Virginica']
train_path = tf.keras.utils.get_file(
"iris_training.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv")
test_path = tf.keras.utils.get_file(
"iris_test.csv", "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv")
train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)
test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)
train.head()
train_y = train.pop('Species')
test_y = test.pop('Species')
# The label column has now been removed from the features.
train.head()
def input_evaluation_set():
features = {'SepalLength': np.array([6.4, 5.0]),
'SepalWidth': np.array([2.8, 2.3]),
'PetalLength': np.array([5.6, 3.3]),
'PetalWidth': np.array([2.2, 1.0])}
labels = np.array([2, 1])
return features, labels
def input_fn(features, labels, training=True, batch_size=256):
"""An input function for training or evaluating."""
# Convert the inputs to a Dataset.
dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
# Shuffle and repeat if you are in training mode.
if training:
dataset = dataset.shuffle(1000).repeat()
return dataset.batch(batch_size)
# Feature columns describe how to use the input.
my_feature_columns = []
for key in train.keys():
my_feature_columns.append(tf.feature_column.numeric_column(key=key))
# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.
classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
# The model must choose between 3 classes.
n_classes=3)
# Train the Model.
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
eval_result = classifier.evaluate(
input_fn=lambda: input_fn(test, test_y, training=False))
print('\nTest set accuracy: {accuracy:0.3f}\n'.format(**eval_result))
# Generate predictions from the model
expected = ['Setosa', 'Versicolor', 'Virginica']
predict_x = {
'SepalLength': [5.1, 5.9, 6.9],
'SepalWidth': [3.3, 3.0, 3.1],
'PetalLength': [1.7, 4.2, 5.4],
'PetalWidth': [0.5, 1.5, 2.1],
}
def input_fn(features, batch_size=256):
"""An input function for prediction."""
# Convert the inputs to a Dataset without labels.
return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)
predictions = classifier.predict(
input_fn=lambda: input_fn(predict_x))
for pred_dict, expec in zip(predictions, expected):
class_id = pred_dict['class_ids'][0]
probability = pred_dict['probabilities'][class_id]
print('Prediction is "{}" ({:.1f}%), expected "{}"'.format(
SPECIES[class_id], 100 * probability, expec))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor adjacency
Step4: Compute permutation statistic
Step5: Note. The same functions work with source estimate. The only differences
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import find_ch_adjacency
from mne.viz import plot_compare_evokeds
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, fir_design='firwin')
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
X = [epochs[k].get_data() for k in event_id] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type='mag')
print(type(adjacency)) # it's a sparse matrix!
plt.imshow(adjacency.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.01
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1, buffer_size=None,
adjacency=adjacency)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
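# Quick textual summary of the permutation result (illustrative addition, using only the
# variables computed above).
print('Smallest cluster p-value: {:0.4f}'.format(p_values.min()))
print('{} of {} clusters significant at p < {}'.format(
    len(good_cluster_inds), len(clusters), p_accept))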
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: It's a good idea to spatially smooth the data, and for visualization
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
Step7: Prepare function for arbitrary contrast
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Step10: Compute clustering statistic
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# Denis Engemannn <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
fsave_vertices = [np.arange(10242), np.array([], int)] # right hemi is empty
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
X = np.transpose(X, [2, 1, 0, 3])  # -> (n_subjects, n_times, n_vertices, n_conditions)
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
factor_levels = [2, 2]
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
# get f-values only.
source_space = grade_to_tris(5)
# as we only have one hemisphere we need only need half the connectivity
lh_source_space = source_space[source_space[:, 0] < 10242]
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(lh_source_space)
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, colormap='mne',
views='lateral',
time_label='Duration significant (ms)')
brain.save_image('cluster-lh.png')
brain.show_view('medial')
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able to deal with lists of NetworkX objects.
Step2: First Classifier
Step3: Finding Optimal Hyperparameters
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
import sklearn_mmadsen.graphs as skmg
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-sc-4-5-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-sc-4-5-cont-labels.pkl",'r'))
train_graphs, train_labels, test_graphs, test_labels = skmg.graph_train_test_split(all_graphs, all_labels, test_fraction=0.10)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = skmg.graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=10)
test_matrix = skmg.graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=10)
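# Illustrative peek at the spectral feature matrices (added): one row per graph, one
# column per retained eigenvalue.
print "train matrix shape: %s, test matrix shape: %s" % (train_matrix.shape, test_matrix.shape)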
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Class1
Step2: Morphology of the neurons
Step3: Feature of interneuron
Step4: Feature of Pyramidal
Step5: Sholl Diagram
Step6: histogram of diameters
Step7: Histogram of the slope of each segment
Step8: Histogram of distance from soma
Step9: Local Angle
Step10: Global Angle
Step11: Angle at the branching point
Step12: Rall Ratio
Step13: Slope
Step14: distance from parent
Step15: Ratio of neuronal distance over Euclidian distance from root
Step16: Connectivity matrix
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import McNeuron
import matplotlib.pyplot as plt
%matplotlib inline
#loc1 = "/Volumes/Arch/Projects/Computational Anatomy/neuron_nmo/poorthuis/CNG version/060110-LII-III.CNG.swc"
loc1 = "../Generative-Models-of-Neuron-Morphology/Data/Pyramidal/poorthuis/CNG version/060110-LV.CNG.swc"
loc2 = "../Generative-Models-of-Neuron-Morphology/Data/Interneuron/allen cell types/CNG version/Pvalb-IRES-Cre-Ai14-475465561.CNG.swc"
pyramidal = McNeuron.Neuron(file_format = 'swc', input_file=loc1)
inter = McNeuron.Neuron(file_format = 'swc', input_file=loc2)
a = pyramidal.subsample(20.)
McNeuron.visualize.plot_2D(a,show_radius=True)
print len(a.nodes_list)
a.show_features(15,17,30)
McNeuron.visualize.plot_2D(inter, show_radius=False)  # btmorph3 is not imported here; use McNeuron's plotting helper
len(inter.nodes_list)
ax1 = McNeuron.visualize.plot_2D(pyramidal, show_radius=False)
ax2 = McNeuron.visualize.plot_2D(inter, show_radius=False)
inter.show_features(15,17,30)
pyramidal.show_features(15,17,50)
f,(ax1, ax2) = plt.subplots(1, 2)
ax1.plot(pyramidal.sholl_r,pyramidal.sholl_n,'g')
ax2.plot(inter.sholl_r,inter.sholl_n,'m')
inter.features
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.diameter
b = pyramidal.distance_from_root
c = ax1.hist(a[b>20],bins = 30,color = 'g')
ax1.set_xlabel('diameter (um3)')
ax1.set_ylabel('density')
#ax1.set_title('Histogram of the size of compartments of neuron')
a = inter.diameter
b = inter.distance_from_root
c = ax2.hist(a[b>20],bins = 15,color = 'm')
ax2.set_xlabel('diameter (um3)')
#ax2.set_ylabel('density')
#ax2.set_title('Histogram of the size of compartments of neuron')
a = inter.diameter
b = inter.distance_from_root
c = plt.hist(a[b>20],bins = 15,color = 'm')
plt.xlabel('diameter (um3)')
f, (ax1, ax2) = plt.subplots(1, 2)
e = pyramidal.slope
x = ax1.hist(e[e!=0],bins=40,color = 'g')
ax1.set_xlabel('Value of Slope')
ax1.set_ylabel('density')
e = inter.slope
x = ax2.hist(e[e!=0],bins=40,color = 'm')
ax2.set_xlabel('Value of Slope')
#ax2.set_ylabel('density')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.distance_from_root
b = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')
ax1.set_xlabel('distance (um)')
ax1.set_ylabel('density')
#plt.title('Histogram of distance from soma for different compartments of neuron')
a = inter.distance_from_root
b = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')
ax2.set_xlabel('distance (um)')
#ax2.set_ylabel('density')
#plt.title('Histogram of distance from soma for different compartments of neuron')
a = inter.distance_from_root
b = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')
plt.xlabel('distance (um)')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.local_angle
b = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')
ax1.set_xlabel('angle (radian)')
ax1.set_ylabel('density')
#plt.title('Histogram of local angles')
a = inter.local_angle
b = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')
ax2.set_xlabel('angle (radian)')
#ax2.set_ylabel('density')
a = inter.local_angle
b = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')
plt.xlabel('angle (radian)')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.angle_global
b = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')
ax1.set_xlabel('angle (radian)')
ax1.set_ylabel('density')
#plt.title('Histogram of global angles')
a = inter.angle_global
b = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')
ax2.set_xlabel('angle (radian)')
a = inter.angle_global
b = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')
plt.xlabel('angle (radian)')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.angle_branch[0,:]
b = ax1.hist(a[~np.isnan(a)],bins = 20,color = 'g')
a = inter.angle_branch[0,:]
b = ax2.hist(a[~np.isnan(a)],bins = 20,color = 'm')
a = inter.angle_branch[0,:]
b = plt.hist(a[~np.isnan(a)],bins = 10,color = 'm')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.rall_ratio
b = ax1.hist(a[~np.isnan(a)],bins = 20,color = 'g')
a = inter.rall_ratio
b = ax2.hist(a[~np.isnan(a)],bins = 20,color = 'm')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.slope
b = ax1.hist(a[~np.isnan(a)],bins = 40,color = 'g')
a = inter.slope
b = ax2.hist(a[~np.isnan(a)],bins = 40,color = 'm')
a = inter.slope
b = plt.hist(a[~np.isnan(a)],bins = 40,color = 'm')
f, (ax1, ax2) = plt.subplots(1, 2)
a = pyramidal.length_to_parent
b = ax1.hist(a[~np.isnan(a)],bins = 40,color = 'g')
a = inter.length_to_parent
b = ax2.hist(a[~np.isnan(a)],bins = 40,color = 'm')
a = inter.length_to_parent
b = a[~np.isnan(a)]
c = plt.hist(b[np.absolute(b)<4],bins = 70,color = 'm')
np.absolute(b)<3
f, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(inter.overall_connectivity_matrix.sum(axis = 1)/inter.distance_from_root,bins = 40,color = 'g')
ax2.hist(pyramidal.overall_connectivity_matrix.sum(axis = 1)/pyramidal.distance_from_root,bins = 40,color = 'g')
plt.hist(inter.features['ratio_euclidian_neuronal'],bins = 40,color = 'g')
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(inter.connection)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import the modules
Step2: Planche 160
Step3: We conjecture that $I_n$ is decreasing. This is clear since, writing $f_n(t)$ for its integrand, we observe that $f_{n+1}(t) \leq f_n(t)$ for every $t$, hence $I_{n+1} \leq I_n$ by monotonicity of the integral.
Step4: What we observe suggests that $\alpha=1$ is the unique integer such that $n^{\alpha} I_n$ converges to a non-zero limit.
Step5: We deduce that the same holds for $J_n$: $n^{\alpha} J_n \to l$, the same limit as for $n^{\alpha} I_n$.
Step6: Then we quickly show that $\forall x \geq 0, \ln(1 + x) \geq \frac{x}{1+x}$. This can be proved in many ways; for instance, write $f(x) = (x+1) \log(x+1) - x$, which is $\mathcal{C}^1$, and differentiate it. $f'(x) = \log(x+1) + 1 - 1 > 0$, so $f$ is increasing, and $f(0) = 0$, hence $f(x) \geq f(0) = 0$ for every $x \geq 0$.
Step7: Planche 162
Step8: No need to read the maximum off a graph
Step9: We plot the function, as requested, with a title
Step10: To compute the integral, we use scipy.integrate.quad
Step11: We check things before starting the plotting
Step12: $S_n(x)$ seems to diverge for $x\to2^-$ as $n\to\infty$.
Step13: Here, un cannot be used as a "numpy" function acting on an array, so we store the values more "manually"
Step14: The sequence $u_n$ seems to be increasing (this can be proved), always smaller than $1$ (also easy to prove, since $I_{n+1} < I_n$), and seems to converge.
Step15: To conclude, one can prove that the sequence is monotone and bounded, hence it converges.
Step16: Before using it to simulate many trajectories, we can check
Step17: After 100 moves, we start to see no clear trend any more
Step18: For the histogram, we cheat a little by using numpy.bincount. But it can also be done by hand very simply!
Step19: We will display histograms
Step20: We are getting close to a uniform distribution!
Step21: We need to eliminate rounding errors, but we see that $1$ is an eigenvalue, associated with the vector $[1,\dots,1,\dots,1]$ with a $1$ only in the 8th component.
Step22: $P$ is not diagonalisable, to be proved on the board if the examiner asks.
Step23: Let $g(x) = 2 - \exp(x)$, so that $g(x) f(x) = 1$. Differentiating this identity $n > 0$ times and using the Leibniz formula, we find
Step24: We observe that $a(n)$ lies between $\frac{1}{2(\log(2))^n}$ and $\frac{1}{\log(2)^n}$, so the radius of convergence of $S(x) = \sum a(n) x^n$ is $\log(2)$.
Step25: We can check that our function works
Step26:
Step27: Planche 170
Step28: We can check the prediction about the sum of the series $\sum u_n$
Step29: With only $50$ terms, the relative error is below $1.5\%$, which is already quite good!
Step30: For $e = (e_n)_{n\in\mathbb{N}}$ a sequence of numbers equal to $0$ or $1$ (i.e., $\forall n, e_n \in \{0,1\}$), $S_n(e) = \sum_{i=0}^n e_i u_i$ is bounded between $0$ and $\sum_{i=0}^n u_i$. Moreover $u_n \sim \frac{1}{n(n+1)}$, the general term of a convergent series (by comparison with a Riemann series, $\alpha=2$). So the series $\sum u_n$ converges, and therefore, by this bound, $S_n(e)$ converges as $n\to\infty$, i.e., $S(e)$ converges. The same arguments also give $$0 \leq S(e) \leq \sum_{n\geq0} u_n = \lim_{n\to\infty} \arctan(n) - \arctan(0) = \frac{\pi}{2}.$$
Step31: To invert the function, i.e. to find the sequence $e$ such that $S(e) = x$ for a given $x$, one has to think a little more.
Step33: That is enough for the Python part.
Step34: First, we write a function to simulate the random event
Step35: For instance, with only $2$ tosses, $1 - p_n = p^2$ because $\overline{E_n}$ is the event of getting $2$ heads, which has probability $p^2$.
Step36: With $4$ tosses, $p_n$ is much smaller.
Step37: We check that $p_n(n, p)$ is decreasing in $p$ for fixed $n$
Step38: We check that $p_n(n, p)$ is decreasing in $n$ for fixed $p$
Step39: Note that the sequence seems to converge? Or else it decreases more and more slowly.
Step40: The integral $g(x) = \int_0^x \frac{\ln(1 - t)}{t} \mathrm{d}t$ is well defined on $D = [-1, 1]$ since its integrand exists, is continuous and is integrable on every interval of the form $]a, 0[$ or $]0, b[$ with $-1 < a < 0$ or $0 < b < 1$. The only point that could threaten integrability is $0$, but $\ln(1-t) \sim -t$ as $t\to0$, so the integrand is $\sim -1$ at $0^-$ and $0^+$ and is therefore integrable. Moreover, as the integral of a continuous function up to a variable upper bound, $g$ is differentiable on the interior of its domain, i.e., on $]-1, 1[$.
Step41: We plot the two functions $f$ and $g$ on the domain $D$
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (10, 7)
import matplotlib.pyplot as plt
from scipy import integrate
import numpy.random as rd
import seaborn as sns
sns.set(context="notebook", style="whitegrid", palette="hls", font="sans-serif", font_scale=1.1)
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
import numpy.random as rd
def I(n):
def f(t):
return 1 / ((1+t)**n * np.sqrt(1-t))
i, err = integrate.quad(f, 0, 1)
return i
def J(n):
def f(t):
return 1 / ((1+t)**n * np.sqrt(1-t))
i, err = integrate.quad(f, 0, 0.5)
return i
valeurs_n = np.arange(1, 50)
valeurs_In = np.array([I(n) for n in valeurs_n])
plt.figure()
plt.plot(valeurs_n, valeurs_In, 'ro')
plt.title("Valeurs de $I_n$")
plt.show()
plt.figure()
plt.plot(np.log(valeurs_n), np.log(valeurs_In), 'go')
plt.title(r"Valeurs de $\ln(I_n)$ en fonction de $\ln(n)$")
plt.show()
valeurs_n = np.arange(1, 500)
valeurs_In = np.array([I(n) for n in valeurs_n])
valeurs_Jn = np.array([J(n) for n in valeurs_n])
alpha = 0.9
plt.figure()
plt.plot(valeurs_n, valeurs_n**alpha * valeurs_In, 'r+', label=r'$n^{\alpha} I_n$')
plt.plot(valeurs_n, valeurs_n**alpha * valeurs_Jn, 'b+', label=r'$n^{\alpha} J_n$')
plt.legend()
plt.title(r"Valeurs de $n^{\alpha} I_n$ et $n^{\alpha} J_n$")
plt.show()
plt.figure()
plt.plot(valeurs_n, valeurs_n**alpha * (valeurs_In - valeurs_Jn), 'g+', label=r'$n^{\alpha} (I_n - J_n)$')
plt.legend()
plt.title(r"Valeurs de $n^{\alpha} (I_n - J_n)$")
plt.show()
X = np.linspace(0, 100, 1000)
plt.plot(X, np.log(1 + X), 'ro-', label=r'$\log(1+x)$', markevery=50)
plt.plot(X, X / (1 + X), 'b+-', label=r'$\frac{x}{1+x}$', markevery=50)
plt.legend()
plt.title("Comparaison entre deux fonctions")
plt.show()
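# Numerical check of the inequality above (illustrative addition): on the sampled grid,
# log(1 + x) is never below x / (1 + x).
assert np.all(np.log(1 + X) >= X / (1 + X))
print("Inequality verified on", len(X), "sample points.")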
def f(x):
return x * (1 - x) * (1 + np.cos(5 * np.pi * x))
Xs = np.linspace(0, 1, 2000)
Ys = f(Xs)
M = max_de_f = max(Ys)
print("Sur [0, 1], avec 2000 points, le maximum de f(x) est M =", M)
i_maximisant_f = np.argmax(Ys)
x_maximisant_f = Xs[i_maximisant_f]
print("Sur ces 2000 points, le maximum est atteint en x =", x_maximisant_f)
plt.figure()
plt.plot(Xs, Ys)
plt.vlines(x_maximisant_f, min(Ys), max(Ys), color="b", linestyles="dotted")
plt.hlines(max(Ys), min(Xs), max(Xs), color="b", linestyles="dotted")
plt.title("Fonction $f(x)$ sur $[0,1]$")
plt.show()
def In(x, n):
def fn(x):
return f(x) ** n
return integrate.quad(fn, 0, 1)[0]
def Sn(x):
return np.sum([In(Xs, n) * x**n for n in range(0, n+1)], axis=0)
for n in range(10):
print("In(x,", n, ") =", In(Xs, n))
a = 1/M + 0.1
X2s = np.linspace(-a, a, 2000)
plt.figure()
for n in [10, 20, 30, 40, 50]:
plt.plot(X2s, Sn(X2s), "-+", label="n =" + str(n), markevery=20)
plt.legend()
plt.show()
def un(n):
return In(Xs, n + 1) / In(Xs, n)
for n in range(10):
print("un =", un(n), "pour n =", n)
def affiche_termes_un(N):
valeurs_un = [0] * N
for n in range(N):
valeurs_un[n] = un(n)
plt.figure()
plt.plot(valeurs_un, 'o-')
plt.title("Suite $u_n$")
plt.grid(True)
plt.show()
affiche_termes_un(30)
affiche_termes_un(100)
case_max = 12
univers = list(range(case_max))
def prochaine_case(case):
return (case + rd.randint(1, 6+1)) % case_max
def Yn(duree, depart=0):
case = depart
for coup in range(duree):
case = prochaine_case(case)
return case
[Yn(1) for _ in range(10)]
[Yn(100) for _ in range(10)]
observations = [Yn(100) for _ in range(10)]
print(observations)
np.bincount(observations, minlength=case_max)
def histogramme(duree, repetitions=5000):
cases = [Yn(duree) for _ in range(repetitions)]
frequences = np.bincount(cases, minlength=case_max)
# aussi a la main si besoin
frequences = [0] * case_max
for case in cases:
frequences[case] += 1
return frequences / np.sum(frequences)
histogramme(50)
def voir_histogramme(valeurs_n):
for n in valeurs_n:
plt.figure()
plt.bar(np.arange(case_max), histogramme(n))
plt.title("Histogramme de cases visitées en " + str(n) + " coups")
plt.show()
voir_histogramme([1, 2, 3, 50, 100, 200])
case_max = 12
P = np.zeros((case_max, case_max))
for k in range(case_max):
for i in range(k - 6, k):
P[k, i] = 1/6
P
import numpy.linalg as LA
spectre, vecteur_propres = LA.eig(P)
np.round(spectre,10)
np.round(vecteur_propres[0])
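# Sanity check (illustrative addition): every row of P sums to 1, so P is a stochastic
# matrix and the all-ones vector is a right eigenvector for the eigenvalue 1.
print(np.allclose(P.sum(axis=1), 1.0))
print(np.allclose(P.dot(np.ones(case_max)), np.ones(case_max)))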
def f(x):
return 1 / (2 - np.exp(x))
from math import factorial
def a_0an(nMax):
valeurs_a = np.zeros(nMax+1)
valeurs_a[0] = 1.0
for n in range(1, nMax+1):
valeurs_a[n] = sum(valeurs_a[n-k] / factorial(k) for k in range(1, n+1))
return valeurs_a
nMax = 10
valeurs_n = np.arange(0, nMax + 1)
valeurs_a = a_0an(nMax)
for n in valeurs_n:
print("Pour n =", n, "on a a(n) =", valeurs_a[n])
plt.figure()
plt.plot(valeurs_n, valeurs_a, 'ro-', label=r'$a(n)$', markersize=12)
plt.plot(valeurs_n, 1 / np.log(2)**valeurs_n, 'g+-', label=r'$1/\log(2)^n$')
plt.plot(valeurs_n, 1 / (2 * np.log(2)**valeurs_n), 'bd-', label=r'$1/(2\log(2)^n)$')
plt.title("$a(n)$ et deux autres suites")
plt.legend()
plt.show()
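# Illustrative check of the radius of convergence: by the ratio test, a(n-1)/a(n) should
# approach log(2) ~ 0.693 as n grows.
print(valeurs_a[nMax - 1] / valeurs_a[nMax], np.log(2))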
def Sn(x, n):
valeurs_a = a_0an(n)
return sum(valeurs_a[k] * x**k for k in range(0, n + 1))
x = 0.5
for n in range(0, 6 + 1):
print("Pour n =", n, "S_n(x) =", Sn(x, n))
valeurs_x = np.linspace(0, 0.5, 1000)
valeurs_f = f(valeurs_x)
plt.figure()
for n in range(0, 6 + 1):
valeurs_Sn = []
for x in valeurs_x:
valeurs_Sn.append(Sn(x, n))
plt.plot(valeurs_x, valeurs_Sn, 'o:', label='$S_' + str(n) + '(x)$', markevery=50)
plt.plot(valeurs_x, valeurs_f, '+-', label='$f(x)$', markevery=50)
plt.title("$f(x)$ et $S_n(x)$ pour $n = 0$ à $n = 6$")
plt.legend()
plt.show()
def u(n):
return np.arctan(n+1) - np.arctan(n)
valeurs_n = np.arange(50)
valeurs_u = u(valeurs_n)
plt.figure()
plt.plot(valeurs_n, valeurs_u, "o-")
plt.xlabel("Entier $n$")
plt.ylabel("Valeur $u_n$")
plt.title("Premières valeurs de $u_n$")
pi/2
sum(valeurs_u)
somme_serie = pi/2
somme_partielle = sum(valeurs_u)
erreur_relative = abs(somme_partielle - somme_serie) / somme_serie
erreur_relative
valeurs_n = np.arange(10, 1000)
valeurs_u = u(valeurs_n)
valeurs_equivalents = 1 / (valeurs_n * (valeurs_n + 1))
plt.figure()
plt.plot(valeurs_n, valeurs_u / valeurs_equivalents, "-")
plt.xlabel("Entier $n$")
plt.title(r"Valeurs de $u_n / \frac{1}{n(n+1)}$")
from math import ceil, sqrt, pi
def Se(e, delta=1e-5, borne_sur_n_0=10000):
borne_sur_n_1 = int(ceil(1 + sqrt(delta)/2.0))
borne_sur_n = max(borne_sur_n_0, borne_sur_n_1)
somme_partielle = 0
for n in range(0, borne_sur_n + 1):
somme_partielle += e(n) * u(n)
return somme_partielle
def e010101(n):
return 1 if n % 2 == 0 else 0
delta = 1e-5
Se010101 = Se(e010101, delta)
print("Pour delta =", delta, "on a Se010101(delta) ~=", round(Se010101, 5))
def inverse_Se(x, n):
assert 0 < x < pi/2.0, "Erreur : x doit être entre 0 et pi/2 strictement."
print("Je vous laisse chercher.")
raise NotImplementedError
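# A possible construction (sketch, not the official solution): build e greedily, taking
# e_n = 1 whenever adding u_n does not overshoot the target x. This works because each
# term u_n is no larger than the remaining tail sum of the series.
def inverse_Se_greedy(x, n_terms=10000):
    assert 0 < x < pi / 2.0, "x must lie strictly between 0 and pi/2."
    e, partial_sum = [], 0.0
    for n in range(n_terms):
        if partial_sum + u(n) <= x:
            e.append(1)
            partial_sum += u(n)
        else:
            e.append(0)
    return e, partial_sum
e_greedy, approx = inverse_Se_greedy(1.0)
print("Greedy reconstruction: S(e) ~", approx, "for the target x = 1.0")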
from random import random
def pile(proba):
"""True if heads ("pile"), False if tails ("face")."""
return random() < proba
def En(n, p):
lance = pile(p)
for i in range(n - 1):
nouveau_lance = pile(p)
if lance and nouveau_lance:
return False
lance = nouveau_lance  # remember the previous toss to detect two heads in a row
return True
import numpy as np
lances = [ En(2, 0.5) for _ in range(100) ]
np.bincount(lances)
def pn(n, p, nbSimulations=100000):
return np.mean([ En(n, p) for _ in range(nbSimulations) ])
pn(2, 0.5)
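# Illustrative check of the remark above: for n = 2 the complement of E_n is "two heads",
# so 1 - p_n should be close to p**2 (up to Monte Carlo noise).
print(1 - pn(2, 0.5), 0.5 ** 2)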
pn(4, 0.5)
pn(4, 0.1)
pn(4, 0.9)
pn(6, 0.2)
pn(20, 0.2)
pn(100, 0.2)
from math import floor, log, pi
delta = 1e-5
def f(x):
if x == 0: return 0
borne_sur_n = int(floor(log((6/pi**2 * delta), abs(x)) - 1))
somme_partielle = 0
for n in range(1, borne_sur_n + 1):
somme_partielle += x**n / n**2
return somme_partielle
for x in [-0.75, -0.5, 0.25, 0, 0.25, 0.5, 0.75]:
print("Pour x =", x, "\tf(x) =", round(f(x), 5))
from scipy import integrate
def g(x):
def h(t):
return log(1 - t) / t
integrale, erreur = integrate.quad(h, 0, x)
return integrale
import numpy as np
import matplotlib.pyplot as plt
domaine = np.linspace(-0.99, 0.99, 1000)
valeurs_f = [f(x) for x in domaine]
valeurs_g = [g(x) for x in domaine]
plt.figure()
plt.plot(domaine, valeurs_f, "+-", label="$f(x)$", markevery=50)
plt.plot(domaine, valeurs_g, "+-", label="$g(x)$", markevery=50)
plt.legend()
plt.grid()
plt.title("Représentation de $f(x)$ et $g(x)$")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below we show the packages that we will use regularly to handle data: pandas, numpy, matplotlib. Since this is a Python program, packages can be imported and remain available until the end of the notebook.
Step2: The following makes plots show up inline. For small figures you can use interactive figures that allow zooming, using %matplotlib nbagg.
Step3: Numpy
Step4: Numpy allows generating and processing data arrays very efficiently. Some examples are shown below
Step5: Multidimensional arrays too
Step6: Functions can be applied over the whole array or matrix, and the result will be an identical matrix with the operator applied. Similar to what happens with the map operation of some programming languages (including Python)
Step7: You can also filter the elements of an array or matrix that satisfy a condition. For that, the indexing operator ([]) is used with a boolean expression.
Step8: Why use Numpy?
Step9: numpy has countless functions, so it is worth having a look at its documentation
Step10: Pandas
Step11: You can also plot one column against another.
Step12: Missing values. If they are not set, they are left as NaN (not a number).
Step13: fillna() allows changing the value of the missing data.
Step14: Below is an example of using Pandas to read data and process it in a Dataframe.
Step15: From the date, extract only the date (not the time, which we are not interested in).
Step16: We add a column of all "1" to specify that each question counts as 1.
Step17: Aggregation operations, such as groupby or sum, can also be applied to Pandas Dataframes. Finally, the plot() function displays the data in a chart.
Step18: Next we check against the Wikipedia page when the Swift language appeared
Step19: Below we show how to place positions on a map with the folium package. It also shows how to access different positions of the Dataframe with iloc, loc, etc.
Step20: The folium package allows generating maps of positions. The following example centers a map on Mula and places a marker with its name
|
<ASSISTANT_TASK:>
Python Code:
!uname -a
%lsmagic
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
from IPython.display import YouTubeVideo
YouTubeVideo('o8fmjaW9a0A') # Yes, it can also embed youtube videos.
a = np.array([4,5,6])
print(a.shape)
print(a[0])
a[0] = 9
print (a)
np.arange(10)
np.arange(1,20)
a = np.zeros((2,2))
print (a)
a.ndim
a.dtype
b = np.random.random((2,2))
print (b)
a = np.random.random((2,2))
print(a)
print (a >= .5)
print (a[a >= .5])
%%capture timeit_output
%timeit l1 = range(1,1000)
%timeit l2 = np.arange(1,1000)
print(timeit_output)
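# Another illustrative comparison (added): vectorised numpy arithmetic versus a
# pure-Python loop over the same data.
big = np.arange(1000000)
%timeit big * 2
%timeit [x * 2 for x in big]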
x = np.array([[1,2],[3,4]])
print (np.sum(x)) # Compute sum of all elements; prints "10"
print (np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print (np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
x * 2
x ** 2
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
plt.subplot()
# Plot the points using matplotlib
plt.plot(x, y)
plt.show()
plt.subplot(211)
plt.plot(range(12))
plt.subplot(212, facecolor='y')
plt.plot(range(100))
plt.show()
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
plt.show()
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts
ts.describe()
ts = ts.cumsum()
ts.plot();
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list('ABCD'))
df = df.cumsum()
df.plot();
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))
df3.plot(x='A', y='B');
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
df
df.fillna(0)
pd.DataFrame(d, index=['d', 'b', 'a'])
pd.DataFrame(d, index=['d', 'b', 'a'], columns=['two', 'three'])
df = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/swift-question-dates.txt.gz',
header=None,
names=['date'],
compression='gzip',
parse_dates=['date'],
index_col='date')
df
df.index = df.index.date
df['Count'] = 1
df
accum = df.groupby(df.index).sum()
accum
# The first 30 records with more than 20 questions per day.
accum = accum[accum.Count > 20][:30]
accum
accum[accum.Count > 30][:30].plot.bar()
!pip install lxml
dfwiki = pd.read_html('https://en.wikipedia.org/wiki/Swift_(programming_language)',attrs={'class': 'infobox vevent'})
dfwiki[0]
firstdate = dfwiki[0][1][4]
firstdate
from dateutil.parser import parse
dt = parse(firstdate.split(';')[0])
print (dt.date().isoformat())
print (accum.index[0].isoformat())
assert dt.date().isoformat() == accum.index[0].isoformat()
# load the municipalities and show them on the map
df = pd.read_csv('https://github.com/dsevilla/bdge/raw/master/intro/municipios-2017.csv.gz',header=0,compression='gzip')
df.head()
df.iloc[0]
df.iloc[0].NOMBRE_ACTUAL
df.loc[:,'NOMBRE_ACTUAL']
df.iloc[:,0]
df.PROVINCIA
df[df.PROVINCIA == 'A Coruña']
mula = df[df.NOMBRE_ACTUAL == 'Mula'].iloc[0]
mula
(mula_lat,mula_lon) = (mula.LATITUD_ETRS89, mula.LONGITUD_ETRS89)
(mula_lat,mula_lon)
!pip install folium
import folium
map = folium.Map(location=[mula_lat, mula_lon],zoom_start=10)
folium.Marker(location = [mula_lat, mula_lon], popup="{} ({} habitantes)".format(mula.NOMBRE_ACTUAL,mula.POBLACION_MUNI)).add_to(map)
map
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing dependencies
Step3: Visualize data
Step4: Interactive pandas Dataframe
Step5: Grid and potential field
Step6: From potential field to block
Step8: Combining potential fields
Step9: This potential field gives the following block
Step10: Combining both potential fields, where the first potential field is younger than the second, we can obtain the following structure.
Step11: Side note
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
# Importing
import importlib
import theano.tensor as T
import sys, os
sys.path.append("/home/bl3/PycharmProjects/GeMpy/GeMpy")
import pandas as pn
import GeMpy_core
import Visualization
import numpy as np
importlib.reload(GeMpy_core)
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
np.set_printoptions(precision = 4, linewidth= 300, suppress = True)
import matplotlib.pyplot as plt
from matplotlib import cm
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['savefig.dpi'] = 75
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 6
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
import seaborn as sns
# This sets reasonable defaults for font size for
# a figure that will go in a paper
sns.set_context("paper")
# Set the font to be serif, rather than sans
sns.set(font='Arial')
# Make the background white, and specify the
# specific font family
sns.set_style("white")
import qgrid
qgrid.nbinstall(overwrite=True) # copies javascript dependencies to your /nbextensions folder
%matplotlib inline
# Setting extend, grid and compile
# Setting the extent
test = GeMpy_core.GeMpy()
test.import_data([0,10,0,10,0,10])
# =========================
# DATA GENERATION IN PYTHON
# =========================
# Layers coordinates
layer_1 = np.array([[0.5,4,7], [2,4,6.5], [4,4,7], [5,4,6]])#-np.array([5,5,4]))/8+0.5
layer_2 = np.array([[3,4,5], [6,4,4],[8,4,4], [7,4,3], [1,4,6]])
layers = np.asarray([layer_1,layer_2])
# Foliations coordinates
dip_pos_1 = np.array([7,4,7])#- np.array([5,5,4]))/8+0.5
dip_pos_2 = np.array([2.,4,4])
# Dips
dip_angle_1 = float(15)
dip_angle_2 = float(340)
dips_angles = np.asarray([dip_angle_1, dip_angle_2], dtype="float64")
# Azimuths
azimuths = np.asarray([90,90], dtype="float64")
# Polarity
polarity = np.asarray([1,1], dtype="float64")
# Pandas Dataframe with the interfaces data
test.Data.Interfaces = pn.DataFrame(
data = {"X" :np.append(layer_1[:, 0],layer_2[:,0]),
"Y" :np.append(layer_1[:, 1],layer_2[:,1]),
"Z" :np.append(layer_1[:, 2],layer_2[:,2]),
"formation" : np.append(
np.tile("Layer 1", len(layer_1)),
np.tile("Layer 2", len(layer_2)))})
# Pandas Dataframe with the Foliations data
test.Data.Foliations = pn.DataFrame(
data = {"X" :np.append(dip_pos_1[0],dip_pos_2[0]),
"Y" :np.append(dip_pos_1[ 1],dip_pos_2[1]),
"Z" :np.append(dip_pos_1[ 2],dip_pos_2[2]),
"azimuth" : azimuths,
"dip" : dips_angles,
"polarity" : polarity,
"formation" : ["Layer 1", "Layer 2"]})
# Creation of the formations (to be deprecated)
test.Data.formations = test.Data.Interfaces["formation"].unique()
# Calculation of the pola gradient from dip azimuth and polarity
test.Data.calculate_gradient()
# Set defautl series
test.Data.set_series()
# Method to be sure all objects get updated
test.update_data()
# The following code is for the visualization of labels of the input data (yet to be implemented)
# ----------------------------------------------------------------------------------------------
def annotate_plot(frame, label_col, x, y, **kwargs):
"""Annotate the plot of a given DataFrame using one of its columns
Should be called right after a DataFrame or series plot method,
before telling matplotlib to show the plot.
Parameters
----------
frame : pandas.DataFrame
plot_col : str
The string identifying the column of frame that was plotted
label_col : str
The string identifying the column of frame to be used as label
kwargs:
Other key-word args that should be passed to plt.annotate
Returns
-------
None
Notes
-----
After calling this function you should call plt.show() to get the
results. This function only adds the annotations, it doesn't show
them.
"""
import matplotlib.pyplot as plt # Make sure we have pyplot as plt
for label, x, y in zip(frame[label_col], frame[x], frame[y]):
plt.annotate(label, xy=(x+0.2, y+0.15), **kwargs)
inter_labels =[r'${\bf{x}}_{\alpha \, 0}^1$',
r'${\bf{x}}_{\alpha \, 1}^1$',
r'${\bf{x}}_{\alpha \, 2}^1$',
r'${\bf{x}}_{\alpha \, 3}^1$',
r'${\bf{x}}_{\alpha \, 0}^2$',
r'${\bf{x}}_{\alpha \, 1}^2$',
r'${\bf{x}}_{\alpha \, 2}^2$',
r'${\bf{x}}_{\alpha \, 3}^2$',
r'${\bf{x}}_{\alpha \, 4}^2$']
foli_labels =[r'${\bf{x}}_{\beta \,{0}}$',
r'${\bf{x}}_{\beta \,{1}}$']
test.Data.Interfaces['labels'] = pn.Series(inter_labels)
test.Data.Foliations['labels'] = pn.Series(foli_labels)
# Plot and table
test.Plot.plot_data()
annotate_plot(test.Data.Interfaces, 'labels','X', 'Z', size = 'x-large')
annotate_plot(test.Data.Foliations, 'labels','X', 'Z', size = 'x-large')
test.Data.Interfaces
import qgrid
qgrid.show_grid(test.Data.Interfaces)
test.create_grid()
test.set_interpolator(u_grade=0, verbose = 0)
test.Plot.plot_potential_field(4, direction='y', colorbar = True, cmap = 'magma')
#test.Interpolator.DK;
from ipywidgets import widgets
from ipywidgets import interact
def cov_cubic_f(r,a = 6, c_o = 1):
if r <= a:
return c_o*(1-7*(r/a)**2+35/4*(r/a)**3-7/2*(r/a)**5+3/4*(r/a)**7)
else:
return 0
def cov_cubic_d1_f(r,a = 6., c_o = 1):
SED_dips_dips = r
f = c_o
return (f * ((-14 /a ** 2) + 105 / 4 * SED_dips_dips / a ** 3 -
35 / 2 * SED_dips_dips ** 3 / a ** 5 + 21 / 4 * SED_dips_dips ** 5 / a ** 7))
def cov_cubic_d2_f(r, a = 6, c_o = 1):
SED_dips_dips = r
f = c_o
return 7*f*(9*r**5-20*a**2*r**3+15*a**4*r-4*a**5)/(2*a**7)
def plot_potential_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = [cov_cubic_f(i, a = a, c_o = c_o) for i in x]
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.hlines(0,0,12, linestyles = "--")
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+c_o)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_Z(r)$', y = 1.08, fontsize=15, fontweight='bold')
def plot_potential_direction_var( a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
#ax2.scatter(0,c_o)
plt.title("Cross-Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C\'_Z / r$', y = 1.08, fontsize=15, fontweight='bold')
def plot_directionU_directionU_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial x}(h_x)$'
, y = 1.08, fontsize=15)
def plot_directionU_directionV_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2-d1) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial y}(h_x,h_y)$'
, y = 1.08, fontsize=15)
def plot_all(a = 10, c_o = 1, nugget_effect = 0):
plot_potential_direction_var(a, c_o, nugget_effect)
plot_directionU_directionU_var(a, c_o, nugget_effect)
plot_directionU_directionV_var(a, c_o, nugget_effect)
test.Interpolator.compute_block_model()
test.Plot.plot_block_section()
# Setting extend, grid and compile
# Setting the extent
two_pot = GeMpy_core.GeMpy()
two_pot.import_data([0,10,0,10,0,10])
# Data Generation
layer_1 = np.array([[0.5,4,7], [2,4,6.5], [4,4,7], [5,4,6]])#-np.array([5,5,4]))/8+0.5
layer_2 = np.array([[3,4,5], [6,4,4],[8,4,4], [7,4,3], [1,4,6]])
layer_3 = np.array([[2,4,3], [8,4,2], [9,4,3]])
dip_pos_1 = np.array([7,4,7])#- np.array([5,5,4]))/8+0.5
dip_pos_2 = np.array([2.,4,4])
dip_pos_3 = np.array([1,4,1])
dip_angle_1 = float(15)
dip_angle_2 = float(340)
dip_angle_3 = float(80)
dips_angles = np.asarray([dip_angle_1, dip_angle_2, dip_angle_3], dtype="float64")
layers = np.asarray([layer_1,layer_2, layer_3])
azimuths = np.asarray([90,90, 90], dtype="float64")
polarity = np.asarray([1,1, 1], dtype="float64")
two_pot.Data.Interfaces = pn.DataFrame(data = {"X" :np.hstack((layer_1[:, 0],layer_2[:,0], layer_3[:,0])),
"Y" :np.hstack((layer_1[:, 1],layer_2[:,1], layer_3[:,1])),
"Z" :np.hstack((layer_1[:, 2],layer_2[:,2], layer_3[:,2])),
"formation" : np.hstack((np.tile("Layer 1", len(layer_1)), np.tile("Layer 2", len(layer_2)),
np.tile("Layer 3", len(layer_3))))})
two_pot.Data.Foliations = pn.DataFrame(data = {"X" :np.hstack((dip_pos_1[0], dip_pos_2[0], dip_pos_3[0])),
"Y" :np.hstack((dip_pos_1[ 1],dip_pos_2[1], dip_pos_3[1])),
"Z" :np.hstack((dip_pos_1[ 2],dip_pos_2[2], dip_pos_3[2])),
"azimuth" : azimuths,
"dip" : dips_angles,
"polarity" : polarity,
"formation" : ["Layer 1", "Layer 2", 'Layer 3']})
inter_labels =[r'${\bf{x}}_{\alpha \, 0}^1$',
r'${\bf{x}}_{\alpha \, 1}^1$',
r'${\bf{x}}_{\alpha \, 2}^1$',
r'${\bf{x}}_{\alpha \, 3}^1$',
r'${\bf{x}}_{\alpha \, 0}^2$',
r'${\bf{x}}_{\alpha \, 1}^2$',
r'${\bf{x}}_{\alpha \, 2}^2$',
r'${\bf{x}}_{\alpha \, 3}^2$',
r'${\bf{x}}_{\alpha \, 4}^2$',
r'${\bf{x}}_{\alpha \, 0}^1$',
r'${\bf{x}}_{\alpha \, 1}^1$',
r'${\bf{x}}_{\alpha \, 2}^1$']
foli_labels =[r'${\bf{x}}_{\beta \,{0}}$',
r'${\bf{x}}_{\beta \,{1}}$',
r'${\bf{x}}_{\beta \,{0}}$']
two_pot.Data.Interfaces['labels'] = pn.Series(inter_labels)
two_pot.Data.Foliations['labels'] = pn.Series(foli_labels)
two_pot.Data.formations = two_pot.Data.Interfaces["formation"].unique()
two_pot.Data.calculate_gradient()
two_pot.Data.set_series({'younger': ('Layer 1', 'Layer 2'),
'older': 'Layer 3'}, order = ['younger', 'older'])
two_pot.update_data()
def annotate_plot(frame, label_col, x, y, **kwargs):
"""Annotate the plot of a given DataFrame using one of its columns
Should be called right after a DataFrame or series plot method,
before telling matplotlib to show the plot.
Parameters
----------
frame : pandas.DataFrame
plot_col : str
The string identifying the column of frame that was plotted
label_col : str
The string identifying the column of frame to be used as label
kwargs:
Other key-word args that should be passed to plt.annotate
Returns
-------
None
Notes
-----
After calling this function you should call plt.show() to get the
results. This function only adds the annotations, it doesn't show
them.
"""
import matplotlib.pyplot as plt # Make sure we have pyplot as plt
for label, x, y in zip(frame[label_col], frame[x], frame[y]):
plt.annotate(label, xy=(x+0.2, y+0.15), **kwargs)
serie_to_plot = 'older'
two_pot.Plot.plot_data(series = serie_to_plot)
annotate_plot(two_pot.Data.Interfaces[two_pot.Data.Interfaces['series'] == serie_to_plot]
, 'labels','X', 'Z', size = 'x-large')
annotate_plot(two_pot.Data.Foliations[two_pot.Data.Foliations['series'] == serie_to_plot],
'labels','X', 'Z', size = 'x-large')
two_pot.create_grid()
two_pot.set_interpolator(u_grade=0, verbose = 0)
two_pot.Plot.plot_potential_field(4, n_pf=1, direction='y', colorbar = True, cmap = 'magma')
# Reset the block
two_pot.Interpolator.block.set_value(np.zeros_like(two_pot.Grid.grid[:,0]))
two_pot.Interpolator.compute_block_model(series_number=[1])
two_pot.Plot.plot_block_section()
# Reset the block
two_pot.Interpolator.block.set_value(np.zeros_like(two_pot.Grid.grid[:,0]))
two_pot.Interpolator.compute_block_model(series_number=[0,1])
two_pot.Plot.plot_block_section()
plot_potential_var(10,10**2 / 14 / 3 , 0.01)
plot_all(10,10**2 / 14 / 3 , 0.01) # 0**2 /14/3
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The binomial distribution is closely related to the normal distribution (aka Gaussian distribution)
Step2: Could we figure this out analytically?
Step3: How can we use the mean and standard deviation to estimate the probability?
|
<ASSISTANT_TASK:>
Python Code:
import random
results = []
for trial in xrange(10000):
heads = 0
for i in xrange(100):
flip = random.randint(0,1)
if (flip == 0):
heads += 1
results.append(heads)
print results[1:10]
import matplotlib.pyplot as plt
plt.figure()
plt.hist(results)
plt.show()
## Plot the histogram using integer values by creating more bins
plt.figure()
plt.hist(results, bins=range(100))
plt.title("Using integer values")
plt.show()
## Plot the density function, notice bars sum to exactly 1
## Also make the plot bigger
plt.figure(figsize=(15,6))
plt.hist(results, bins=range(100), normed=True)
plt.title("coin flip densities")
plt.show()
flips_mean = float(sum(results)) / len(results)
print flips_mean
## the numpy package has lots of useful routines: http://www.numpy.org/
import numpy as np
mean = np.mean(results)
print mean
## we could code standard deviation by hand, but numpy makes it easier
stdev=np.std(results)
print stdev
## Overlay a normal distribution on top of the coin flip data
plt.figure(figsize=(15,6))
count, bins, patches = plt.hist(results, bins=range(100), normed=True, label="coin flip histogram")
plt.plot(bins, 1/(stdev * np.sqrt(2 * np.pi)) *
np.exp( - (bins - mean)**2 / (2 * stdev**2) ),
linewidth=3, color='red', label="normal distribution")
plt.title("Coin flip densities with normal distribution overlay")
plt.legend()
plt.show()
prob_heads = .5
num_flips = 100
num_heads = 25
prob_flips = np.math.factorial(num_flips) / \
(np.math.factorial(num_heads) * np.math.factorial(num_flips-num_heads)) * \
(prob_heads**num_heads) * ((1-prob_heads)**(num_flips-num_heads))
print "The probability of seeing %d heads in %d flips is %.015f" % (num_heads, num_flips, prob_flips)
## Another super useful package is scipy
import scipy.stats
sp_prob = scipy.stats.binom.pmf(num_heads, num_flips, prob_heads)
print "scipy computed it as %0.15f" % sp_prob
## normal approximatation
print scipy.stats.norm(50, 5).pdf(25)
## Overlay a normal distribution on top of the coin flip data
plt.figure(figsize=(15,6))
count, bins, patches = plt.hist(results, bins=range(100), normed=True, label="coin flip histogram")
plt.plot(bins, scipy.stats.binom.pmf(bins, num_flips, prob_heads),linewidth=3, color='red', label="binomial distribution")
plt.plot(bins, scipy.stats.norm(50,5).pdf(bins),linewidth=3, color='green', linestyle='--', label="normal distribution")
plt.title("Coin flip densities with normal distribution overlay")
plt.legend()
plt.show()
expected_mean = num_flips * prob_heads
expected_stdev = np.math.sqrt(num_flips * prob_heads * (1 - prob_heads))
print "In %d flips, with a probability %.02f" % (num_flips, prob_heads)
print "The expected frequency is %.02f +/- %.02f" % (expected_mean, expected_stdev)
print "The observed frequency was %0.2f +/- %0.2f" % (mean, stdev)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: x1 and x2 have spectral peaks at 40 Hz
Step2: Spectral Granger Causality
Steps
Step3: Compare with nitime version
Step4: Non-parametric version
Steps
Step5: Wilson spectral matrix factorization
|
<ASSISTANT_TASK:>
Python Code:
time_extent = (0, .250)
num_trials = 500
sampling_frequency = 200
num_time_points = ((time_extent[1] - time_extent[0]) * sampling_frequency) + 1
time = np.linspace(time_extent[0], time_extent[1], num=num_time_points, endpoint=True)
signal_shape = (len(time), num_trials)
np.random.seed(2)
def simulate_arma_model(ar_coefficients, ma_coefficients=None,
signal_shape=(100,1), sigma=1, axis=0, num_burnin_samples=10):
ar = np.r_[1, -ar_coefficients] # add zero-lag and negate
if ma_coefficients is None:
ma = np.asarray([1])
else:
ma = np.r_[1, ma_coefficients] # add zero-lag
# Add burnin samples to shape
signal_shape = list(signal_shape)
signal_shape[axis] += num_burnin_samples
# Get arma process
white_noise = np.random.normal(0, sigma, size=signal_shape)
signal = scipy.signal.lfilter(ma, ar, white_noise, axis=axis)
# Return non-burnin samples
slc = [slice(None)] * len(signal_shape)
slc[axis] = slice(num_burnin_samples, signal_shape[axis], 1)
return signal[slc]
ar1 = np.array([.55, -0.70])
x1 = simulate_arma_model(ar1, signal_shape=signal_shape, sigma=1, num_burnin_samples=sampling_frequency)
arima_model.ARMA(x1[:, 0], (2,0)).fit(trend='nc', disp=0).summary()
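# Why these coefficients give a peak near 40 Hz (illustrative check): the complex roots of
# r**2 - 0.55*r + 0.70 = 0 have argument theta, and the AR(2) spectrum peaks near
# theta / (2 * pi) * sampling_frequency.
ar_roots = np.roots([1, -ar1[0], -ar1[1]])
print(np.abs(np.angle(ar_roots[0])) / (2 * np.pi) * sampling_frequency)  # close to 40 Hz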
ar2 = np.array([.56, -0.75])
x2 = simulate_arma_model(ar2, signal_shape=signal_shape, sigma=2, num_burnin_samples=sampling_frequency)
arima_model.ARMA(x2[:, 0], (2,0)).fit(trend='nc', disp=0).summary()
x2[1:, :] = 0.60 * x1[:-1, :]
psd1 = spectral.multitaper_power_spectral_density(x1,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=1,
desired_frequencies=[0, 100])
psd2 = spectral.multitaper_power_spectral_density(x2,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=1,
desired_frequencies=[0, 100])
fig, axes = plt.subplots(1,2, figsize=(4,3), sharex=True, sharey=True)
psd1.plot(ax=axes[0], legend=False)
axes[0].set_ylabel('Power')
axes[0].set_title('x1')
axes[0].axvline(40, color='black', linestyle=':')
psd2.plot(ax=axes[1], legend=False)
axes[1].axvline(40, color='black', linestyle=':')
axes[1].set_title('x2')
plt.tight_layout()
# Step 1
centered_x1 = spectral._subtract_mean(x1)
centered_x2 = spectral._subtract_mean(x2)
x = np.concatenate((centered_x1[..., np.newaxis],
centered_x2[..., np.newaxis]),
axis=-1)
num_lfps = x.shape[-1]
# Step 2
order = 3
fit = [alg.MAR_est_LWR(x[:, trial, :].T, order)
for trial in np.arange(x1.shape[1])]
# A shape: order-1 x num_lfps x num_lfps
# cov shape: num_lfps x num_lfps
# Step 3
Sigma = np.mean([trial_fit[1] for trial_fit in fit], axis=0)
# Step 4
A = np.mean([trial_fit[0] for trial_fit in fit], axis=0)
# Step 5
pad = 0
number_of_time_samples = int(num_time_points)
next_exponent = spectral._nextpower2(number_of_time_samples)
number_of_fft_samples = max(
2 ** (next_exponent + pad), number_of_time_samples)
half_of_fft_samples = number_of_fft_samples//2 - 1
A_0 = np.concatenate((np.eye(num_lfps)[np.newaxis, :, :], A)).reshape((order, -1))
B = np.zeros((A_0.shape[-1], half_of_fft_samples), dtype='complex')
for coef_ind in np.arange(A_0.shape[-1]):
normalized_freq, B[coef_ind, :] = scipy.signal.freqz(A_0[:, coef_ind],
worN=half_of_fft_samples)
B = B.reshape((num_lfps, num_lfps, half_of_fft_samples))
H = np.zeros_like(B)
for freq_ind in np.arange(half_of_fft_samples):
H[:, :, freq_ind] = np.linalg.inv(B[:, :, freq_ind])
freq = (normalized_freq * sampling_frequency) / (2 * np.pi)
# Step 6
S = np.zeros_like(H)
for freq_ind in np.arange(H.shape[-1]):
S[:, :, freq_ind] = np.linalg.multi_dot(
[H[:, :, freq_ind],
Sigma,
H[:, :, freq_ind].conj().transpose()])
S = np.abs(S)
I12 = -np.log(1 - ((Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1]) * np.abs(H[1, 0])**2) / S[1, 1])
# I12 = np.log( S[1, 1] / (S[1,1] - (Sigma[0, 0] - Sigma[0, 1]**2 / Sigma[1, 1]) * np.abs(H[1, 0])**2))
I21 = -np.log(1 - ((Sigma[1, 1] - Sigma[0, 1]**2 / Sigma[0, 0]) * np.abs(H[0, 1])**2) / S[0, 0])
plt.plot(freq, I12, label='x1 → x2')
plt.plot(freq, I21, label='x2 → x1')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Granger Causality')
plt.legend();
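# Quick sanity check: since x2 is a lagged, scaled copy of x1, the directed
# influence x1 -> x2 should dominate x2 -> x1, peaking near the 40 Hz spectral peak.
peak_band = (freq > 35) & (freq < 45)
print('mean GC x1 -> x2 in 35-45 Hz band:', I12[peak_band].mean())
print('mean GC x2 -> x1 in 35-45 Hz band:', I21[peak_band].mean())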
order = 3
fit = [alg.MAR_est_LWR(x[:, trial, :].T, order)
for trial in np.arange(x1.shape[1])]
Sigma = np.mean([trial_fit[1] for trial_fit in fit], axis=0)
A = np.mean([trial_fit[0] for trial_fit in fit], axis=0)
normalized_freq, f_x2y, f_y2x, f_xy, Sw = alg.granger_causality_xy(A, Sigma, n_freqs=number_of_fft_samples)
freq = (normalized_freq * sampling_frequency) / (2 * np.pi)
plt.plot(freq, f_x2y, label='x1 → x2')
plt.plot(freq, f_y2x, label='x2 → x1')
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Granger Causality')
plt.legend();
# Step 1
def get_complex_spectrum(data,
sampling_frequency=1000,
time_halfbandwidth_product=3,
pad=0,
tapers=None,
frequencies=None,
freq_ind=None,
number_of_fft_samples=None,
number_of_tapers=None,
desired_frequencies=None):
complex_spectrum = spectral._multitaper_fft(
tapers, spectral._center_data(data), number_of_fft_samples, sampling_frequency)
return np.nanmean(complex_spectrum[freq_ind, :, :], axis=(1, 2)).squeeze()
data = [x1, x2]
num_signals = len(data)
time_halfbandwidth_product = 1
tapers, number_of_fft_samples, frequencies, freq_ind = spectral._set_default_multitaper_parameters(
number_of_time_samples=data[0].shape[0],
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=time_halfbandwidth_product)
complex_spectra = [get_complex_spectrum(
signal,
sampling_frequency=sampling_frequency,
time_halfbandwidth_product=time_halfbandwidth_product,
tapers=tapers,
frequencies=frequencies,
freq_ind=freq_ind,
number_of_fft_samples=number_of_fft_samples) for signal in data]
S = np.stack([np.conj(complex_spectrum1) * complex_spectrum2
for complex_spectrum1, complex_spectrum2
in itertools.product(complex_spectra, repeat=2)])
num_frequencies = S.shape[-1]  # number of retained frequency bins
S = S.reshape((num_signals, num_signals, num_frequencies))
A0 = np.random.normal(size=(num_signals, 1000))
A0 = np.dot(A0, A0.T) / 1000;
A0 = np.linalg.cholesky(A0).T
num_two_sided_frequencies = 2 * num_frequencies - 1
Psi = np.zeros((num_signals, num_signals, num_two_sided_frequencies), dtype=complex)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading and Preprocessing the Images with OpenCV
Step2: Building a Convnet in Keras
|
<ASSISTANT_TASK:>
Python Code:
data = pd.read_csv('data/driving_log.csv', header=None,
names=['center', 'left', 'right', 'angle', 'throttle', 'break', 'speed'])
print(data.ix[0].center)
data.sample()
def img_id(path):
return path.split('/IMG/')[1]
image_paths = data.center.apply(img_id).values.tolist()
image_paths[:5]
# y_all = data[['angle', 'throttle']].values
y_all = data.angle.values
n_samples = y_all.shape[0]
print("Training Model with {} Samples".format(n_samples))
def read_image(path):
img = cv2.imread(path, cv2.IMREAD_COLOR)
img = img[40:160, 0:320] ## Cropping top section of image, just useless noise
# img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
# img = np.expand_dims(img, axis=2)
    return img[:,:,::-1]  ## flip OpenCV's BGR channel order to RGB
# ROWS, COLS, CHANNELS and DIR are assumed values: 120x320x3 matches the crop in
# read_image, and DIR is assumed to point at the folder that holds the IMG files.
ROWS, COLS, CHANNELS = 120, 320, 3
DIR = 'data/IMG/'
X_all = np.ndarray((n_samples, ROWS, COLS, CHANNELS), dtype=np.uint8)
for i, path in enumerate(image_paths):
img = read_image(DIR+path)
X_all[i] = img
print(X_all.shape)
for img in X_all[:3]:
plt.imshow(img)
plt.show()
X_train, X_test, y_train, y_test = train_test_split(
X_all, y_all, test_size=0.20, random_state=23)
def fit_gen(data, batch_size):
while 1:
x = np.ndarray((batch_size, ROWS, COLS, CHANNELS), dtype=np.uint8)
y = np.zeros(batch_size)
i=0
for line in data.iterrows():
path = line[1].center.split('/IMG/')[1]
x[i] = read_image(DIR+path)
y[i] = line[1].angle
i+=1
if i == batch_size:
i=0
yield (x, y)
x = np.ndarray((batch_size, ROWS, COLS, CHANNELS), dtype=np.uint8)
y = np.zeros(batch_size)
def rmse(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def get_model():
lr = 0.0001
weight_init='glorot_normal'
opt = RMSprop(lr)
loss = 'mean_squared_error'
model = Sequential()
model.add(BatchNormalization(mode=2, axis=1, input_shape=(ROWS, COLS, CHANNELS)))
model.add(Convolution2D(3, 3, 3, init=weight_init, border_mode='valid', activation='relu', input_shape=(ROWS, COLS, CHANNELS)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(9, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(18, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(32, 3, 3, init=weight_init, border_mode='valid', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(80, activation='relu', init=weight_init))
# model.add(Dropout(0.5))
model.add(Dense(15, activation='relu', init=weight_init))
model.add(Dropout(0.25))
model.add(Dense(1, init=weight_init, activation='linear'))
model.compile(optimizer=opt, loss=loss)
return model
model = get_model()
model.summary()
nb_epoch = 30
batch_size = 64
### Creating Validation Data
X_train, X_test, y_train, y_test = train_test_split(
X_all, y_all, test_size=0.20, random_state=23)
# Callbacks
early_stopping = EarlyStopping(monitor='val_loss', patience=8, verbose=1, mode='auto')
save_weights = ModelCheckpoint('new_model.h5', monitor='val_loss', save_best_only=True)
model.fit_generator(fit_gen(data, 32),
samples_per_epoch=data.shape[0], nb_epoch=nb_epoch,
validation_data=(X_test, y_test), callbacks=[save_weights, early_stopping])
model.fit(X_all, y_all, batch_size=batch_size, nb_epoch=nb_epoch,
validation_data=(X_test, y_test), verbose=1, shuffle=True, callbacks=[save_weights, early_stopping])
preds = model.predict(X_test, verbose=1)
print( "Test MSE: {}".format(mean_squared_error(y_test, preds)))
print( "Test RMSE: {}".format(np.sqrt(mean_squared_error(y_test, preds))))
js = model.to_json()
with open('model.json', 'w') as outfile:
json.dump(js, outfile)
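# A minimal sketch of reloading the saved architecture later (assumes the same
# Keras 1.x API used above; the weights checkpointed to 'new_model.h5' can be
# restored separately if they are needed for inference).
from keras.models import model_from_json
with open('model.json') as infile:
    reloaded = model_from_json(json.load(infile))
reloaded.compile(optimizer='rmsprop', loss='mean_squared_error')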
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Noisy Data
Step2: Model Fitting
Step3: Two-peaks model
Step4: Fit results from an lmfit Model can be inspected with fit_report() or params.pretty_print()
Step5: This is good for peeking at the results. However, extracting these values from lmfit objects by hand is tedious, which is where pybroom's tidy DataFrames come in
Step6: A summary of the one-peak model fit
Step7: Tidy
Step8: Let's see the results for a single dataset
Step9: or for a single parameter across datasets
Step10: Note that there is a much larger error in fitting p2_center
Step11: Augment
Step12: Let's see the results for a single dataset
Step13: Plotting a single dataset is simplified compared to a manual plot
Step14: But keep in mind that, for a single dataset, we could simply use lmfit's built-in plot_fit() method
Step15: However, things become much more interesting when we want to plot multiple datasets at once
Step16: Comparison of one-peak and two-peak models
Step17: Then we perform a facet plot with seaborn
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from lmfit import Model
import lmfit
print('lmfit: %s' % lmfit.__version__)
sns.set_style('whitegrid')
import pybroom as br
N = 20
x = np.linspace(-10, 10, 101)
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
#params = model.make_params(p1_amplitude=1.5, p2_amplitude=1,
# p1_sigma=1, p2_sigma=1)
Y_data = np.zeros((N, x.size))
Y_data.shape
for i in range(Y_data.shape[0]):
Y_data[i] = model.eval(x=x, p1_center=-1, p2_center=2,
p1_sigma=0.5, p2_sigma=1.5,
p1_height=1, p2_height=0.5)
Y_data += np.random.randn(*Y_data.shape)/10
plt.plot(x, Y_data.T, '-k', alpha=0.1);
model1 = lmfit.models.GaussianModel()
Results1 = [model1.fit(y, x=x) for y in Y_data]
params = model.make_params(p1_center=0, p2_center=3,
p1_sigma=0.5, p2_sigma=1,
p1_amplitude=1, p2_amplitude=2)
Results = [model.fit(y, x=x, params=params) for y in Y_data]
#print(Results[0].fit_report())
#Results[0].params.pretty_print()
dg = br.glance(Results, var_names='dataset')
dg.drop('model', 1).drop('message', 1).head()
dg1 = br.glance(Results1, var_names='dataset')
dg1.drop('model', 1).drop('message', 1).head()
dt = br.tidy(Results, var_names='dataset')
dt.query('dataset == 0')
dt.query('name == "p1_center"').head()
dt.query('name == "p1_center"')['value'].std()
dt.query('name == "p2_center"')['value'].std()
dt.query('name == "p1_center"')['value'].hist()
dt.query('name == "p2_center"')['value'].hist(ax=plt.gca());
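# The tidy format also makes per-parameter summaries across all datasets a
# one-liner (the 'name' and 'value' columns are the same ones queried above).
dt.groupby('name')['value'].agg(['mean', 'std'])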
da = br.augment(Results, var_names='dataset')
da1 = br.augment(Results1, var_names='dataset')
r = Results[0]
da.query('dataset == 0').head()
da0 = da.query('dataset == 0')
plt.plot('x', 'data', data=da0, marker='o', ls='None')
plt.plot('x', "Model(gaussian, prefix='p1_')", data=da0, lw=2, ls='--')
plt.plot('x', "Model(gaussian, prefix='p2_')", data=da0, lw=2, ls='--')
plt.plot('x', 'best_fit', data=da0, lw=2);
plt.legend()
Results[0].plot_fit();
grid = sns.FacetGrid(da.query('dataset < 6'), col="dataset", hue="dataset", col_wrap=3)
grid.map(plt.plot, 'x', 'data', marker='o', ls='None', ms=3, color='k')
grid.map(plt.plot, 'x', "Model(gaussian, prefix='p1_')", ls='--')
grid.map(plt.plot, 'x', "Model(gaussian, prefix='p2_')", ls='--')
grid.map(plt.plot, "x", "best_fit");
da['model'] = 'twopeaks'
da1['model'] = 'onepeak'
da_tot = pd.concat([da, da1], ignore_index=True)
grid = sns.FacetGrid(da_tot.query('dataset < 6'), col="dataset", hue="model", col_wrap=3)
grid.map(plt.plot, 'x', 'data', marker='o', ls='None', ms=3, color='k')
grid.map(plt.plot, "x", "best_fit")
grid.add_legend();
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create A Simple Object
|
<ASSISTANT_TASK:>
Python Code:
import datetime
class Regiment(object):
    def __init__(self, date=None):
        # Evaluate "now" at call time; a default of datetime.datetime.now() in the
        # signature would be frozen at class-definition time.
        self.date = date if date is not None else datetime.datetime.now()
    def __repr__(self):
        # __repr__ must return a string and use the instance attribute
        return 'Regiment(date={!r})'.format(self.date)
    def __str__(self):
        return str(self.date)
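# Example usage of the class above
r = Regiment()
print(str(r))   # __str__: the stored timestamp as text
print(repr(r))  # __repr__: an unambiguous representation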
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Retrieving a DataFrame
|
<ASSISTANT_TASK:>
Python Code:
from pydrill.client import PyDrill
from pydrill.exceptions import ImproperlyConfigured  # raised below; assumed to live in pydrill.exceptions as in the pydrill docs
#Open a connection to Drill
drill = PyDrill(host='localhost', port=8047)
#Verify the connection is active, throw an error if not.
if not drill.is_active():
raise ImproperlyConfigured('Please run Drill first')
#Execute query in Drill
query_result = drill.query('''
SELECT JobTitle,
AVG( CAST( LTRIM( AnnualSalary, '$' ) AS FLOAT) ) AS avg_salary,
COUNT( DISTINCT name ) AS number
FROM dfs.drillworkshop.`*.csvh`
GROUP BY JobTitle
Order By avg_salary DESC
LIMIT 10
''')
#Iterate through the rows.
for row in query_result:
print( row )
df = query_result.to_dataframe()
df.head()
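# A quick visual check of the result; Drill may return numeric columns as
# strings, so cast before plotting (column names come from the query above).
import matplotlib.pyplot as plt
df['avg_salary'] = df['avg_salary'].astype(float)
df.plot(x='JobTitle', y='avg_salary', kind='barh', legend=False)
plt.xlabel('Average annual salary')
plt.show()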
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step18: Build the network
Step21: Convolution and Max Pooling Layer
Step24: Flatten Layer
Step27: Fully-Connected Layer
Step30: Output Layer
Step33: Create Convolutional Model
Step34: Train the Neural Network
Step37: x = np.zeros((100,2,1,5))
Step39: Show Stats
Step40: Hyperparameters
Step42: Train on a Single CIFAR-10 Batch
Step45: Checkpoint
|
<ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 18
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
from skimage import color
# def to_hsi(x):
# r = x[:,:,:,0]
# g = x[:,:,:,1]
# b = x[:,:,:,2]
# theta = np.acos((0.5 * ((r-g) + (r-b))) / (((r-g)**2) + (r-b *)))
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
x = x / 255.0
for i in range(x.shape[0]):
x[i,:,:,:] = color.rgb2hsv(x[i,:,:,:])
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
m = (x.shape[0] if hasattr(x, 'shape') else len(x))
rv = np.zeros((m, 10))
rv[range(m),x] = 1
return rv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
%%time
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
tensor_shape = tuple([None] + list(image_shape))
return tf.placeholder(tf.float32, shape=tensor_shape, name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=(None,n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
def neural_net_learn_rate_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='learn_rate')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
conv_ksize = list(conv_ksize)
conv_strides = list(conv_strides)
pool_ksize = list(pool_ksize)
pool_strides = list(pool_strides)
conv_num_inputs = int(x_tensor.shape[3])
conv_weights = tf.Variable(tf.truncated_normal(
conv_ksize + [conv_num_inputs,conv_num_outputs], stddev=0.01))
conv_strides = [1] + conv_strides + [1]
rv = tf.nn.conv2d(x_tensor, conv_weights, conv_strides, 'SAME')
conv_bias = tf.Variable(tf.zeros(rv.shape[1:]))
rv = tf.nn.relu(rv + conv_bias)
pool_ksize = [1] + pool_ksize + [1]
pool_strides = [1] + pool_strides + [1]
rv = tf.nn.max_pool(rv, pool_ksize, pool_strides, 'SAME')
return rv
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
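# With 'SAME' padding each stage shrinks the spatial size to ceil(input / stride),
# so a 32x32x3 input with conv stride 2 followed by 2x2 pooling (stride 2) should
# come out as 8x8 with 10 channels. A quick shape check of conv2d_maxpool:
test_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
test_out = conv2d_maxpool(test_x, 10, (4, 4), (2, 2), (2, 2), (2, 2))
print(test_out.get_shape())  # expected: (?, 8, 8, 10)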
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
dimensions = [int(x_tensor.shape[1]), num_outputs]
weights = tf.Variable(tf.truncated_normal(dimensions, stddev=0.01))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.nn.relu(tf.matmul(x_tensor, weights) + bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
dimensions = [int(x_tensor.shape[1]), num_outputs]
weights = tf.Variable(tf.truncated_normal(dimensions, stddev=0.01))
bias = tf.Variable(tf.zeros(num_outputs))
return tf.matmul(x_tensor, weights) + bias
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
#with tf.device('/gpu:0'):
x_tensor = x
x_tensor = conv2d_maxpool(x_tensor, 128, [8,8], [1,1], [2,2], [1,1])
x_tensor = conv2d_maxpool(x_tensor, 128, [4,4], [2,2], [2,2], [2,2])
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
#with tf.device('/gpu:1'):
x_tensor = conv2d_maxpool(x_tensor, 64, [4,4], [1,1], [2,2], [2,2])
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = tf.nn.dropout(fully_conn(x_tensor, 512), keep_prob)
x_tensor = fully_conn(x_tensor, 256)
x_tensor = tf.nn.dropout(fully_conn(x_tensor, 256), keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
x_tensor = output(x_tensor, 10)
# TODO: return output
return x_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
# x = (neural_net_image_input((32, 32, 3)), neural_net_image_input((32, 32, 3)))
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
learn_rate = neural_net_learn_rate_input()
# Model
logits = []
with tf.variable_scope(tf.get_variable_scope()):
logits.append(conv_net(x, keep_prob))
logits = tf.concat(logits, 0)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
# def split(t,n=2):
# if n == 1: return (t,)
# dimSize = int(t.shape[0])
# partSize = dimSize/n
# maxIdx = int(partSize)
# rv = [t[:maxIdx,...]]
# for i in range(n-2):
# myMin = int(maxIdx)
# nextMax = min(dimSize,float(maxIdx)+partSize)
# myMax = int(nextMax)
# rv.append(t[myMin:myMax,...])
# maxIdx = nextMax
# rv.append(t[int(maxIdx):,...])
# return tuple(rv)
def train_neural_network(session, optimizer, keep_probability,
feature_batch, label_batch, epoch=0):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
if epoch < 125:
learning_rate=0.001
elif epoch < 175:
learning_rate=0.0003
elif epoch < 225:
learning_rate=0.0001
else:
learning_rate=0.00003
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch,
learn_rate: learning_rate,
keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
import datetime
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
train_cost, train_acc = session.run((cost, accuracy),
feed_dict={x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_cost, valid_acc = session.run((cost, accuracy),
feed_dict={x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print("Training loss: {0:.02}, accuracy: {1:.02}".format(train_cost, train_acc))
print(datetime.datetime.now(),"Validation loss: {0:.02}, accuracy: {1:.02}".format(valid_cost, valid_acc))
return valid_acc
# TODO: Tune Parameters
epochs = 250
batch_size = 256
keep_probability = 0.4
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
full_val_accuracy = []
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels, epoch)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
full_val_accuracy.append(print_stats(sess, batch_features, batch_labels, cost, accuracy))
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
fig, axis = plt.subplots(figsize=(13,13))
axis.plot(np.array(range(len(full_val_accuracy)))/5, full_val_accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# file_writer = tf.summary.FileWriter('tensorboard', sess.graph)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch,
loaded_y: test_label_batch,
loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create an environment
Step2: Notebooks
Step3: As you can see, there are no semicolons at the end of each statement; an end of line is enough. And instead of printf it uses print, which is much simpler.
Step4: Visualizing data
Step5: A more complete example
Step6: Using Machine Learning
Step7: First we plot the points
Step8: We apply k-means
Step9: Detecting a criterion to classify the orchids (the iris dataset)
|
<ASSISTANT_TASK:>
Python Code:
#from IPython.display import HTML
#HTML('''<script>
#code_show=true;
#function code_toggle() {
# if (code_show){
# $('div.input').hide();
# } else {
# $('div.input').show();
# }#
# code_show = !code_show
#}
#$( ocument ).ready(code_toggle);
#</script>
#The raw code for this IPython notebook is by default hidden for easier reading.
#To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''')
from IPython.display import YouTubeVideo
YouTubeVideo("qb7FT68tcA8")
print("Hola a todos")
sumcars = 0
sumwords = 0
for word in ['hola', 'a', 'todos']:
print("Frase: ", word)
sumcars += len(word)
sumwords += 1
print("Se han mostrado ", sumwords, " palabras y ", sumcars, " caracteres")
%pylab inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(30)
plt.plot(x, x**2)
# example with a legend and latex symbols
fig, ax = plt.subplots()
ax.plot(x, x**2, label=r"$y = \alpha^2$")
ax.plot(x, x**3, label=r"$y = \alpha^3$")
ax.legend(loc=2) # upper left corner
ax.set_xlabel(r'$\alpha$', fontsize=18)
ax.set_ylabel(r'$y$', fontsize=18)
ax.set_title('Ejemplo más completo');
import sklearn.datasets
import sklearn.cluster
import matplotlib.pyplot as plot
# Creamos los puntos
n = 1000
k = 4
# Generate fake data
data, labels = sklearn.datasets.make_blobs(
n_samples=n, n_features=2, centers=k)
plot.scatter(data[:, 0], data[:, 1])
# scikit-learn
kmeans = sklearn.cluster.KMeans(k, max_iter=300)
kmeans.fit(data)
means = kmeans.cluster_centers_
plot.scatter(data[:, 0], data[:, 1], c=labels)
plot.scatter(means[:, 0], means[:, 1], linewidths=2, color='r')
plot.show()
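# The fitted estimator also exposes the within-cluster sum of squares (inertia)
# and can assign new points to the learned clusters.
print(kmeans.inertia_)
print(kmeans.predict(data[:5]))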
import seaborn as sns
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris, hue="species")
g.map(plt.scatter);
g = g.add_legend()
from sklearn import datasets
# load the iris dataset
iris = datasets.load_iris()
# start with the first two features: sepal length (cm) and sepal width (cm)
X = iris.data[:100,:2]
# save the target values as y
y = iris.target[:100]
# Define bounds on the X and Y axes
X_min, X_max = X[:,0].min()-.5, X[:,0].max()+.5
y_min, y_max = X[:,1].min()-.5, X[:,1].max()+.5
for target in set(y):
x = [X[i,0] for i in range(len(y)) if y[i]==target]
z = [X[i,1] for i in range(len(y)) if y[i]==target]
plt.scatter(x,z,color=['red','blue'][target], label=iris.target_names[:2][target])
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
plt.xlim(X_min,X_max)
plt.ylim(y_min,y_max)
plt.title('Scatter Plot of Sepal Length vs. Sepal Width')
plt.legend(iris.target_names[:2], loc='lower right')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, generate some data to use for our fitting and plotting. This generates the same dataset as in the AstroML demo.
Step2: Component Number Selection
Step3: Cross-Validation
Step4: We can see here that the cross-validation test prefers the maximum number of components (10) that we allowed for the model (and in fact, the score continues to rise as more components are added beyond 10). This is a result of the particular dataset being fit
Step5: Saving to and Reading from a File
Step6: Once the model is saved, it can be read into an XDGMM object using the read_model() function, or a new XDGMM object can be initialized directly from the saved model file. Note that if both a filename and model parameters are passed to the constructor, the parameters saved in the file will override those passed by the user.
Step7: Sampling from the Model
Step8: Conditioning the Model
Step9: Note how the number of components in the conditioned model is the same as in the original joint model, but that the weights of the components have changed, and the mu array is now 1-dimensional (since $y$ has been conditioned out).
Step10: As expected, the conditioning results are the same. Labels will also be saved to and read from files when the save_model() and read_model() functions are used.
Step11: If we sample 1000 points from this conditional distribution, we would get something like this
Step12: Conditional Prediction
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from xdgmm import XDGMM
from sklearn.model_selection import validation_curve
from sklearn.model_selection import ShuffleSplit
from demo_plots import *
'''
Due to AstroML still using the deprecated GMM class from
scikit-learn (instead of GaussianMixture), this demo will
throw numerous errors whenever the XDGMM object calls
an AstroML method, such as fit. The lines below will
suppress these warnings; comment them out to see everything.
This XDGMM class has been updated to use GaussianMixture
instead of GMM when necessary, but since it uses an AstroML
XDGMM object to store and manipulate the model, it is
dependent on AstroML. These warnings will continue to occur
until the XDGMM class from AstroML has been updated.
'''
import warnings
warnings.filterwarnings('ignore')
N = 2000
np.random.seed(0)
# generate the true data
x_true = (1.4 + 2 * np.random.random(N)) ** 2
y_true = 0.1 * x_true ** 2
# add scatter to "true" distribution
dx = 0.1 + 4. / x_true ** 2
dy = 0.1 + 10. / x_true ** 2
x_true += np.random.normal(0, dx, N)
y_true += np.random.normal(0, dy, N)
# add noise to get the "observed" distribution
dx = 0.2 + 0.5 * np.random.random(N)
dy = 0.2 + 0.5 * np.random.random(N)
x = x_true + np.random.normal(0, dx)
y = y_true + np.random.normal(0, dy)
# stack the results for computation
X = np.vstack([x, y]).T
Xerr = np.zeros(X.shape + X.shape[-1:])
diag = np.arange(X.shape[-1])
Xerr[:, diag, diag] = np.vstack([dx ** 2, dy ** 2]).T
# Instantiate an XDGMM model:
xdgmm = XDGMM()
# Define the range of component numbers, and get ready to compute the BIC for each one:
param_range = np.array([1,2,3,4,5,6,7,8,9,10])
# Loop over component numbers, fitting XDGMM model and computing the BIC:
bic, optimal_n_comp, lowest_bic = xdgmm.bic_test(X, Xerr, param_range)
plot_bic(param_range, bic, optimal_n_comp)
param_range = np.array([1,2,3,4,5,6,7,8,9,10])
shuffle_split = ShuffleSplit(3, test_size=0.3,random_state=0)
train_scores,test_scores = validation_curve(xdgmm, X=X, y=Xerr,
param_name="n_components",
param_range=param_range,
n_jobs=3,
cv=shuffle_split,
verbose=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plot_val_curve(param_range, train_scores_mean, train_scores_std, test_scores_mean, test_scores_std)
xdgmm.n_components = optimal_n_comp
xdgmm = xdgmm.fit(X, Xerr)
xdgmm.save_model('demo_model.fit')
# Read model into an existing XDGMM object
xdgmm.read_model('demo_model.fit')
# Initialize a new XDGMM object using the model
xdgmm2 = XDGMM(filename='demo_model.fit')
# Comparison --- the arrays should be the same.
print xdgmm.weights
print xdgmm2.weights
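# The component means should match as well (xdgmm.mu is inspected again below).
print np.allclose(xdgmm.mu, xdgmm2.mu)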
sample = xdgmm.sample(N)
plot_sample(x_true, y_true, x, y, sample, xdgmm)
cond_X = np.array([np.nan, 1.5])
cond_Xerr = np.array([0.0,0.05])
cond_xdgmm = xdgmm.condition(X_input = cond_X,Xerr_input = cond_Xerr)
# Compare the conditioned model to the original:
print xdgmm.weights
print cond_xdgmm.weights
print "\n"
print xdgmm.mu
print cond_xdgmm.mu
# First, set the labels in the XDGMM object
xdgmm.labels = np.array(['x','y'])
# The dictionary can pass either floats or tuples
cond_dict = {'y':(1.5,0.05)}
cond_xdgmm2 = xdgmm.condition(X_dict = cond_dict)
# Print the weights and means of the new model.
print cond_xdgmm2.weights
print cond_xdgmm2.mu
print cond_xdgmm2.labels
plot_cond_model(xdgmm, cond_xdgmm, 1.5)
cond_sample = cond_xdgmm.sample(1000)
y = np.ones(1000)*1.5
plot_cond_sample(cond_sample,y)
# Simulate a dataset:
true_sample = xdgmm.sample(1000)
true_x = true_sample[:,0]
y = true_sample[:,1]
# Predict x values given y values:
predicted_x = np.array([])
for this_y in y:
# Specify y-conditioning to apply to P(x,y):
on_this = np.array([np.nan,this_y])
# Compute conditional PDF P(x|y):
cond_gmm = xdgmm.condition(on_this)
# Draw a sample x value from this PDF, and add it to the growing list
predicted_x = np.append(predicted_x, cond_gmm.sample())
# Plot the two datasets, to compare the true x and the predicted x:
plot_conditional_predictions(y, true_x, predicted_x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Introduction
Step2: Mini-summary
Step3: Mini-summary
Step4: Mini-summary
Step5: Mini-summary
Step6: Mini-summary
Step7: Mini-summary
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import sys # system module
import numpy as np # scientific computing
import pandas as pd # data package
import matplotlib as mpl
import matplotlib.pyplot as plt # graphics module
#Step 1: Input Data
import pandas as pd #Use pandas to read data into Python from our computers.
path = '/Users/Haley/Desktop/Final_Project_Data.xlsx' #Read data with the complete path
sheet1 = pd.read_excel(path,
sheetname='Civilian Labor Force by Sex',
skip_footer =7,
index_col = 0)
print('Data types:\n\n', sheet1.dtypes,sep='')
print('Dimensions:', sheet1.shape)
sheet1.head()
#Step 2: Draw graphs
fig, ax = plt.subplots(2, 1, figsize=(8,8)) # create fig and ax objects
sheet11 = sheet1[['Number of women in the civilian labor force (in thousands)',
'Number of men in the civilian labor force (in thousands)']]
sheet11.plot(ax=ax[0],
kind='line', # line plot
color=['red', 'green'], # line color
alpha=0.65)
ax[0].legend(['Number of women in the civilian labor force',
'Number of men in the civilian labor force'],
fontsize=8,
loc=0)
ax[0].set_ylabel('Number in the civlian force in thousands')
ax[0].set_xlabel('Date')
ax[0].set_ylim(0)
ax[0].set_title('Civilian Labor Force by Sex (1948-2015)', fontsize=14, loc='left')
sheet12=sheet1[['Share of the civilian labor force who are women (percent)',
'Share of the civilian labor force who are men (percent)']]
sheet12.plot(ax=ax[1],
kind='line', # line plot
color=['red', 'green'], # line color
alpha=0.65)
ax[1].legend(['% of women in the civilian labor force',
'% of men in the civilian labor force'],
fontsize=8,
loc=0)
ax[1].set_ylabel('% in the civlian force')
ax[1].set_xlabel('Date')
ax[1].set_ylim(0)
ax[0].spines["top"].set_visible(False)
ax[0].spines["bottom"].set_visible(False)
ax[0].spines["right"].set_visible(False)
ax[0].spines["left"].set_visible(False)
ax[1].spines["top"].set_visible(False)
ax[1].spines["bottom"].set_visible(False)
ax[1].spines["right"].set_visible(False)
ax[1].spines["left"].set_visible(False)
#Step 1: Input Data
sheet2 = pd.read_excel(path,
sheetname='Labor Force Participation Rate',
skiprows = 1,
index_col = 0,
usecols =(range(3)) #only need the first three cols
)
sheet2
#Step 2: Draw a graph
plt.plot(sheet2.index, sheet2['All Women'])
plt.plot(sheet2.index, sheet2['All Men'])
plt.title('Labor Force Partipation Rate by Sex', fontsize=14, loc='left') # add title
plt.ylabel('Labor Force Participation Rate') # y axis label
plt.xlabel('Year') # y axis label
#Step 1: Input Data
sheet3 = pd.read_excel(path,
sheetname='Median annual earnings by sex',
skiprows = 2,
index_col = 0,
usecols =(range(3)) #only need the first three cols
)
sheet3
#To do list
#Draw a line graph
#calculate the difference in year 1960 and the difference in year 2014
#Step 2: Draw a graph
sheet3.plot(title='Median Annual Earnings by Sex', color=['r','g'])
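# Sketch of the to-do above: the earnings difference in the first (1960) and last
# (2014) year on record; the sign depends on which column holds men vs. women.
gap = sheet3.iloc[:, 0] - sheet3.iloc[:, 1]
print(gap.iloc[[0, -1]])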
#Step 1: Input Data
sheet4 = pd.read_excel(path,
sheetname='Participation Rate by Edu Sex',
skip_footer = 6,
index_col = 0
)
sheet4=sheet4[['Women','Men']]
#Step 2: Draw a graph
sheet4.plot(figsize=(17,6), ylim=(0,100), kind='bar', color=['red','g'],alpha=0.5,
title='Participation Rate by Edu Sex')
#Step 1: Input Data
sheet5 = pd.read_excel(path,
sheetname='Employed parents by status',
skip_footer = 4,
skiprows=1,
index_col = 0,
#usecols=['Age of youngest child','Percent of total employed of mothers','Percent of total employed of fathers']
# sheet5["Type"] == 'Full-time'
)
sheet16 = sheet5[sheet5['Type'] == 'Full-time']
sheet16
sheet17 = sheet5[sheet5['Type'] == 'Part-time']
sheet17
fig, ax = plt.subplots(2, 1, figsize=(14,14)) # create fig and ax objects
sheet16.plot(ax=ax[0],
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax[0].legend(['Mothers',
'Fathers'],
fontsize=10,
loc='center')
ax[0].set_ylabel('Percent of total employed')
ax[0].set_xlabel('Age of youngest child')
ax[0].set_ylim(0)
ax[0].set_title('Employed parents by full-time status, sex and age of youngest child, 2015 annual averages', fontsize=10, loc='left')
sheet17.plot(ax=ax[1],
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax[1].legend(['Mothers',
'Fathers'],
fontsize=10,
loc='center')
ax[1].set_ylabel('Percent of total employed')
ax[1].set_xlabel('Age of youngest child')
ax[1].set_ylim(0)
ax[1].set_title('Employed parents by part-time status, sex and age of youngest child, 2015 annual averages', fontsize=10, loc='left')
ax[0].spines["top"].set_visible(False)
ax[0].spines["right"].set_visible(False)
ax[0].spines["left"].set_visible(False)
ax[1].spines["top"].set_visible(False)
ax[1].spines["left"].set_visible(False)
fig, ax = plt.subplots(figsize=(12,4))
sheet16.plot(ax=ax,
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax.set_ylabel('Employment rates')
ax.set_xlabel('Age of the youngest child')
ax.set_ylim(0)
ax.set_title('Employment Rate of parents', fontsize=14)
ax.legend(fontsize=8,
loc=0)
ax.spines["top"].set_visible(False)
#Step 1: Input Data
sheet6 = pd.read_excel(path,
sheetname='Unemployment Rate of parents',
skip_footer = 4,
index_col = 0)
sheet6
#Step 2: Draw a graph
fig, ax = plt.subplots(figsize=(12,4))
sheet6.plot(ax=ax,
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax.set_ylabel('Unemployment rates')
ax.set_xlabel('Age of the youngest child')
ax.set_ylim(0)
ax.set_title('Unemployment Rate of parents', fontsize=14)
ax.legend(fontsize=8,
loc=0)
ax.spines["top"].set_visible(False)
#if column == "under 3 years":
# y_pos += 0.5
#Step 1: Import Data
sheet7 = pd.read_excel(path,
sheetname='Table 8B Important',
skiprows = 3,
skip_footer = 4,
index_col = 0)
sheet71=sheet7[['Men.1','Women.1']]
sheet71 = sheet71.rename(columns={'Men.1': 'Men', 'Women.1': 'Women'})
sheet71=sheet71.iloc[[4, 5, 6, 13, 12,16], :]
sheet71
#Step 2: Draw a graph
fig, ax = plt.subplots(figsize=(12,4))
sheet71.plot(ax=ax,
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax.set_ylabel('Average Hours per Day ')
ax.set_ylim(0)
ax.set_title('American Use of Time by Sex When the Youngest Child is Under 6', fontsize=14)
ax.legend(['Men', 'Women'],fontsize=8,
loc='best')
ax.spines["top"].set_visible(False)
#Step 1: Import Data
sheet7 = pd.read_excel(path,
sheetname='Table 8B Important',
skiprows = 3,
skip_footer = 4,
index_col = 0)
sheet72=sheet7[['Men.2','Women.2']]
sheet72 = sheet72.rename(columns={'Men.2': 'Men', 'Women.2': 'Women'})
sheet72=sheet72.iloc[[4, 5, 6, 13, 12,16], :]
sheet72
#Step 2: Draw a graph
fig, ax = plt.subplots(figsize=(12,4))
sheet72.plot(ax=ax,
kind='bar', # line plot
color=['purple', 'yellow'], # line color
alpha=0.5, width=0.4)
ax.set_ylabel('Average Hours per Day ')
ax.set_ylim(0)
ax.set_title('American Use of Time by Sex When the Youngest Child is Between 6 to 17', fontsize=14)
ax.legend(['Men', 'Women'],fontsize=8,
loc='best')
ax.spines["top"].set_visible(False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once Alascan is instantiated and finished running, we can plot the results. The plotScan_interactive function by default, outputs the results in an html file and opens it up in your browser. However, if you are using a notebook to view it, you can display the plot inline by passing the argument display_output='notebook' to the function. Here we display it in the notebook so that it's easier to view alongside the code.
Step2: The plotScan_interactive function displays a bar plot similar to plotScan, but now hovering over specific bars displays the corresponding association/solvation free energy values. Additionally, clicking and dragging in the plot allows you to zoom in on a subset of values. The plotly modebar in the top right has additional options such as zoom, autoscale and saving as a static image.
|
<ASSISTANT_TASK:>
Python Code:
from aesop import Alascan, plotScan_interactive, plotNetwork_interactive
path_apbs = r'path\to\executable\apbs'      # raw strings so '\t' is not read as a tab character
path_coulomb = r'path\to\executable\coulomb'
path_pdb2pqr = r'path\to\executable\pdb2pqr'
jobname = 'alascan'
pdbfile = 'barnase_barstar.pdb'
selstr = ['chain A', 'chain B']
alascan = Alascan(pdb=pdbfile,
pdb2pqr_exe=path_pdb2pqr,
apbs_exe=path_apbs,
coulomb_exe=path_coulomb,
jobname=jobname,
selstr=selstr,
minim=False)
alascan = Alascan(pdb=pdbfile, jobname=jobname, selstr=selstr)
alascan.run()
plotScan_interactive(alascan,display_output='notebook')
#If you are not using a notebook to run your code then use the code below instead:
#plotScan_interactive(alascan)
plotNetwork_interactive(alascan,display_output='notebook')
#If you are not using a notebook to run your code then use the code below instead:
#plotNetwork_interactive(alascan)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download the Cincinnati 311 (Non-Emergency) Service Requests data
Step2: Parse the 1st record
Step3: Implement a class that parses and cleans a Cincinnati 311 data record
|
<ASSISTANT_TASK:>
Python Code:
from Cincinnati311CSVDataParser import Cincinnati311CSVDataParser
from csv import DictReader
import os
import re
import urllib2
data_dir = "./Data"
csv_file_path = os.path.join(data_dir, "cincinnati311.csv")
if not os.path.exists(csv_file_path):
if not os.path.exists(data_dir):
os.mkdir(data_dir)
url = 'https://data.cincinnati-oh.gov/api/views' +\
'/4cjh-bm8b/rows.csv?accessType=DOWNLOAD'
response = urllib2.urlopen(url)
html = response.read()
with open(csv_file_path, 'wb') as h_file:
h_file.write(html)
h_file = open("./Data/cincinnati311.csv", "r")
fieldnames = [re.sub("_", "", elem.lower())\
for elem in h_file.readline().rstrip().split(',')]
readerobj = DictReader(h_file, fieldnames)
print readerobj.next()
h_file.close()
# head -n 3 cincinnati311.csv > sample.csv
h_file = open("./Data/sample.csv", "r")
parserobj = Cincinnati311CSVDataParser(h_file)
for record in parserobj:
print record
h_file.close()
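# A minimal sketch of the idea behind the parser; the real class in
# Cincinnati311CSVDataParser.py may also clean individual field values.
class SimpleCincinnati311Parser(object):
    def __init__(self, handle):
        fieldnames = [re.sub("_", "", elem.lower())
                      for elem in handle.readline().rstrip().split(',')]
        self.reader = DictReader(handle, fieldnames)
    def __iter__(self):
        return iter(self.reader)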
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Practice Problem 4.6
Step2: Example 4.7
Step3: Practice Problem 4.7
Step4: Thévenin's Theorem
Step5: Practice Problem 4.8
Step6: Example 4.9
Step7: Practice Problem 4.9
Step8: Example 4.10
Step9: Practice Problem 4.10
|
<ASSISTANT_TASK:>
Python Code:
print("Exemplo 4.6")
#trasforma fonte 1 (corrente -> tensao)
#vs1 = is*R = 12V
#Req em serie entre 4 e 2
#Req1 = 4 + 2 = 6
#transforma fonte 2 (tensao -> corrente)
#is2 = 12/3 = 4A
#transforma fonte 1 (tensao -> corrente)
#is1 = 12/6 = 2A
#Req paralelo entre 6 e 3
#Req2 = 6*3/(6 + 3) = 2
#fonte resultante
#ir = is2 - is1 = 4 - 2 = 2A
#transforma fonte 2 (corrente -> tensao)
#vs2 = Req2*ir = 2 * 2 = 4V
#divisor tensao
#v0 = vs2*8/(8 + Req2)
v0 = 4*8/(8 + 2)
print("Tensao v0",v0,"V")
print("Problema Prático 4.6")
#Req serie 4 e 1 = 5
#Req paralelo 6 e 3 = 2
#transforma fonte 1 (corrente -> tensao)
#vs1 = R*is1 = 5*2 = 10V
#soma fonte 1 e 2 = 5 + 10 = 15V
#transforma fonte soma (tensao -> corrente)
#is = 15/2 = 7,5A
#Req paralelo 5 e 2 = 10/7
#soma fonte corrente = 7,5 + 3 = 10,5 A
#divisor corrente
i0 = 10.5*(10/7)/((10/7) + 7)
print("Corrente i0:",i0,"A")
print("Exemplo 4.7")
#transforma fonte 1 (tensao -> corrente)
#is1 = 6/2 = 3 A
#transforma fonte dep. (corrente -> tensao)
#vs_dep = 0.25Vx * 4 = Vx
#soma fonte dep. e fonte 2 = 18 + Vx
#Req paralelo 2 e 2 = 1
#transforma fontes soma (tensao -> corrente)
#is_soma = 18/4 + Vx/4
#soma fontes = 18/4 + Vx/4 + 3 = 30/4 + Vx/4 = (30 + Vx)/4
#transforma fontes soma (corrente -> tensao)
#fonte resultante = ((30 + Vx)/4)*4 = 30 + Vx
#LKT
#(30 + Vx) - 4*ix - Vx = 0
#ix = (30 + Vx)/5 = 6 + Vx/5
#30 - 24 - 4Vx/5 = 0
vx = 6*5/4
print("Tensão Vx",vx,"V")
print("Problema Prático 4.7")
#transforma fonte dep. (tensao -> corrente)
#is_dep = 2ix/5
#soma fontes = 0.024 - 2ix
#divisor corrente
#ix = (24m - 2ix)*5/(5 + 10)
#ix = (0.12 - 10ix)/15
#ix + 2ix/3 = 0.008
#5ix/3 = 0.008
ix = 0.008*3/5
print("Corrente ix:",ix,"A")
print("Exemplo 4.8")
#Req1 = 4*12/(4 + 12) = 48/16 = 3
#Rth = 3 + 1 = 4
#transforma fonte 1 (tensao -> corrente)
#is1 = 32/4 = 8 A
#soma fontes = 8 + 2 = 10 A
#ix = 10*4/(4 + 12) = 40/16 = 5/2
#Vab = 12*(5/2) = 30 = Vth
Vth = 30
Rth = 4
Rl = 6
Il = Vth/(Rl + Rth)
print("Para RL = 6, Corrente:",Il,"A")
Rl = 16
Il = Vth/(Rl + Rth)
print("Para RL = 16, Corrente:",Il,"A")
Rl = 36
Il = Vth/(Rl + Rth)
print("Para RL = 36, Corrente:",Il,"A")
print("Problema Prático 4.8")
#Req1 = 6 + 6 = 12
#Rth = Req1*4/(Req1 + 4) = 48/16 = 3
Rth = 3
#Superposicao Vsource
#Vab1 = Vs*4/(4 + 6 + 6) = 12*4/16 = 3V
#Superposicao Csource
#Iab = Is*6/(4 + 6 + 6) = 2*6/16 = 3/4
#Vab2 = Iab*4 = 3V
#Vth = Vab1 + Vab2
Vth = 6
I = Vth/(Rth + 1)
print("Tensao Vth:",Vth,"V")
print("Resistencia Rth:",Rth)
print("Corrente I:",I,"A")
print("Exemplo 4.9")
import numpy as np
#Descobrir Rth - desliga fontes indep., nao se alteram fontes dep.
#Aplicar tensao vo arbitraria entre terminais a b
#vo = 1 V
#Analise de malhas
#-2Vx + 2(i1 - i2) = 0
#Vx = i1 - i2
#Vx = -4i2
#i1 + 3i2 = 0
#-Vx + 2(i2 - i1) + 6(i2 - i3) = 0
#2i2 - 2i1 + 6i2 - 6i3 = Vx
#-3i1 + 9i2 - 6i3 = 0
#-i1 + 3i2 - 2i3 = 0
#Vo + 6(i3 - i2) + 2i3 = 0
#6i3 - 6i2 + 2i3 = -1
#-6i2 + 8i3 = -1
coef = np.matrix("1 3 0;-1 3 -2;0 -6 8")
res = np.matrix("0;0;-1")
I = np.linalg.inv(coef)*res
#i3 = -i0
io = -I[2]
#Rth = Vo/io
Rth = 1/io
print("Resistencia Rth:",float(Rth))
#Descobrir Vth
#Analise de tensao em terminais a b
#Analise de Malhas
#i1 = 5 A
#-2Vx + 2(i2 - i3) = 0
#Vx = i2 - i3
#Vx = 4(5 - i3) = 20 - 4i3
#i2 + 3i3 = 20
#4(i3 - 5) + 2(i3 - i2) + 6i3 = 0
#4i3 +2i3 - 2i2 + 6i3 = 20
#-2i2 + 12i3 = 20
#-i2 + 6i3 = 10
coef = np.matrix("1 3;-1 6")
res = np.matrix("20;10")
I = np.linalg.inv(coef)*res
Vth = 6*I[1]
print("Tensão Vth:",float(Vth),"V")
print("Problema Prático 4.9")
#Descobrir Rth
#Vo = 1V
#Analise Nodal
#i1 - Ix/2 = 0
#v1/5 - Ix/2 = 0
#Ix = (v1 - 1)/3
#v1/5 - (v1 - 1)/6 = 0
#v1/5 - v1/6 = -1/6
#v1/30 = -1/6
#v1 = -5
#Ix = (v1 - 1)/3 = -6/3 = -2 A
#i2 = 1/4 A
#io = -Ix + i2 = 9/4 A
#Rth = 1/(9/4) = 4/9
Rth = 4/9
print("Resistencia Rth:",Rth)
#Descobrir Vth
#Analise de Malhas
#-6 + 5i1 + 3Ix + 4Ix = 0
#5i1 + 7Ix = 6
#3Ix/2 + i1 = Ix
#Ix/2 + i1 = 0
#2i1 + Ix = 0
coef = np.matrix("5 7;2 1")
res = np.matrix("6;0")
I = np.linalg.inv(coef)*res
Ix = float(I[1])
Vth = 4*Ix
print("Tensão Vth:",Vth,"V")
print("Exemplo 4.10")
#vab = -vo = -1 V
#i1 = 1/4 A
#ix = 1/2 A
#i0 = 2ix - ix - i1 = 1 - 1/2 - 1/4 = 1/4 A
#Rth = -1/(1/4) = -4
Rth = -4
print("Resistencia Rth:", Rth)
print("Tensao Vth:",0,"V")
print("Problema Prático 4.10")
#iab = 1 A
#-vx + 10i1 + 4vx + 15(i1 - iab) = 0
#3vx + 25i1 - 15iab = 0
#vx = -5i1
#-15i1 + 25i1 = 15
#10i1 = 15
#i1 = 1,5 A = 3/2 A
#vx = -5i1 = -7,5 V = -15/2 V
#vdep = 4*vx = -30V
#vab = vo = 15(i1 - iab) = 15/2 = 7,5V
#Rth = vo/(-iab) = -7,5
Rth = -7.5
print("Tensao Vth:",0,"V")
print("Resistencia Rth",Rth)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This distribution was generated using 100,000 random data points with zero mean ($\mu = 0$) and unit variance ($\sigma^2 = 1$). Thus, the true ATE is zero, meaning there is no treatment effect, and we see that the histogram is centered around that value. We can also see the variation around zero. We'll discuss this next.
Step2: We could also use NumPy's built-in standard deviation method.
Step3: This can be expressed in terms of potential outcomes (Gerber and Green, 2012)
Step4: Let's set our $N$ and $m$ variables.
Step5: Next, we'll generate some data from a normal distribution. In the first example, the treatment, t, and the control, c, will be centered at 5. We'll combine the treatment and control arrays.
Step6: The experimental test statistic is
Step7: Here, we should expect a high $p$-value.
Step8: Using the data from [7], we find that the permuted test statistics are as extreme as the experimental test statistic 65% of the time. This means, we don't have enough evidence to say that the experimental test statistic is different from zero.
Step9: The new experimental test statistic is
Step10: Because the values are actually different, the $p$-value should be low.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
mpl.style.use('ggplot')
mpl.rc('savefig', dpi=100)
np.random.seed(42)
# data
mu, sigma = 0, 1
x = mu + sigma * np.random.randn(100000)
# plot
pd.Series(x).plot(kind='hist', bins=50,
color='#000000', alpha=0.5, normed=True)
# plot options
plt.title('Sampling Distribution Example')
plt.xlabel('ATE Estimates')
plt.ylabel('Density')
plt.tick_params(axis='both',
top='off', bottom='off',
left='off', right='off')
(((x - x.mean()) ** 2).mean()) ** 0.5
x.std()
from code.permutation import permutation_test
N, m = 50, 25
np.random.seed(42)
t = np.random.normal(loc=5, size=m)
c = np.random.normal(loc=5, size=N-m)
outcomes = np.append(t, c)
t.mean() - c.mean()
permutation_test(outcomes, m)
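# A rough sketch of what the imported permutation_test does (the version in
# code/permutation.py may differ in details): shuffle the group labels many times
# and report how often the permuted mean difference is at least as extreme.
def simple_permutation_test(outcomes, m, n_iter=10000, seed=0):
    rng = np.random.RandomState(seed)
    observed = outcomes[:m].mean() - outcomes[m:].mean()
    count = 0
    for _ in range(n_iter):
        perm = rng.permutation(outcomes)
        if abs(perm[:m].mean() - perm[m:].mean()) >= abs(observed):
            count += 1
    return count / float(n_iter)
print(simple_permutation_test(outcomes, m))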
np.random.seed(42)
t = np.random.normal(loc=6, size=m)
c = np.random.normal(loc=5, size=N-m)
outcomes = np.append(t, c)
t.mean() - c.mean()
permutation_test(outcomes, m)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Tokenize name into (surname, title, first name and maiden name)
Step2: 2.1 Extract features from Title variable
Step3: It seems we can extract some info from title
Step4: 3 Examine marriages / sibling relationships
Step5: Initialize data structures for algorithm
Step6: 1. Extract marriages in greedy fashion. Assume is_married has no fp ( might have actually
|
<ASSISTANT_TASK:>
Python Code:
train = pd.read_csv("data/train.csv")
train["dataset"] = "train"
train.head()
test = pd.read_csv("data/test.csv")
test["dataset"] = "test"
test.head()
#Combine both datasets to predict families
train = train.append(test)
train.set_index(train["PassengerId"],inplace=True)
name_tokenizer = re.compile(r"^(?P<surname>[^,]+), (?P<title>[A-Z a-z]+?)\. (?P<f_name>[A-Z a-z.]+)?(?P<maiden_name>\([A-Za-z .]+\))?")
name_tokens = ["surname","title","f_name","maiden_name"]
for name_tk in name_tokens:
train[name_tk] = train.Name.apply(lambda x: name_tokenizer.match(x).group(name_tk))
test[name_tk] = test.Name.apply(lambda x: name_tokenizer.match(x).group(name_tk))
train.head(n=5)
print train.groupby(["title","Sex"]).size()
#Encode special title following this logic
train["has_special_title"] = train.title.apply(lambda x: x not in ["Mr","Mrs","Miss","Mme","Mlle","Master"])
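# Quick look at how many passengers carry a special title
print train["has_special_title"].value_counts()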
def is_married(couple_rows):
are_married=False
if couple_rows.irow(0).Sex != couple_rows.irow(1).Sex:
#Get who is the husband and whose the wife
man = couple_rows.irow(0) if couple_rows.irow(0).Sex == "male" else couple_rows.irow(1)
woman = couple_rows.irow(0) if couple_rows.irow(0).Sex == "female" else couple_rows.irow(1)
#Marriage tests
marriage_tests = {}
marriage_tests["same_f_name"] = woman.f_name is not None and woman.f_name in man.f_name
marriage_tests["consistent_title"] = woman.title not in ("Miss","Mlle") and man.title != "Master"
marriage_tests["same_ticket"] = woman.Ticket == man.Ticket
marriage_tests["same_pclass"] = woman.Pclass == man.Pclass
marriage_tests["legal_age"] = (woman.title in ("Mme","Mrs") or woman.Age >= 10) and man.Age > 10
marriage_tests["consistent_SibSp"] = (woman.SibSp > 0 and man.SibSp > 0) or (woman.SibSp == man.SibSp)
are_married = marriage_tests["same_f_name"] and marriage_tests["legal_age"]
consistency_checks = ( marriage_tests["consistent_title"] and
marriage_tests["legal_age"] and
marriage_tests["same_pclass"] and
marriage_tests["same_ticket"] and
marriage_tests["consistent_SibSp"])
if are_married and not consistency_checks:
failed_tests = ", ".join("{}:{}".format(x,marriage_tests[x]) for x in marriage_tests if not marriage_tests[x])
print "WARNING: Sketchy marriage: {}".format(failed_tests)
print couple_rows
print
return are_married
#Data structures - sets to keep track which ones have already been assigned
married_people = set()
people_with_parents = set()
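# per-passenger SibSp / Parch counts, used later to bound how many family links can still be assigned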
links_to_assign = train[["SibSp","Parch"]]
#Matches a couple with the Max amount of kids they can have
#Which is the min(husband.Parch, wife.Parch)
marriages_table = {}
#Subset only people who have spouses/siblings on the boat
train_sibsp = train.ix[ train.SibSp > 0]
#People grouped by surname
surname_groups = train_sibsp.groupby("surname").groups
for surname in surname_groups:
surname_rows = surname_groups[surname]
couples = itertools.combinations(surname_rows,2)
for cpl in couples:
cpl_rows = train_sibsp.ix[list(cpl)]
if is_married(cpl_rows):
#Make sure we're not marrying somebody twice :p
assert cpl[0] not in married_people,"{} is already married :/".format(cpl[0])
assert cpl[1] not in married_people,"{} is already married :/".format(cpl[1])
#add couples to married set
married_people.add(cpl[0])
married_people.add(cpl[1])
marriages_table[cpl] = min(links_to_assign.ix[cpl[0]]["Parch"], links_to_assign.ix[cpl[1]]["Parch"] )
#print
# break
marriages_table
train.ix[list((26,1066))]
train.ix[ (train.SibSp > 0) | (train.Parch > 0) ].shape
train
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The station number and date range we are interested in are part of the URL that we use to communicate with the web services. The specific file we receive when the read_csv command runs doesn't exist -- when our script requests the data, the server reads the URL to see what we want, pulls data from a database, packages it, and passes it on to us. The API (the protocol that governs the communication between machines) establishes the "formula" for writing the URL. As long as we follow that formula (and request data that exists), the server will provide it for us.
Step2: Python dictionaries to URLs {.callout}
Step3: Just like there is the Numpy library for matrices and Pandas for tabular data, there is a Python library that provides a simple interface for accessing resources through URLs (take a look at the most popular package repository
Step4: This is not the most elegant way to write the URL but it accomplishes the job! To clean things up a bit, we can replace the values we want to be able to change with variables
Step5: We can now combine it with the rest of our code
Step6: Creating Functions
Step7: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces.
Step8: The boiling point of water in Kelvin should be 373.15 K, not 273.15 K!
Step9: 5 divided by 9 should be 0.5556, but when we ask Python 2 to divide two integers, it returns an integer! If we want to keep the fractional part of the division, we need to convert one or the other number to floating point
Step10: You can also turn an integer into a float by casting
Step11: Casting {.challenge}
Step12: Composing Functions
Step13: What about converting Fahrenheit to Celsius? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created
Step14: This is our first taste of how larger programs are built
Step15: We can make another function, plot_discharge, to plot and save the figures
Step16: The function plot_discharge produces output that is visible to us but has no return statement because it doesn't need to give anything back when it is called.
Step17: Now that these three functions exist, we can rewrite our previous code in a much simpler script
Step18: Testing and Documenting
Step19: There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation. A string like this is called a docstring (one set of quotes for single line strings, three sets for multi-line strings!)
Step20: This is better because we can now ask Python’s built-in help system to show us the documentation for the function
Step21: Defining Defaults
Step22: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
url = 'http://waterservices.usgs.gov/nwis/iv/?format=rdb&sites=09380000&startDT=2016-01-01&endDT=2016-01-10¶meterCd=00060,00065'
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)
plt.xlabel('Time')
plt.ylabel('Discharge (cfs)')
plt.savefig('data/discharge_' + new_station_name + '.png')
plt.show()
url_root = 'http://waterservices.usgs.gov/nwis/iv/?' # root of URL
url_1 = 'format=' + 'rdb' # file format
url_2 = 'sites=' + '09380000' # station number
url_3 = 'startDT=' + '2016-01-01' # start date
url_4 = 'endDT=' + '2016-01-10' # end date
url_5 = 'parameterCd=' + '00060,00065' # data fields
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
print url
url_dict = {} # create an empty dictionary
url_dict['format'] = 'rdb'
url_dict['sites'] = '09380000'
url_dict['startDT'] = '2016-01-01'
url_dict['endDT'] = '2016-01-10'
url_dict['parameterCd'] = ['00060','00065']
print url_dict
import urllib
# need to set the parameter doseq to 1 to handle the list in url_dict['parameterCd']
url_parameters = urllib.urlencode(url_dict, doseq=1)
print url_root + url_parameters
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + this_station
url_3 = 'startDT=' + startDate
url_4 = 'endDT=' + endDate
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
print url
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
########## change these values ###########
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
##########################################
# create the URL
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + this_station
url_3 = 'startDT=' + startDate
url_4 = 'endDT=' + endDate
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
# import the data
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
# fix formatting
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
# plot and save figure
data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + new_station_name)
plt.xlabel('Time')
plt.ylabel('Discharge (cfs)')
plt.savefig('data/discharge_' + new_station_name + '.png')
plt.show()
def fahr_to_kelvin(temp):
return ((temp - 32) * (5/9)) + 273.15
print 'freezing point of water:', fahr_to_kelvin(32)
print 'boiling point of water:', fahr_to_kelvin(212)
5/9
print 'two integers:', 5/9
print '5.0/9:', 5.0/9
print '5/9.0:', 5/9.0
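# as soon as either operand is a float, Python 2 performs true (floating point) division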
float(5)/9
def fahr_to_kelvin(temp):
return ((temp - 32) * (5./9)) + 273.15
print 'freezing point of water:', fahr_to_kelvin(32)
print 'boiling point of water:', fahr_to_kelvin(212)
def kelvin_to_celsius(temp_k):
return temp_k - 273.15
print 'absolute zero in Celsius:', kelvin_to_celsius(0.0)
def fahr_to_celsius(temp_f):
temp_k = fahr_to_kelvin(temp_f)
temp_c = kelvin_to_celsius(temp_k)
return temp_c
print 'freezing point of water in Celsius:', fahr_to_celsius(32.0)
def import_streamgage_data(url):
new_column_names = ['Agency', 'Station', 'OldDateTime', 'Timezone', 'Discharge_cfs', 'Discharge_stat', 'Stage_ft', 'Stage_stat']
data = pd.read_csv(url, header=1, sep='\t', comment='#', names = new_column_names)
# fix formatting
data['DateTime'] = pd.to_datetime(data['OldDateTime'])
new_station_name = "0" + str(data['Station'].unique()[0])
data['Station'] = new_station_name
return data
def plot_discharge(data):
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
def generate_URL(station, startDT, endDT):
url_root = 'http://waterservices.usgs.gov/nwis/iv/?'
url_1 = 'format=' + 'rdb'
url_2 = 'sites=' + station
url_3 = 'startDT=' + startDT
url_4 = 'endDT=' + endDT
url_5 = 'parameterCd=' + '00060,00065'
url = url_root + url_1 + '&' + url_2 + '&' + url_3 + '&' + url_4 + '&' + url_5
return url
########## change these values ###########
this_station = '09380000'
startDate = '2016-01-01'
endDate = '2016-01-10'
##########################################
url = generate_URL(this_station, startDate, endDate)
data = import_streamgage_data(url)
plot_discharge(data)
# plot_discharge(data): take a DataFrame containing streamgage data, plot the discharge and save a figure to file.
def plot_discharge(data):
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
def plot_discharge(data):
    '''
    Take a DataFrame containing streamgage data,
    plot the discharge and save a figure to file.
    '''
    station_name = data['Station'].unique()[0]
    data.plot(x='DateTime', y='Discharge_cfs', title='Station ' + station_name)
    plt.xlabel('Time')
    plt.ylabel('Discharge (cfs)')
    plt.savefig('data/discharge_' + station_name + '.png')
    plt.show()
help(plot_discharge)
def display(a=1, b=2, c=3):
print 'a:', a, 'b:', b, 'c:', c
print 'no parameters:'
display()
print 'one parameter:'
display(55)
print 'two parameters:'
display(55, 66)
print('only setting the value of c')
display(c=77)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download wikipedia dump files
Step2: Convert two wikipedia dump files
Step3: Initial training
Step4: The Japanese idol group "Babymetal" was not yet known worldwide in 2010, so the word, "babymetal", is not in the old model's vocabulary.
Step5: Online update
Step6: Model Comparison
Step7: After online training, the word, "babymetal", has been added to the model. This word is similar to rock and metal bands.
Step8: Over the years, the word, "Zootopia", came to refer to the Disney movie.
|
<ASSISTANT_TASK:>
Python Code:
from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models.word2vec import Word2Vec, LineSentence
from pprint import pprint
from copy import deepcopy
from multiprocessing import cpu_count
%%bash
wget https://dumps.wikimedia.org/archive/2010/2010-11/enwiki/20101011/enwiki-20101011-pages-articles.xml.bz2
wget https://dumps.wikimedia.org/enwiki/20160820/enwiki-20160820-pages-articles.xml.bz2
old, new = [WikiCorpus('enwiki-{}-pages-articles.xml.bz2'.format(ymd)) for ymd in ['20101011', '20160820']]
def write_wiki(wiki, name, titles = []):
with open('{}.wiki'.format(name), 'wb') as f:
wiki.metadata = True
for text, (page_id, title) in wiki.get_texts():
if title not in titles:
f.write(b' '.join(text)+b'\n')
titles.append(title)
return titles
old_titles = write_wiki(old, 'old')
all_titles = write_wiki(new, 'new', old_titles)
oldwiki, newwiki = [LineSentence(f+'.wiki') for f in ['old', 'new']]
%%time
model = Word2Vec(oldwiki, min_count = 0, workers=cpu_count())
# model = Word2Vec.load('oldmodel')
oldmodel = deepcopy(model)
oldmodel.save('oldmodel')
try:
print(oldmodel.most_similar('babymetal'))
except KeyError as e:
print(e)
%%time
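# add the words seen in the new corpus to the existing vocabulary, then continue training on it (online update)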
model.build_vocab(newwiki, update=True)
model.train(newwiki, total_examples=model.corpus_count, epochs=model.iter)
model.save('newmodel')
# model = Word2Vec.load('newmodel')
for m in ['oldmodel', 'model']:
print('The vocabulary size of the', m, 'is', len(eval(m).wv.vocab))
try:
pprint(model.most_similar('babymetal'))
except KeyError as e:
print(e)
w = 'zootopia'
for m in ['oldmodel', 'model']:
print('The count of the word,'+w+', is', eval(m).wv.vocab[w].count, 'in', m)
pprint(eval(m).most_similar(w))
print('')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The TicTaeToe game
Step2: A simple version of the TicTaeToe game in which the user moves first and then plays against the computer.
|
<ASSISTANT_TASK:>
Python Code:
# Operating system
!ver
# Current directory and subdirectory structure
!dir
# Python version
!python --version
# virtualenv version
!virtualenv --version
# List of existing virtual environments
!workon
# Enter the virtual environment kookmin1
# workon kookmin1
# Packages installed in the virtual environment kookmin1
# Data analysis : numpy, pandas
# Visualization : matplotlib
!pip freeze
from IPython.display import Image
Image(filename='images/TicTaeToe.png')
# %load TicTaeToe.py
import sys
import random
# Explain how to play the game
print("Source: http://www.practicepython.org")
print("==================================")
print("Be the first to connect three marks")
print("in a row horizontally, vertically,")
print("or diagonally to win. The user (U)")
print("and the Computer (C) take turns.")
print("==================================\n")
# Declare storage to hold the 3 x 3 board state
# 0 means the cell is still empty
# 1 means a cell chosen by the user
# 2 means a cell chosen by the computer
dim=3
list4 = [0,0,0,0,0,0,0,0,0]
# Draw the guide board for the user, with a cell number inside each box
def graph():
k = 1
for i in range(dim+1):
print(" ---"*dim)
for j in range(dim):
if (i < dim):
print("| "+str(k), end=" ")
k = k + 1
if (i != 3):
print("|")
# Every time the user or the computer makes a move,
# check who has won
def game_wins(list4):
#print(list4)
for i in range(dim):
#checks to see if you win in a column
if list4[i] == list4[i+3] == list4[i+6] == 1:
print("You Won")
elif list4[i] == list4[i+3] == list4[i+6] == 2:
print("You Lost")
#checks to see if you win in a row
if list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 1:
print ("You Won")
elif list4[dim*i] == list4[dim*i+1] == list4[dim*i+2] == 2:
print("You Lost")
#checks to see if you win in a diagonal
if list4[0] == list4[4] == list4[8] == 1:
print ("You Won")
elif list4[0] == list4[4] == list4[8] == 2:
print("You Lost")
if list4[2] == list4[4] == list4[6] == 1:
print ("You Won")
elif list4[2] == list4[4] == list4[6] == 2:
print("You Lost")
# Draw the board again for the user, showing either the cell number or the move played in each box
def graph_pos(list4):
for idx in range(len(list4)):
if (idx % 3 == 0):
print(" ---"*dim)
if (list4[idx] == 0):
print("| "+str(idx+1), end=" ")
elif (list4[idx] == 1):
print("| "+"U", end=" ")
else:
print("| "+"C", end=" ")
if (idx % 3 == 2):
print("|")
print("\n")
# Start the game
go = input("Play TicTaeToe? Enter, or eXit?")
if (go == 'x' or go == 'X'):
sys.exit(0)
graph()
print("\n")
while(1): # repeat until the board game is decided
    # Choose an empty cell
pos = int(input("You : ")) - 1
while (pos < 0 or pos > 8 or list4[pos] != 0):
pos = int(input("Again : ")) - 1
list4[pos] = 1
    # Redraw the board and check for a winner
graph_pos(list4)
game_wins(list4)
    # Computer's turn: pick an empty cell at random and store it in the list
pos = random.randrange(9)
while (list4[pos] != 0):
pos = random.randrange(9)
print("Computer : " + str(pos+1))
list4[pos] = 2
    # Redraw the board and check for a winner
graph_pos(list4)
game_wins(list4)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2 create your request
Step2: Step 3 set the coordinates for your request (for instance, let us choose Curiosity landing site)
Step3: Step 4 retrieve fields from the Mars Climate Database (all fields are stored in the req object)
Step4: Step 5 print requested results
Step5: Main atmospheric variables
Step6: Shortcut
Step7: The extvar number can also be used to inquire a specific variable (see Fortran sources).
Step8: Another way to inquire for a specific variable is through a string.
Step9: Print all field
Step10: 1D slices
Step11: ... and for several variables
Step12: Request seasonal cycle (this takes a longer time)
Step13: It is a good place here to remind you that any field stored in the Mars Climate Database is inside the req object.
Step14: 1D slices
Step15: 2D mapping
Step16: You can also use the method htmlmap2d that will create a PNG file with your figure in it. This is the function actually used in the online MCD interface.
Step17: Adding wind vectors can be done with the incwind argument.
Step18: NB
Step19: NB
Step20: advanced diagnostics
Step21: Zonal average
Step22: Zonal average
Step23: see difference with following case
|
<ASSISTANT_TASK:>
Python Code:
from mcd import mcd
# This line configures matplotlib to show figures embedded in the notebook.
%matplotlib inline
req = mcd()
req.lat = -4.6 # latitude
req.lon = 137.4 # longitude
req.loct = 15. # local time
req.xz = 1. # vertical coordinate
req.xdate = 150.6 # areocentric longitude
req.update()
req.printcoord()
req.printmeanvar()
req.printmcd()
req.printextvar(22)
req.printextvar("tsurf")
req.printallextvar()
req.diurnal()
req.plot1d("t")
req.plot1d(["t","p","u","v"])
req.xzs = -3500.
req.xze = 15000.
req.zkey = 2
req.lat = 25.
req.lon = 195.
req.loct = 4.2
req.xdate = 140.
req.profile(nd=50)
req.plot1d("t")
tpot = req.temptab*((610./req.prestab)**(1.0/3.9))
print tpot
req = mcd()
req.diurnal()
req.getascii("t",filename="diurnal.txt")
%cat diurnal.txt ; rm -rf diurnal.txt
test = mcd()
test.loct = 15.
test.xz = 10000.
test.map2d("t")
test.htmlmap2d("t")
test.map2d("t",incwind=True)
test.map2d(["t","u"])
figname = test.getnameset()+'.png'
print figname
test = mcd()
test.xz = 50000.
test.lat = 20.
test.locts = 0.
test.locte = 24.
test.lons = -180.
test.lone = +180.
test.htmlplot2d("tsurf",figname="hov.png")
test = mcd()
test.zonmean = True
test.lats = -90.
test.late = 90.
test.htmlplot2d("u",figname="zonm.png")
test = mcd()
test.zonmean = True
test.lats = -90.
test.late = 90.
test.xdates = 0.
test.xdatee = 360.
test.htmlplot2d("h2ovap",figname="zonmm.png")
test.zonmean = False
test.lon = 0.
test.htmlplot2d("h2ovap",figname="hovls.png")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
|
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
seed_x = 10
### return the tensor as variable 'result'
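# tf.random.set_seed resets the global seed, so calling g with the same seed_x reproduces the same draw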
def g(seed_x):
tf.random.set_seed(seed_x)
return tf.random.uniform(shape=(10,), minval=1, maxval=5, dtype=tf.int32)
result = g(seed_x)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A 3d arrow is designed as a right cone with a disk as its base. We define the standard cone as the cone with vertex Vert(0, 0, headsize) and angle theta between the symmetry axis, Oz, and any generatrix
Step2: Place a 3d arrow along a line, starting from a point on that line, called origin below
Step3: Parameterize the Moebius strip and define it as a Plotly surface
Step4: Define a unicolor colorscale, to plot the cones and disks defining the 3d arrows
Step5: The following function returns the Plotly traces that represent a 3d arrow
Step6: Define the normals along the central circle, i.e. the curve corresponding to v=0 in the Moebius strip parameterization
Step7:
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import plotly.graph_objects as go
def arrow3d(headsize, theta):
r = headsize*np.tan(theta)
u = np.linspace(0,2*np.pi, 60)
v = np.linspace(0, 1, 15)
U,V = np.meshgrid(u,v)
#parameterization of the standard cone
x = r*V*np.cos(U)
y = r*V*np.sin(U)
z = headsize*(1-V)
cone = np.stack((x,y,z)) #shape(3, m, n)
w = np.linspace(0, r, 10)
u, w = np.meshgrid(u,w)
#parameterization of the base disk
xx = w*np.cos(u)
yy = w*np.sin(u)
zz = np.zeros(w.shape)
disk = np.stack((xx,yy,zz))
return cone, disk
def place_arrow3d(start, end, headsize, theta):
# Move the standard arrow to a position in the 3d space,
# which is computed from the inputted data
# start = array of shape (3,) = the starting point of the arrow's support line
# end = array of shape(3, ) = the end point of the segment of line
# headsize
# theta=the angle between the symmetry axis and a generatrice
epsilon=1.0e-04 # any coordinate less than epsilon is considered 0
cone, disk = arrow3d(headsize, theta)#get the standard cone
arr_dir = end-start# the arrow direction
if np.linalg.norm(arr_dir) > epsilon:
#define a right orthonormal basis (u1, u2, u3), with u3 the unit vector of the arrow_dir
u3 = arr_dir/np.linalg.norm(arr_dir)
origin = end-headsize * u3 #the point where the arrow starts on the supp line
a, b, c = u3
if abs(a) > epsilon or abs(b) > epsilon:
v1 = np.array([-b, a, 0])# v1 orthogonal to u3
u1 = v1/np.linalg.norm(v1)
else:
u1 = np.array([1., 0, 0])
u2 = np.cross(u3, u1)# this def ensures that the orthonormal basis is a right one
T = np.vstack((u1, u2, u3)).T #Transformation T, T(e_i)=u_i, to be applied to the standard cone
cone = np.einsum('ji, imn -> jmn', T, cone)#Transform the standard cone
disk = np.einsum('ji, imn -> jmn', T, disk)#Transform the cone base
cone = np.apply_along_axis(lambda a, v: a+v, 0, cone, origin)#translate the cone;
#dir translation, v=vec(O,origin)
disk = np.apply_along_axis(lambda a, v: a+v, 0, disk, origin)# translate the cone base
return origin, cone, disk
else: return (0, )
u = np.linspace(0, 2*np.pi, 36)
v = np.linspace(-0.5, 0.5, 10)
u, v = np.meshgrid(u,v)
tp = 1+v*np.cos(u/2.)
x = tp*np.cos(u)
y = tp*np.sin(u)
z = v*np.sin(u/2.)
fig= go.Figure(go.Surface(
x=x,
y=y,
z=z,
colorscale="balance",
colorbar=dict(thickness=20, len=0.6)))
pl_c = [[0.0, 'rgb(179, 56, 38)'],
[1.0, 'rgb(179, 56, 38)']]
def get_normals(start, origin, cone, disk, colorscale=pl_c):
tr_cone=go.Surface(
x=cone[0, :, :],
y=cone[1, :, :],
z=cone[2, :, :],
colorscale=colorscale,
showscale=False)
tr_disk=go.Surface(
x=disk[0, :, :],
y=disk[1, :, :],
z=disk[2, :, :],
colorscale=colorscale,
showscale=False)
tr_line=go.Scatter3d(
x=[start[0], origin[0]],
y=[start[1], origin[1]],
z=[start[2], origin[2]],
mode='lines',
line=dict(width=3, color='rgb(60, 9, 17)')
)
return [tr_line, tr_cone, tr_disk] #return a list that is concatenated to data
u = np.linspace(0, 2*np.pi, 24)
xx = np.cos(u)
yy = np.sin(u)
zz = np.zeros(xx.shape)
starters = np.vstack((xx,yy,zz)).T
a = 0.3
#Normal coordinates
Nx = 2*np.cos(u)*np.sin(u/2)
Ny = np.cos(u/2)-np.cos(3*u/2)
Nz = -2*np.cos(u/2)
ends = starters+a*np.vstack((Nx,Ny, Nz)).T
for j in range(ends.shape[0]):
arr=place_arrow3d(starters[j], ends[j], 0.15, np.pi/15)
if len(arr)==3:# get normals at the regular points on a surface, i.e. where ||Normalvector|| not = 0
fig.add_traces(get_normals(starters[j], arr[0], arr[1], arr[2]))
fig.update_layout(title_text='<br>A vector field along the central circle of the Moebius strip',
title_x=0.5,
font_family='Balto',
width=675,
height=675,
showlegend=False,
scene=dict(camera_eye=dict(x=1.65, y=1.65, z=0.75),
aspectmode='data'))
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TASK
Step2: TASK
Step3: TASK
Step4: TASK
Step5: TASK
Step6: TASK
Step7: TASK
Step8: TASK
Step9: TASK
Step10: TASK
Step11: TASK
Step12: TASK
|
<ASSISTANT_TASK:>
Python Code:
# CODE HERE
import pandas as pd
df = pd.read_csv('bank.csv')
# CODE HERE
df.head()
# CODE HERE
df['age'].mean()
# CODE HERE
df['age'].idxmin()
df.iloc[503]['marital']
# CODE HERE
df['job'].nunique()
# CODE HERE
df['job'].value_counts()
#CODE HERE
# Many, many ways to do this one! Here is just one way:
100*df['marital'].value_counts()['married']/len(df)
# df['marital].value_counts()
df['default code'] = df['default'].map({'no':0,'yes':1})
df.head()
# CODE HERE
df['marital code'] = df['marital'].apply(lambda status: status[0])
df.head()
# CODE HERE
df['duration'].max()
# CODE HERE
df[df['job']=='unemployed']['education'].value_counts()
# CODE HERE
df[df['job']=='unemployed']['age'].mean()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 2. Key Properties --> Software Properties
Step12: 2.2. Code Version
Step13: 2.3. Code Languages
Step14: 3. Key Properties --> Timestep Framework
Step15: 3.2. Split Operator Advection Timestep
Step16: 3.3. Split Operator Physical Timestep
Step17: 3.4. Integrated Timestep
Step18: 3.5. Integrated Scheme Type
Step19: 4. Key Properties --> Meteorological Forcings
Step20: 4.2. Variables 2D
Step21: 4.3. Frequency
Step22: 5. Key Properties --> Resolution
Step23: 5.2. Canonical Horizontal Resolution
Step24: 5.3. Number Of Horizontal Gridpoints
Step25: 5.4. Number Of Vertical Levels
Step26: 5.5. Is Adaptive Grid
Step27: 6. Key Properties --> Tuning Applied
Step28: 6.2. Global Mean Metrics Used
Step29: 6.3. Regional Metrics Used
Step30: 6.4. Trend Metrics Used
Step31: 7. Transport
Step32: 7.2. Scheme
Step33: 7.3. Mass Conservation Scheme
Step34: 7.4. Convention
Step35: 8. Emissions
Step36: 8.2. Method
Step37: 8.3. Sources
Step38: 8.4. Prescribed Climatology
Step39: 8.5. Prescribed Climatology Emitted Species
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Step41: 8.7. Interactive Emitted Species
Step42: 8.8. Other Emitted Species
Step43: 8.9. Other Method Characteristics
Step44: 9. Concentrations
Step45: 9.2. Prescribed Lower Boundary
Step46: 9.3. Prescribed Upper Boundary
Step47: 9.4. Prescribed Fields Mmr
Step48: 9.5. Prescribed Fields Mmr
Step49: 10. Optical Radiative Properties
Step50: 11. Optical Radiative Properties --> Absorption
Step51: 11.2. Dust
Step52: 11.3. Organics
Step53: 12. Optical Radiative Properties --> Mixtures
Step54: 12.2. Internal
Step55: 12.3. Mixing Rule
Step56: 13. Optical Radiative Properties --> Impact Of H2o
Step57: 13.2. Internal Mixture
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Step59: 14.2. Shortwave Bands
Step60: 14.3. Longwave Bands
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Step62: 15.2. Twomey
Step63: 15.3. Twomey Minimum Ccn
Step64: 15.4. Drizzle
Step65: 15.5. Cloud Lifetime
Step66: 15.6. Longwave Bands
Step67: 16. Model
Step68: 16.2. Processes
Step69: 16.3. Coupling
Step70: 16.4. Gas Phase Precursors
Step71: 16.5. Scheme Type
Step72: 16.6. Bulk Scheme Species
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'aerosol')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: matplotlib
Step2: Next, we want to make sure that the plots that we create are displayed in this notebook. To achieve this we have to issue a command to be interpreted by Jupyter -- called a magic command. A magic command is preceded by a % character. Magics are not Python and will create errors if used outside of the Jupyter notebook
Step3: A quick matplotlib example
Step4: The plot function
Step5: Example
Step6: Example
Step7: Example
Step8: Example
Step9: Figures, axes, and subplots
Step10: In the previous example the figure() function creates a new figure and add_subplot() puts a new axis on the figure. The command fig.add_subplot(1,1,1) means divide the figure fig into a 1 by 1 grid and assign the first component of that grid to the variable ax1.
Step11: Example
Step12: Exporting figures to image files
|
<ASSISTANT_TASK:>
Python Code:
# Define T and g
T = 40
y0 =50
g = 0
# Compute yT using the direct approach and print
# Initialize a 1-dimensional array called y that has T+1 zeros
# Set the initial value of y to equal y0
# Use a for loop to update the values of y one at a time
# Print the final value in the array y
# Import matplotlib.pyplot
# Magic command for the Jupyter Notebook
# Import numpy as np
# Create an array of x values from -6 to 6
# Create a variable y equal to the sin of x
# Use the plot function to plot the line
# Add a title and axis labels
# Use the help function to see the documentation for plot
# Create an array of x values from -6 to 6
# Create a variable y equal to the x squared
# Use the plot function to plot the line
# Add a title and axis labels
# Add grid
# Create an array of x values from -6 to 6
# Create y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Set axis limits
# legend
# Add grid
# Set betas
# Create x values
# create epsilon values from the standard normal distribution
# create y
# plot
# Add a title and axis labels
# Set axis limits
# Add grid
# Create an array of x values from -6 to 6
# Create y variables
# Use the plot function to plot the lines
# Add a title and axis labels
# Add grid
# legend
# Create data
# Create a new figure
# Create axis
# Plot
# Add grid
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
# Create data
# Create a new figure
# Create axis 1 and plot with title
# Create axis 2 and plot with title
# Create axis 3 and plot with title
# Create axis 4 and plot with title
# Adjust margins
# Create data
x = np.arange(-6,6,0.001)
y = np.sin(x)
# Create a new figure, axis, and plot
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1)
ax1.plot(x,y,lw=3,alpha = 0.6)
ax1.grid()
# Save
plt.savefig('fig_econ129_class04_sine.png',dpi=120)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: gpu info
Step2: Understand the input
Step3: Kernel Info from the single stream
Step4: model 3 cuda streams
Step5: start kernel from beginning
Step6: set the h2d start for all the cuda streams
Step7: merge all the cuda stream trace together
Step8: start algorithm
Step9: start algo
|
<ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
import warnings
import pandas as pd
import numpy as np
import os
import sys # error msg, add the modules
import operator # sorting
from math import *
import matplotlib.pyplot as plt
sys.path.append('../../')
import cuda_timeline
import read_trace
import avgblk
import cke
from model_param import *
#from df_util import *
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
gtx950 = DeviceInfo()
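# GTX 950 hardware limits: 6 SMs; per SM: 48 KB shared memory, 64 K registers, 2048 resident threads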
gtx950.sm_num = 6
gtx950.sharedmem_per_sm = 49152
gtx950.reg_per_sm = 65536
gtx950.maxthreads_per_sm = 2048
# init SM resources
SM_resList, SM_traceList = init_gpu(gtx950)
#SM_resList[0]
SM_traceList[0]
trace_s1 = 'trace_s1_5m.csv'
df_trace_s1 = read_trace.Trace2dataframe(trace_s1)
trace_s2 = 'trace_s2_5m.csv'
df_trace_s2 = read_trace.Trace2dataframe(trace_s2)
trace_s3 = 'trace_s3_5m.csv'
df_trace_s3 = read_trace.Trace2dataframe(trace_s3)
df_trace_s1
cuda_timeline.plot_trace(df_trace_s1)
cuda_timeline.plot_trace(df_trace_s2)
cuda_timeline.plot_trace(df_trace_s3)
# extract kernel info from trace
# warning: currently lmted to one kernel
kernel = read_trace.GetKernelInfo(df_trace_s1, gtx950)
Dump_kernel_info(kernel)
# for each stream, have a dd for each kernel
stream_kernel_list = []
stream_num = 3
for sid in range(stream_num):
#print sid
# key will be the kernel order
# value will be the kernel info
kern_dd = {}
kern_dd[0] = Copy_kernel_info(kernel)
stream_kernel_list.append(kern_dd)
Dump_kernel_info(stream_kernel_list[0][0])
df_s1_trace_timing = read_trace.Get_timing_from_trace(df_trace_s1)
df_s1 = read_trace.Reset_starting(df_s1_trace_timing)
df_s1
# find when to start the stream and update the starting pos for the trace
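# h2d-h2d overlap threshold, passed to init_trace_list() as h2d_ovlp_th below (assumed to be in the same time units as the trace timings)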
H2D_H2D_OVLP_TH = 3.158431
df_cke_list = cke.init_trace_list(df_s1, stream_num = stream_num, h2d_ovlp_th = H2D_H2D_OVLP_TH)
df_cke_list[0]
df_cke_list[1]
df_cke_list[2]
df_all_api = cke.init_sort_api_with_extra_cols(df_cke_list)
df_all_api
# stream_id list
stream_list = [float(x) for x in range(stream_num)]
# pick the 1st sleep api
df_all_api, r1, r1_stream = cke.pick_first_sleep(df_all_api)
df_all_api = SetWake(df_all_api, r1)
df_all_api = UpdateCell(df_all_api, r1, 'current_pos', get_rowinfo(df_all_api, r1)['start'])
df_all_api = UpdateCell(df_all_api, r1, 'pred_end', get_rowinfo(df_all_api, r1)['end'])
print('row {}, stream-id {}'.format(r1, r1_stream))
stream_queue = []
stream_queue.append(r1_stream)
## concurrency
cc = 1.0
# extract api calls from other streams
df_other = df_all_api.loc[df_all_api.stream_id != r1_stream]
other_stream_ids = list(df_other.stream_id.unique())
other_stream_num = len(other_stream_ids)
for i in range(other_stream_num):
df_other, r2, r2_stream = cke.pick_first_sleep(df_other)
print('row {}, stream-id {}'.format(r2, r2_stream))
df_all_api = SetWake(df_all_api, r2)
df_all_api = UpdateCell(df_all_api, r2, 'current_pos', get_rowinfo(df_all_api, r2)['start'])
df_all_api = UpdateCell(df_all_api, r2, 'pred_end', get_rowinfo(df_all_api, r2)['end'])
#---------------
# if r1 and r2 are from the same stream, break the iteration, and finish r1
#---------------
if r1_stream == r2_stream:
break
# when they are not the same stream, check whether there is concurrency
#-----------------------
# move the current_pos to the starting of coming api r2, and update r1 status
#-----------------------
df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])
#-----------------------------
# if one call is done, continue the next round
#-----------------------------
if cke.CheckRowDone(df_all_api, [r1, r2]):
continue
whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api
print whichType
if whichType == None:
# run noconflict
pass
elif whichType in ['h2d', 'd2h']: # data transfer in the same direction
cc = cc + 1
df_all_api = cke.Predict_transferOvlp(df_all_api, [r1, r2], ways = cc)
break
else:
# concurrent kernel: todo
pass
break
# other_stream_list = cke.find_unique_streams(df_other)
# find the 1st sleep api that is other stream
# if there is overlapping, we start ovlp mode, if not finish r1, start current
# go through each
# rest_stream_list = [x for x in stream_list if x <> r1_stream]
# print rest_stream_list
# for sid in rest_stream_list:
# df_stream = df_all_api.loc[df_all_api.stream_id == sid]
df_all_api
#
#
# run above
count = 0
# break_count = 7
break_count = 7
while not cke.AllDone(df_all_api):
count = count + 1
#if count == break_count: break
#-----------------------
# pick two api to model
#-----------------------
df_all_api, r1, r2 = cke.PickTwo(df_all_api)
#if count == break_count: break
#-----------------------
# check the last api or not
#-----------------------
last_api = False
if r1 == None and r2 == None:
last_api = True
if last_api == True: # go directly updating the last wake api
df_all_api = cke.UpdateStream_lastapi(df_all_api)
break
#-----------------------
# move the current_pos to the starting of coming api r2, and update r1 status
#-----------------------
df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])
#if count == break_count: break
#-----------------------------
# if one call is done, continue the next round
#-----------------------------
if cke.CheckRowDone(df_all_api, r1, r2):
continue
#if count == break_count: break
#-----------------------------
# when all calls are active
#-----------------------------
#-----------------------------
# check whether the two calls are kerns, if yes
#-----------------------------
whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api
if whichType == None:
df_all_api = cke.Predict_noConflict(df_all_api, r1, r2)
elif whichType in ['h2d', 'd2h']: # data transfer in the same direction
df_all_api = cke.Predict_transferOvlp(df_all_api, r1, r2, ways = 2.0)
else: # concurrent kernel: todo
print('run cke model')
#cke.model_2cke(df_all_api, r1, r2)
#if count == break_count: break
r1_sid, r1_kid =cke.FindStreamAndKernID(df_all_api, r1)
#print('r1_stream_id {} , r1_kernel_id {}'.format(r1_sid, r1_kid))
r2_sid, r2_kid =cke.FindStreamAndKernID(df_all_api, r2)
#print('r2_stream_id {} , r2_kernel_id {}'.format(r2_sid, r2_kid))
r1_start_ms = cke.GetStartTime(df_all_api, r1)
r2_start_ms = cke.GetStartTime(df_all_api, r2)
#print r1_start_ms
#print r2_start_ms
#print('before:')
#print('r1 start :{} r2 start : {}'.format(stream_kernel_list[r1_sid][r1_kid].start_ms,
# stream_kernel_list[r2_sid][r2_kid].start_ms))
stream_kernel_list[0][0].start_ms = r1_start_ms
stream_kernel_list[1][0].start_ms = r2_start_ms
#print('after:')
#print('r1 start :{} r2 start : {}'.format(stream_kernel_list[r1_sid][r1_kid].start_ms,
# stream_kernel_list[r2_sid][r2_kid].start_ms))
#Dump_kern_info(stream_kernel_list[r1_sid][r1_kid])
#Dump_kern_info(stream_kernel_list[r2_sid][r2_kid])
kernels_ = []
kernels_.append(stream_kernel_list[r1_sid][r1_kid])
kernels_.append(stream_kernel_list[r2_sid][r2_kid])
SM_resList, SM_traceList = avgblk.cke_model(gtx950, SM_resList, SM_traceList, kernels_)
# find the kernel execution time from the sm trace table
result_kernel_runtime_dd = avgblk.Get_KernTime(SM_traceList)
#print result_kernel_runtime_dd
result_r1_start = result_kernel_runtime_dd[0][0]
result_r1_end = result_kernel_runtime_dd[0][1]
result_r2_start = result_kernel_runtime_dd[1][0]
result_r2_end = result_kernel_runtime_dd[1][1]
# r1 will be the 1st in dd, r2 will be the 2nd
df_all_api.set_value(r1, 'pred_end', result_r1_end)
df_all_api.set_value(r2, 'pred_end', result_r2_end) # Warning: it is better to have a pred_start
# Warning: but we care about the end timing for now
#if count == break_count: break
# check any of r1 and r2 has status done. if done, go to next
rangeT = cke.Get_pred_range(df_all_api)
print rangeT
#if count == break_count: break
extra_conc = cke.Check_cc_by_time(df_all_api, rangeT) # check whether there is conc during the rangeT
print('extra_conc {}'.format(extra_conc))
#if count == break_count: break
if extra_conc == 0:
if whichType in ['h2d', 'd2h']:
df_all_api = cke.Update_wake_transferOvlp(df_all_api, rangeT, ways = 2.0)
elif whichType == 'kern':
df_all_api = cke.Update_wake_kernOvlp(df_all_api)
else: # no overlapping
df_all_api = cke.Update_wake_noConflict(df_all_api, rangeT)
#if count == break_count: break
# check if any api is done, and update the timing for the other apis in that stream
df_all_api = cke.UpdateStreamTime(df_all_api)
#if count == break_count: break
else: # todo : when there is additional overlapping
pass
# if count == break_count:
# break
df_all_api
df_2stream_trace
df_s1
#
# run above
#
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To create a Series we need to specify the column to use (using usecols) and set the parameter squeeze to True.
Step2: If the input file has only 1 column we don't need to provide the usecols argument.
Step3: On a Series we can perform classic Python operations using built-in functions!
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
asd = pd.read_csv("data/input.csv")
print type(asd)
asd.head()
# This is a Dataframe because we have multiple columns!
data = pd.read_csv("data/input.csv", usecols=["name"], squeeze=True)
print type(data)
data.head()
data.index
data = pd.read_csv("data/input_with_one_column.csv", squeeze=True)
print type(data)
# HEAD
print data.head(2), "\n"
# TAIL
print(data.tail())
list(data)
dict(data)
max(data)
min(data)
dir(data)
type(data)
sorted(data)
data = pd.read_csv("data/input_with_two_column.csv", index_col="name", squeeze=True)
data.head()
data[["Alex", "asd"]]
data["Alex":"Vale"]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: All these traveling weather systems tend to move warm, moist air poleward and cold, dry air equatorward. There is thus a net poleward energy transport.
Step3: At each timestep, the warm temperatures get cooler (at the equator) while the cold polar temperatures get warmer!
Step4: <a id='section4'></a>
Step5: We're going to plot the data and the best fit line, but also another line using these values
Step6: Discuss these curves...
Step7: The albedo increases markedly toward the poles.
Step8: Of course we are not fitting all the details of the observed albedo curve. But we do get the correct global mean and a reasonable representation of the equator-to-pole gradient in albedo.
Step9: Example EBM using climlab
Step10: The upshot
Step11: When $D=0$, every latitude is in radiative equilibrium and the heat transport is zero. As we have already seen, this gives an equator-to-pole temperature gradient that is much too high.
|
<ASSISTANT_TASK:>
Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
from IPython.display import YouTubeVideo
YouTubeVideo('As85L34fKYQ')
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
# First define an initial temperature field
# that is warm at the equator and cold at the poles
# and varies smoothly with latitude in between
from climlab.utils import legendre
sfc = climlab.domain.zonal_mean_surface(num_lat=90, water_depth=10.)
lat = sfc.lat.points
initial = 12. - 40. * legendre.P2(np.sin(np.deg2rad(lat)))
fig, ax = plt.subplots()
ax.plot(lat, initial)
ax.set_xlabel('Latitude')
ax.set_ylabel('Temperature (deg C)')
## Set up the climlab diffusion process
# make a copy of initial so that it remains unmodified
Ts = climlab.Field(np.array(initial), domain=sfc)
# thermal diffusivity in W/m**2/degC
D = 0.55
# create the climlab diffusion process
# setting the diffusivity and a timestep of ONE MONTH
d = climlab.dynamics.MeridionalHeatDiffusion(name='Diffusion',
state=Ts, D=D, timestep=const.seconds_per_month)
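# The diffusion process moves heat down the meridional temperature gradient with diffusivity D, one monthly step at a time.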
print( d)
# We are going to step forward one month at a time
# and store the temperature each time
niter = 5
temp = np.zeros((Ts.size, niter+1))
temp[:, 0] = np.squeeze(Ts)
for n in range(niter):
d.step_forward()
temp[:, n+1] = np.squeeze(Ts)
# Now plot the temperatures
fig,ax = plt.subplots()
ax.plot(lat, temp)
ax.set_xlabel('Latitude')
ax.set_ylabel('Temperature (deg C)')
ax.legend(range(niter+1))
x = np.linspace(-1,1)
fig,ax = plt.subplots()
ax.plot(x, legendre.P2(x))
ax.set_title('$P_2(x)$')
import xarray as xr
## The NOAA ESRL server is shutdown! January 2019
ncep_url = "http://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis.derived/"
ncep_Ts = xr.open_dataset( ncep_url + "surface_gauss/skt.sfc.mon.1981-2010.ltm.nc", decode_times=False)
#url = 'http://apdrc.soest.hawaii.edu:80/dods/public_data/Reanalysis_Data/NCEP/NCEP/clima/'
#ncep_Ts = xr.open_dataset(url + 'surface_gauss/skt')
lat_ncep = ncep_Ts.lat; lon_ncep = ncep_Ts.lon
print( ncep_Ts)
Ts_ncep_annual = ncep_Ts.skt.mean(dim=('lon','time'))
ncep_ulwrf = xr.open_dataset( ncep_url + "other_gauss/ulwrf.ntat.mon.1981-2010.ltm.nc", decode_times=False)
ncep_dswrf = xr.open_dataset( ncep_url + "other_gauss/dswrf.ntat.mon.1981-2010.ltm.nc", decode_times=False)
ncep_uswrf = xr.open_dataset( ncep_url + "other_gauss/uswrf.ntat.mon.1981-2010.ltm.nc", decode_times=False)
#ncep_ulwrf = xr.open_dataset(url + "other_gauss/ulwrf")
#ncep_dswrf = xr.open_dataset(url + "other_gauss/dswrf")
#ncep_uswrf = xr.open_dataset(url + "other_gauss/uswrf")
OLR_ncep_annual = ncep_ulwrf.ulwrf.mean(dim=('lon','time'))
ASR_ncep_annual = (ncep_dswrf.dswrf - ncep_uswrf.uswrf).mean(dim=('lon','time'))
from scipy.stats import linregress
slope, intercept, r_value, p_value, std_err = linregress(Ts_ncep_annual, OLR_ncep_annual)
print( 'Best fit is A = %0.0f W/m2 and B = %0.1f W/m2/degC' %(intercept, slope))
# More standard values
A = 210.
B = 2.
fig, ax1 = plt.subplots(figsize=(8,6))
ax1.plot( Ts_ncep_annual, OLR_ncep_annual, 'o' , label='data')
ax1.plot( Ts_ncep_annual, intercept + slope * Ts_ncep_annual, 'k--', label='best fit')
ax1.plot( Ts_ncep_annual, A + B * Ts_ncep_annual, 'r--', label='B=2')
ax1.set_xlabel('Surface temperature (C)', fontsize=16)
ax1.set_ylabel('OLR (W m$^{-2}$)', fontsize=16)
ax1.set_title('OLR versus surface temperature from NCEP reanalysis', fontsize=18)
ax1.legend(loc='upper left')
ax1.grid()
days = np.linspace(1.,50.)/50 * const.days_per_year
Qann_ncep = climlab.solar.insolation.daily_insolation(lat_ncep, days ).mean(dim='day')
albedo_ncep = 1 - ASR_ncep_annual / Qann_ncep
albedo_ncep_global = np.average(albedo_ncep, weights=np.cos(np.deg2rad(lat_ncep)))
print( 'The annual, global mean planetary albedo is %0.3f' %albedo_ncep_global)
fig,ax = plt.subplots()
ax.plot(lat_ncep, albedo_ncep)
ax.grid();
ax.set_xlabel('Latitude')
ax.set_ylabel('Albedo');
# Add a new curve to the previous figure
a0 = albedo_ncep_global
a2 = 0.25
ax.plot(lat_ncep, a0 + a2 * legendre.P2(np.sin(np.deg2rad(lat_ncep))))
fig
# Some imports needed to make and display animations
from IPython.display import HTML
from matplotlib import animation
def setup_figure():
templimits = -20,32
radlimits = -340, 340
htlimits = -6,6
latlimits = -90,90
lat_ticks = np.arange(-90,90,30)
fig, axes = plt.subplots(3,1,figsize=(8,10))
axes[0].set_ylabel('Temperature (deg C)')
axes[0].set_ylim(templimits)
axes[1].set_ylabel('Energy budget (W m$^{-2}$)')
axes[1].set_ylim(radlimits)
axes[2].set_ylabel('Heat transport (PW)')
axes[2].set_ylim(htlimits)
axes[2].set_xlabel('Latitude')
for ax in axes: ax.set_xlim(latlimits); ax.set_xticks(lat_ticks); ax.grid()
fig.suptitle('Diffusive energy balance model with annual-mean insolation', fontsize=14)
return fig, axes
def initial_figure(model):
# Make figure and axes
fig, axes = setup_figure()
# plot initial data
lines = []
lines.append(axes[0].plot(model.lat, model.Ts)[0])
lines.append(axes[1].plot(model.lat, model.ASR, 'k--', label='SW')[0])
lines.append(axes[1].plot(model.lat, -model.OLR, 'r--', label='LW')[0])
lines.append(axes[1].plot(model.lat, model.net_radiation, 'c-', label='net rad')[0])
lines.append(axes[1].plot(model.lat, model.heat_transport_convergence, 'g--', label='dyn')[0])
lines.append(axes[1].plot(model.lat,
model.net_radiation+model.heat_transport_convergence, 'b-', label='total')[0])
axes[1].legend(loc='upper right')
lines.append(axes[2].plot(model.lat_bounds, model.heat_transport)[0])
lines.append(axes[0].text(60, 25, 'Day 0'))
return fig, axes, lines
def animate(day, model, lines):
model.step_forward()
# The rest of this is just updating the plot
lines[0].set_ydata(model.Ts)
lines[1].set_ydata(model.ASR)
lines[2].set_ydata(-model.OLR)
lines[3].set_ydata(model.net_radiation)
lines[4].set_ydata(model.heat_transport_convergence)
lines[5].set_ydata(model.net_radiation+model.heat_transport_convergence)
lines[6].set_ydata(model.heat_transport)
lines[-1].set_text('Day {}'.format(int(model.time['days_elapsed'])))
return lines
# A model starting from isothermal initial conditions
e = climlab.EBM_annual()
e.Ts[:] = 15. # in degrees Celsius
e.compute_diagnostics()
# Plot initial data
fig, axes, lines = initial_figure(e)
ani = animation.FuncAnimation(fig, animate, frames=np.arange(1, 100), fargs=(e, lines))
HTML(ani.to_html5_video())
D = 0.1
model = climlab.EBM_annual(A=210, B=2, D=D, a0=0.354, a2=0.25)
print( model)
model.param
model.integrate_years(10)
fig, axes = plt.subplots(1,2, figsize=(12,4))
ax = axes[0]
ax.plot(model.lat, model.Ts, label=('D = %0.1f' %D))
ax.plot(lat_ncep, Ts_ncep_annual, label='obs')
ax.set_ylabel('Temperature (degC)')
ax = axes[1]
energy_in = np.squeeze(model.ASR - model.OLR)
ax.plot(model.lat, energy_in, label=('D = %0.1f' %D))
ax.plot(lat_ncep, ASR_ncep_annual - OLR_ncep_annual, label='obs')
ax.set_ylabel('Net downwelling radiation at TOA (W m$^{-2}$)')
for ax in axes:
ax.set_xlabel('Latitude'); ax.legend(); ax.grid();
def inferred_heat_transport( energy_in, lat_deg ):
'''Returns the inferred heat transport (in PW) by integrating the net energy imbalance from pole to pole.'''
from scipy import integrate
from climlab import constants as const
lat_rad = np.deg2rad( lat_deg )
return ( 1E-15 * 2 * np.math.pi * const.a**2 *
integrate.cumtrapz( np.cos(lat_rad)*energy_in,
x=lat_rad, initial=0. ) )
fig, ax = plt.subplots()
ax.plot(model.lat, inferred_heat_transport(energy_in, model.lat), label=('D = %0.1f' %D))
ax.set_ylabel('Heat transport (PW)')
ax.legend(); ax.grid()
ax.set_xlabel('Latitude')
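# Sensitivity sweep: for each diffusivity D, record the global-mean temperature, the equator-to-pole temperature difference, and the peak poleward heat transport.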
Darray = np.arange(0., 2.05, 0.05)
model_list = []
Tmean_list = []
deltaT_list = []
Hmax_list = []
for D in Darray:
ebm = climlab.EBM_annual(A=210, B=2, a0=0.354, a2=0.25, D=D)
ebm.integrate_years(20., verbose=False)
Tmean = ebm.global_mean_temperature()
deltaT = np.max(ebm.Ts) - np.min(ebm.Ts)
energy_in = np.squeeze(ebm.ASR - ebm.OLR)
Htrans = inferred_heat_transport(energy_in, ebm.lat)
Hmax = np.max(Htrans)
model_list.append(ebm)
Tmean_list.append(Tmean)
deltaT_list.append(deltaT)
Hmax_list.append(Hmax)
color1 = 'b'
color2 = 'r'
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(111)
ax1.plot(Darray, deltaT_list, color=color1)
ax1.plot(Darray, Tmean_list, 'b--')
ax1.set_xlabel('D (W m$^{-2}$ K$^{-1}$)', fontsize=14)
ax1.set_xticks(np.arange(Darray[0], Darray[-1], 0.2))
ax1.set_ylabel('$\Delta T$ (equator to pole)', fontsize=14, color=color1)
for tl in ax1.get_yticklabels():
tl.set_color(color1)
ax2 = ax1.twinx()
ax2.plot(Darray, Hmax_list, color=color2)
ax2.set_ylabel('Maximum poleward heat transport (PW)', fontsize=14, color=color2)
for tl in ax2.get_yticklabels():
tl.set_color(color2)
ax1.set_title('Effect of diffusivity on temperature gradient and heat transport in the EBM', fontsize=16)
ax1.grid()
ax1.plot([0.6, 0.6], [0, 140], 'k-');
%load_ext version_information
%version_information numpy, scipy, matplotlib, xarray, climlab
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exploring data file
Step2: Ok..
Step3: Now we need to know the sampling rate
Step4: So, the format seems to be 'hours:minutes:seconds.microseconds' (i.e. the '%H:%M:%S.%f' pattern used below)
Step5: That's pretty low.
Step6: That's actually not bad!
Step7: The first thing to note is that amplitude varies dramatically. Let's run it through a bandpass filter and take out all frequencies that definitely are not heart rate.
Step8: Still low quality but at least the heart rate is quite visible now!
Step9: That seems a reasonable result. By far most peaks are marked correctly, and most peaks in noisy sections (low confidence) are simply rejected.
Step10: Hmmm, not much luck yet, but an idea
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import heartpy as hp
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('raw_ppg.csv')
df.keys()
plt.figure(figsize=(12,6))
plt.plot(df['ppg'].values)
plt.show()
signal = df['ppg'].values[14500:20500]
timer = df['timer'].values[14500:20500]
plt.plot(signal)
plt.show()
timer[0:20]
help(hp.get_samplerate_datetime)
#Seems easy enough, right? Now let's determine the sample rate
sample_rate = hp.get_samplerate_datetime(timer, timeformat = '%H:%M:%S.%f')
print('sampling rate is: %.3f Hz' %sample_rate)
from datetime import datetime
#let's create a list 'newtimer' to house our datetime objects
newtimer = [datetime.strptime(x, '%H:%M:%S.%f') for x in timer]
#let's compute the real distances from entry to entry
elapsed = []
for i in range(len(newtimer) - 1):
elapsed.append(1 / ((newtimer[i+1] - newtimer[i]).microseconds / 1000000))
#and plot the results
plt.figure(figsize=(12,4))
plt.plot(elapsed)
plt.xlabel('Sample number')
plt.ylabel('Actual sampling rate in Hz')
plt.show()
print('mean sampling rate: %.3f' %np.mean(elapsed))
print('median sampling rate: %.3f'%np.median(elapsed))
print('standard deviation: %.3f'%np.std(elapsed))
#Let's plot 4 minutes of the segment we selected to get a view
#of what we're working with
plt.figure(figsize=(12,6))
plt.plot(signal[0:int(240 * sample_rate)])
plt.title('original signal')
plt.show()
#Let's run it through a standard Butterworth bandpass implementation to remove everything < 0.7 and > 3.5 Hz.
filtered = hp.filter_signal(signal, [0.7, 3.5], sample_rate=sample_rate,
order=3, filtertype='bandpass')
#let's plot first 240 seconds and work with that!
plt.figure(figsize=(12,12))
plt.subplot(211)
plt.plot(signal[0:int(240 * sample_rate)])
plt.title('original signal')
plt.subplot(212)
plt.plot(filtered[0:int(240 * sample_rate)])
plt.title('filtered signal')
plt.show()
plt.figure(figsize=(12,6))
plt.plot(filtered[0:int(sample_rate * 60)])
plt.title('60 second segment of filtered signal')
plt.show()
#let's resample to ~100Hz as well
#10Hz is low for the adaptive threshold analysis HeartPy uses
from scipy.signal import resample
resampled = resample(filtered, len(filtered) * 10)
#don't forget to compute the new sampling rate
new_sample_rate = sample_rate * 10
#run HeartPy over a few segments, fingers crossed, and plot results of each
for s in [[0, 10000], [10000, 20000], [20000, 30000], [30000, 40000], [40000, 50000]]:
wd, m = hp.process(resampled[s[0]:s[1]], sample_rate = new_sample_rate,
high_precision=True, clean_rr=True)
hp.plotter(wd, m, title = 'zoomed in section', figsize=(12,6))
hp.plot_poincare(wd, m)
plt.show()
for measure in m.keys():
print('%s: %f' %(measure, m[measure]))
raw = df['ppg'].values
plt.plot(raw)
plt.show()
import sys
from scipy.signal import resample
windowsize = 100
std = []
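# Estimate the local noise level: standard deviation of the raw signal over consecutive 100-sample windows.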
for i in range(len(raw) // windowsize):
start = i * windowsize
end = (i + 1) * windowsize
sliced = raw[start:end]
try:
std.append(np.std(sliced))
except:
print(i)
plt.plot(std)
plt.show()
plt.plot(raw)
plt.show()
plt.plot(raw[0:(len(raw) // windowsize) * windowsize] - resample(std, len(std)*windowsize))
plt.show()
(len(raw) // windowsize) * windowsize
mx = np.max(raw)
mn = np.min(raw)
global_range = mx - mn
windowsize = 100
filtered = []
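# Crude artefact rejection: zero out any 100-sample window whose local range reaches half the global range or whose values approach the global extremes.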
for i in range(len(raw) // windowsize):
start = i * windowsize
end = (i + 1) * windowsize
sliced = raw[start:end]
rng = np.max(sliced) - np.min(sliced)
if ((rng >= (0.5 * global_range))
or
(np.max(sliced) >= 0.9 * mx)
or
(np.min(sliced) <= mn + (0.1 * mn))):
for x in sliced:
filtered.append(0)
else:
for x in sliced:
filtered.append(x)
plt.figure(figsize=(12,6))
plt.plot(raw)
plt.show()
plt.figure(figsize=(12,6))
plt.plot(filtered)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Echo client
Step2: Usage
Step3: Fibonacci example
Step4: Nacci client
Step5: Usage
Step6: Exercise
|
<ASSISTANT_TASK:>
Python Code:
%%writefile echo.py
#!/usr/bin/env python
import zmq
from zmq.eventloop import ioloop, zmqstream
def echo(stream, message):
stream.send_multipart(message)
io_loop = ioloop.IOLoop()
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
stream = zmqstream.ZMQStream(socket, io_loop=io_loop)
stream.on_recv_stream(echo)
socket.bind('tcp://0.0.0.0:11235')
io_loop.start()
%%writefile client.py
#!/usr/bin/env python
import zmq
context = zmq.Context()
s = context.socket(zmq.DEALER)
s.connect('tcp://127.0.0.1:11235')
for i in range(10):
s.send('Hello world')
print(s.recv())
%%bash
python client.py
%%writefile fibo.py
#!/usr/bin/env python
import zmq
from zmq.eventloop import ioloop, zmqstream
def fibonacci(n):
a, b = 0, 1
while n >= a:
yield(a)
a, b = b, a + b
return
def fibo(stream, message):
n = int(message[1])
reply = [message[0]] + [str(n in [x for x in fibonacci(n)])]
stream.send_multipart(reply)
io_loop = ioloop.IOLoop()
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
stream = zmqstream.ZMQStream(socket, io_loop=io_loop)
stream.on_recv_stream(fibo)
socket.bind('tcp://0.0.0.0:11235')
io_loop.start()
%%writefile nacci.py
#!/usr/bin/env python
import zmq
from random import randint
context = zmq.Context()
s = context.socket(zmq.DEALER)
s.connect('tcp://127.0.0.1:11235')
for i in range(15):
n = randint(0,200)
s.send(str(n))
print(str(n) + ' ' + s.recv())
%%bash
python nacci.py
%%writefile broker.py
#!/usr/bin/env python
import argparse
import zmq
from zmq.eventloop import ioloop, zmqstream
clients = set()
def action_register(message):
address = message.split()[1].strip()
if address in clients:
return 'HOP'
else:
clients.add(address)
return 'OK'
def action_list(message):
return ' '.join(clients)
def handle(stream, message):
addr, text = message
print('BROKER: ' + text)
action = text.split()[0].lower()
try:
reply = globals()['action_' + action](text)
except KeyError:
print('BROKER: Unknown action', action)
reply = 'ERROR'
stream.send_multipart((addr, reply))
io_loop = ioloop.IOLoop()
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
stream = zmqstream.ZMQStream(socket, io_loop=io_loop)
stream.on_recv_stream(handle)
parser = argparse.ArgumentParser()
parser.add_argument('-b', '--bind-address', default='tcp://0.0.0.0:5555')
if __name__ == '__main__':
args = parser.parse_args()
socket.bind(args.bind_address)
io_loop.start()
%%writefile cities.txt
Barcelona
Berlin
Madrid
New York
Londres
Igualada
%%writefile hider.py
#!/usr/bin/env python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--port', default='5556')
parser.add_argument('-b', '--broker', default='tcp://127.0.0.1:5555')
args = parser.parse_args()
%%writefile seeker.py
#!/usr/bin/env python
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('-b', '--broker', default='tcp://127.0.0.1:5555')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: NeoWS module
Step2: You can also search by IAU number or SPK-ID (in that case, the dedicated neows.orbit_from_spk_id() function is faster)
Step3: Since neows relies on Small-Body Database browser to get the SPK-ID given a body name, you can use the wildcards from that browser
Step4: <div class="alert alert-info">Note that the epoch is provided by the Web Service itself, so if you need the orbit at another epoch, you have to propagate it
Step5: Given that we are using NASA APIs, there is a maximum number of requests. If you want to make many requests, it is recommended getting a NASA API key. You can use your API key adding the api_key parameter to the function
Step6: DASTCOM5 module
Step7: Keep in mind that this function returns a list of orbits matching your string. This is intentional: some comets have several records in the database (one for each orbit determination in history), which allows plots like this one
Step8: While neows can only be used to get Orbit objects, dastcom5 can also provide the complete asteroid and comet databases.
Step9: <div class="alert alert-info">Asteroid and comet parameters are not exactly the same (although they are very close)
Step10: The number of Atira NEOs we find using this method is
Step11: Which is consistent with the stats published by CNEOS
Step12: We only need to get the 16 orbits from these 16 ndarrays.
Step13: If we needed also the names of each asteroid, we could do
Step14: <div class="alert alert-info">We knew beforehand that there are no `Atira` comets, only asteroids (comet orbits are usually more eccentric), but we could use the same method with `com_db` if we wanted.</div>
Step15: Also, in this function, DASTCOM5 data (especially strings) is ready to use (decoded and improved strings, etc)
Step16: Pandas offers many functionalities, and can also be used in the same way as the ast_db and comet_db functions
Step17: What? I said they can be used in the same way!
Step18: So, rewriting our condition
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
plt.ion()
from astropy import time
from poliastro.twobody.orbit import Orbit
from poliastro.bodies import Earth
from poliastro.plotting import OrbitPlotter
from poliastro.neos import neows
eros = neows.orbit_from_name('Eros')
frame = OrbitPlotter()
frame.plot(eros, label='Eros');
ganymed = neows.orbit_from_name('1036') # Ganymed IAU number
amor = neows.orbit_from_name('2001221') # Amor SPK-ID
eros = neows.orbit_from_spk_id('2000433') # Eros SPK-ID
frame = OrbitPlotter()
frame.plot(ganymed, label='Ganymed')
frame.plot(amor, label='Amor')
frame.plot(eros, label='Eros');
neows.orbit_from_name('*alley')
eros.epoch.iso
epoch = time.Time(2458000.0, scale='tdb', format='jd')
eros_november = eros.propagate(epoch)
eros_november.epoch.iso
neows.orbit_from_name('Toutatis', api_key='DEMO_KEY')
from poliastro.neos import dastcom5
atira = dastcom5.orbit_from_name('atira')[0] # NEO
wikipedia = dastcom5.orbit_from_name('wikipedia')[0] # Asteroid, but not NEO.
frame = OrbitPlotter()
frame.plot(atira, label='Atira (NEO)')
frame.plot(wikipedia, label='Wikipedia (asteroid)');
halleys = dastcom5.orbit_from_name('1P')
frame = OrbitPlotter()
frame.plot(halleys[0], label='Halley')
frame.plot(halleys[5], label='Halley')
frame.plot(halleys[10], label='Halley')
frame.plot(halleys[20], label='Halley')
frame.plot(halleys[-1], label='Halley');
ast_db = dastcom5.asteroid_db()
comet_db = dastcom5.comet_db()
ast_db.dtype.names[:20] # They are more than 100, but that would be too many lines in this notebook :P
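# Atira-like selection: the aphelion distance Q = 2a - q must lie inside Earth's perihelion distance (~0.983 au).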
aphelion_condition = 2 * ast_db['A'] - ast_db['QR'] < 0.983
axis_condition = ast_db['A'] < 1.3
atiras = ast_db[aphelion_condition & axis_condition]
len(atiras)
from poliastro.twobody.orbit import Orbit
from poliastro.bodies import Earth
earth = Orbit.from_body_ephem(Earth)
frame = OrbitPlotter()
frame.plot(earth, label='Earth')
for record in atiras['NO']:
ss = dastcom5.orbit_from_record(record).to_icrs()
frame.plot(ss, color="#666666")
frame = OrbitPlotter()
frame.plot(earth, label='Earth')
for i in range(len(atiras)):
record = atiras['NO'][i]
label = atiras['ASTNAM'][i].decode().strip() # DASTCOM5 strings are binary
ss = dastcom5.orbit_from_record(record).to_icrs()
frame.plot(ss, label=label)
db = dastcom5.entire_db()
db.columns
db[db.NAME == 'Halley'] # As you can see, Halley is the name of an asteroid too, did you know that?
aphelion_condition = (2 * db['A'] - db['QR']) < 0.983
axis_condition = db['A'] < 1.3
atiras = db[aphelion_condition & axis_condition]
len(atiras)
len(atiras[atiras.A < 0])
axis_condition = (db['A'] < 1.3) & (db['A'] > 0)
atiras = db[aphelion_condition & axis_condition]
len(atiras)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: feature mapping
Step2: regularized cost
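(For reference, and as an assumption since lr is a local helper module: the regularized cost is presumably the standard one, with the penalty skipping the bias term, $J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\big[-y^{(i)}\log h_\theta(x^{(i)}) - (1-y^{(i)})\log(1-h_\theta(x^{(i)}))\big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$.)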
Step3: this is the same as the unregularized cost because we initialize theta to zeros, so the regularization term $\frac{\lambda}{2m}\sum_j \theta_j^2$ vanishes...
Step4: fit the parameters
Step5: predict
|
<ASSISTANT_TASK:>
Python Code:
sns.set(context="notebook", style="ticks", font_scale=1.5)
sns.lmplot('test1', 'test2', hue='accepted', data=df,
size=6,
fit_reg=False,
scatter_kws={"s": 50}
)
plt.title('Regularized Logistic Regression')
x1 = np.array(df.test1)
x2 = np.array(df.test2)
data = lr.feature_mapping(x1, x2, power=6)
print(data.shape)
data.head()
data.describe()
theta = np.zeros(data.shape[1])
X = lr.feature_mapping(x1, x2, power=6, as_ndarray=True)
print(X.shape)
y = general.get_y(df)
print(y.shape)
lr.regularized_cost(theta, X, y, l=1)
lr.regularized_gradient(theta, X, y)
import scipy.optimize as opt
print('init cost = {}'.format(lr.regularized_cost(theta, X, y)))
res = opt.minimize(fun=lr.regularized_cost, x0=theta, args=(X, y), method='Newton-CG', jac=lr.regularized_gradient)
res
final_theta = res.x
y_pred = lr.predict(X, final_theta)
print(classification_report(y, y_pred))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: map()
Step2: reduce()
Step3: pipelining
Step4: Ok, too easy, this is supposed to be a map & reduce tutorial...
Step5: RDD
Step6: flatMap() vs map()
Step7: lambda functions
Step8: filter()
Step9: Exercise
Step10: Exercise
Step11: Exercise
Step12: Exercise
Step13: Which, as you all know, means "the state of being able to achieve honours".
Step14: groupBy()
Step15: reduceByKey
Step16: Exercise
Step17: Exercise
Step18: DataFrame
Step19: <a id='Pandas_interface'></a>
Step20: Exercise
Step22: <a id='SQL_interface'></a>
Step23: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pyspark
sc = pyspark.SparkContext('local[*]')
# We define our input
l = range(10)
l
# We "upload" it as an RDD
rdd = sc.parallelize(l)
rdd
# We define a map function
def power_of_2(k):
return 2**k
# And we apply it to our RDD
rdd.map(power_of_2)
# So we use collect() to retrieve all results.
rdd.map(power_of_2).collect()
### WARNING ###
# Never do that in real cases, or you will transfer ALL data to your browser, effectively killing it.
# What about summing, everything?
# We define a reduce function
def sum_everything(k1, k2):
return k1 + k2
# And we apply the reduce operation
rdd.reduce(sum_everything)
# Or we can use the built in operation `sum`
rdd.sum()
# What if I wanted to compute the sum of the powers of 2?
rdd.map(power_of_2).reduce(sum_everything)
# or
rdd.map(power_of_2).sum()
# How can we count the number of elements in the array?
rdd.count()
def set_to_1(k):
return 1
rdd.map(set_to_1).reduce(sum_everything)
# Load all Shakespeare works
import os
shakespeare = sc.textFile(os.path.normpath('file:///../../resources/shakespeare.txt'))
# Show the first lines
shakespeare.take(10)
# Get the longest line
def keep_longest(k1, k2):
if len(k1) > len(k2):
return k1
else:
return k2
shakespeare.reduce(keep_longest)
# Compute the average line length
def line_length(k):
return len(k)
shakespeare.map(line_length).sum() / shakespeare.count()
# Split the text in words
def split_in_words(k):
return k.split()
shakespeare.map(split_in_words).take(2)
shakespeare.flatMap(split_in_words).take(15)
shakespeare.flatMap(
lambda k: k.split() # Split in words
).take(15)
# Retrieve 10 words longer than 15 characters
shakespeare.flatMap(
lambda k: k.split() # Split in words
).filter(
lambda k: len(k)>15 # Keep words longer than 15 characters
).take(10)
%load -r 1-9 solutions/13_01_Big_Data.py
%load -r 10-19 solutions/13_01_Big_Data.py
%load -r 20-29 solutions/13_01_Big_Data.py
%load -r 30-39 solutions/13_01_Big_Data.py
words = shakespeare.flatMap(
lambda k: k.split() # Split in words
).filter(
lambda k: not (set('.,-') & set(k)) # Drop words with special characters
)
words.groupBy(lambda k: k).take(10)
# That method returns an iterable for each different word. This iterable contains a list of all the appearances of the word.
# Lets print its contents
tuples = words.groupBy(lambda k: k).take(5)
for t in tuples:
print(t[0], list(t[1]))
# Now, to compute the number of appearances, we just have to count the elements in the iterator
words.groupBy(
lambda k: k
).map(
lambda t: (t[0], len(list(t[1])))
).take(5)
# But this is VERY EXPENSIVE in terms of memory,
# as all the word instances must be stored in a list before they can be counted.
# We can do it much better!
words.map(
lambda w: (w, 1)
).take(10)
words.map(
lambda w: (w, 1)
).reduceByKey(
lambda k1, k2: k1 + k2
).take(10)
%load -r 40-49 solutions/13_01_Big_Data.py
%load -r 50-69 solutions/13_01_Big_Data.py
%load -r 70-79 solutions/13_01_Big_Data.py
from pyspark.sql import SQLContext
sqlc = SQLContext(sc)
gaia = sqlc.read.csv('../resources/gaia.csv.bz2', comment='#', header=True, inferSchema=True)
gaia
gaia.count()
gaia.head(5)
%matplotlib inline
import pyspark.sql.functions as func
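# Histogram of G magnitudes: bin mag_g in 0.1-mag steps and count sources per bin.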
g_hist = gaia.groupBy(
(
func.floor(gaia.mag_g * 10) / 10
).alias('mag_g'),
).count().orderBy(
'mag_g'
)
g_hist.take(10)
g_hist.toPandas().set_index('mag_g').plot(loglog=True)
%load -r 90-99 solutions/13_01_Big_Data.py
sqlc.registerDataFrameAsTable(gaia, "gaia")
g_hist = sqlc.sql("""
SELECT CAST(FLOOR(mag_g*10)/10. AS FLOAT) AS mag_g, COUNT(*) AS `count`
FROM gaia
GROUP BY 1
ORDER BY 1
""")
g_hist.take(10)
g_hist.toPandas().set_index('mag_g').plot(loglog=True)
%load -r 100-109 solutions/13_01_Big_Data.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define an infinite integer set
Step2: Test if an integer value is in the set
Step3: Test if another integer set is a subset
Step4: Find the intersection of two IntegerSet objects
Step5: Find the union of two IntegerSet objects
Step6: We can also find unions and intersections of integer sets with ordinary Python sets, as long as these sets contain integer values
Step7: The following line of code will cause the AttributeSetError exception
Step8: Now, define objects of RegexSet
Step9: Test if strings are matched by regex's defining our RegexSet objects
Step10: Test if one regex set is a subset of another
Step11: Find the intersection of two regex sets
Step12: Find the union of two regex sets
Step13: Subtract a finite set of strings from a regex set
Step14: The result may not be extremely readable, but we can test it in the following way
Step15: Now, we can also wrap Python set objects in the FiniteSet class provided by ReGraph.
Step16: This allows us to apply any set operation from the common interface of ReGraph's attribute sets to them. For example
Step17: ReGraph provides two special classes of attribute sets
|
<ASSISTANT_TASK:>
Python Code:
from math import inf
import regraph.attribute_sets as atsets
ints = atsets.IntegerSet({(0, 8), (11, inf)})
print(ints.contains(5))
print(ints.contains(9))
a = atsets.IntegerSet({(0, 3), (20, 30)})
print(a.issubset(ints))
b = atsets.IntegerSet({(0, 10)})
print(b.issubset(ints))
a_and_ints = a.intersection(ints)
print(a_and_ints)
b_and_ints = b.intersection(ints)
print(b_and_ints)
a_or_ints = a.union(ints)
print(a_or_ints)
b_or_ints = b.union(ints)
print(b_or_ints)
a.union({13, 14})
print(a)
try:
a.union({13, 14, "a"})
except Exception as e:
print("Error message: ", e)
print("Type: ", type(e))
words = atsets.RegexSet("[A-Za-z]+")
integers = atsets.RegexSet("\d+")
alphanums = atsets.RegexSet("[A-Za-z\d]+")
print(words.match("42"))
print(integers.match("42"))
print(words.match("hello"))
print(integers.match("hello"))
print(integers.issubset(words))
print(integers.issubset(alphanums))
print(integers.intersection(words))
print(integers.intersection(alphanums))
print(integers.union(words))
print(words.difference({"hi", "bye"}))
no_hi_bye = words.difference({"hi", "bye"})
print(no_hi_bye.match("hi"))
print(no_hi_bye.match("bye"))
print(no_hi_bye.match("afternoon"))
a = atsets.FiniteSet({1, 2, 3})
int_regex = atsets.RegexSet("\d+")
positive_integers = atsets.IntegerSet([(0, inf)])
print(a.issubset(int_regex))
print(a.issubset(positive_integers))
univ = atsets.UniversalSet()
empty = atsets.EmptySet()
print(univ.union(empty))
print(univ.intersection(empty))
a = atsets.FiniteSet({1, 2, 3})
print(a.issubset(univ))
print(a.issubset(empty))
print(univ.intersection(a))
print(univ.union(a))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Euler's method
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def solve_euler(derivs, y0, x):
"""Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
    The derivative of the diff-eq with the signature deriv(y,x) where
    y and x are floats.
y0 : float
    The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
    The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
    Array of solutions y[i] = y(x[i])
"""
# YOUR CODE HERE
#raise NotImplementedError()
y = np.empty_like(x)
y[0] = y0
h = x[1] - x[0]
for n in range (0, len(x) - 1):
y[n + 1] = y[n] + h * derivs(y[n],x[n])
return y
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_midpoint(derivs, y0, x):
"""Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
    The derivative of the diff-eq with the signature deriv(y,x) where y
    and x are floats.
y0 : float
    The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
    The array of times at which to solve the diff-eq.
Returns
-------
y : np.ndarray
    Array of solutions y[i] = y(x[i])
"""
# YOUR CODE HERE
#raise NotImplementedError()
y = np.empty_like(x)
y[0] = y0
h = x[1] - x[0]
for n in range (0, len(x) - 1):
# y[n + 1] = y[n] + h * ((derivs(y[n]+(h/2)) * derivs(y[n],x[n]), x[n]) * (y[n] + (h/2) * derivs(y[n],x[n]) + (h/2)))
y[n+1] = y[n] + h * derivs(y[n] + h/2 * derivs(y[n],x[n]), x[n] + h/2)
return y
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_exact(x):
"""compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
    Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
    Array of solutions at y[i] = y(x[i]).
"""
# YOUR CODE HERE
#raise NotImplementedError()
y = 0.25*np.exp(2*x) - 0.5*x - 0.25
return y
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
# YOUR CODE HERE
# raise NotImplementedError()
x = np.linspace(0,1.0,11)
y0 = 0.0  # initial condition y(0) = 0, consistent with solve_exact above
def derivs(y, x):
return x+2*y
plt.plot(solve_euler(derivs, y0, x), label = 'euler')
plt.plot(solve_midpoint(derivs, y0, x), label = 'midpoint')
plt.plot(solve_exact(x), label = 'exact')
plt.plot(odeint(derivs, y0, x), label = 'odeint')
assert True # leave this for grading the plots
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To import the tensor module of PySINGA, run
Step2: The ground-truth
Step3: Generating the training data
Step4: Training via SGD
Step5: SINGA tensor module supports basic linear algebra operations, like + - * /, and advanced functions including axpy, gemm, gemv, and random functions (e.g., Gaussian and Uniform).
Step6: We can see that the learned line is becoming closer to the ground truth line (in blue color).
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import division
from __future__ import print_function
from builtins import range
from past.utils import old_div
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from singa import tensor
a, b = 3, 2
f = lambda x: a * x + b
gx = np.linspace(0.,1,100)
gy = [f(x) for x in gx]
plt.plot(gx, gy, label='y=f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='best')
nb_points = 30
# generate training data
train_x = np.asarray(np.random.uniform(0., 1., nb_points), np.float32)
train_y = np.asarray(f(train_x) + np.random.rand(30), np.float32)
plt.plot(train_x, train_y, 'bo', ms=7)
def plot(idx, x, y):
global gx, gy, axes
# print the ground truth line
axes[idx//5, idx%5].plot(gx, gy, label='y=f(x)')
# print the learned line
axes[idx//5, idx%5].plot(x, y, label='y=kx+b')
axes[idx//5, idx%5].legend(loc='best')
# set hyper-parameters
max_iter = 15
alpha = 0.05
# init parameters
k, b = 2.,0.
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
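# da1, db1: gradients of the squared-error loss w.r.t. k and b (constant factors are absorbed into the learning rate alpha)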
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
# to plot the intermediate results
fig, axes = plt.subplots(3, 5, figsize=(12, 8))
x = tensor.from_numpy(train_x)
y = tensor.from_numpy(train_y)
# sgd
for idx in range(max_iter):
y_ = x * k + b
err = y_ - y
loss = old_div(tensor.sum(err * err), nb_points)
print('loss at iter %d = %f' % (idx, loss))
da1 = old_div(tensor.sum(err * x), nb_points)
db1 = old_div(tensor.sum(err), nb_points)
# update the parameters
k -= da1 * alpha
b -= db1 * alpha
plot(idx, tensor.to_numpy(x), tensor.to_numpy(y_))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fitting a decaying oscillation
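(From the model() function below, the model being fitted is $y(t) = a\,e^{-\lambda t}\cos(\omega t) + \delta$.)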
Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
# YOUR CODE HERE
data = np.load('decay_osc.npz')
tdata = data['tdata']
ydata = data['ydata']
dy = data['dy']
data.close()
plt.figure(figsize=(7,5))
plt.errorbar(tdata, ydata, dy, fmt='og', ecolor='gray')
plt.xlabel('t')
plt.ylabel('y')
plt.grid();
assert True # leave this to grade the data import and raw data plot
# YOUR CODE HERE
def model(t, a, lamb, omega, delta):
y = a * np.exp(-lamb * t)*np.cos(omega*t) + delta
return y
theta_best, theta_cov = opt.curve_fit(model, tdata, ydata, absolute_sigma=True)
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('lambda = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('omega = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
print('delta = {0:.3f} +/- {1:.3f}'.format(theta_best[3], np.sqrt(theta_cov[3,3])))
xfit = np.linspace(0,20)
yfit = model(xfit, theta_best[0], theta_best[1], theta_best[2], theta_best[3])
plt.figure(figsize=(7,5))
plt.plot(xfit, yfit)
plt.errorbar(tdata, ydata, dy, fmt='og', ecolor='gray')
plt.xlabel('t')
plt.ylabel('y')
plt.grid();
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that
Step2: Those locations are also used in intermediate representation
Step3: In miasm, each expression embeds its size. The location doesn't have a size. To use a location in IR code, you have to wrap it in the Miasm expression ExprLoc
|
<ASSISTANT_TASK:>
Python Code:
# Import and create a locationDB
from miasm.core.locationdb import LocationDB
loc_db = LocationDB()
print(repr(loc_db))
# Create a location with default attributes (no offset, no symbol name)
loc_a = loc_db.add_location()
print(loc_a)
# Create a second location with an offset
loc_b = loc_db.add_location(offset=112233)
print(loc_b)
# Add a location with a name
loc_c = loc_db.add_location(name="main")
# Add another alias name to this location
loc_db.add_location_name(loc_c, "_main_")
# Add another location
loc_d = loc_db.add_location()
# Display LocationDB
print(loc_db)
# Associate an offset to an existing location
loc_db.set_location_offset(loc_a, 0x5678)
print(loc_db)
# Remove a name from an existing location
loc_db.remove_location_name(loc_c, "_main_")
print(loc_db)
# Get the offset of a location
hex(loc_db.get_location_offset(loc_a))
# Location with no offset
print(loc_db.get_location_offset(loc_c))
# Display locations
loc_db.pretty_str(loc_a)
loc_db.pretty_str(loc_b)
loc_db.pretty_str(loc_c)
loc_db.pretty_str(loc_d)
from miasm.analysis.binary import Container
from miasm.analysis.machine import Machine
# Create a LocationDB
loc_db = LocationDB()
# Create a container of bytes
cont = Container.from_string(
b"\x83\xf8\x10\x74\x07\x89\xc6\x0f\x47\xc3\xeb\x08\x89\xc8\xe8\x31\x33\x22\x11\x40\xc3",
loc_db
)
# Instantiate a x86 32 bit architecture
machine = Machine("x86_32")
# Instantiate a disassembler engine, using the previous bin_stream and its
# associated location DB.
mdis = machine.dis_engine(cont.bin_stream, loc_db=loc_db)
# Run a recursive traversal disassembling from address 0
asmcfg = mdis.dis_multiblock(0)
# Display each basic blocks
for block in asmcfg.blocks:
print(block)
block = asmcfg.getby_offset(0)
print(block)
# The basic block is placed at a location, which can be retrieved using `.loc_key`
print(block.loc_key)
# We can add a name to this first location
loc_db.add_location_name(block.loc_key, "entry")
print(loc_db)
# And we can re-display the block:
print(block)
# We will give an arbitrary name to location at offset 0xC
loc_c = loc_db.get_offset_location(0xc)
loc_db.add_location_name(loc_c, "quiche")
print(block)
# Get a lifter
lifter = machine.lifter_model_call(loc_db)
# Get the intermediate representation of the asmcfg
ircfg = lifter.new_ircfg_from_asmcfg(asmcfg)
# Get location at 0
loc_entry = loc_db.get_offset_location(0)
# Get irblock at this location
irblock = ircfg.blocks[loc_entry]
# Display IRBlock
print(irblock)
# Get the irblock destination (IRDst value)
dst = irblock.dst
print(dst)
print(repr(dst))
# It's an ExprCond. We retrieve here the possible values
src1, src2 = dst.src1, dst.src2
print(repr(src1), repr(src2))
# Retrieve the location of the ExprLoc
loc = src1.loc_key
print(loc)
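# A minimal sketch (an assumption, not part of the original notebook): since a location
# carries no size of its own, wrapping it in ExprLoc attaches an explicit size so it can
# be used inside IR expressions.
from miasm.expression.expression import ExprLoc
print(repr(ExprLoc(loc, 32)))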
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can start to define the actual convolution code. We start by defining an object that represents a single layer of convolution that does the actual convolution operation followed by pooling over the output of that convolution. These layers will be stacked in the final model.
Step2: This next method uses the convolution layer above to make a stack of them and adds a hidden layer followed by a logistic regression classification layer on top.
Step3: This cell runs the model and allows you to play with a few hyperparameters. The ones below take about 1 to 2 minutes to run.
Step4: For most convolution models it can be interesting to show what the trained filters look like. The code below does that from the parameters returned by the training function above. In this model there isn't much of an effect since the filters are only 5x5, so unfortunately we can't see much.
Step5: What can also be interesting is to draw the outputs of the filters for an example. This works somewhat better for this model.
Step6: Some things you can try with this model
|
<ASSISTANT_TASK:>
Python Code:
import numpy
import theano
import theano.tensor as T
from logistic_sgd import LogisticRegression
from mlp import HiddenLayer
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
class LeNetConvPoolLayer(object):
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
assert image_shape[1] == filter_shape[1]
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" / pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
import time
import fuel
from fuel.streams import DataStream
from fuel.schemes import SequentialScheme
from fuel.transformers import Cast
fuel.config.floatX = theano.config.floatX = 'float32'
def evaluate_lenet5(train, test, valid,
learning_rate=0.1, n_epochs=200,
nkerns=[20, 50], batch_size=500):
rng = numpy.random.RandomState(23455)
train_stream = DataStream.default_stream(
train, iteration_scheme=SequentialScheme(train.num_examples,
batch_size))
valid_stream = DataStream.default_stream(
valid, iteration_scheme=SequentialScheme(valid.num_examples,
batch_size))
test_stream = DataStream.default_stream(
test, iteration_scheme=SequentialScheme(test.num_examples,
batch_size))
x = T.tensor4('x')
yi = T.imatrix('y')
y = yi.reshape((yi.shape[0],))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
rng,
input=x,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh
)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
model_errors = theano.function(
[x, yi],
layer3.errors(y)
)
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i], grads[i]) pairs.
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[x, yi],
cost,
updates=updates
)
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is found
# a relative improvement of this much is considered significant
improvement_threshold = 0.995
n_train_batches = (train.num_examples + batch_size - 1) // batch_size
# go through this many minibatches before checking the network on
# the validation set; in this case we check every epoch
validation_frequency = min(n_train_batches, patience / 2)
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = time.clock()
epoch = 0
iter = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
minibatch_index = 0
for minibatch in train_stream.get_epoch_iterator():
iter += 1
minibatch_index += 1
if iter % 100 == 0:
print('training @ iter = ', iter)
error = train_model(minibatch[0], minibatch[1])
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [model_errors(vb[0], vb[1]) for vb
in valid_stream.get_epoch_iterator()]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
# improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
model_errors(tb[0], tb[1])
for tb in test_stream.get_epoch_iterator()
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = time.clock()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i, '
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print('The code ran for %.2fm' % ((end_time - start_time) / 60.))
# This is to make the pretty pictures in the cells below
layer0_out = theano.function([x], layer0.output)
layer1_out = theano.function([x], layer1.output)
return params, layer0_out, layer1_out
from fuel.datasets import MNIST
train = MNIST(which_sets=('train',), subset=slice(0, 50000))
valid = MNIST(which_sets=('train',), subset=slice(50000, 60000))
test = MNIST(which_sets=('test',))
params, layer0_out, layer1_out = evaluate_lenet5(train, test, valid,
learning_rate=0.1, n_epochs=10,
nkerns=[10, 25], batch_size=50)
%matplotlib inline
import matplotlib.pyplot as plt
from utils import tile_raster_images
filts1 = params[6].get_value()
filts2 = params[4].get_value()
plt.clf()
# Increase the size of the figure
plt.gcf().set_size_inches(15, 10)
# Make a grid for the two layers
gs = plt.GridSpec(1, 2, width_ratios=[1, 25], height_ratios=[1, 1])
a = plt.subplot(gs[0])
b = plt.subplot(gs[1])
# Show the first layer filters (the small column)
a.imshow(tile_raster_images(filts1.reshape(10, 25), img_shape=(5, 5), tile_shape=(10, 1), tile_spacing=(1,1)),
cmap="Greys", interpolation="none")
a.axis('off')
# Show the second layer filters (the large block)
b.imshow(tile_raster_images(filts2.reshape(250, 25), img_shape=(5, 5), tile_shape=(10, 25), tile_spacing=(1,1)),
cmap="Greys", interpolation="none")
b.axis('off')
%matplotlib inline
import matplotlib.pyplot as plt
from utils import tile_raster_images
# Grab some input examples from the test set (we cheat a bit here)
sample = test.get_data(None, slice(0, 50))[0]
# We will print this example amongst the batch
example = 7
plt.gcf()
# Increase the size of the figure
plt.gcf().set_size_inches(15, 10)
gs = plt.GridSpec(1, 3, width_ratios=[1, 1, 1], height_ratios=[1, 1, 1])
# Draw the input data
a = plt.subplot(gs[0])
a.imshow(sample[example, 0], cmap="Greys", interpolation='none')
a.axis('off')
# Compute first layer output
out0 = layer0_out(sample)[example]
# Draw its output
b = plt.subplot(gs[1])
b.imshow(tile_raster_images(out0.reshape(10, 144), img_shape=(12, 12), tile_shape=(5, 2), tile_spacing=(1, 1)),
cmap="Greys", interpolation='none')
b.axis('off')
# Compute the second layer output
out1 = layer1_out(sample)[example]
# Draw it
c = plt.subplot(gs[2])
c.imshow(tile_raster_images(out1.reshape(25, 16), img_shape=(4, 4), tile_shape=(5, 5), tile_spacing=(1, 1)),
cmap="Greys", interpolation='none')
c.axis('off')
%load lenet.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
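# --- Illustrative example only (not part of the generated template) ---
# A filled-in cell replaces the TODO above with the documented call and one of
# the listed choices; the choice below is an assumption for illustration.
DOC.set_value("OGCM")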
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Red Line is a subway that connects Cambridge and Boston, Massachusetts. When I was working in Cambridge I took the Red Line from Kendall Square to South Station and caught the commuter rail to Needham. During rush hour Red Line trains run every 7–8 minutes, on average.
Step2: I'll convert them to minutes and use kde_from_sample to estimate the distribution.
Step3: Here's what it looks like.
Step4: The Update
Step5: So here's the first update.
Step6: Here's what the posterior distribution looks like.
Step7: Because I am more likely to arrive during a longer gap, the distribution is shifted to the right.
Step8: This shift is an example of the "inspection paradox", which I wrote an article about.
Step9: make_elapsed_dist takes a hypothetical gap and an array of possible times.
Step10: Here's an example that represents a uniform distribution from 0 to 0.6 minutes.
Step11: The last element of the sequence is uniform from 0 to 20 minutes.
Step12: Now we can use make_mixture to make a weighted mixture of uniform distributions, where the weights are the probabilities from posterior_z.
Step13: The mean elapsed time is 4.4 minutes, half the posterior mean of z.
Step14: With this likelihood, we can compute the posterior distribution of x.
Step15: Here's what it looks like
Step16: Based on the number of passengers, we think it has been about 5 minutes since the last train.
Step17: Wait time
Step18: Well, almost. That distribution contains some negative values, which are impossible.
Step19: Based on the information so far, here are the distributions for x, y, and z, shown as CDFs.
Step20: Because of rounding errors, posterior_y contains quantities that are not in posterior_x and posterior_z; that's why I plotted it as a CDF, and why it appears jaggy.
Step21: I'll use this augmented sample to make a new estimate for the posterior distribution of z.
Step22: Here's what it looks like.
Step24: Now let's take the analysis from the previous sections and wrap it in a function.
Step25: Given the number of passengers when we arrive at the station, it computes the posterior distribution of y.
Step26: We can use it to compute the mean wait time and the probability of waiting more than 15 minutes.
Step27: If we see 10 passengers, we expect to wait a little less than 5 minutes, and the chance of waiting more than 15 minutes is about 1%.
Step28: Here's the mean wait as a function of the number of passengers.
Step29: If there are no passengers on the platform when I arrive, I infer that I just missed a train; in that case, the expected wait time is the mean of augmented_posterior_z.
|
<ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    !pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
    !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
observed_gap_times = [
428.0, 705.0, 407.0, 465.0, 433.0, 425.0, 204.0, 506.0, 143.0, 351.0,
450.0, 598.0, 464.0, 749.0, 341.0, 586.0, 754.0, 256.0, 378.0, 435.0,
176.0, 405.0, 360.0, 519.0, 648.0, 374.0, 483.0, 537.0, 578.0, 534.0,
577.0, 619.0, 538.0, 331.0, 186.0, 629.0, 193.0, 360.0, 660.0, 484.0,
512.0, 315.0, 457.0, 404.0, 740.0, 388.0, 357.0, 485.0, 567.0, 160.0,
428.0, 387.0, 901.0, 187.0, 622.0, 616.0, 585.0, 474.0, 442.0, 499.0,
437.0, 620.0, 351.0, 286.0, 373.0, 232.0, 393.0, 745.0, 636.0, 758.0,
]
import numpy as np
zs = np.array(observed_gap_times) / 60
from utils import kde_from_sample
qs = np.linspace(0, 20, 101)
pmf_z = kde_from_sample(zs, qs)
from utils import decorate
pmf_z.plot()
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
likelihood = pmf_z.qs
posterior_z = pmf_z * pmf_z.qs
posterior_z.normalize()
pmf_z.plot(label='prior', color='C5')
posterior_z.plot(label='posterior', color='C4')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
pmf_z.mean(), posterior_z.mean()
from empiricaldist import Pmf
def make_elapsed_dist(gap, qs):
    qs = qs[qs <= gap]
    n = len(qs)
    return Pmf(1/n, qs)
qs = posterior_z.qs
pmf_seq = [make_elapsed_dist(gap, qs) for gap in qs]
pmf_seq[3]
pmf_seq[-1].plot()
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of wait time in 20 min gap')
from utils import make_mixture
pmf_x = make_mixture(posterior_z, pmf_seq)
pmf_z.plot(label='prior gap', color='C5')
posterior_z.plot(label='posterior gap', color='C4')
pmf_x.plot(label='elapsed time', color='C1')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of gap and elapsed times')
posterior_z.mean(), pmf_x.mean()
from scipy.stats import poisson
lam = 2
num_passengers = 10
likelihood = poisson(lam * pmf_x.qs).pmf(num_passengers)
posterior_x = pmf_x * likelihood
posterior_x.normalize()
pmf_x.plot(label='prior', color='C1')
posterior_x.plot(label='posterior', color='C2')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time since last train')
pmf_x.mean(), posterior_x.mean()
posterior_y = Pmf.sub_dist(posterior_z, posterior_x)
nonneg = (posterior_y.qs >= 0)
posterior_y = Pmf(posterior_y[nonneg])
posterior_y.normalize()
posterior_x.make_cdf().plot(label='posterior of x', color='C2')
posterior_y.make_cdf().plot(label='posterior of y', color='C3')
posterior_z.make_cdf().plot(label='posterior of z', color='C4')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of elapsed time, wait time, gap')
sample = posterior_z.sample(260)
delays = [30, 40, 50]
augmented_sample = np.append(sample, delays)
qs = np.linspace(0, 60, 101)
augmented_posterior_z = kde_from_sample(augmented_sample, qs)
augmented_posterior_z.plot(label='augmented posterior of z', color='C4')
decorate(xlabel='Time (min)',
ylabel='PDF',
title='Distribution of time between trains')
qs = augmented_posterior_z.qs
pmf_seq = [make_elapsed_dist(gap, qs) for gap in qs]
pmf_x = make_mixture(augmented_posterior_z, pmf_seq)
lam = 2
num_passengers = 10
def compute_posterior_y(num_passengers):
    """Distribution of wait time based on `num_passengers`."""
    likelihood = poisson(lam * qs).pmf(num_passengers)
    posterior_x = pmf_x * likelihood
    posterior_x.normalize()
    posterior_y = Pmf.sub_dist(augmented_posterior_z, posterior_x)
    nonneg = (posterior_y.qs >= 0)
    posterior_y = Pmf(posterior_y[nonneg])
    posterior_y.normalize()
    return posterior_y
posterior_y = compute_posterior_y(10)
posterior_y.mean()
1 - posterior_y.make_cdf()(15)
nums = np.arange(0, 37, 3)
posteriors = [compute_posterior_y(num) for num in nums]
mean_wait = [posterior_y.mean()
for posterior_y in posteriors]
import matplotlib.pyplot as plt
plt.plot(nums, mean_wait)
decorate(xlabel='Number of passengers',
ylabel='Expected time until next train',
title='Expected wait time based on number of passengers')
prob_late = [1 - posterior_y.make_cdf()(15)
for posterior_y in posteriors]
plt.plot(nums, prob_late)
decorate(xlabel='Number of passengers',
ylabel='Probability of being late',
title='Probability of being late based on number of passengers')
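# A small sanity check (a sketch added here, not part of the original analysis):
# the inspection-paradox shift can be verified by resampling the observed gaps,
# once uniformly and once weighted by gap length, as a random arrival would see them.
gap_weights = zs / zs.sum()
uniform_gaps = np.random.choice(zs, size=10000)
arrival_gaps = np.random.choice(zs, size=10000, p=gap_weights)
print(uniform_gaps.mean(), arrival_gaps.mean())  # the length-weighted mean is larger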
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: 16.1 - Estimating the mean and standard deviation of a normal distribution
Step3: Figure 16.3
Step4: 16.2 - Outliers and robust estimation
Step5: Figure 16.9
Step6: 16.2 - Two Groups
Step7: Figure 16.12
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from scipy.stats import norm, t
from IPython.display import Image
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn,scipy
df = pd.read_csv('data/TwoGroupIQ.csv', dtype={'Group':'category'})
df.info()
df.head()
# Mean and standard deviation
df.groupby('Group').agg(['mean', 'std'])
fg = sns.FacetGrid(df, col='Group', height=4)
fg.map(sns.distplot, 'Score', kde=False, color='#87ceeb');
# We are only interested in the scores of group 'Smart Drug'
y = df['Score'][df.Group == 'Smart Drug']
Image('images/fig16_2.png', width=300)
with pm.Model() as model:
    mu = pm.Normal('mu', y.mean(), sd=y.std())
    sigma = pm.Uniform('sigma', y.std()/1000, y.std()*1000)
    # PyMC's Normal likelihood can take either precision or standard deviation as an argument.
    likelihood = pm.Normal('likelihood', mu, sd=sigma, observed=y)
pm.model_to_graphviz(model)
with model:
    trace = pm.sample(2000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace);
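# Optional numeric summary (a sketch, not one of the book's figures): posterior
# means and credible intervals for mu and sigma can also be read off a table.
pm.summary(trace)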
fig, [(ax1, ax2), (ax3, ax4)] = plt.subplots(2,2, figsize=(10,6))
font_d = {'size':16}
# Upper left
pm.plot_posterior(trace['mu'], point_estimate='mode', ref_val=100, ax=ax1, color=color)
ax1.set_xlabel('$\mu$', fontdict=font_d)
ax1.set_title('Mean', fontdict=font_d)
# Upper right
tr_len = len(trace)
# Plot only 20 posterior prediction curves.
n_curves = 20
# Create an index of length 20 with which we step through the trace.
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(y.min(), y.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
ax2.hist(y, bins=25, density=True, color='steelblue')
ax2.plot(x, norm.pdf(x, trace['mu'][stepIdxVec], trace['sigma'][stepIdxVec]), c=color)
ax2.set_xlabel('y', fontdict=font_d)
ax2.set_title('Data w. Post. Pred.\nN=63')
[ax2.spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
ax2.yaxis.set_visible(False)
# Lower left
pm.plot_posterior(trace['sigma'], point_estimate='mode', ref_val=15, ax=ax3, color=color)
ax3.set_xlabel('$\sigma$', fontdict=font_d)
ax3.set_title('Std. Dev.', fontdict=font_d)
# Lower right
pm.plot_posterior((trace['mu']-100)/trace['sigma'], point_estimate='mode', ref_val=0,
ax=ax4, color=color)
ax4.set_title('Effect Size', fontdict=font_d)
ax4.set_xlabel('$(\mu - 100)/\sigma$', fontdict=font_d)
plt.tight_layout();
with pm.Model() as model2:
    mu = pm.Normal('mu', y.mean(), sd=y.std())
    sigma = pm.Uniform('sigma', y.std()/1000, y.std()*1000)
    nu_minus1 = pm.Exponential('nu_minus1', 1/29)
    nu = pm.Deterministic('nu', nu_minus1+1)
    likelihood = pm.StudentT('likelihood', nu, mu, sd=sigma, observed=y)
pm.model_to_graphviz(model2)
with model2:
    trace2 = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace2);
fig, [(ax1, ax2), (ax3, ax4), (ax5, ax6)] = plt.subplots(3,2, figsize=(10,8))
# Upper left
pm.plot_posterior(trace2['mu'], point_estimate='mode', ref_val=100, ax=ax1, color=color)
ax1.set_xlabel('$\mu$', fontdict=font_d)
ax1.set_title('Mean', fontdict=font_d)
# Upper right
tr_len = len(trace)
n_curves = 20
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(y.min(), y.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
ax2.hist(y, bins=25, density=True, color='steelblue')
ax2.plot(x, norm.pdf(x, trace2['mu'][stepIdxVec], trace2['sigma'][stepIdxVec]), c='#87ceeb')
ax2.set_xlabel('y', fontdict=font_d)
ax2.set_title('Data w. Post. Pred.')
[ax2.spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
ax2.yaxis.set_visible(False)
# Middle left
pm.plot_posterior(trace2['sigma'], point_estimate='mode', ref_val=15, ax=ax3, color=color)
ax3.set_xlabel('$\sigma$', fontdict=font_d)
ax3.set_title('Std. Dev.', fontdict=font_d)
# Middle right
pm.plot_posterior((trace2['mu']-100)/trace2['sigma'], point_estimate='mode', ref_val=0,
ax=ax4, color=color)
ax4.set_title('Effect Size', fontdict=font_d)
ax4.set_xlabel('$(\mu - 100)/\sigma$', fontdict=font_d)
# Lower left
pm.plot_posterior(np.log10(trace2['nu']), point_estimate='mode', ax=ax5, color=color)
ax5.set_title('Normality', fontdict=font_d)
ax5.set_xlabel(r'log10($\nu$)', fontdict=font_d)
plt.tight_layout();
ax6.set_visible(False)
Image('images/fig16_11.png', width=400)
grp_idx = df.Group.cat.codes.values
grp_codes = df.Group.cat.categories
n_grps = grp_codes.size
with pm.Model() as model3:
    mu = pm.Normal('mu', df.Score.mean(), sd=df.Score.std(), shape=n_grps)
    sigma = pm.Uniform('sigma', df.Score.std()/1000, df.Score.std()*1000, shape=n_grps)
    nu_minus1 = pm.Exponential('nu_minus1', 1/29)
    nu = pm.Deterministic('nu', nu_minus1+1)
    likelihood = pm.StudentT('likelihood', nu, mu[grp_idx], sd=sigma[grp_idx], observed=df.Score)
pm.model_to_graphviz(model3)
with model3:
    trace3 = pm.sample(5000, cores=4, nuts_kwargs={'target_accept': 0.95})
pm.traceplot(trace3);
tr3_mu0 = trace3['mu'][:,0]
tr3_mu1 = trace3['mu'][:,1]
tr3_sigma0 = trace3['sigma'][:,0]
tr3_sigma1 = trace3['sigma'][:,1]
tr3_nu = np.log10(trace3['nu'])
fig, axes = plt.subplots(5,2, figsize=(12, 12))
# Left column figs
l_trace_vars = (tr3_mu0, tr3_mu1, tr3_sigma0, tr3_sigma1, tr3_nu)
l_axes_idx = np.arange(5)
l_xlabels = ('$\mu_0$', '$\mu_1$', '$\sigma_0$', '$\sigma_1$', r'log10($\nu$)')
l_titles = ('Placebo Mean', 'Smart Drug Mean', 'Placebo Scale', 'Smart Drug Scale', 'Normality')
for var, ax_i, xlabel, title in zip(l_trace_vars, l_axes_idx, l_xlabels, l_titles):
    pm.plot_posterior(var, point_estimate='mode', ax=axes[ax_i,0], color=color)
    axes[ax_i,0].set_xlabel(xlabel, font_d)
    axes[ax_i,0].set_title(title, font_d)
# Right column figs
tr_len = len(trace3)
n_curves = 20
stepIdxVec = np.arange(0, tr_len, tr_len//n_curves)
x_range = np.arange(df.Score.min(), df.Score.max())
x = np.tile(x_range.reshape(-1,1), (1,20))
# 1
axes[0,1].hist(df.Score[df.Group == 'Placebo'], bins=25, density=True, color='steelblue')
axes[0,1].plot(x, t.pdf(x, loc=tr3_mu0[stepIdxVec], scale=tr3_sigma0[stepIdxVec],
df=trace3['nu'][stepIdxVec]), c='#87ceeb')
axes[0,1].set_xlabel('y', font_d)
[axes[0,1].spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
axes[0,1].yaxis.set_visible(False)
axes[0,1].set_title('Data for Placebo w. Post. Pred.', font_d)
# 2
axes[1,1].hist(df.Score[df.Group == 'Smart Drug'], bins=25, density=True, color='steelblue')
axes[1,1].plot(x, t.pdf(x, loc=tr3_mu1[stepIdxVec], scale=tr3_sigma1[stepIdxVec],
df=trace3['nu'][stepIdxVec]), c='#87ceeb')
axes[1,1].set_xlabel('y', font_d)
[axes[1,1].spines[spine].set_visible(False) for spine in ['left', 'right', 'top']]
axes[1,1].yaxis.set_visible(False)
axes[1,1].set_title('Data for Smart Drug w. Post. Pred.', font_d)
# 3-5
r_vars = (tr3_mu1-tr3_mu0,
tr3_sigma1-tr3_sigma0,
(tr3_mu1-tr3_mu0)/np.sqrt((tr3_sigma0**2+tr3_sigma1**2)/2))
r_axes_idx = np.arange(start=2, stop=5)
r_xlabels = ('$\mu_1 - \mu_0$',
'$\sigma_1 - \sigma_0$',
r'$\frac{(\mu_1-\mu_0)}{\sqrt{(\sigma_0^2+\sigma_1^2)/2}}$')
r_titles = ('Difference of Means',
'Difference of Scales',
'Effect Size')
for var, ax_i, xlabel, title in zip(r_vars, r_axes_idx, r_xlabels, r_titles):
    pm.plot_posterior(var, point_estimate='mode', ref_val=0, ax=axes[ax_i,1], color=color)
    axes[ax_i,1].set_xlabel(xlabel, font_d)
    axes[ax_i,1].set_title(title, font_d)
plt.tight_layout();
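# Additional numeric read-out (a sketch, not part of the original figure): the same
# trace gives direct posterior probabilities for the group comparisons plotted above.
print('P(mu_1 > mu_0) =', (tr3_mu1 > tr3_mu0).mean())
print('P(sigma_1 > sigma_0) =', (tr3_sigma1 > tr3_sigma0).mean())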
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Computing the forward operator
Step2: Visualizing the coregistration
Step3: Compute Source Space
Step4: The surface based source space src contains two parts, one for the left
Step5: To compute a volume based source space defined with a grid of candidate
Step6: To compute a volume based source space defined with a grid of candidate
Step7: Note: Some sources may appear to be outside the BEM inner skull contour.
Step8: Compute forward solution
Step9: Note that the
Step10: Warning: Forward computation can remove vertices that are too close to (or outside)
Step11: We can explore the content of fwd to access the numpy array that contains
Step12: To extract the numpy array containing the forward operator corresponding to
|
<ASSISTANT_TASK:>
Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
# the raw file containing the channel location + types
sample_dir = op.join(data_path, 'MEG', 'sample',)
raw_fname = op.join(sample_dir, 'sample_audvis_raw.fif')
# The paths to Freesurfer reconstructions
subjects_dir = op.join(data_path, 'subjects')
subject = 'sample'
plot_bem_kwargs = dict(
subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', orientation='coronal',
slices=[50, 100, 150, 200])
mne.viz.plot_bem(**plot_bem_kwargs)
# The transformation file obtained by coregistration
trans = op.join(sample_dir, 'sample_audvis_raw-trans.fif')
info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True,
meg=['helmet', 'sensors'], subjects_dir=subjects_dir,
surfaces='head-dense')
src = mne.setup_source_space(subject, spacing='oct4', add_dist='patch',
subjects_dir=subjects_dir)
print(src)
mne.viz.plot_bem(src=src, **plot_bem_kwargs)
sphere = (0.0, 0.0, 0.04, 0.09)
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, sphere=sphere, sphere_units='m',
add_interpolator=False) # just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(
subject, subjects_dir=subjects_dir, surface=surface,
add_interpolator=False) # Just for speed!
print(vol_src)
mne.viz.plot_bem(src=vol_src, **plot_bem_kwargs)
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
surfaces='white', coord_frame='mri',
src=src)
mne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,
distance=0.30, focalpoint=(-0.03, -0.01, 0.03))
conductivity = (0.3,) # for single layer
# conductivity = (0.3, 0.006, 0.3) # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
conductivity=conductivity,
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
meg=True, eeg=False, mindist=5.0, n_jobs=1,
verbose=True)
print(fwd)
print(f'Before: {src}')
print(f'After: {fwd["src"]}')
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Get Authorization URL
Step5: Create Authorization URL Helper
Step6: Get Authorization Code
Step7: Cut and paste that PIN here
Step9: Get Access Token
Step10: POST to that URL to get a response containing an access token
Step11: It seems like the access token can only be created once and has a 10 year expiration time.
|
<ASSISTANT_TASK:>
Python Code:
import os
DEN_CLIENT_ID = os.environ["DEN_CLIENT_ID"]
DEN_CLIENT_SECRET = os.environ["DEN_CLIENT_SECRET"]
import uuid
def _get_state():
    """Get a unique id string."""
    return str(uuid.uuid1())
_get_state()
API_PROTOCOL = "https"
API_LOCATION = "home.nest.com"
from urlparse import SplitResult, urlunsplit
from urllib import urlencode
def _get_url(path, query, netloc=API_LOCATION):
    """Get a URL for the given path and query."""
    split = SplitResult(scheme=API_PROTOCOL, netloc=netloc, path=path, query=query, fragment="")
    return urlunsplit(split)
def get_auth_url(client_id=DEN_CLIENT_ID):
    """Get an authorization URL for the given client id."""
    path = "login/oauth2"
    query = urlencode({"client_id": client_id, "state": _get_state()})
    return _get_url(path, query)
get_auth_url()
!open "{get_auth_url()}"
pin = ""
def get_access_token_url(client_id=DEN_CLIENT_ID, client_secret=DEN_CLIENT_SECRET, code=pin):
    """Get an access token URL for the given client id."""
    path = "oauth2/access_token"
    query = urlencode({"client_id": client_id,
                       "client_secret": client_secret,
                       "code": code,
                       "grant_type": "authorization_code"})
    return _get_url(path, query, "api." + API_LOCATION)
get_access_token_url()
import requests
r = requests.post(get_access_token_url())
print r.status_code
assert r.status_code == requests.codes.OK
r.json()
access_token = r.json()["access_token"]
access_token
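# A possible next step (a sketch; this request targets the legacy Nest REST API and is
# an assumption on my part, not part of the original walkthrough): the access token
# authorizes read requests against the data API.
data_response = requests.get("https://developer-api.nest.com/", params={"auth": access_token})
print data_response.status_code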
<END_TASK>
|