Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the land ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of the land ice model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-parametric between conditions cluster statistic on single trial power
This script shows how to compare clusters in time-frequency
power estimates between conditions. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of
Step1: Set parameters
Step2: Factor to downsample the temporal dimension of the TFR computed by
tfr_morlet. Decimation occurs after frequency decomposition and can
be used to reduce memory usage (and possibly computational time of downstream
operations such as nonparametric statistics) if you don't need high
spectrotemporal resolution.
Step3: Compute statistic
Step4: View time-frequency plots | Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
Explanation: Non-parametric between conditions cluster statistic on single trial power
This script shows how to compare clusters in time-frequency
power estimates between conditions. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs for 2 conditions
computing single trial power estimates
baseline-correcting the power estimates (power ratios)
computing stats to see if the power estimates are significantly different
between conditions.
End of explanation
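Setting the cluster-forming step aside, the core permutation idea can be sketched in plain Python on synthetic data. This is an illustrative toy that permutes condition labels and compares an observed mean difference against the permutation distribution; it is not MNE's actual implementation, which additionally forms clusters over time-frequency bins:

```python
import random

def permutation_test(a, b, n_permutations=1000, seed=0):
    """Two-sample permutation test on the difference of means.

    Returns the observed difference and a two-tailed permutation p-value.
    Illustrative only; MNE's permutation_cluster_test also computes
    cluster-level statistics over time-frequency bins.
    """
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)             # permute the condition labels
        perm_a = pooled[:len(a)]
        perm_b = pooled[len(a):]
        diff = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
        if abs(diff) >= abs(observed):  # two-tailed comparison
            count += 1
    return observed, count / n_permutations

# Clearly separated synthetic "power" values for two conditions.
cond_1 = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0]
cond_2 = [0.2, 0.1, 0.3, 0.2, 0.15, 0.25]
obs, p = permutation_test(cond_1, cond_2)
```

With well-separated groups like these, very few label permutations reproduce a difference as large as the observed one, so the p-value comes out small.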
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
tmin, tmax = -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
ch_name = 'MEG 1332' # restrict example to one channel
# Load condition 1
reject = dict(grad=4000e-13, eog=150e-6)
event_id = 1
epochs_condition_1 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_1.pick_channels([ch_name])
# Load condition 2
event_id = 2
epochs_condition_2 = mne.Epochs(raw, events, event_id, tmin, tmax,
picks=picks, baseline=(None, 0),
reject=reject, preload=True)
epochs_condition_2.pick_channels([ch_name])
Explanation: Set parameters
End of explanation
decim = 2
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = 1.5
tfr_epochs_1 = tfr_morlet(epochs_condition_1, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_2 = tfr_morlet(epochs_condition_2, freqs,
n_cycles=n_cycles, decim=decim,
return_itc=False, average=False)
tfr_epochs_1.apply_baseline(mode='ratio', baseline=(None, 0))
tfr_epochs_2.apply_baseline(mode='ratio', baseline=(None, 0))
epochs_power_1 = tfr_epochs_1.data[:, 0, :, :] # only 1 channel as 3D matrix
epochs_power_2 = tfr_epochs_2.data[:, 0, :, :] # only 1 channel as 3D matrix
Explanation: Factor to downsample the temporal dimension of the TFR computed by
tfr_morlet. Decimation occurs after frequency decomposition and can
be used to reduce memory usage (and possibly computational time of downstream
operations such as nonparametric statistics) if you don't need high
spectrotemporal resolution.
End of explanation
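Decimation by an integer factor amounts to keeping every `decim`-th sample along the time axis, which divides the effective sampling frequency by the same factor. MNE applies this inside `tfr_morlet`; the idea itself is just slicing (the 600.0 Hz rate below is a placeholder, not the dataset's actual rate):

```python
decim = 2

# A fake "time axis" of 10 samples; decimation keeps every decim-th one.
samples = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
decimated = samples[::decim]

# The sampling frequency is reduced by the same factor.
sfreq = 600.0                    # placeholder sampling rate [Hz]
sfreq_decimated = sfreq / decim
```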
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([epochs_power_1, epochs_power_2],
n_permutations=100, threshold=threshold, tail=0)
Explanation: Compute statistic
End of explanation
times = 1e3 * epochs_condition_1.times # change unit to ms
evoked_condition_1 = epochs_condition_1.average()
evoked_condition_2 = epochs_condition_2.average()
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 1, 1)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
plt.imshow(T_obs,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='gray')
plt.imshow(T_obs_plot,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', cmap='RdBu_r')
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked_contrast = mne.combine_evoked([evoked_condition_1, evoked_condition_2],
weights=[1, -1])
evoked_contrast.plot(axes=ax2)
plt.show()
Explanation: View time-frequency plots
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
GCE Lab 3 - Constrain Galaxy Model
This notebook presents how to plot the basic galaxy evolution properties of your simple Milky Way model. Those plots will allow you to calibrate your model against several observations (taken from Kubryk et al. 2015).
Step1: Your Tasks
Understand the impact of the star formation efficiency and the galactic inflow rate on the general properties of your Milky Way model.
Find a set of input parameters that reproduce the observational constraints.
Key Equation for Star Formation
The global star formation rate ($\dot{M}_\star$) inside the galaxy model at time $t$ depends on the mass of gas $M_\mathrm{gas}$ inside the galaxy and the star formation efficiency $f_\star$ [yr$^{-1}$].
$$\dot{M}_\star(t)=f_\star M_\mathrm{gas}(t)\quad\mbox{[M$_\odot$ yr$^{-1}$]}$$
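Under this law, with a constant gas inflow rate, the gas mass obeys $dM_\mathrm{gas}/dt = \dot{M}_\mathrm{in} - f_\star M_\mathrm{gas}$ and relaxes toward a steady state where the star formation rate equals the inflow rate. A forward-Euler sketch of this behaviour (the numbers below are illustrative, and this is not the OMEGA+ solver):

```python
f_star = 1e-9    # star formation efficiency [1/yr] (illustrative value)
inflow = 2.0     # constant gas inflow rate [Msun/yr] (illustrative value)
dt = 1e6         # timestep [yr]
n_steps = 13000  # ~13 Gyr of evolution

m_gas = 0.0
for _ in range(n_steps):
    sfr = f_star * m_gas          # Mdot_star = f_star * M_gas
    m_gas += (inflow - sfr) * dt  # gas gained by inflow, lost to star formation

# At steady state, SFR -> inflow and M_gas -> inflow / f_star.
final_sfr = f_star * m_gas
```

This makes the calibration trade-off concrete: raising `f_star` lowers the equilibrium gas mass (for a fixed inflow), while raising `inflow` raises both the gas mass and the late-time star formation rate.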
1. Run GCE Model
In the following example, you will be using a galactic inflow prescription that is similar to the two-infall model presented in Chiappini et al. (1997).
Step2: 2. Plot the Star Formation History
The current star formation rate (SFR) is about 2 M$_\odot$ yr$^{-1}$. This is the value your model should have at the end of the simulation (at a Galactic age $t=13$ Gyr).
Useful Information
Step3: 3. Plot the Evolution of the Mass of Gas
The current total mass of gas in the Milky Way is about $7\times10^{9}$ M$_\odot$. This is the value your model should have at the end of the simulation (at a Galactic age $t=13$ Gyr).
Useful Information
Step4: 4. Plot the Evolution of Iron Abundance [Fe/H]
$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_\odot$
To represent the solar neighbourhood, [Fe/H] in your model should reach zero (solar value) about 4.6 Gyr before the end of the simulation, representing the moment the Sun formed.
Useful Information
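The bracket notation above is straightforward to evaluate given number densities. A small helper is sketched below; the solar ratio value is a made-up placeholder, not the actual solar Fe/H:

```python
from math import log10

def abundance_bracket(n_a, n_b, solar_ratio):
    """[A/B] = log10(n_A / n_B) - log10(n_A / n_B)_solar."""
    return log10(n_a / n_b) - log10(solar_ratio)

solar_fe_h = 3.0e-5  # placeholder solar (n_Fe / n_H) ratio

# Gas with exactly the solar ratio gives [Fe/H] = 0 by construction;
# a ratio ten times below solar gives [Fe/H] = -1.
value_solar = abundance_bracket(3.0e-5, 1.0, solar_fe_h)
value_tenth = abundance_bracket(3.0e-6, 1.0, solar_fe_h)
```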
Step5: 5. Plot the Evolution of the Galactic Inflow Rate
The current galactic inflow rate estimated for the Milky Way is about 1 M$_\odot$ yr$^{-1}$. | Python Code:
# Import the standard Python packages
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Two-zone galactic chemical evolution code
import JINAPyCEE.omega_plus as omega_plus
# Matplotlib option
%matplotlib inline
Explanation: GCE Lab 3 - Constrain Galaxy Model
This notebook presents how to plot the basic galaxy evolution properties of your simple Milky Way model. Those plots will allow you to calibrate your model against several observations (taken from Kubryk et al. 2015).
End of explanation
# \\\\\\\\\\ Modify below \\\\\\\\\\\\
# ====================================
# Star formation efficiency (f_\star) --> [dimensionless]
# Original value in the notebook --> 1e-11
sfe = 1e-11
# Magnitude (strength) of the galactic inflow rate
# Original value in the notebook --> 0.1
in_mag = 0.1
# ====================================
# ////////// Modify above ////////////
# Run OMEGA+ with the first set of parameters
op = omega_plus.omega_plus(sfe=sfe, special_timesteps=200, t_star=1.0, \
exp_infall = [[in_mag*40, 0.0, 0.8e9], [in_mag*5, 1.0e9, 7.0e9]])
Explanation: Your Tasks
Understand the impact of the star formation efficiency and the galactic inflow rate on the general properties of your Milky Way model.
Find a set of input parameters that reproduce the observational constraints.
Key Equation for Star Formation
The global star formation rate ($\dot{M}_\star$) inside the galaxy model at time $t$ depends on the mass of gas $M_\mathrm{gas}$ inside the galaxy and the star formation efficiency $f_\star$ [yr$^{-1}$].
$$\dot{M}_\star(t)=f_\star M_\mathrm{gas}(t)\quad\mbox{[M$_\odot$ yr$^{-1}$]}$$
1. Run GCE Model
In the following example, you will be using a galactic inflow prescription that is similar to the two-infall model presented in Chiappini et al. (1997).
End of explanation
# Set the figure size
fig = plt.figure(figsize=(12,7))
matplotlib.rcParams.update({'font.size': 16.0})
# Plot the evolution of the star formation rate (SFR)
op.inner.plot_star_formation_rate(color='r', marker=' ', label="Prediction")
# Plot the observational constraint (cyan color)
plt.plot([13e9,13e9], [0.65,3.0], linewidth=6, color='c', alpha=0.5, label="Observation")
# Labels and legend
plt.xlabel('Galactic age [yr]', fontsize=16)
plt.legend()
# Print the total stellar mass formed
print("Integrated stellar mass:",'%.2e'%sum(op.inner.history.m_locked),'M_sun')
print(".. Should between 3.00e+10 and 5.00e+10 M_sun")
Explanation: 2. Plot the Star Formation History
The current star formation rate (SFR) is about 2 M$_\odot$ yr$^{-1}$. This is the value your model should have at the end of the simulation (at a Galactic age $t=13$ Gyr).
Useful Information: With a higher star formation efficiency, the gas reservoir will be converted into stars more rapidly.
Useful Information: The magnitude of the star formation rate is very sensitive to the galactic inflow rate.
End of explanation
# Set the figure size
fig = plt.figure(figsize=(12,7))
matplotlib.rcParams.update({'font.size': 16.0})
# Plot the evolution of the mass of gas in the interstellar medium (ISM)
op.inner.plot_totmasses(color='r', marker=' ', label="Prediction")
# Plot the observational constraint
plt.plot([12.9e9,12.9e9], [3.6e9,12.6e9], linewidth=6, color='c', alpha=0.5, label="Observation")
# Labels and legend
plt.xscale('linear')
plt.xlabel('Galactic age [yr]')
plt.ylim(1e8,1e11)
plt.legend()
Explanation: 3. Plot the Evolution of the Mass of Gas
The current total mass of gas in the Milky Way is about $7\times10^{9}$ M$_\odot$. This is the value your model should have at the end of the simulation (at a Galactic age $t=13$ Gyr).
Useful Information: The mass of gas depends strongly on the galactic inflow rate.
End of explanation
# Set the figure size
fig = plt.figure(figsize=(12,7.0))
matplotlib.rcParams.update({'font.size': 16.0})
# Plot the evolution of [Fe/H], the iron abundance of the gas inside the galaxy
op.inner.plot_spectro(color='r', marker=" ", label="Prediction")
# Plot the solar value (black dotted lines)
t_Sun = 13.0e9 - 4.6e9
plt.plot([t_Sun,t_Sun], [-2,1], ':k')
plt.plot([0,13e9], [0,0], ':k')
# Labels and legend
plt.ylim(-2,1)
plt.xscale('linear')
plt.xlabel('Galactic age [yr]')
plt.legend()
Explanation: 4. Plot the Evolution of Iron Abundance [Fe/H]
$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_\odot$
To represent the solar neighbourhood, [Fe/H] in your model should reach zero (solar value) about 4.6 Gyr before the end of the simulation, representing the moment the Sun formed.
Useful Information: The [Fe/H] is mostly sensitive to the star formation efficiency. In other words, it is sensitive to the mass of gas (H) in which stars inject their metals (Fe).
End of explanation
# Set the figure size
fig = plt.figure(figsize=(10,6.0))
matplotlib.rcParams.update({'font.size': 16.0})
# Plot the evolution of the inflow rate
op.inner.plot_inflow_rate(color='r', marker=" ", label="Prediction")
# Plot the observational constraint
plt.plot([13e9,13e9], [0.6,1.6], linewidth=6, color='c', alpha=0.5, label="Observation")
# Labels and legend
plt.yscale('log')
plt.xlabel('Galactic age [yr]')
plt.legend()
Explanation: 5. Plot the Evolution of the Galactic Inflow Rate
The current galactic inflow rate estimated for the Milky Way is about 1 M$_\odot$ yr$^{-1}$.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
%run ../../stacks_queues/stack/stack.py
%load ../../stacks_queues/stack/stack.py
def hanoi(num_disks, src, dest, buff):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement the Towers of Hanoi with 3 towers and N disks.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
Can we assume we already have a stack class that can be used for this problem?
Yes
Test Cases
None tower(s)
0 disks
1 disk
2 or more disks
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_hanoi.py
from nose.tools import assert_equal
class TestHanoi(object):
def test_hanoi(self):
num_disks = 3
src = Stack()
buff = Stack()
dest = Stack()
print('Test: None towers')
hanoi(num_disks, None, None, None)
print('Test: 0 disks')
hanoi(num_disks, src, dest, buff)
assert_equal(dest.pop(), None)
print('Test: 1 disk')
src.push(5)
hanoi(num_disks, src, dest, buff)
assert_equal(dest.pop(), 5)
print('Test: 2 or more disks')
for i in range(num_disks, -1, -1):
src.push(i)
hanoi(num_disks, src, dest, buff)
for i in range(0, num_disks):
assert_equal(dest.pop(), i)
print('Success: test_hanoi')
def main():
test = TestHanoi()
test.test_hanoi()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Modules
Step1: Import Scraped Reviews
Step2: Import Hu & Liu (2004) Word Dictionary and Wrangled Large Users
Step3: Connect to the AWS Instance and get the restaurant reviews from the cleaned data database
Step4: Testing
Supply User ID
Get all restaurant IDs that the user has reviewed
Do a random 75 (training) / 25 (testing) split of the restaurant IDs
For the training sample, get all of the user's reviews for those restaurants
For each restaurant in the testing sample, get all of that restaurant's reviews
Train each of the (feature, model) combinations on the reviews in the training sample
For each review in the testing sample, classify that review using the model
If the total proportion of positive reviews is greater than 70% for each restaurant, classify that restaurant's rating as positive.
Else, classify that restaurant's rating as negative
For each restaurant's predicted rating, check against what the user actually thought. Use these comparisons to determine log-loss, accuracy, and precision
The features and models we use are
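The split-and-threshold steps above can be sketched as follows. The 75/25 split fraction and the 70% positivity threshold mirror the description; the ID names and helper functions are illustrative, not the notebook's actual code:

```python
import random

def split_restaurants(biz_ids, train_frac=0.75, seed=0):
    """Random 75/25 split of restaurant IDs into train/test sets."""
    rng = random.Random(seed)
    ids = list(biz_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

def classify_restaurant(predicted_labels, threshold=0.70):
    """Rate a restaurant positive (1) if more than 70% of its
    predicted review labels (1 = positive) are positive."""
    positive_share = sum(predicted_labels) / len(predicted_labels)
    return 1 if positive_share > threshold else 0

# Hypothetical restaurant IDs and per-review predictions.
train_ids, test_ids = split_restaurants(['b%d' % i for i in range(20)])
liked = classify_restaurant([1, 1, 1, 0])     # 75% positive -> rated positive
disliked = classify_restaurant([1, 0, 0, 0])  # 25% positive -> rated negative
```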
Step5: The best performing (feature, model) combination for this user was a combination of all of the features in a linear support vector machine.
Step6: Make a Recommendation
Step7: Let's look at the top 5 restaurants for each location
Step8: Let's look at the bottom 5 restaurants for each location
Step9: Let's take a step back and look at the user's word choice and tone. First let's look at the user_df dataframe, which contains all of the user's reviews and ratings.
Step10: Let's look at the most important TF-IDF features for our linear SVM model
Step11: Let's take a look at the topics generated by LDA. Specifically, we focus only on the good reviews and plot the topics that appeared most often in the good reviews.
Step12: Let's look at the top 5 words in each of these topics
Step13: Next, let's take a look at what kind of restaurants the user likes
Step14: The restaurants that we've recommended include many traditional and new American restaurants, as well as Burger places. This suggests that our recommendation system does a good job picking up on the latent preferences of the user.
With LSA, we chose the number of topics to be the mean of the singular values; this choice is essentially arbitrary and has no principled justification.
The words in the representation below are word vectors in the initial term-document matrix; the coefficient on each word vector results from the singular value decomposition.
import json
import pandas as pd
import re
import random
import matplotlib.pyplot as plt
%matplotlib inline
from ast import literal_eval as make_tuple
from scipy import sparse
import numpy as np
from pymongo import MongoClient
from nltk.corpus import stopwords
from sklearn import svm
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import accuracy_score
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_extraction.text import TfidfVectorizer
import sys
sys.path.append('../machine_learning')
import yelp_ml as yml
reload(yml)
from gensim import corpora, models, similarities, matutils
import tqdm
Explanation: Import Modules
End of explanation
dc_reviews = json.load(open("../Yelp_web_scrapper/dc_reviews.json"))
newyork_reviews = json.load(open("../Yelp_web_scrapper/newyork_reviews.json"))
austin_reviews = json.load(open("../Yelp_web_scrapper/austin_reviews.json"))
chicago_reviews = json.load(open("../Yelp_web_scrapper/chicago_reviews.json"))
la_reviews = json.load(open("../Yelp_web_scrapper/la_reviews.json"))
scrapped_reviews = {'dc': dc_reviews, 'ny': newyork_reviews,
'austin': austin_reviews, 'chicago': chicago_reviews,
'la': la_reviews}
Explanation: Import Scraped Reviews
End of explanation
lh_neg = open('../input/negative-words.txt', 'r').read()
lh_neg = lh_neg.split('\n')
lh_pos = open('../input/positive-words.txt', 'r').read()
lh_pos = lh_pos.split('\n')
users = json.load(open("cleaned_large_user_dictionary.json"))
word_list = list(set(lh_pos + lh_neg))
Explanation: Import Hu & Liu (2004) Word Dictionary and Wrangled Large Users
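The positive/negative word lists loaded here feed the "Sentiment %" feature used later in the pipeline. A minimal sketch of such a feature, with a hypothetical helper name (the real computation lives inside yelp_ml and is not shown in this chunk):

```python
def sentiment_percentages(review, pos_words, neg_words):
    """Fraction of tokens found in the positive / negative lexicons."""
    tokens = review.lower().split()
    if not tokens:
        return 0.0, 0.0
    pos = sum(t in pos_words for t in tokens)
    neg = sum(t in neg_words for t in tokens)
    return pos / len(tokens), neg / len(tokens)
```

For example, sentiment_percentages("good good bad food", {"good"}, {"bad"}) gives (0.5, 0.25).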
End of explanation
ip = '54.175.170.119'
conn = MongoClient(ip, 27017)
conn.database_names()
db = conn.get_database('cleaned_data')
reviews = db.get_collection('restaurant_reviews')
Explanation: Connect to the AWS Instance and get the restaurant reviews from the cleaned data database
End of explanation
string_keys_dict = {}
for j in tqdm.tqdm(range(55, 56)):
#Generate a dataframe that has the user's review text, review rating, and restaurant ID
test_results = {}
user_df = yml.make_user_df(users[users.keys()[j]])
#Only predict for the user if they have at least 20 bad ratings
if len([x for x in user_df['rating'] if x < 4]) < 20:
string_keys_dict[str(users.keys()[j])] = test_results
continue
else:
business_ids = list(set(user_df['biz_id']))
restreview = {}
#Create a training and test sample from the user reviewed restaurants
#using a random 25% subset of all the restaurants the user has reviewed
split_samp = .25
len_random = int(len(business_ids) * split_samp)
test_set = random.sample(business_ids, len_random)
training_set = [x for x in business_ids if x not in test_set]
sub_train_reviews, train_labels, train_reviews, train_ratings = [], [], [], []
#Create a list with the tuple (training review, training rating)
for rest_id in training_set:
train_reviews.append((user_df[user_df['biz_id'] == rest_id]['review_text'].iloc[0],
user_df[user_df['biz_id'] == rest_id]['rating'].iloc[0]))
#Note that the distribution is heavily skewed towards good reviews.
#Therefore, we create a training sample with the same amount of
#positive and negative reviews
sample_size = min(len([x[1] for x in train_reviews if x[1] < 4]),
len([x[1] for x in train_reviews if x[1] >= 4]))
bad_reviews = [x for x in train_reviews if x[1] < 4]
good_reviews = [x for x in train_reviews if x[1] >= 4]
for L in range(0, int(float(sample_size)/float(2))):
sub_train_reviews.append(bad_reviews[L][0])
sub_train_reviews.append(good_reviews[L][0])
train_labels.append(bad_reviews[L][1])
train_labels.append(good_reviews[L][1])
#Make the train labels binary
train_labels = [1 if x >=4 else 0 for x in train_labels]
#Sanity check for non-empty training reviews
if not sub_train_reviews:
string_keys_dict[str(users.keys()[j])] = test_results
continue
else:
for i in range(0, len(business_ids)):
rlist = []
for obj in reviews.find({'business_id':business_ids[i]}):
rlist.append(obj)
restreview[business_ids[i]] = rlist
restaurant_df = yml.make_biz_df(users.keys()[j], restreview)
#Make a FeatureUnion object with the desired features then fit to train reviews
feature_selection = {"sent_tf":(True, True, False),
"sent": (True,False,False),
"tf_lda": (False,True,True),
"all": (True, True, True)}
for feature in feature_selection.keys():
#Make a FeatureUnion object with the desired features then fit to train reviews
comb_features = yml.make_featureunion(sent_percent=feature_selection[feature][0],
tf = feature_selection[feature][1],
lda = feature_selection[feature][2])
delta_vect = None
comb_features.fit(sub_train_reviews)
train_features = comb_features.transform(sub_train_reviews)
#Fit LSI model and return number of LSI topics
lsi, topics, dictionary = yml.fit_lsi(sub_train_reviews)
train_lsi = yml.get_lsi_features(sub_train_reviews, lsi, topics, dictionary)
#Stack the LSI and combined features together
train_features = sparse.hstack((train_features, train_lsi))
train_features = train_features.todense()
#fit each model in turn
model_runs = {"svm": (True, False, False),
"rf": (False, True, False),
"naive_bayes": (False, False, True)}
for model_run in model_runs.keys():
clf = yml.fit_model(train_features, train_labels, svm_clf = model_runs[model_run][0],
RandomForest = model_runs[model_run][1],
nb = model_runs[model_run][2])
threshold = 0.7
error = yml.test_user_set(test_set, clf, restaurant_df, user_df, comb_features,
threshold, lsi, topics, dictionary, delta_vect)
test_results[str((feature, model_run))] = (yml.get_log_loss(error),
yml.get_accuracy_score(error),
yml.get_precision_score(error))
string_keys_dict[str(users.keys()[j])] = test_results
with open('test_results.json', 'wb') as fp:
json.dump(string_keys_dict, fp)
Explanation: Testing
Supply User ID
Get all restaurant IDs that the user has reviewed
Do a random 75 (training)/25(testing) split of the restaurant IDs
For the training sample, get all of the user's reviews for those restaurants
For each restaurant in the testing sample, get all of that restaurant's reviews
Train each of the (feature, model) combinations on the reviews in the training sample
For each review in the testing sample, classify that review using the model
If the total proportion of positive reviews is greater than 70% for each restaurant, classify that restaurant's rating as positive.
Else, classify that restaurant's rating as negative
For each restaurant's predicted rating, check against what the user actually thought
Use these to determine log-loss, accuracy, and precision
The features and models we use are:
Features:
(Sentiment %, TF-IDF w/ (2,2) N-Gram, LSA)
(Sentiment %, LSA)
(TF-IDF w/ (2,2) N-Gram, LDA, LSA)
(Sentiment %, TF-IDF w/ (2,2) N-Gram, LDA, LSA)
Models:
Linear Support Vector Machine
Random Forest
Naive Bayes
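The restaurant-level decision rule from the list above (call a restaurant positive when more than the threshold share of its reviews are classified positive) can be sketched as a standalone helper; the function name is illustrative and not part of yelp_ml:

```python
def aggregate_restaurant_rating(review_predictions, threshold=0.7):
    """Collapse per-review labels (1 = positive, 0 = negative) into one restaurant label."""
    if not review_predictions:
        raise ValueError("need at least one review prediction")
    positive_share = sum(review_predictions) / len(review_predictions)
    return 1 if positive_share > threshold else 0
```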
End of explanation
test_results_df = pd.DataFrame.from_dict(test_results, orient = 'index')
test_results_df['model'] = test_results_df.index
test_results_df.columns = ['log_loss', 'accuracy', 'precision', 'model']
test_results_df.plot(x = 'model', y = ['log_loss', 'accuracy', 'precision'], kind = 'bar')
ax = plt.subplot(111)
ax.legend(bbox_to_anchor=(1.4, 1))
plt.show()
Explanation: The best performing (feature, model) combination for this user was a combination of all of the features in a linear support vector machine.
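Conceptually, picking the winner is an argmax over the stored metrics. A compact sketch, assuming (as in the cell below) that each test_results value is a (log_loss, accuracy, precision) tuple keyed by the (feature, model) string:

```python
def best_combo(test_results):
    """Return the (feature, model) key with the highest precision (third metric)."""
    return max(test_results, key=lambda k: test_results[k][2])
```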
End of explanation
top_results = []
#Get feature and model combination that yields the highest precision
for key in test_results.keys():
feat_model = make_tuple(key)
if not top_results:
top_results = [(feat_model,test_results[key][2])]
else:
if test_results[key][2] > top_results[0][1]:
top_results.pop()
top_results = [(feat_model, test_results[key][2])]
feat_result = top_results[0][0][0]
model_result = top_results[0][0][1]
for j in tqdm.tqdm(range(55, 56)):
user_df = yml.make_user_df(users[users.keys()[j]])
business_ids = list(set(user_df['biz_id']))
#Create a list of training reviews and training ratings
for rest_id in business_ids:
train_reviews.append((user_df[user_df['biz_id'] == rest_id]['review_text'].iloc[0],
user_df[user_df['biz_id'] == rest_id]['rating'].iloc[0]))
#Create an even sample s.t. len(positive_reviews) = len(negative_reviews)
sample_size = min(len([x[1] for x in train_reviews if x[1] < 4]),
len([x[1] for x in train_reviews if x[1] >= 4]))
bad_reviews = [x for x in train_reviews if x[1] < 4]
good_reviews = [x for x in train_reviews if x[1] >= 4]
train_labels = []
sub_train_reviews = []
for L in range(0, int(float(sample_size)/float(2))):
sub_train_reviews.append(bad_reviews[L][0])
sub_train_reviews.append(good_reviews[L][0])
train_labels.append(bad_reviews[L][1])
train_labels.append(good_reviews[L][1])
#Make the train labels binary
train_labels = [1 if x >=4 else 0 for x in train_labels]
#Fit LSI model and return number of LSI topics
lsi, topics, dictionary = yml.fit_lsi(sub_train_reviews)
#Make a FeatureUnion object with the desired features then fit to train reviews
feature_selection = {"sent_tf":(True, True, False),
"sent": (True,False,False),
"tf_lda": (False,True,True),
"all": (True, True, True)}
top_feature = feature_selection['all']
comb_features = yml.make_featureunion(sent_percent=top_feature[0],
tf = top_feature[1],
lda = top_feature[2])
comb_features.fit(sub_train_reviews)
train_features = comb_features.transform(sub_train_reviews)
train_lsi = yml.get_lsi_features(sub_train_reviews, lsi, topics, dictionary)
train_features = sparse.hstack((train_features, train_lsi))
train_features = train_features.todense()
#Fit LSI model and return number of LSI topics
lsi, topics, dictionary = yml.fit_lsi(sub_train_reviews)
#Get the top performing model and fit using that model
model_runs = {"svm": (True, False, False),
"rf": (False, True, False),
"naive_bayes": (False, False, True)}
top_model = model_runs['svm']
clf = yml.fit_model(train_features, train_labels, svm_clf = top_model[0],
RandomForest = top_model[1],
nb = top_model[2])
threshold = 0.7
user_results = {}
for key in scrapped_reviews.keys():
user_results[key] = yml.make_rec(scrapped_reviews[key], clf, threshold, comb_features,
lsi, topics, dictionary)
################################################################
#Collect the results into a list of tuples, then select the top
#5 most confident good recs and top 5 most confident bad recs
#for each location
################################################################
tuple_results = {}
for key in user_results.keys():
tuple_results[key] = []
for result in user_results[key]:
tuple_results[key].append((result[1], result[2], result[3]))
tuple_results[key] = sorted(tuple_results[key], key=lambda tup: tup[1])
Explanation: Make a Recommendation
End of explanation
for key in tuple_results.keys():
print "The top 5 recommendations for " + key + " are: "
print tuple_results[key][-5:]
Explanation: Let's look at the top 5 restaurants for each location
End of explanation
for key in tuple_results.keys():
print "The bottom 5 recommendations for " + key + " are: "
print tuple_results[key][0:5]
Explanation: Let's look at the bottom 5 restaurants for each location
End of explanation
user_df = yml.make_user_df(users[users.keys()[j]])
user_df.head()
Explanation: Let's take a step back and look at the user's word choice and tone. First let's look at the user_df dataframe, which contains all of the user's reviews and ratings.
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
import matplotlib.pyplot as plt
tfv = TfidfVectorizer(ngram_range = (2,2), stop_words = 'english')
tfv.fit(sub_train_reviews)
X_train = tfv.transform(sub_train_reviews)
ex_clf = svm.LinearSVC()
ex_clf.fit(X_train, train_labels)
yml.plot_coefficients(ex_clf, tfv.get_feature_names())
Explanation: Let's look at the most important TF-IDF features for our linear SVM model
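What yml.plot_coefficients presumably visualizes is a ranking of features by the magnitude of their linear-SVM coefficients. That ranking can be sketched without sklearn (illustrative helper):

```python
def top_coefficients(feature_names, coefs, k=3):
    """Pair features with coefficients and keep the k largest by absolute value."""
    pairs = sorted(zip(feature_names, coefs), key=lambda p: abs(p[1]), reverse=True)
    return pairs[:k]
```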
End of explanation
#Let's take a look at sample weightings for the user's GOOD reviews
good_reviews = [a for (a,b) in zip(sub_train_reviews, train_labels) if b == 1]
vectorizer = TfidfVectorizer(ngram_range = (2,2), stop_words = 'english')
tf = vectorizer.fit(sub_train_reviews)
lda_fit = LatentDirichletAllocation(n_topics=50).fit(tf.transform(sub_train_reviews))
tf_good = tf.transform(good_reviews)
lda_good = lda_fit.transform(tf_good)
#Take the average of each topic weighting amongst good reviews and graph each topic
topic_strings = ["Topic " + str(x) for x in range(0,50)]
topic_dict = {}
for review in lda_good:
for x in range(0,50):
try:
topic_dict[topic_strings[x]].append(review[x])
except:
topic_dict[topic_strings[x]] = [review[x]]
average_top_weight = {}
for x in range(0,50):
    average_top_weight[topic_strings[x]] = (reduce(lambda x, y: x + y, topic_dict[topic_strings[x]])
                                            / len(topic_dict[topic_strings[x]]))
##############
#Plot the average weights for each topic in the good reviews
##############
#Find the average topic weights for each topic
average_topics = pd.DataFrame.from_dict(average_top_weight, orient = 'index')
average_topics.columns = ['topic_weight']
average_topics['topic'] = average_topics.index
average_topics['topic'] = [int(x[5:8]) for x in average_topics['topic']]
average_topics = average_topics.sort_values(['topic'])
x_max = average_topics.sort_values('topic_weight')['topic_weight'][-2] + 1
#Make the plot
good_plt = average_topics.plot(x='topic', y='topic_weight', kind='scatter', legend=False)
yml.label_point(average_topics.topic, average_topics.topic_weight, good_plt)
good_plt.set_ylim(0, x_max)
good_plt.set_xlim(0, 50)
Explanation: Let's take a look at the topics generated by LDA. Specifically, we focus only on the good reviews and plot the topics that appeared most often in the good reviews.
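The try/except accumulation in the cell above boils down to a column-wise mean over the per-review topic-weight rows. A dependency-free sketch:

```python
def average_topic_weights(doc_topic_rows):
    """Mean weight of each topic across a list of per-document weight rows."""
    n = len(doc_topic_rows)
    n_topics = len(doc_topic_rows[0])
    return [sum(row[j] for row in doc_topic_rows) / n for j in range(n_topics)]
```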
End of explanation
#View the top words in the outlier topics from the LDA representation
yml.label_point(average_topics.topic, average_topics.topic_weight, good_plt)
a = pd.concat({'x': average_topics.topic,
'y': average_topics.topic_weight}, axis=1)
top_topics = [a[a['y'] == max(a['y'])]['x'][0]]
a = a[a['y'] != max(a['y'])]
for i, point in a.iterrows():
if (point['y'] > (a['y'].mean() + 1.5 * a['y'].std()) ):
top_topics.append(int(point['x']))
#Display top words in each topic
no_top_words = 10
tf_feature_names = vectorizer.get_feature_names()
yml.display_topics(lda_fit, tf_feature_names, no_top_words, top_topics)
Explanation: Let's look at the top 5 words in each of these topics
End of explanation
good_restaurants = user_df[user_df['rating'] >= 4]['biz_id']
bis_data = db.get_collection('restaurants')
#Pull each restaurant attribute from the MongoDB
restreview_good = {}
good_ids = list(good_restaurants)
for i in tqdm.tqdm(range(0, len(good_ids))):
    rlist = []
    for obj in bis_data.find({'business_id': good_ids[i]}):
        rlist.append(obj)
    restreview_good[good_ids[i]] = rlist
#Get all the categories for the good restaurants
good_list = []
for key in restreview_good.keys():
good_list.extend(restreview_good[key][0]['categories'])
good_list = [word for word in good_list if (word != u'Restaurants')]
good_list = [word for word in good_list if (word != u'Food')]
unique_categories = list(set(good_list))
category_count = [good_list.count(cat) for cat in unique_categories]
category_list = [(a,b) for (a,b) in zip(unique_categories, category_count) if b >= 10]
unique_categories = [a for (a,b) in category_list]
category_count = [b for (a,b) in category_list]
biz_category = pd.DataFrame({'category': unique_categories, 'count': category_count})
#Plot only categories that show up at least 10 times
good_plt = biz_category.plot(x='category', y='count', kind='bar', legend=False)
good_plt.set_ylim(0,40)
Explanation: Next, let's take a look at what kind of restaurants the user likes
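The tallying above (drop the generic Restaurants/Food labels, keep only the frequent categories) is a classic Counter job; a hypothetical standalone version:

```python
from collections import Counter

def top_categories(category_lists, exclude=("Restaurants", "Food"), min_count=2):
    """Count categories across restaurants, skipping generic labels and rare ones."""
    counts = Counter(c for cats in category_lists for c in cats if c not in exclude)
    return [(c, n) for c, n in counts.most_common() if n >= min_count]
```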
End of explanation
print "The number of LSA topics is: " + str(lsi.num_topics)
for x in lsi.show_topics():
print str(x) + "\n"
#Let's take a look at sample weightings for the user's GOOD reviews
good_reviews = [a for (a,b) in zip(sub_train_reviews, train_labels) if b == 1]
lsi, topics, dictionary = yml.fit_lsi(good_reviews)
train_lsi = yml.get_lsi_features(good_reviews, lsi, topics, dictionary).todense()
train_lsi = [np.array(x[0])[0] for x in train_lsi]
#Take the average of each topic weighting amongst good reviews and graph each topic
lsi_topic_strings = ["Topic " + str(x) for x in range(0,int(lsi.num_topics))]
lsi_topic_dict = {}
for review in train_lsi:
for x in range(0,int(lsi.num_topics)):
try:
lsi_topic_dict[lsi_topic_strings[x]].append(review[x])
except:
lsi_topic_dict[lsi_topic_strings[x]] = [review[x]]
average_lsi_weight = {}
for x in range(0,int(lsi.num_topics)):
    average_lsi_weight[lsi_topic_strings[x]] = (reduce(lambda x, y: x + y,
                                                lsi_topic_dict[lsi_topic_strings[x]])
                                                / len(lsi_topic_dict[lsi_topic_strings[x]]))
##############
#Plot the average weights for each topic in the good reviews
##############
#Find the average topic weights for each topic
lsi_average_topics = pd.DataFrame.from_dict(average_lsi_weight, orient = 'index')
lsi_average_topics.columns = ['topic_weight']
lsi_average_topics['topic'] = lsi_average_topics.index
lsi_average_topics['topic'] = [int(x[5:8]) for x in lsi_average_topics['topic']]
lsi_average_topics = lsi_average_topics.sort_values(['topic'])
x_max = lsi_average_topics.sort_values('topic_weight')['topic_weight'][-2] + 1
#Make the plot
lsi_good_plt = lsi_average_topics.plot(x='topic', y='topic_weight', kind='scatter', legend=False)
yml.label_point(lsi_average_topics.topic, lsi_average_topics.topic_weight, lsi_good_plt)
Explanation: The restaurants that we've recommended include many traditional and new American restaurants, as well as Burger places. This suggests that our recommendation system does a good job picking up on the latent preferences of the user.
With LSA, we arbitrarily chose the number of topics to be the mean of the singular values. This choice is purely heuristic and has no principled justification.
The words in the representation below are word vectors in the initial term-document matrix; the coefficient on each word vector comes from the singular value decomposition.
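Written out, the heuristic is simply: topic count = mean of the singular values, rounded to an integer. A hypothetical helper (sigma stands for the singular values; the actual rule lives inside yml.fit_lsi and is not shown in this chunk):

```python
def num_topics_from_singular_values(sigma):
    """Heuristic topic count: the (rounded) mean of the singular values."""
    mean = sum(sigma) / len(sigma)
    return max(1, int(round(mean)))
```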
End of explanation |
9,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Task
Include the curvature as an extra parameter to your likelihood
Step4: Now let's analyze the chains
First, just a histogram
Step5: How to obtain the confidence regions of a variable?
Step6: You can use the "central credible interval" | Python Code:
import numpy as np
import scipy.integrate as integrate
def E(z,OmDE,OmM):
    '''This function computes the integrand for the luminosity distance
    (the universe may be curved: Omk = 1 - OmDE - OmM)
    z -> float
    OmDE -> float
    OmM -> float
    gives
    E -> float
    '''
Omk=1-OmDE-OmM
return 1/np.sqrt(OmM*(1+z)**3+OmDE+Omk*(1+z)**2)
def dl(z,OmDE,OmM,h=0.7):
    '''This function computes the luminosity distance
    z -> float
    OmDE -> float
    OmM -> float
    h -> float
    returns
    dl -> float
    '''
inte=integrate.quad(E,0,z,args=(OmDE,OmM))
    # Speed of light in km/s
c = 299792.458
    # Hubble constant in km/s/Mpc
Ho = 100*h
Omk=1-OmDE-OmM
distance_factor = c*(1+z)/Ho
if Omk>1e-10:
omsqrt = np.sqrt(Omk)
return distance_factor / omsqrt * np.sinh(omsqrt * inte[0])
elif Omk<-1e-10:
omsqrt = np.sqrt(-Omk)
return distance_factor / omsqrt * np.sin(omsqrt * inte[0])
else:
return distance_factor * inte[0]
zandmu = np.loadtxt('../data/SCPUnion2.1_mu_vs_z.txt', skiprows=5,usecols=(1,2))
covariance = np.loadtxt('../data/SCPUnion2.1_covmat_sys.txt')
inv_cov = np.linalg.inv(covariance)
dl = np.vectorize(dl)
def loglike(params,h=0.7):
    '''This function computes the logarithm of the likelihood. It receives a vector
    params -> vector with two components (Omega Dark Energy, Omega Matter)
    '''
OmDE = params[0]
OmM = params[1]
    # Compute the difference between the reported and the computed distance modulus
muteo = 5.*np.log10(dl(zandmu[:,0],OmDE,OmM,h))+25
print dl(zandmu[:,0],OmDE,OmM,h)
delta = muteo-zandmu[:,1]
chisquare=np.dot(delta,np.dot(inv_cov,delta))
return -chisquare/2
loglike([0.6,0.3])
def markovchain(steps, step_width, pasoinicial):
chain=[pasoinicial]
likechain=[loglike(chain[0])]
accepted = 0
for i in range(steps):
rand = np.random.normal(0.,1.,len(pasoinicial))
newpoint = chain[i] + step_width*rand
liketry = loglike(newpoint)
if np.isnan(liketry) :
            print 'Something odd happened'
liketry = -1E50
accept_prob = 0
elif liketry > likechain[i]:
accept_prob = 1
else:
accept_prob = np.exp(liketry - likechain[i])
if accept_prob >= np.random.uniform(0.,1.):
chain.append(newpoint)
likechain.append(liketry)
accepted += 1
else:
chain.append(chain[i])
likechain.append(likechain[i])
chain = np.array(chain)
likechain = np.array(likechain)
    print "Acceptance ratio =",float(accepted)/float(steps)
return chain, likechain
chain1, likechain1 = markovchain(100,[0.1,0.1],[0.7,0.3])
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(chain1[:,0],'o')
plt.plot(chain1[:,1],'o')
columna1=chain1[:,0]
np.sqrt(np.mean(columna1**2)-np.mean(columna1)**2)
Explanation: Task
Include the curvature as an extra parameter to your likelihood
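The acceptance logic inside markovchain above is the standard Metropolis criterion applied to log-likelihoods. A minimal dependency-free sketch (the rng argument exists only to make the rule deterministic for testing):

```python
import math
import random

def metropolis_accept(log_new, log_old, rng=random.random):
    """Accept a proposal: always if it is not worse, else with probability exp(delta)."""
    if log_new >= log_old:
        return True
    return rng() < math.exp(log_new - log_old)
```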
End of explanation
import seaborn as sns
omegade=chain1[:,0]
omegadm=chain1[:,1]
sns.distplot(omegade)
sns.distplot(omegadm)
sns.jointplot(x=omegadm,y=omegade)
import corner
corner.corner(chain1)
Explanation: Now let's analyze the chains
First, just a histogram
End of explanation
mean_de=np.mean(omegade)
print(mean_de)
Explanation: How to obtain the confidence regions of a variable?
End of explanation
dimde=len(omegade)
points_outside = np.int((1-0.68)*dimde/2)
de_sorted=np.sort(omegade)
print de_sorted[points_outside], de_sorted[dimde-points_outside]
dimde=len(omegade)
points_outside = np.int((1-0.95)*dimde/2)
de_sorted=np.sort(omegade)
print de_sorted[points_outside], de_sorted[dimde-points_outside]
Explanation: You can use the "central credible interval"
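The interval computed in the next cell is the equal-tailed ("central") credible interval: sort the samples and drop (1 - level)/2 of them from each end. A small sketch, with the upper index written defensively so it stays in range:

```python
def central_credible_interval(samples, level=0.68):
    """Equal-tailed credible interval over MCMC samples of one parameter."""
    ordered = sorted(samples)
    n_out = int((1 - level) / 2 * len(ordered))
    return ordered[n_out], ordered[len(ordered) - 1 - n_out]
```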
End of explanation |
9,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GMaps on Android
The goal of this experiment is to test out GMaps on a Pixel device running Android and collect results.
Step1: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable to be configured, pointing at your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step2: Workload execution
Step3: Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
Step4: EAS-induced wakeup latencies
In this example we are looking at a specific task
Step5: Traces visualisation
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
Here each latency is plotted in order to double-check that it was truly induced by an EAS decision. In LISA, the latency plots have a red background when the system is overutilized, which shouldn't be the case here.
Step6: Overall latencies
This plot displays the whole duration of the experiment; it can be used to see how often the system was overutilized and how much latency was involved.
Step7: Kernelshark analysis | Python Code:
from conf import LisaLogging
LisaLogging.setup()
%pylab inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Import support for Android devices
from android import Screen, Workload
# Support for trace events analysis
from trace import Trace
# Support for FTrace events parsing and visualization
import trappy
import pandas as pd
import sqlite3
from IPython.display import display
def experiment():
# Configure governor
target.cpufreq.set_all_governors('sched')
# Get workload
wload = Workload.getInstance(te, 'GMaps')
# Run GMaps
wload.run(out_dir=te.res_dir,
collect="ftrace",
location_search="London British Museum",
swipe_count=10)
# Dump platform descriptor
te.platform_dump(te.res_dir)
Explanation: GMaps on Android
The goal of this experiment is to test out GMaps on a Pixel device running Android and collect results.
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
"board" : 'pixel',
# Device serial ID
# Not required if there is only one device connected to your computer
"device" : "HT67M0300128",
# Android home
# Not required if already exported in your .bashrc
"ANDROID_HOME" : "/home/vagrant/lisa/tools/",
# Folder where all the results will be collected
"results_dir" : "Gmaps_example",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_load_waking_task",
"cpu_capacity",
"cpu_frequency",
"cpu_idle",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff"
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'taskset'],
}
# Initialize a test environment using the target configuration above
te = TestEnv(my_conf, wipe=False)
target = te.target
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable to be configured, pointing at your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
End of explanation
results = experiment()
Explanation: Workload execution
End of explanation
# Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(platform, trace_file, events=my_conf['ftrace']['events'], normalize_time=False)
# Find exact task name & PID
for pid, name in trace.getTasks().iteritems():
if "GLRunner" in name:
glrunner = {"pid" : pid, "name" : name}
print("name=\"" + glrunner["name"] + "\"" + " pid=" + str(glrunner["pid"]))
Explanation: Trace analysis
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
End of explanation
# Helper functions to pinpoint issues
def find_prev_cpu(trace, taskname, time):
sdf = trace.data_frame.trace_event('sched_switch')
sdf = sdf[sdf.prev_comm == taskname]
sdf = sdf[sdf.index <= time]
sdf = sdf.tail(1)
wdf = trace.data_frame.trace_event('sched_wakeup')
wdf = wdf[wdf.comm == taskname]
wdf = wdf[wdf.index <= time]
# We're looking for the previous wake event,
# not the one related to the wake latency
wdf = wdf.tail(2)
stime = sdf.index[0]
wtime = wdf.index[1]
if stime > wtime:
res = wdf["target_cpu"].values[0]
else:
res = sdf["__cpu"].values[0]
return res
def find_next_cpu(trace, taskname, time):
wdf = trace.data_frame.trace_event('sched_wakeup')
wdf = wdf[wdf.comm == taskname]
wdf = wdf[wdf.index <= time].tail(1)
return wdf["target_cpu"].values[0]
def trunc(value, precision):
offset = pow(10, precision)
res = int(value * offset)
return float(res) / offset
# Look for latencies > 1 ms
df = trace.data_frame.latency_wakeup_df(glrunner["pid"])
df = df[df.wakeup_latency > 0.001]
# Load times at which system was overutilized (EAS disabled)
ou_df = trace.data_frame.overutilized()
# Find which wakeup latencies were induced by EAS
# Times to look at will be saved in a times.txt file
eas_latencies = []
times_file = te.res_dir + "/times.txt"
!touch {times_file}
for time, cols in df.iterrows():
# Check if cpu was over-utilized (EAS disabled)
ou_row = ou_df[:time].tail(1)
if ou_row.empty:
continue
was_eas = ou_row.iloc[0, 1] < 1.0
if (was_eas):
toprint = "{:.1f}ms @ {}".format(cols[0] * 1000, trunc(time, 5))
next_cpu = find_next_cpu(trace, glrunner["name"], time)
prev_cpu = find_prev_cpu(trace, glrunner["name"], time)
if (next_cpu != prev_cpu):
toprint += " [CPU SWITCH]"
print toprint
eas_latencies.append([time, cols[0]])
!echo {toprint} >> {times_file}
Explanation: EAS-induced wakeup latencies
In this example we are looking at a specific task: GLRunner. GLRunner is a very CPU-heavy task, and is also boosted (a member of the top-app group) in EAS, which makes it an interesting task to study.
To study the behaviour of GLRunner, we'll be looking at the wakeup decisions of the scheduler. We'll be looking for times at which the task took "too long" to wake up, i.e. it was runnable and had to wait some time to actually be run. In our case that latency threshold is (arbitrarily) set to 1ms.
We're also on the lookout for times when the task has been moved from one CPU to another. Depending on several parameters (kernel version, boost values, etc.), the task could erroneously be switched to another CPU, which could induce wakeup latencies.
Finally, we're only interested in scheduling decisions made by EAS, so we'll only be looking at wakeup latencies that occured when the system was not overutilized, i.e EAS was enabled.
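Stripped of the LISA data frames, the selection above reduces to: keep (time, latency) samples over the threshold whose timestamps fall outside every overutilized window. A hypothetical sketch:

```python
def eas_induced_latencies(latencies, overutilized_windows, threshold=0.001):
    """Filter wakeup latencies down to those plausibly caused by EAS decisions."""
    def overutilized_at(t):
        return any(start <= t < end for start, end in overutilized_windows)
    return [(t, lat) for t, lat in latencies
            if lat > threshold and not overutilized_at(t)]
```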
End of explanation
# Plot each EAS-induced latency (blue cross)
# If the background is red, system was over-utilized and the latency wasn't caused by EAS
for start, latency in eas_latencies:
trace.setXTimeRange(start - 0.002, start + 0.002)
trace.analysis.latency.plotLatency(task=glrunner["pid"], tag=str(start))
trace.analysis.cpus.plotCPU(cpus=[2,3])
Explanation: Traces visualisation
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
Here each latency is plotted in order to double-check that it was truly induced by an EAS decision. In LISA, the latency plots have a red background when the system is overutilized, which shouldn't be the case here.
End of explanation
# Plots all of the latencies over the duration of the experiment
trace.setXTimeRange(trace.window[0] + 1, trace.window[1])
trace.analysis.latency.plotLatency(task=glrunner["pid"])
Explanation: Overall latencies
This plot displays the whole duration of the experiment; it can be used to see how often the system was overutilized and how much latency was involved.
End of explanation
!kernelshark {trace_file} 2>/dev/null
Explanation: Kernelshark analysis
End of explanation |
9,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Face Detection using Machine Learning
Before applying this technique to recognize the phytoliths in our images, we will use it to recognize faces in a variety of images. If face recognition proves effective, we will later apply the same technique to recognize phytoliths.
First of all, we must extract features from the data, which we will obtain with the HoG (Histogram of Oriented Gradients) technique; HoG transforms the pixels into a vector that carries much more meaningful information.
Finally, we will use an SVM to build our face recognizer.
This notebook has been created and is based on the notebook "Application
Step1: Histogram of Oriented Gradients (HoG)
As mentioned above, HoG is a feature-extraction technique, developed in the context of image processing, that involves the following steps
Step2: 2. Create a training set of non-face images that could produce false positives
Once we have obtained our set of positives, we need a set of images that contain no faces. To do this, the technique used in the notebook this work is based on is to take several images and extract sub-images, or thumbnails, from them at various scales.
Step3: 3. Extract the HoG features from the training set
This third step is of special interest, since this is where we obtain the HoG features discussed earlier.
Step4: 4. Train the SVM classifier
EXPLANATIONS TO BE COMPLETED
Step5: Probando con nuevas caras
Una vez entrenado nuestro clasificador, vamos a probar con una nueva imagen. Como ya explicabamos en la introducción, cuando le enviamos nuevas imagenes a nuestro clasificador este deberá de realizar dos pasos
Step6: Obtenemos la imagen y la convertimos a escala de grises
Step7: Como podemos observar nuestro clasificador reconoce perfectamente una cara. Ahora vamos a probar con una imagen en la que aparezcan varias caras.
Step8: A primera impresión parece que el clasificador es bastante efectivo. En el caso de esta última imagen ha sido capaz de reconocer la mayoría de las caras, aun siendo una imagen compleja para su procesado por el distinto tamaño de las caras y la incompletitud de algunas.
Non-Maximum Suppresion
Como hemos podido observar nuestro clasificador detecta muchas más caras de las que realmente hay. Debido a la razón de que normalmente los clasificadores detectan multiples ventanas en torno al objeto a detectar, en este caso caras. Esta problemática viene a ser solucionada mediante Non-Maximum Suppresion
Step9: Pruebas con algunas imágenes | Python Code:
#Imports
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Face detection using Machine Learning
Before using this technique to recognize phytoliths in our images, we will use it to recognize faces in several images. If the face recognition is effective, we will later apply this technique to recognize phytoliths.
First of all we must extract the features from the data, which we will try to obtain with the HoG (Histogram of Oriented Gradients) technique, which transforms the pixels into a vector containing much more meaningful information.
And finally, we will use an SVM to build our face recognizer.
This notebook has been created and is based on the notebook "Application: A Face Detection Pipeline" from the book Python Data Science Handbook by Jake VanderPlas, whose content is available on GitHub
End of explanation
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people()
positive_patches = faces.images
positive_patches.shape
Explanation: Histogram of Oriented Gradients (HoG)
As mentioned above, HoG is a feature extraction technique, developed in the context of image processing, that involves the following steps:
Pre-normalize the images. This reduces the dependence on features that vary with illumination.
Apply two filters, sensitive to horizontal and vertical brightness gradients, to the image. This captures information about edges, contours and textures.
Subdivide the image into cells of a given size and compute the gradient histogram for each cell.
Normalize the previously computed histograms by comparing each cell with its neighbors, thereby removing the effect of illumination on the image.
Build a one-dimensional feature vector from the information of each cell.
More explanations of HoG?
Face detector
As already mentioned, we will build a face detector using an SVM together with our HoG feature extractor. Building this detector requires the following steps:
Create a training set of face images that constitute positives.
Create a training set of non-face images to serve as negatives.
Extract the HoG features from the training set.
Train the SVM classifier.
Once these steps are done, we can send new images to the classifier so that it tries to recognize new faces. To do so it follows these two steps:
Slide a window over the whole image, checking whether the window contains a face.
If there are overlapping face detections, those overlaps must be combined.
1. Create a training set of face images that constitute positives
Scikit provides a varied set of face images that gives us a training set of positives for our purpose. More than 13000 faces, to be precise.
End of explanation
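As a quick illustration of the HoG pipeline described above, the sketch below computes a descriptor for a random patch of the LFW patch size. The `orientations`, `pixels_per_cell` and `cells_per_block` values here are assumptions chosen for illustration, not settings taken from this notebook:

```python
import numpy as np
from skimage import feature

# Synthetic 62x47 grayscale patch (the LFW patch size used above)
rng = np.random.RandomState(0)
patch = rng.rand(62, 47)

# Compute the HoG descriptor: per-cell gradient histograms,
# block-normalized and flattened into a 1-D feature vector
hog_vec = feature.hog(patch, orientations=9,
                      pixels_per_cell=(8, 8),
                      cells_per_block=(3, 3))

print(hog_vec.shape)  # a 1-D vector, one entry per (block, cell, orientation) bin
```

The length of the vector depends only on the patch size and these parameters, which is why every patch fed to the SVM must share the same shape.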
from skimage import feature, color, data, transform
imgs_to_use = ['camera', 'text', 'coins', 'moon',
'page', 'clock', 'immunohistochemistry',
'chelsea', 'coffee', 'hubble_deep_field']
images = [color.rgb2gray(getattr(data, name)())
for name in imgs_to_use]
from sklearn.feature_extraction.image import PatchExtractor
def extract_patches(img, N, scale=1.0, patch_size=positive_patches[0].shape):
extracted_patch_size = tuple((scale * np.array(patch_size)).astype(int))
extractor = PatchExtractor(patch_size=extracted_patch_size,
max_patches=N, random_state=0)
patches = extractor.transform(img[np.newaxis])
if scale != 1:
patches = np.array([transform.resize(patch, patch_size)
for patch in patches])
return patches
negative_patches = np.vstack([extract_patches(im, 1000, scale)
for im in images for scale in [0.5, 1.0, 2.0]])
negative_patches.shape
Explanation: 2. Create a training set of non-face images to serve as negatives
Once our positive set is obtained, we need a set of images that contain no faces. To do this, the technique used in the notebook this one is based on is to take several images and extract sub-images, or thumbnails, from them at different scales.
End of explanation
from itertools import chain
X_train = np.array([feature.hog(im)
for im in chain(positive_patches,
negative_patches)])
y_train = np.zeros(X_train.shape[0])
y_train[:positive_patches.shape[0]] = 1
Explanation: 3. Extract the HoG features from the training set
This third step is of special interest, since we are going to obtain the HoG features discussed above.
End of explanation
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_validation import cross_val_score
cross_val_score(GaussianNB(), X_train, y_train)
from sklearn.svm import LinearSVC
from sklearn.grid_search import GridSearchCV
grid = GridSearchCV(LinearSVC(), {'C': [1.0, 2.0, 4.0, 8.0]})
grid.fit(X_train, y_train)
grid.best_score_
grid.best_params_
model = grid.best_estimator_
model.fit(X_train, y_train)
Explanation: 4. Train the SVM classifier
EXPLANATIONS TO BE COMPLETED
End of explanation
def sliding_window(img, patch_size=positive_patches[0].shape,
istep=2, jstep=2, scale=1.0):
Ni, Nj = (int(scale * s) for s in patch_size)
for i in range(0, img.shape[0] - Ni, istep):
        for j in range(0, img.shape[1] - Nj, jstep):
patch = img[i:i + Ni, j:j + Nj]
if scale != 1:
patch = transform.resize(patch, patch_size)
yield (i, j), patch
Explanation: Testing with new faces
Once our classifier is trained, we will try it on a new image. As explained in the introduction, when we send new images to our classifier it must perform two steps:
Slide a window over the whole image, checking whether the window contains a face.
If there are overlapping face detections, those overlaps must be combined.
The new image we will send to our classifier contains a single example.
We create the function that slides a window over the image
End of explanation
from skimage.exposure import rescale_intensity
from skimage import io
from skimage.transform import rescale
img1 = io.imread("../../rsc/img/my_face.jpg")
# Convert the image to grayscale
from skimage.color import rgb2gray
img1 = rgb2gray(img1)
img1 = rescale(img1, 0.3)
# Show the resulting image
plt.imshow(img1, cmap='gray')
plt.axis('off');
indices, patches = zip(*sliding_window(img1))
patches_hog = np.array([feature.hog(patch) for patch in patches])
patches_hog.shape
labels = model.predict(patches_hog)
labels.sum()
fig, ax = plt.subplots()
ax.imshow(img1, cmap='gray')
ax.axis('off')
Ni, Nj = positive_patches[0].shape
indices = np.array(indices)
boxes1 = list()
for i, j in indices[labels == 1]:
    boxes1.append((j,i,j+Nj,i+Ni))
ax.add_patch(plt.Rectangle((j, i), Nj, Ni, edgecolor='red',
alpha=0.3, lw=2, facecolor='none'))
boxes1 = np.array(boxes1)
Explanation: We load the image and convert it to grayscale
End of explanation
img2 = io.imread("../../rsc/img/faces_test.jpg")
# Convert the image to grayscale
from skimage.color import rgb2gray
img2 = rgb2gray(img2)
img2 = rescale(img2, 0.65)
# Show the resulting image
plt.imshow(img2, cmap='gray')
plt.axis('off');
indices, patches = zip(*sliding_window(img2))
patches_hog = np.array([feature.hog(patch) for patch in patches])
patches_hog.shape
labels = model.predict(patches_hog)
labels.sum()
fig, ax = plt.subplots()
ax.imshow(img2, cmap='gray')
ax.axis('off')
Ni, Nj = positive_patches[0].shape
indices = np.array(indices)
boxes2 = list()
for i, j in indices[labels == 1]:
boxes2.append((j,i,j+Nj,i+Ni))
ax.add_patch(plt.Rectangle((j, i), Nj, Ni, edgecolor='red',
alpha=0.3, lw=2, facecolor='none'))
boxes2 = np.array(boxes2)
Explanation: As we can see, our classifier recognizes a face perfectly. Now let's try an image containing several faces.
End of explanation
# import the necessary packages
import numpy as np
# Malisiewicz et al.
def non_max_suppression_fast(boxes, overlapThresh):
# if there are no boxes, return an empty list
if len(boxes) == 0:
return []
# if the bounding boxes integers, convert them to floats --
# this is important since we'll be doing a bunch of divisions
if boxes.dtype.kind == "i":
boxes = boxes.astype("float")
# initialize the list of picked indexes
pick = []
# grab the coordinates of the bounding boxes
x1 = boxes[:,0]
y1 = boxes[:,1]
x2 = boxes[:,2]
y2 = boxes[:,3]
# compute the area of the bounding boxes and sort the bounding
# boxes by the bottom-right y-coordinate of the bounding box
area = (x2 - x1 + 1) * (y2 - y1 + 1)
idxs = np.argsort(y2)
# keep looping while some indexes still remain in the indexes
# list
while len(idxs) > 0:
# grab the last index in the indexes list and add the
# index value to the list of picked indexes
last = len(idxs) - 1
i = idxs[last]
pick.append(i)
# find the largest (x, y) coordinates for the start of
# the bounding box and the smallest (x, y) coordinates
# for the end of the bounding box
xx1 = np.maximum(x1[i], x1[idxs[:last]])
yy1 = np.maximum(y1[i], y1[idxs[:last]])
xx2 = np.minimum(x2[i], x2[idxs[:last]])
yy2 = np.minimum(y2[i], y2[idxs[:last]])
# compute the width and height of the bounding box
w = np.maximum(0, xx2 - xx1 + 1)
h = np.maximum(0, yy2 - yy1 + 1)
# compute the ratio of overlap
overlap = (w * h) / area[idxs[:last]]
	# delete all indexes from the index list that have
	# overlap greater than the provided threshold
idxs = np.delete(idxs, np.concatenate(([last],
np.where(overlap > overlapThresh)[0])))
# return only the bounding boxes that were picked using the
# integer data type
return boxes[pick].astype("int")
Explanation: At first sight the classifier seems quite effective. For this last image it was able to recognize most of the faces, even though it is a complex image to process because of the different face sizes and the fact that some faces are incomplete.
Non-Maximum Suppression
As we have seen, our classifier detects many more faces than there actually are, because classifiers usually detect multiple windows around the object to be detected, in this case faces. This problem is addressed by Non-Maximum Suppression
End of explanation
import cv2
# load the image and clone it
#print "[x] %d initial bounding boxes" % (len(boundingBoxes))
image = img1
orig = image.copy()
boundingBoxes = boxes1
# loop over the bounding boxes for each image and draw them
for (startX, startY, endX, endY) in boundingBoxes:
cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 0, 255), 2)
# perform non-maximum suppression on the bounding boxes
pick = non_max_suppression_fast(boundingBoxes, 0.2)
#print "[x] after applying non-maximum, %d bounding boxes" % (len(pick))
# loop over the picked bounding boxes and draw them
for (startX, startY, endX, endY) in pick:
cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
# display the images
cv2.imshow("Original", orig)
cv2.imshow("After NMS", image)
cv2.waitKey(0)
# load the image and clone it
#print "[x] %d initial bounding boxes" % (len(boundingBoxes))
image = img2
orig = image.copy()
boundingBoxes = boxes2
# loop over the bounding boxes for each image and draw them
for (startX, startY, endX, endY) in boundingBoxes:
cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 0, 255), 2)
# perform non-maximum suppression on the bounding boxes
pick = non_max_suppression_fast(boundingBoxes, 0.2)
#print "[x] after applying non-maximum, %d bounding boxes" % (len(pick))
# loop over the picked bounding boxes and draw them
for (startX, startY, endX, endY) in pick:
cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
# display the images
cv2.imshow("Original", orig)
cv2.imshow("After NMS", image)
cv2.waitKey(0)
from skimage.exposure import rescale_intensity
from skimage import io
from skimage.transform import rescale
img3 = io.imread("../../rsc/img/family.jpg")
# Convert the image to grayscale
from skimage.color import rgb2gray
img3 = rgb2gray(img3)
img3 = rescale(img3, 0.5)
# Show the resulting image
plt.imshow(img3, cmap='gray')
plt.axis('off');
indices, patches = zip(*sliding_window(img3))
patches_hog = np.array([feature.hog(patch) for patch in patches])
patches_hog.shape
labels = model.predict(patches_hog)
labels.sum()
fig, ax = plt.subplots()
ax.imshow(img3, cmap='gray')
ax.axis('off')
Ni, Nj = positive_patches[0].shape
indices = np.array(indices)
boxes3 = list()
for i, j in indices[labels == 1]:
boxes3.append((j,i,j+Nj,i+Ni))
ax.add_patch(plt.Rectangle((j, i), Nj, Ni, edgecolor='red',
alpha=0.3, lw=2, facecolor='none'))
boxes3 = np.array(boxes3)
# load the image and clone it
#print "[x] %d initial bounding boxes" % (len(boundingBoxes))
image = img3
orig = image.copy()
boundingBoxes = boxes3
# loop over the bounding boxes for each image and draw them
for (startX, startY, endX, endY) in boundingBoxes:
cv2.rectangle(orig, (startX, startY), (endX, endY), (0, 0, 255), 2)
# perform non-maximum suppression on the bounding boxes
pick = non_max_suppression_fast(boundingBoxes, 0.1)
#print "[x] after applying non-maximum, %d bounding boxes" % (len(pick))
# loop over the picked bounding boxes and draw them
for (startX, startY, endX, endY) in pick:
cv2.rectangle(image, (startX, startY), (endX, endY), (0, 255, 0), 2)
# display the images
cv2.imshow("Original", orig)
cv2.imshow("After NMS", image)
cv2.waitKey(0)
Explanation: Testing with some images
End of explanation |
9,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DiscreteDP Example
Step1: We follow the state-action pairs formulation approach.
We let the state space consist of the possible values of the asset price
and the state indicating that "the option has been exercised".
Step2: The backward induction algorithm for finite horizon dynamic programs is offered
as the function backward_induction.
(By default, the terminal value function is set to the vector of zeros.)
Step3: In the returns, vs is an $(N+1) \times n$ array that contains the optimal value functions,
where vs[0] is the value vector at the current period (i.e., period $0$)
for different prices
(with the value vs[0, -1] = 0 for the state "the option has been exercised" included),
and sigmas is an $N \times n$ array that contains the optimal policy functions.
%matplotlib inline
import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
import quantecon as qe
from quantecon.markov import DiscreteDP, backward_induction, sa_indices
T = 0.5 # Time expiration (years)
vol = 0.2 # Annual volatility
r = 0.05 # Annual interest rate
strike = 2.1 # Strike price
p0 = 2 # Current price
N = 100 # Number of periods to expiration
# Time length of a period
tau = T/N
# Discount factor
beta = np.exp(-r*tau)
# Up-jump factor
u = np.exp(vol*np.sqrt(tau))
# Up-jump probability
q = 1/2 + np.sqrt(tau)*(r - (vol**2)/2)/(2*vol)
Explanation: DiscreteDP Example: Option Pricing
Daisuke Oyama
Faculty of Economics, University of Tokyo
From Miranda and Fackler, <i>Applied Computational Economics and Finance</i>, 2002,
Section 7.6.4
End of explanation
# Possible price values
ps = u**np.arange(-N, N+1) * p0
# Number of states
n = len(ps) + 1 # State n-1: "the option has been exercised"
# Number of actions
m = 2 # 0: hold, 1: exercise
# Number of feasible state-action pairs
L = n*m - 1 # At state n-1, there is only one action "do nothing"
# Arrays of state and action indices
s_indices, a_indices = sa_indices(n, m)
s_indices, a_indices = s_indices[:-1], a_indices[:-1]
# Reward vector
R = np.empty((n, m))
R[:, 0] = 0
R[:-1, 1] = strike - ps
R = R.ravel()[:-1]
# Transition probability array
Q = sparse.lil_matrix((L, n))
for i in range(L-1):
if a_indices[i] == 0:
Q[i, min(s_indices[i]+1, len(ps)-1)] = q
Q[i, max(s_indices[i]-1, 0)] = 1 - q
else:
Q[i, n-1] = 1
Q[L-1, n-1] = 1
# Create a DiscreteDP
ddp = DiscreteDP(R, Q, beta, s_indices, a_indices)
Explanation: We follow the state-action pairs formulation approach.
We let the state space consist of the possible values of the asset price
and the state indicating that "the option has been exercised".
End of explanation
vs, sigmas = backward_induction(ddp, N)
Explanation: The backward induction algorithm for finite horizon dynamic programs is offered
as the function backward_induction.
(By default, the terminal value function is set to the vector of zeros.)
End of explanation
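Conceptually, `backward_induction` runs the finite-horizon Bellman recursion sketched below. This is a simplified dense-array sketch under assumed shapes (an `(n, m)` reward matrix and an `(n, m, n)` transition array), not quantecon's implementation, which works with the sparse state-action-pair data built above:

```python
import numpy as np

def backward_induction_dense(R, Q, beta, N):
    """Finite-horizon Bellman recursion for a dense (n, m) problem.

    R : (n, m) rewards, Q : (n, m, n) transition probabilities.
    Returns value functions vs ((N+1, n)) and policies sigmas ((N, n)).
    """
    n, m = R.shape
    vs = np.zeros((N + 1, n))          # vs[N] is the (zero) terminal value
    sigmas = np.zeros((N, n), dtype=int)
    for t in range(N - 1, -1, -1):
        # One-step-ahead values for every (state, action) pair
        vals = R + beta * (Q @ vs[t + 1])   # shape (n, m)
        sigmas[t] = vals.argmax(axis=1)
        vs[t] = vals.max(axis=1)
    return vs, sigmas

# Tiny 2-state, 2-action check: action 0 pays 1 in state 0, action 1 pays 2
# in state 1, and every transition returns to state 0.
R_demo = np.array([[1.0, 0.0], [0.0, 2.0]])
Q_demo = np.zeros((2, 2, 2))
Q_demo[:, :, 0] = 1.0
vs_demo, sigmas_demo = backward_induction_dense(R_demo, Q_demo, 0.9, 3)
print(vs_demo[0])   # → [2.71 3.71]
```

As in the library call, `vs_demo[0]` holds the period-0 values and `sigmas_demo[t]` the optimal action in each state at period `t`.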
v = vs[0]
max_exercise_price = ps[sigmas[::-1].sum(-1)-1]
fig, axes = plt.subplots(1, 2, figsize=(12, 4))
axes[0].plot([0, strike], [strike, 0], 'k--')
axes[0].plot(ps, v[:-1])
axes[0].set_xlim(0, strike*2)
axes[0].set_xticks(np.linspace(0, 4, 5, endpoint=True))
axes[0].set_ylim(0, strike)
axes[0].set_yticks(np.linspace(0, 2, 5, endpoint=True))
axes[0].set_xlabel('Asset Price')
axes[0].set_ylabel('Premium')
axes[0].set_title('Put Option Value')
axes[1].plot(np.linspace(0, T, N), max_exercise_price)
axes[1].set_xlim(0, T)
axes[1].set_ylim(1.6, strike)
axes[1].set_xlabel('Time to Maturity')
axes[1].set_ylabel('Asset Price')
axes[1].set_title('Put Option Optimal Exercise Boundary')
axes[1].tick_params(right='on')
plt.show()
Explanation: In the returns, vs is an $(N+1) \times n$ array that contains the optimal value functions,
where vs[0] is the value vector at the current period (i.e., period $0$)
for different prices
(with the value vs[0, -1] = 0 for the state "the option has been exercised" included),
and sigmas is an $N \times n$ array that contains the optimal policy functions.
End of explanation |
9,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Base Question Visualization
Step1: Reading data back from npz file
Step2: I use interact on my plotter function to plot the positions of the stars and galaxies in my system at every time value, with a slider to choose which time value to view
Step3: As can be seen, the stars behave similarly to the stars from Toomre and Toomre's paper
For an easier visual experience, I also created an animation, featured in the Animator notebook. The animation was uploaded to Youtube and is shown below
Step4: Static plots at certain times
Step5: Interactive plot around center of mass between the two galaxies
Step6: Animation around center of mass
Step7: Static plots around center of mass | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import YouTubeVideo
from plotting_function import plotter,static_plot,com_plot,static_plot_com
Explanation: Base Question Visualization
End of explanation
f = open('base_question_data.npz','r')
r = np.load('base_question_data.npz')
sol_base = r['arr_0']
ic_base = r['arr_1']
f.close()
Explanation: Reading data back from npz file
End of explanation
interact(plotter,ic=fixed(ic_base),sol=fixed(sol_base),n=(0,len(np.linspace(0,1.2,100))-1,1));
Explanation: I use interact on my plotter function to plot the positions of the stars and galaxies in my system at every time value, with a slider to choose which time value to view
End of explanation
YouTubeVideo('C1RoQTVU-ao',width=600,height=600)
Explanation: As can be seen, the stars behave similarly to the stars from Toomre and Toomre's paper
For an easier visual experience, I also created an animation, featured in the Animator notebook. The animation was uploaded to Youtube and is shown below:
End of explanation
specific_t = [0,25,30,35,40,45,50,55,60,70,80,90,100]
plt.figure(figsize=(20,30))
i = 1
for n in specific_t:
if i > 13:
break
else:
plt.subplot(5,3,i)
static_plot(ic_base,sol_base,n)
i += 1
plt.tight_layout()
Explanation: Static plots at certain times:
End of explanation
interact(com_plot,ic=fixed(ic_base),sol=fixed(sol_base),M=fixed(1e11),S=fixed(1e11),n=(0,len(np.linspace(0,1.2,100))-1,1));
Explanation: Interactive plot around center of mass between the two galaxies:
End of explanation
YouTubeVideo('O1_HkrwtvPw',width=600,height=600)
Explanation: Animation around center of mass:
End of explanation
specific_t = [0,25,30,35,40,45,50,55,60,70,80,90,100]
plt.figure(figsize=(20,30))
i = 1
for n in specific_t:
if i > 13:
break
else:
plt.subplot(5,3,i)
static_plot_com(ic_base,sol_base,1e11,1e11,n)
i += 1
plt.tight_layout()
Explanation: Static plots around center of mass:
End of explanation |
9,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Committees
Step1: Party
Step2: [
{
"id" | Python Code:
#List all committees
query = 'classification:Committee'
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)
pages = r.json()['num_pages']
committees = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))
orgs = r.json()['results']
for org in orgs:
committees.append(org)
for committee in committees:
print committee['name']
import json
json_export = []
for committee in committees:
json_export.append({'name':committee['name'],
'id':committee['id']})
print json.dumps(json_export,sort_keys=True, indent=4)
#Looking up a specific committee will list down all members and persons details, a committee is just
#organization class same as party, same as upper house, lower house
#"name": "Amyotha Hluttaw Local and Overseas Employment Committee"
r = requests.get('http://api.openhluttaw.org/en/organizations/9f3448056d2b48e1805475a45a4ae1ed')
committee = r.json()['result']
#List committee members
# missing on behalf_of expanded for organizations https://github.com/Sinar/popit_ng/issues/200
for member in committee['memberships']:
print member['person']['id']
print member['person']['name']
print member['person']['image']
Explanation: Committees
End of explanation
#List all committees
query = 'classification:Party'
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query)
pages = r.json()['num_pages']
parties = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/organizations?q='+query+'&page='+str(page))
orgs = r.json()['results']
for org in orgs:
parties.append(org)
# BUG in https://github.com/Sinar/popit_ng/issues/197
# use JSON party lookup below to lookup values directly on client side
for party in parties:
print party['name']
Explanation: Party
End of explanation
#Listing people by party in org Amyotha 897739b2831e41109713ac9d8a96c845
#Pyithu org id would be 7f162ebef80e4a4aba12361ea1151fce
#We list by membership and specific organization_id and on_behalf_of_id of parties above
#Amyotha Members represented by Arakan National Party
query = 'organization_id:897739b2831e41109713ac9d8a96c845 AND on_behalf_of_id:016a8ad7b40343ba96e0c03f47019680'
r = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query)
pages = r.json()['num_pages']
memberships = []
for page in range(1,pages+1):
r = requests.get('http://api.openhluttaw.org/en/search/memberships?q='+query+'&page='+str(page))
members = r.json()['results']
for member in members:
memberships.append(member)
for member in memberships:
print member['post']['label']
print member['person']['id']
print member['person']['name']
print member['person']['image']
Explanation: [
{
"id": "fd24165b8e814a758cd1098dc7a9038a",
"name": "National League for Democracy"
},
{
"id": "9462adf5cffa41c386e621fee28c59eb",
"name": "Union Solidarity and Development Party"
},
{
"id": "7997379fe27c4e448af522c85e306bfb",
"name": "\"Wa\" Democratic Party"
},
{
"id": "90e4903937bf4b8ba9185157dde06345",
"name": "Kokang Democracy and Unity Party"
},
{
"id": "2d2c795149c74b6f91cdea8caf28e968",
"name": "Zomi Congress for Democracy"
},
{
"id": "b366273152a84d579c4e19b14d36c0b5",
"name": "Ta'Arng Palaung National Party"
},
{
"id": "f2189158953e4d9e9296efeeffe7cf35",
"name": "National Unity Party"
},
{
"id": "d53d27fef3ac4b2bb4b7bf346215f626",
"name": "Pao National Organization"
},
{
"id": "2f0c09d5eb05432d8fcf247b5cb1885f",
"name": "Mon National Party"
},
{
"id": "dc69205c7eb54a7aaf68b3d2e3d9c23e",
"name": "Rakhine National Party"
},
{
"id": "e67bf2cdb4ff4ce89167cba3a514a6df",
"name": "Shan Nationalities League for Democracy"
},
{
"id": "a7a1ac9d2f20470d87e556af41dfaa19",
"name": "Lisu National Development Party"
},
{
"id": "8cc2d69bed8743bbaa229b164afecf9a",
"name": "Independent"
},
{
"id": "016a8ad7b40343ba96e0c03f47019680",
"name": "Arakan National Party"
},
{
"id": "6e76561e385946e0a3761d4f25293912",
"name": "The Taaung (Palaung) National Party"
},
{
"id": "63ec5681df974c67b7a217873fa9cdf5",
"name": "Kachin Democratic Party"
}
]
End of explanation |
9,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Feature Reduction For Training Data
Select the top 10%, 20%, 30%, 40% and 50% of features with the most variance. Starting with 1006 ASM features
the process will select about 100, 200... features. Then write the reduced feature sets to files.
Step1: 1.1 Feature Reduction to 10%
Step2: 1.2 Feature Reduction to 20%
Step3: 1.3 Feature Reduction to 30%
Step4: 1.4 Feature Reduction to 40%
Step5: 1.5 Feature Reduction to 50%
Step6: 2. Feature Reduction Of Test Data
Use columns names from reduced ASM train data feature set to select best ASM features from test data
and write to a file.
Step7: 3. Sort and Write Byte Feature Sets
Sort the test file feature set data frames on the filename column and write to sorted data to file.
Step8: 4. Sort and Reduce Image Data for Test and Train Files
Step9: 4.1 Feature Reduction to 10%
Step10: 4.2 Feature Reduction to 20%
Step11: 4.3 Feature Reduction to 30%
Step12: 4.4 Feature Reduction to 40%
Step13: 4.5 Feature Reduction to 50%
Step14: 6. Run ExtraTreeClassifiers With 10-Fold Cross Validation
Now we can have a quick look at how well the feature set can be classified
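A minimal sketch of the kind of check this section describes, run on synthetic stand-in data; the estimator settings and the modern `sklearn.model_selection` import are assumptions for illustration, not the notebook's exact setup:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the reduced ASM feature matrix:
# 200 samples, 20 features, 9 malware classes (as in trainLabels.csv)
rng = np.random.RandomState(0)
X_demo = rng.rand(200, 20)
y_demo = np.repeat(np.arange(1, 10), 23)[:200]

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X_demo, y_demo, cv=10)
print(scores.mean())   # near chance on random features
```

On the real reduced feature sets, the same call gives a quick estimate of how much class signal each percentage cut retains.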
Step15: 7. TEST/EXPERIMENTAL CODE ONLY | Python Code:
train_data = pd.read_csv('data/train-malware-features-asm.csv')
labels = pd.read_csv('data/trainLabels.csv')
sorted_train_data = train_data.sort(columns='filename', axis=0, ascending=True, inplace=False)
sorted_train_labels = labels.sort(columns='Id', axis=0, ascending=True, inplace=False)
X = sorted_train_data.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
print(X.shape)
print(y.shape)
sorted_train_data.head()
sorted_train_labels.head()
train_data.head()
y
Explanation: 1. Feature Reduction For Training Data
Select the top 10%, 20%, 30%, 40% and 50% of features with the most variance. Starting with 1006 ASM features
the process will select about 100, 200... features. Then write the reduced feature sets to files.
End of explanation
# find the top 10 percent variance features, from 1006 -> 101 features
fsp = SelectPercentile(chi2, 10)
X_new_10 = fsp.fit_transform(X,y)
X_new_10.shape
X_new_10
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_data.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_data['filename'])
data_reduced = data_fnames.join(data_trimmed)
data_reduced.head()
data_reduced.to_csv('data/sorted-train-malware-features-asm-10percent.csv', index=False)
sorted_train_labels.to_csv('data/sorted-train-labels.csv', index=False)
Explanation: 1.1 Feature Reduction to 10%
End of explanation
# find the top 20 percent variance features, from 1006 -> 201 features
fsp = SelectPercentile(chi2, 20)
X_new_20 = fsp.fit_transform(X,y)
X_new_20.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_data.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_data['filename'])
data_reduced = data_fnames.join(data_trimmed)
data_reduced.head()
data_reduced.to_csv('data/sorted-train-malware-features-asm-20percent.csv', index=False)
Explanation: 1.2 Feature Reduction to 20%
End of explanation
# find the top 30 percent variance features, from 1006 -> 301 features
fsp = SelectPercentile(chi2, 30)
X_new_30 = fsp.fit_transform(X,y)
X_new_30.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_data.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_data['filename'])
data_reduced = data_fnames.join(data_trimmed)
data_reduced.head()
data_reduced.to_csv('data/sorted-train-malware-features-asm-30percent.csv', index=False)
Explanation: 1.3 Feature Reduction to 30%
End of explanation
# find the top 40 percent variance features, from 1006 -> 401 features
fsp = SelectPercentile(chi2, 40)
X_new_40 = fsp.fit_transform(X,y)
X_new_40.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_data.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_data['filename'])
data_reduced = data_fnames.join(data_trimmed)
data_reduced.head()
data_reduced.to_csv('data/sorted-train-malware-features-asm-40percent.csv', index=False)
Explanation: 1.4 Feature Reduction to 40%
End of explanation
# find the top 50 percent variance features, from 1006 -> 503 features
fsp = SelectPercentile(chi2, 50)
X_new_50 = fsp.fit_transform(X,y)
X_new_50.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_data.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_data['filename'])
data_reduced = data_fnames.join(data_trimmed)
data_reduced.head()
data_reduced.to_csv('data/sorted-train-malware-features-asm-50percent.csv', index=False)
Explanation: 1.5 Feature Reduction to 50%
End of explanation
test_data = pd.read_csv('data/test-malware-features-asm.csv')
sorted_test_data = test_data.sort(columns='filename', axis=0, ascending=True, inplace=False)
sorted_test_data.shape
sorted_test_data.head()
# Get the feature names from the reduced train dataframe
column_names = data_reduced.columns
print(column_names)
# Extract the reduced feature set from the full test feature set
sorted_test_data_reduced = sorted_test_data.loc[:,column_names]
sorted_test_data_reduced.head()
# NOTE: column_names above comes from whichever data_reduced was computed last
# (the 50% set), so these five files end up with identical columns unless the
# selection is recomputed per percentage before each write.
sorted_test_data_reduced.to_csv('data/sorted-test-malware-features-asm-10percent.csv', index=False)
sorted_test_data_reduced.to_csv('data/sorted-test-malware-features-asm-20percent.csv', index=False)
sorted_test_data_reduced.to_csv('data/sorted-test-malware-features-asm-30percent.csv', index=False)
sorted_test_data_reduced.to_csv('data/sorted-test-malware-features-asm-40percent.csv', index=False)
sorted_test_data_reduced.to_csv('data/sorted-test-malware-features-asm-50percent.csv', index=False)
Explanation: 2. Feature Reduction Of Test Data
Use the column names from the reduced ASM train data feature set to select the best ASM features from the test data
and write them to a file.
End of explanation
# First load the .asm training features and training labels
#sorted_train_data_asm = pd.read_csv('data/sorted-train-malware-features-asm-reduced.csv')
#sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv','r')
# Next load the .byte training features and sort
train_data_byte = pd.read_csv('data/train-malware-features-byte.csv')
sorted_train_data_byte = train_data_byte.sort(columns='filename', axis=0, ascending=True, inplace=False)
# Next load the .byte test features and sort
test_data_byte = pd.read_csv('data/test-malware-features-byte.csv')
sorted_test_data_byte = test_data_byte.sort(columns='filename', axis=0, ascending=True, inplace=False)
#combined_train_data = pd.DataFrame.merge(sorted_train_data_asm, sorted_train_data_byte, on='filename', how='inner', sort=False)
# Now write all the sorted feature sets to file
#f = open('data/sorted-train-features-combined.csv', 'w')
#combined_train_data.to_csv(f, index=False)
#f.close()
f = open('data/sorted-train-malware-features-byte.csv', 'w')
sorted_train_data_byte.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-malware-features-byte.csv', 'w')
sorted_test_data_byte.to_csv(f, index=False)
f.close()
Explanation: 3. Sort and Write Byte Feature Sets
Sort the test file feature set data frames on the filename column and write the sorted data to file.
End of explanation
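A compatibility note, sketched below: the DataFrame.sort(columns=...) calls used above belong to the old (pre-0.20) pandas API; in current pandas the equivalent call is sort_values.

```python
import pandas as pd

# modern replacement for df.sort(columns='filename', ascending=True, inplace=False)
df = pd.DataFrame({'filename': ['b.bytes', 'a.bytes', 'c.bytes'],
                   'feat': [2, 1, 3]})
sorted_df = df.sort_values(by='filename', ascending=True)
print(sorted_df['filename'].tolist())  # ['a.bytes', 'b.bytes', 'c.bytes']
```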
# Load and sort asm image data for test and train files
train_image_asm = pd.read_csv('data/train-image-features-asm.csv')
sorted_train_image_asm = train_image_asm.sort(columns='filename', axis=0, ascending=True, inplace=False)
test_image_asm = pd.read_csv('data/test-image-features-asm.csv')
sorted_test_image_asm = test_image_asm.sort(columns='filename', axis=0, ascending=True, inplace=False)
# NOTE: byte file images have low standard deviation and mean variance, not very useful for learning.
# Load and sort byte image data for test and train files
# train_image_byte = pd.read_csv('data/train-image-features-byte.csv')
# sorted_train_image_byte = train_image_byte.sort(columns='filename', axis=0, ascending=True, inplace=False)
# test_image_byte = pd.read_csv('data/test-image-features-byte.csv')
#sorted_test_image_byte = test_image_byte.sort(columns='filename', axis=0, ascending=True, inplace=False)
# Now write all the sorted image feature sets to file
f = open('data/sorted-train-image-features-asm.csv', 'w')
sorted_train_image_asm.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm.csv', 'w')
sorted_test_image_asm.to_csv(f, index=False)
f.close()
#f = open('data/sorted-train-image-features-byte.csv', 'w')
#sorted_train_image_byte.to_csv(f, index=False)
#f.close()
#f = open('data/sorted-test-image-features-byte.csv', 'w')
#sorted_test_image_byte.to_csv(f, index=False)
#f.close()
sorted_train_image_asm.head()
Explanation: 4. Sort and Reduce Image Data for Test and Train Files
End of explanation
# Now select 10% best train image asm features by variance
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_asm.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
fsp = SelectPercentile(chi2, 10)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_asm['filename'])
sorted_train_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_train_image_asm_reduced.head()
data_trimmed = sorted_test_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_asm['filename'])
sorted_test_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_test_image_asm_reduced.head()
# Now write all the sorted and reduced image feature sets to file
f = open('data/sorted-train-image-features-asm-10percent.csv', 'w')
sorted_train_image_asm_reduced.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm-10percent.csv', 'w')
sorted_test_image_asm_reduced.to_csv(f, index=False)
f.close()
#f = open('data/sorted-train-image-features-byte-reduced.csv', 'w')
#sorted_train_image_byte_reduced.to_csv(f, index=False)
#f.close()
#f = open('data/sorted-test-image-features-byte-reduced.csv', 'w')
#sorted_test_image_byte_reduced.to_csv(f, index=False)
#f.close()
Explanation: 4.1 Feature Reduction to 10%
End of explanation
# Now select 20% best train image asm features by variance
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_asm.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
fsp = SelectPercentile(chi2, 20)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_asm['filename'])
sorted_train_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_train_image_asm_reduced.head()
data_trimmed = sorted_test_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_asm['filename'])
sorted_test_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_test_image_asm_reduced.head()
# Now write all the sorted and reduced image feature sets to file
f = open('data/sorted-train-image-features-asm-20percent.csv', 'w')
sorted_train_image_asm_reduced.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm-20percent.csv', 'w')
sorted_test_image_asm_reduced.to_csv(f, index=False)
f.close()
Explanation: 4.2 Feature Reduction to 20%
End of explanation
# Now select 30% best train image asm features by variance
sorted_train_image_asm = pd.read_csv('data/sorted-train-image-features-asm.csv')
sorted_test_image_asm = pd.read_csv('data/sorted-test-image-features-asm.csv')
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_asm.iloc[:,1:]
y = sorted_train_labels['Class'].values.tolist()
fsp = SelectPercentile(chi2, 30)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_asm['filename'])
sorted_train_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_train_image_asm_reduced.head()
data_trimmed = sorted_test_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_asm['filename'])
sorted_test_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_test_image_asm_reduced.head()
# Now write all the sorted and reduced image feature sets to file
f = open('data/sorted-train-image-features-asm-30percent.csv', 'w')
sorted_train_image_asm_reduced.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm-30percent.csv', 'w')
sorted_test_image_asm_reduced.to_csv(f, index=False)
f.close()
Explanation: 4.3 Feature Reduction to 30%
End of explanation
# Now select 40% best train image asm features by variance
sorted_train_image_asm = pd.read_csv('data/sorted-train-image-features-asm.csv')
sorted_test_image_asm = pd.read_csv('data/sorted-test-image-features-asm.csv')
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_asm.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
fsp = SelectPercentile(chi2, 40)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_asm['filename'])
sorted_train_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_train_image_asm_reduced.head()
data_trimmed = sorted_test_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_asm['filename'])
sorted_test_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_test_image_asm_reduced.head()
# Now write all the sorted and reduced image feature sets to file
f = open('data/sorted-train-image-features-asm-40percent.csv', 'w')
sorted_train_image_asm_reduced.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm-40percent.csv', 'w')
sorted_test_image_asm_reduced.to_csv(f, index=False)
f.close()
Explanation: 4.4 Feature Reduction to 40%
End of explanation
# Now select 50% best train image asm features by variance
sorted_train_image_asm = pd.read_csv('data/sorted-train-image-features-asm.csv')
sorted_test_image_asm = pd.read_csv('data/sorted-test-image-features-asm.csv')
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_asm.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
fsp = SelectPercentile(chi2, 50)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_asm['filename'])
sorted_train_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_train_image_asm_reduced.head()
data_trimmed = sorted_test_image_asm.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_asm['filename'])
sorted_test_image_asm_reduced = data_fnames.join(data_trimmed)
sorted_test_image_asm_reduced.head()
# Now write all the sorted and reduced image feature sets to file
f = open('data/sorted-train-image-features-asm-50percent.csv', 'w')
sorted_train_image_asm_reduced.to_csv(f, index=False)
f.close()
f = open('data/sorted-test-image-features-asm-50percent.csv', 'w')
sorted_test_image_asm_reduced.to_csv(f, index=False)
f.close()
Explanation: 4.5 Feature Reduction to 50%
End of explanation
def run_cv(X,y, clf):
# Construct a kfolds object
kf = KFold(len(y),n_folds=10,shuffle=True)
y_prob = np.zeros((len(y),9))
y_pred = np.zeros(len(y))
# Iterate through folds
for train_index, test_index in kf:
print(test_index, train_index)
# use positional (iloc) indexing: the sorted frame keeps its original, permuted
# index, while the fold indices produced by KFold are positional
X_train = X.iloc[train_index,:]
X_test = X.iloc[test_index,:]
y_train = y[train_index]
clf.fit(X_train,y_train)
y_prob[test_index] = clf.predict_proba(X_test)
y_pred[test_index] = clf.predict(X_test)
return y_prob, y_pred
ytrain = np.array(y)
X = data_reduced.iloc[:,1:]
X.shape
# At last we can build a hypothesis
clf1 = ExtraTreesClassifier(n_estimators=1000, max_features=None, min_samples_leaf=1, min_samples_split=9, n_jobs=4, criterion='gini')
p1, pred1 = run_cv(X,ytrain,clf1)
print("logloss = ", log_loss(y, p1))
print("score = ", accuracy_score(ytrain, pred1))
cm = confusion_matrix(y, pred1)
print(cm)
# Finally shove the test feature set into the classifier
# (use the reduced test frame built in section 2 above)
test_X = sorted_test_data_reduced.iloc[:,1:]
test_predictions = clf1.predict(test_X)
test_predictions
# Write out the predictions to a csv file
out_test_y = pd.DataFrame(columns=['filename', 'class'])
out_test_y['filename'] = sorted_test_data_reduced['filename']
out_test_y['class'] = pd.DataFrame(test_predictions, columns=['class'])
out_test_y.head()
out_test_y.to_csv('data/test-label-etc-predictions.csv', index=False)
Explanation: 6. Run ExtraTreesClassifier With 10-Fold Cross Validation
Now we can have a quick look at how well the feature set can be classified
End of explanation
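A hedged variant of the run_cv loop above, written against the modern scikit-learn API: StratifiedKFold keeps the class proportions equal across folds, which plain KFold does not guarantee. The synthetic data here stands in for the real reduced feature set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import StratifiedKFold

# stand-in data with the same shape of problem (multi-class, tabular)
X_demo, y_demo = make_classification(n_samples=200, n_features=20,
                                     n_informative=6, n_classes=3,
                                     random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_pred = np.zeros(len(y_demo))
for train_idx, test_idx in skf.split(X_demo, y_demo):
    clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
    clf.fit(X_demo[train_idx], y_demo[train_idx])
    y_pred[test_idx] = clf.predict(X_demo[test_idx])
acc = (y_pred == y_demo).mean()
print(acc)
```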
# go through the features and delete any that sum to less than 100
colsum = X.sum(axis=0, numeric_only=True)
zerocols = colsum[(colsum[:] == 0)]
zerocols
zerocols = colsum[(colsum[:] < 110)]
zerocols.shape
reduceX = X
for col in reduceX.columns:
if sum(reduceX[col]) < 100:
del reduceX[col]
reduceX.shape
skb = SelectKBest(chi2, k=20)
X_kbestnew = skb.fit_transform(X, y)
X_kbestnew.shape
y = [0]*labels.shape[0]
fnames = train_data['filename']
for i in range(len(y)):
fname = train_data.loc[i,'filename']
row = labels[labels['Id'] == fname]
y[i] = row.iloc[0,1]
# DO NOT USE BYTE IMAGE DATA
# Now select 10% best train image byte features by variance
sorted_train_labels = pd.read_csv('data/sorted-train-labels.csv')
X = sorted_train_image_byte.iloc[:,1:]
y = np.array(sorted_train_labels.iloc[:,1])
fsp = SelectPercentile(chi2, 10)
X_new = fsp.fit_transform(X,y)
X_new.shape
selected_names = fsp.get_support(indices=True)
selected_names = selected_names + 1
selected_names
data_trimmed = sorted_train_image_byte.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_train_image_byte['filename'])
sorted_train_image_byte_reduced = data_fnames.join(data_trimmed)
sorted_train_image_byte_reduced.head()
data_trimmed = sorted_test_image_byte.iloc[:,selected_names]
data_fnames = pd.DataFrame(sorted_test_image_byte['filename'])
sorted_test_image_byte_reduced = data_fnames.join(data_trimmed)
sorted_test_image_byte_reduced.head()
Explanation: 7. TEST/EXPERIMENTAL CODE ONLY
End of explanation
Running a Hyperparameter Tuning Job with Vertex Training
Learning objectives
In this notebook, you learn how to
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Set up your Google Cloud project
Enable the Vertex AI API and Compute Engine API.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Set project ID
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Import libraries and define constants
Step10: Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the CloudML Hypertune library and sets up the entrypoint for the training code.
Step11: Create training application code
Next, you create a trainer directory with a task.py script that contains the code for your training application.
Step16: In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.
There are a few components that are specific to using the hyperparameter tuning service
Step17: Build the Container
In the next cells, you build the container and push it to Google Container Registry.
Step18: Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications
Step19: Create a CustomJob.
Step20: Then, create and run a HyperparameterTuningJob.
There are a few arguments to note
Step21: It will take nearly 50 minutes to complete the job successfully.
Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
# Install necessary dependencies
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Running a Hyperparameter Tuning Job with Vertex Training
Learning objectives
In this notebook, you learn how to:
Create a Vertex AI custom job for training a model.
Launch hyperparameter tuning job with the Python SDK.
Cleanup resources.
Overview
This notebook demonstrates how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, MirroredStrategy from the tf.distribute module is used to distribute training across multiple GPUs on a single machine.
In this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to modify training application code for hyperparameter tuning and submit a Vertex Training hyperparameter tuning job with the Python SDK.
Dataset
The dataset used for this tutorial is the horses or humans dataset from TensorFlow Datasets. The trained model predicts if an image is of a horse or a human.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook
Install additional packages
Install the latest version of Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba" # Replace your Project ID here
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Set up your Google Cloud project
Enable the Vertex AI API and Compute Engine API.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "qwiklabs-gcp-00-b9e7121a76ba" # Replace your Project ID here
Explanation: Otherwise, set your project ID here.
End of explanation
! gcloud config set project $PROJECT_ID
Explanation: Set project ID
End of explanation
# Import the necessary library
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_URI = "gs://qwiklabs-gcp-00-b9e7121a76ba" # Replace your Bucket name here
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://qwiklabs-gcp-00-b9e7121a76ba": # Replace your Bucket name here
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
print(BUCKET_URI)
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
# Create your bucket
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
# Give access to your Cloud Storage bucket
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
# Import necessary libraries
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt
Explanation: Import libraries and define constants
End of explanation
%%writefile Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs hypertune library
RUN pip install cloudml-hypertune
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
Explanation: Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the CloudML Hypertune library and sets up the entrypoint for the training code.
End of explanation
# Create trainer directory
! mkdir trainer
Explanation: Create training application code
Next, you create a trainer directory with a task.py script that contains the code for your training application.
End of explanation
%%writefile trainer/task.py
import argparse
import hypertune
import tensorflow as tf
import tensorflow_datasets as tfds
def get_args():
"""Parses args. Must include all hyperparameters you want to tune."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--learning_rate', required=True, type=float, help='learning rate')
parser.add_argument(
'--momentum', required=True, type=float, help='SGD momentum value')
parser.add_argument(
'--units',
required=True,
type=int,
help='number of units in last hidden layer')
parser.add_argument(
'--epochs',
required=False,
type=int,
default=10,
help='number of training epochs')
args = parser.parse_args()
return args
def preprocess_data(image, label):
"""Resizes and scales images."""
image = tf.image.resize(image, (150, 150))
return tf.cast(image, tf.float32) / 255., label
def create_dataset(batch_size):
"""Loads Horses Or Humans dataset and preprocesses data."""
data, info = tfds.load(
name='horses_or_humans', as_supervised=True, with_info=True)
# Create train dataset
train_data = data['train'].map(preprocess_data)
train_data = train_data.shuffle(1000)
train_data = train_data.batch(batch_size)
# Create validation dataset
validation_data = data['test'].map(preprocess_data)
validation_data = validation_data.batch(64)
return train_data, validation_data
def create_model(units, learning_rate, momentum):
"""Defines and compiles model."""
inputs = tf.keras.Input(shape=(150, 150, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(units, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.SGD(
learning_rate=learning_rate, momentum=momentum),
metrics=['accuracy'])
return model
def main():
args = get_args()
# Create Strategy
strategy = tf.distribute.MirroredStrategy()
# Scale batch size
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync
train_data, validation_data = create_dataset(GLOBAL_BATCH_SIZE)
# Wrap model variables within scope
with strategy.scope():
model = create_model(args.units, args.learning_rate, args.momentum)
# Train model
history = model.fit(
train_data, epochs=args.epochs, validation_data=validation_data)
# Define Metric
hp_metric = history.history['val_accuracy'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=hp_metric,
global_step=args.epochs)
if __name__ == '__main__':
main()
Explanation: In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.
There are a few components that are specific to using the hyperparameter tuning service:
The script imports the hypertune library. Note that the Dockerfile included instructions to pip install the hypertune library.
The function get_args() defines a command-line argument for each hyperparameter you want to tune. In this example, the hyperparameters that will be tuned are the learning rate, the momentum value in the optimizer, and the number of units in the last hidden layer of the model. The value passed in those arguments is then used to set the corresponding hyperparameter in the code.
At the end of the main() function, the hypertune library is used to define the metric to optimize. In this example, the metric that will be optimized is the the validation accuracy. This metric is passed to an instance of HyperTune.
End of explanation
# Set the IMAGE_URI
IMAGE_URI = f"gcr.io/{PROJECT_ID}/horse-human:hypertune"
# Build the docker image
! docker build -f Dockerfile -t $IMAGE_URI ./
# Push it to Google Container Registry:
! docker push $IMAGE_URI
Explanation: Build the Container
In the next cells, you build the container and push it to Google Container Registry.
End of explanation
# Define required specifications
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 2,
},
"replica_count": 1,
"container_spec": {"image_uri": IMAGE_URI},
}
]
metric_spec = {"accuracy": "maximize"}
parameter_spec = {
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"units": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),
}
Explanation: Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications:
* worker_pool_specs: Dictionary specifying the machine type and Docker image. This example defines a single node cluster with one n1-standard-4 machine with two NVIDIA_TESLA_T4 GPUs.
* parameter_spec: Dictionary specifying the parameters to optimize. The dictionary key is the string assigned to the command line argument for each hyperparameter in your training application code, and the dictionary value is the parameter specification. The parameter specification includes the type, min/max values, and scale for the hyperparameter.
* metric_spec: Dictionary specifying the metric to optimize. The dictionary key is the hyperparameter_metric_tag that you set in your training application code, and the value is the optimization goal.
End of explanation
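Because worker_pool_specs, parameter_spec, and metric_spec are plain Python data, a small pre-flight check can catch typos before the job is submitted. This is a hypothetical sketch; the image URI below is a placeholder, not the project's real one.

```python
# Hypothetical sanity check of a worker pool spec before job submission.
spec = {
    "machine_spec": {
        "machine_type": "n1-standard-4",
        "accelerator_type": "NVIDIA_TESLA_T4",
        "accelerator_count": 2,
    },
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/my-project/horse-human:hypertune"},
}
assert spec["replica_count"] >= 1
assert spec["machine_spec"]["accelerator_count"] in (0, 1, 2, 4, 8)
print("worker pool spec looks sane")
```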
print(BUCKET_URI)
# Create a CustomJob
JOB_NAME = "horses-humans-hyperparam-job" + TIMESTAMP
my_custom_job = # TODO 1: Your code goes here(
display_name=JOB_NAME,
project=PROJECT_ID,
worker_pool_specs=worker_pool_specs,
staging_bucket=BUCKET_URI,
)
Explanation: Create a CustomJob.
End of explanation
# Create and run HyperparameterTuningJob
hp_job = # TODO 2: Your code goes here(
display_name=JOB_NAME,
custom_job=my_custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
project=PROJECT_ID,
search_algorithm=None,
)
hp_job.run()
Explanation: Then, create and run a HyperparameterTuningJob.
There are a few arguments to note:
max_trial_count: Sets an upper bound on the number of trials the service will run. The recommended practice is to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.
parallel_trial_count: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. This is because the default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.
search_algorithm: The available search algorithms are grid, random, or default (None). The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm.
End of explanation
# Set this to true only if you'd like to delete your bucket
delete_bucket = # TODO 3: Your code goes here
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: It will take nearly 50 minutes to complete the job successfully.
Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation
Plotting
There are several plotting modules in Python. Matplotlib is the most complete/versatile package for all 2D plotting. The easiest way to construct a new plot is to have a look at http
Step1: Proper use of Matplotlib
We will use interactive plots inline in the notebook. This feature is enabled through
Step2: Add a curve with a title to the plot
Step3: A long list of markers can be found at http
Step4: Add labels to the x and y axes
Step5: Finally dump the figure to a png file
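Steps 2-5 above (a curve with a title, markers, axis labels, and dumping to png) can be sketched in one self-contained example; the Agg backend here stands in for the notebook's inline display:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs outside a notebook
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), "o-", label="sin(x)")  # circle markers joined by a line
ax.set_title("A first curve")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.legend()
fig.savefig("sine.png")  # dump the figure to a png file
print(len(ax.lines))
```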
Step6: Let's define a function that creates an empty base plot to which we will add
stuff for each demonstration. The function returns the figure and the axes object.
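A minimal sketch of such a helper (the name make_base_plot is illustrative):

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def make_base_plot(title=""):
    """Return an empty figure and axes for later demonstrations to draw on."""
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.set_title(title)
    return fig, ax

fig, ax = make_base_plot("demo")
print(ax.get_title())
```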
Step7: Log plots
Step8: This is equivalent to
Step9: Histograms
Step10: Subplots
Making subplots is relatively easy. Just pass the shape of the grid of plots to the plt.subplots() call that was used in the above examples.
Step11: create a 3x3 grid of plots
Step12: Images and contours
Step13: Animation
Step14: Styles
Configuring matplotlib
Most of the matplotlib code chunks that get written are about styling rather than the actual plotting.
One feature that can be of great help in this case is the matplotlib.style module.
In this notebook, we will go through the available matplotlib styles and their corresponding configuration files. Then we will explain the two ways of using the styles and finally show you how to write a personalized style.
Pre-configured style files
The plt.style.available variable holds a list of the names of the pre-configured matplotlib style files.
Step15: Content of the style files
A matplotlib style file is a simple text file containing the desired matplotlib rcParam configuration, with the .mplstyle extension.
Let's display the content of the 'ggplot' style.
Step16: Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows the user to define is own color palette for its multi-line plots.
use versus context
There are two ways of using the matplotlib styles.
plt.style.use(style)
plt.style.context(style)
Step17: The 'ggplot' style has been applied to the current session. One of its features that differs from standard matplotlib configuration is to put the ticks outside the main figure (xtick.direction
Step18: Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. This is the 'ggplot' xtick.direction setup that has not been overwritten by the new style.
Once the with block has ended, the style goes back to its previous status, that is the 'ggplot' style.
Step19: Custom style file
Starting from these configured files, it is easy to now create our own styles for textbook figures and talk figures and switch from one to another in a single code line plt.style.use('mystyle') at the beginning of the plotting script.
Where to create it?
matplotlib will look for the user style files at the following path
Step20: Note
Step21: One can now copy an existing style file to serve as a boilerplate.
Step22: D3
Step23: Seaborn | Python Code:
%matplotlib
import numpy as np
import matplotlib.pyplot as plt
# To get interactive plotting (otherwise you need to
# type plt.show() at the end of the plotting commands)
plt.ion()
x = np.linspace(0, 10)
y = np.sin(x)
# basic X/Y line plotting with '--' dashed line and linewidth of 2
plt.plot(x, y, '--', label='first line')
# overplot a dotted line on the previous plot
plt.plot(x, np.cos(x)*np.cos(x/2), '.', linewidth=3, label='other')
x_axis_label = plt.xlabel('x axis') #change the label of the xaxis
# change your mind about the label : you do not need to replot everything !
plt.xlabel('another x axis')
# or you can use the re-tuned object
x_axis_label.set_text('changed it from the object itself')
# simply add the legend (from the previous label)
legend = plt.legend()
plt.savefig('plot.png') # save the current figure in png
plt.savefig('plot.eps') # save the current figure in ps, no need to redo it !
!ls
Explanation: Plotting
There are several plotting modules in Python. Matplotlib is the most complete/versatile package for all 2D plotting. The easiest way to construct a new plot is to have a look at http://matplotlib.org/gallery.html and get inspiration from the available examples. The official documentation can be found at: http://matplotlib.org/contents.html
Quick plots, or Matplotlib dirty usage
Proper use of Matplotlib
Subplots
Images and contours
Animation
Styles
D3
Other honorable mentions
Quick plots, or Matplotlib dirty usage
End of explanation
%matplotlib
import matplotlib.pyplot as plt
import numpy as np
# define a figure which can contains several plots, you can define resolution and so on here...
fig2 = plt.figure()
# add one axis; an Axes is the actual plot area where you put data. add_subplot takes (nrows, ncols, index)
ax = fig2.add_subplot(1, 1, 1)
Explanation: Proper use of Matplotlib
We will use interactive plots inline in the notebook. This feature is enabled through:
End of explanation
x = np.linspace(0, 2*np.pi)
ax.plot(x, np.sin(x), '+')
ax.set_title('this title')
plt.show()
# is a simpler syntax to add one axis into the figure (we will stick to this)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '+')
ax.set_title('simple subplot')
Explanation: Add a curve with a title to the plot
End of explanation
print(type(fig))
print(dir(fig))
print(fig.axes)
print('This is the x-axis object', fig.axes[0].xaxis)
print('And this is the y-axis object', fig.axes[0].yaxis)
# arrow pointing to the origin of the axes
ax_arrow = ax.annotate('ax = fig.axes[0]',
xy=(0, -1), # tip of the arrow
xytext=(1, -0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
# arrow pointing to the x axis
x_ax_arrow = ax.annotate('ax.xaxis',
xy=(3, -1), # tip of the arrow
xytext=(3, -0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
xax = ax.xaxis
# arrow pointing to the y axis
y_ax_arrow = ax.annotate('ax.yaxis',
xy=(0, 0), # tip of the arrow
xytext=(1, 0.5), # location of the text
arrowprops={'facecolor':'red', 'shrink':0.05})
Explanation: A long list of markers can be found at http://matplotlib.org/api/markers_api.html
as for the colors, there is a nice discussion at http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib
All the components of a figure can be accessed through the 'Figure' object
End of explanation
# add some ascii text label
# this is equivelant to:
# ax.set_xlabel('x')
xax.set_label_text('x')
# add latex rendered text to the y axis
ax.set_ylabel('$sin(x)$', size=20, color='g', rotation=0)
Explanation: Add labels to the x and y axes
End of explanation
fig.savefig('myplot.png')
!ls
!eog myplot.png
Explanation: Finally dump the figure to a png file
End of explanation
from matplotlib import pyplot as plt
import numpy as np
def create_base_plot():
fig, ax = plt.subplots()
ax.set_title('sample figure')
return fig, ax
def plot_something():
fig, ax = create_base_plot()
x = np.linspace(0, 2*np.pi)
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
plt.show()
Explanation: Let's define a function that creates an empty base plot to which we will add
stuff for each demonstration. The function returns the figure and the axes object.
End of explanation
fig, ax = create_base_plot()
# normal-xlog plots
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
# clear the plot and plot a function using the y axis in log scale
ax.clear()
ax.semilogy(x, np.exp(x))
# you can (un)set it, whenever you want
#ax.set_yscale('linear') # change they y axis to linear scale
#ax.set_yscale('log') # change the y axis to log scale
# you can also make loglog plots
#ax.clear()
#ax.loglog(x, np.exp(x)*np.sin(x))
plt.setp(ax, **dict(yscale='log', xscale='log'))
Explanation: Log plots
End of explanation
plt.setp(ax, 'xscale', 'linear', 'xlim', [1, 5], 'ylim', [0.1, 10], 'xlabel', 'x',
'ylabel', 'y', 'title', 'foo',
'xticks', [1, 2, 3, 4, 5],
'yticks', [0.1, 1, 10],
'yticklabels', ['low', 'medium', 'high'])
Explanation: This is equivalent to:
ax.plot(x, np.exp(x)*np.sin(x))
plt.setp(ax, 'yscale', 'log', 'xscale', 'log')
here we have introduced a new method of setting property values via pyplot.setp.
setp takes a matplotlib object as its first argument. Each pair of positional arguments
after that is treated as a property name and value for the corresponding set method. For
example:
ax.set_yscale('linear')
becomes
plt.setp(ax, 'yscale', 'linear')
This is useful if you need to set lots of properties, such as:
End of explanation
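As a self-contained check of the setp pair syntax on a single line object (the non-interactive Agg backend is assumed here so it also runs headless):

```python
import matplotlib
matplotlib.use('Agg')                      # headless backend: no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([0, 1], [0, 1])

# one setp call instead of three separate set_* calls
plt.setp(line, 'linestyle', '--', 'linewidth', 3, 'color', 'g')

print(line.get_linestyle(), line.get_linewidth())
```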
fig1, ax = create_base_plot()
n, bins, patches = ax.hist(np.random.normal(0, 0.1, 10000), bins=50)
Explanation: Histograms
End of explanation
# Create one figure with two plots/axes, with their xaxis shared
fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(x, np.sin(x), '-.', color='r', label='first line')
other = ax2.plot(x, np.cos(x)*np.cos(x/2), 'o-', linewidth=3, label='other')
ax1.legend()
ax2.legend()
# adjust the spacing between the axes
fig.subplots_adjust(hspace=0.0)
# add a scatter plot to the first axis
ax1.scatter(x, np.sin(x)+np.random.normal(0, 0.1, np.size(x)))
Explanation: Subplots
Making subplots is relatively easy. Just pass the shape of the grid of plots to the plt.subplots() call that was used in the above examples.
End of explanation
fig, axs = plt.subplots(3, 3)
print(axs.shape)
# add an index to all the subplots
for ax_index, ax in enumerate(axs.flatten()):
ax.set_title(ax_index)
# remove all ticks
for ax in axs.flatten():
plt.setp(ax, 'xticks', [], 'yticks', [])
fig.subplots_adjust(hspace=0, wspace=0)
# plot a curve in the diagonal subplots
for ax, func in zip(axs.diagonal(), [np.sin, np.cos, np.exp]):
ax.plot(x, func(x))
Explanation: create a 3x3 grid of plots
End of explanation
xx, yy = np.mgrid[-2:2:100j, -2:2:100j]
img = np.sin(xx) + np.cos(yy)
fig, ax = create_base_plot()
# to have 0,0 in the lower left corner and no interpolation
img_plot = ax.imshow(img, origin='lower', interpolation='none')
# to add a grid to any axis
ax.grid()
img_plot.set_cmap('hot') # changing the colormap
img_plot.set_cmap('spectral') # changing the colormap
colorb = fig.colorbar(img_plot) # adding a color bar
img_plot.set_clim(-0.5, 0.5) # changing the dynamical range
# add contour levels
img_contours = ax.contour(img, [-1, -0.5, 0.0, 0.5])
plt.clabel(img_contours, inline=True, fontsize=20)
Explanation: Images and contours
End of explanation
from IPython.display import HTML
import matplotlib.animation as animation
def f(x, y):
return np.sin(x) + np.cos(y)
fig, ax = create_base_plot()
im = ax.imshow(f(xx, yy), cmap=plt.get_cmap('viridis'))
def updatefig(*args):
global xx, yy
xx += np.pi / 15.
yy += np.pi / 20.
im.set_array(f(xx, yy))
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)
_ = ani.to_html5_video()
# change title during animation!!
ax.set_title('runtime title')
Explanation: Animation
End of explanation
print('\n'.join(plt.style.available))
x = np.arange(0, 10, 0.01)
def f(x, t):
return np.sin(x) * np.exp(1 - x / 10 + t / 2)
def simple_plot(style):
plt.figure()
with plt.style.context(style, after_reset=True):
for t in range(5):
plt.plot(x, f(x, t))
plt.title('Simple plot')
simple_plot('ggplot')
simple_plot('dark_background')
simple_plot('grayscale')
simple_plot('fivethirtyeight')
simple_plot('bmh')
Explanation: Styles
Configuring matplotlib
Most of the matplotlib code chunk that are written are usually about styling and not actual plotting.
One feature that might be of great help if you are in this case is to use the matplotlib.style module.
In this notebook, we will go through the available matplotlib styles and their corresponding configuration files. Then we will explain the two ways of using the styles and finally show you how to write a personalized style.
Pre-configured style files
The plt.style.available variable holds a list of the names of the pre-configured matplotlib style files.
End of explanation
import os
ggplotfile = os.path.join(plt.style.core.BASE_LIBRARY_PATH, 'ggplot.mplstyle')
!cat $ggplotfile
Explanation: Content of the style files
A matplotlib style file is a simple text file containing the desired matplotlib rcParam configuration, with the .mplstyle extension.
Let's display the content of the 'ggplot' style.
End of explanation
plt.style.use('ggplot')
plt.figure()
plt.plot(x, f(x, 0))
Explanation: Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows the user to define their own color palette for multi-line plots.
use versus context
There are two ways of using the matplotlib styles.
plt.style.use(style)
plt.style.context(style):
The use method applied at the beginning of a script will be the default choice in most cases when the style is to be set for the entire script. The only issue is that it sets the matplotlib style for the given Python session, meaning that a second call to use with a different style will only apply new style parameters and not reset the first style. That is if the axes.grid is set to True by the first style and there is nothing concerning the grid in the second style config, the grid will remain set to True which is not matplotlib default.
On the contrary, the context method will be useful when only one or two figures are to be set to a given style. It shall be used with the with statement to create a context manager in which the plot will be made.
Let's illustrate this.
End of explanation
with plt.style.context('dark_background'):
plt.figure()
plt.plot(x, f(x, 1))
Explanation: The 'ggplot' style has been applied to the current session. One of its features that differs from standard matplotlib configuration is to put the ticks outside the main figure (xtick.direction: out)
End of explanation
plt.figure()
plt.plot(x, f(x, 2))
Explanation: Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. This is the 'ggplot' xtick.direction setup that has not been overwritten by the new style.
Once the with block has ended, the style goes back to its previous status, that is the 'ggplot' style.
End of explanation
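The revert-on-exit behaviour is easy to verify on rcParams directly (Agg backend assumed so this runs headless; 'ggplot' switches axes.grid on, and the value falls back once the with block closes):

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

base = plt.rcParams['axes.grid']           # value before entering the context
with plt.style.context('ggplot'):
    inside = plt.rcParams['axes.grid']     # 'ggplot' sets axes.grid: True
after = plt.rcParams['axes.grid']          # restored on exit

print(base, inside, after)
```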
print(plt.style.core.USER_LIBRARY_PATHS)
Explanation: Custom style file
Starting from these configured files, it is easy to now create our own styles for textbook figures and talk figures and switch from one to another in a single code line plt.style.use('mystyle') at the beginning of the plotting script.
Where to create it?
matplotlib will look for the user style files at the following path:
End of explanation
styledir = plt.style.core.USER_LIBRARY_PATHS[0]
!mkdir -p $styledir
Explanation: Note: The directory corresponding to this path will most probably not exist so one will need to create it.
End of explanation
mystylefile = os.path.join(styledir, 'mystyle.mplstyle')
!cp $ggplotfile $mystylefile
%cd $styledir
%%file mystyle.mplstyle
font.size: 16.0 # large font
axes.linewidth: 2
axes.grid: True
axes.titlesize: x-large
axes.labelsize: x-large
axes.labelcolor: 555555
axes.axisbelow: True
xtick.color: 555555
xtick.direction: out
ytick.color: 555555
ytick.direction: out
grid.color: white
grid.linestyle: : # dotted line
Explanation: One can now copy an existing style file to serve as a boilerplate.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import mpld3
mpld3.enable_notebook()
# Scatter points
fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(size=N),
c=np.random.random(size=N),
s = 1000 * np.random.random(size=N),
alpha=0.3,
cmap=plt.cm.jet)
ax.set_title("D3 Scatter Plot", size=18);
import mpld3
mpld3.display(fig)
from mpld3 import plugins
fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))
ax.grid(color='white', linestyle='solid')
N = 50
scatter = ax.scatter(np.random.normal(size=N),
np.random.normal(size=N),
c=np.random.random(size=N),
s = 1000 * np.random.random(size=N),
alpha=0.3,
cmap=plt.cm.jet)
ax.set_title("D3 Scatter Plot (with tooltips!)", size=20)
labels = ['point {0}'.format(i + 1) for i in range(N)]
fig.plugins = [plugins.PointLabelTooltip(scatter, labels)]
Explanation: D3
End of explanation
%matplotlib
plot_something()
import seaborn
plot_something()
Explanation: Seaborn
End of explanation |
9,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of discrete and inverse discrete Fourier transform
Step1: In this notebook, we provide examples of the discrete Fourier transform (DFT) and its inverse, and how xrft automatically harnesses the metadata. We compare the results to conventional numpy.fft (hereon npft) to highlight the strengths of xrft.
A case with synthetic data
Generate synthetic data centered around zero
Step2: Let's take the Fourier transform
We will compare the Fourier transform with and without taking into consideration about the phase information.
Step3: xrft.dft, xrft.fft (and npft.fft with careful npft.fftshifting) all give the same amplitudes as theory (as the coordinates of the original data was centered) but the latter two get the sign wrong due to losing the phase information. It is perhaps worth noting that the latter two (xrft.fft and npft.fft) require the amplitudes to be multiplied by $dx$ to be consistent with theory while xrft.dft automatically takes care of this with the flag true_amplitude=True
Step4: Although xrft.ifft misses the amplitude scaling (viz. resolution in wavenumber or frequency), since it is the inverse of the Fourier transform uncorrected for $dx$, the result becomes consistent with xrft.idft. In other words, xrft.fft (and npft.fft) misses the $dx$ scaling and xrft.ifft (and npft.ifft) misses the $df\ (=1/(N\times dx))$ scaling. When applying the two operators in conjuction by doing ifft(fft()), there is a $1/N\ (=dx\times df)$ factor missing which is, in fact, arbitrarily included in the ifft definition as a normalization factor.
By incorporating the right scalings in xrft.dft and xrft.idft, there is no more consideration of the number of data points ($N$)
Step5: We consider again the Fourier transform.
Step6: The expected additional phase (i.e. the complex term; $e^{-i2\pi kx_0}$) that appears in theory is retrieved with xrft.dft but not with xrft.fft nor npft.fft. This is because in npft.fft, the input data is expected to be centered around zero. In the current version of xrft, the behavior of xrft.dft defaults to xrft.fft so set the flags true_phase=True and true_amplitude=True in order to have the results matching with theory.
Now, let's take the inverse transform.
Step7: Note that we are only able to match the inverse transforms of xrft.ifft and npft.ifft to the data nda to it being Fourier transformed because we "know" the original data da was shifted by nshift datapoints as we see in x[nshift
Step8: The coordinate metadata is lost during the DFT (or any Fourier transform) operation so we need to specify the lag to retrieve the latitudes back in the inverse transform. The original latitudes are centered around 45$^\circ$ so we set the lag to lag=45. | Python Code:
import numpy as np
import numpy.testing as npt
import xarray as xr
import xrft
import numpy.fft as npft
import scipy.signal as signal
import dask.array as dsar
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Example of discrete and inverse discrete Fourier transform
End of explanation
k0 = 1/0.52
T = 4.
dx = 0.02
x = np.arange(-2*T,2*T,dx)
y = np.cos(2*np.pi*k0*x)
y[np.abs(x)>T/2]=0.
da = xr.DataArray(y, dims=('x',), coords={'x':x})
fig, ax = plt.subplots(figsize=(12,4))
fig.set_tight_layout(True)
da.plot(ax=ax, marker='+', label='original signal')
ax.set_xlim([-8,8]);
Explanation: In this notebook, we provide examples of the discrete Fourier transform (DFT) and its inverse, and how xrft automatically harnesses the metadata. We compare the results to conventional numpy.fft (hereon npft) to highlight the strengths of xrft.
A case with synthetic data
Generate synthetic data centered around zero
End of explanation
da_dft = xrft.dft(da, true_phase=True, true_amplitude=True) # Fourier Transform w/ consideration of phase
da_fft = xrft.fft(da) # Fourier Transform w/ numpy.fft-like behavior
da_npft = npft.fft(da)
k = da_dft.freq_x # wavenumber axis
TF_s = T/2*(np.sinc(T*(k-k0)) + np.sinc(T*(k+k0))) # Theoretical result of the Fourier transform
fig, (ax1,ax2) = plt.subplots(figsize=(12,8), nrows=2, ncols=1)
fig.set_tight_layout(True)
(da_dft.real).plot(ax=ax1, linestyle='-', lw=3, c='k', label='phase preservation')
((da_fft*dx).real).plot(ax=ax1, linestyle='', marker='+',label='no phase preservation')
ax1.plot(k, (npft.fftshift(da_npft)*dx).real, linestyle='', marker='x',label='numpy fft')
ax1.plot(k, TF_s.real, linestyle='--', label='Theory')
ax1.set_xlim([-10,10])
ax1.set_ylim([-2,2])
ax1.legend()
ax1.set_title('REAL PART')
(da_dft.imag).plot(ax=ax2, linestyle='-', lw=3, c='k', label='phase preservation')
((da_fft*dx).imag).plot(ax=ax2, linestyle='', marker='+', label='no phase preservation')
ax2.plot(k, (npft.fftshift(da_npft)*dx).imag, linestyle='', marker='x',label='numpy fft')
ax2.plot(k, TF_s.imag, linestyle='--', label='Theory')
ax2.set_xlim([-10,10])
ax2.set_ylim([-2,2])
ax2.legend()
ax2.set_title('IMAGINARY PART');
Explanation: Let's take the Fourier transform
We will compare the Fourier transform with and without taking the phase information into consideration.
End of explanation
ida_dft = xrft.idft(da_dft, true_phase=True, true_amplitude=True) # Signal in direct space
ida_fft = xrft.ifft(da_fft)
fig, ax = plt.subplots(figsize=(12,4))
fig.set_tight_layout(True)
ida_dft.real.plot(ax=ax, linestyle='-', c='k', lw=4, label='phase preservation')
ax.plot(x, ida_fft.real, linestyle='', marker='+', label='no phase preservation', alpha=.6) # w/out the phase information, the coordinates are lost
da.plot(ax=ax, ls='--', lw=3, label='original signal')
ax.plot(x, npft.ifft(da_npft).real, ls=':', label='inverse of numpy fft')
ax.set_xlim([-8,8])
ax.legend(loc='upper left');
Explanation: xrft.dft, xrft.fft (and npft.fft with careful npft.fftshifting) all give the same amplitudes as theory (as the coordinates of the original data were centered) but the latter two get the sign wrong due to losing the phase information. It is perhaps worth noting that the latter two (xrft.fft and npft.fft) require the amplitudes to be multiplied by $dx$ to be consistent with theory while xrft.dft automatically takes care of this with the flag true_amplitude=True:
$$\mathcal{F}(da)(f) = \int_{-\infty}^{+\infty}da(x)e^{-2\pi ifx} dx
\rightarrow
\text{xrft.dft}(da)(f[m]) = \sum_n da(x[n]) e^{-2\pi i f[m] x[n]} \Delta x$$
Perform the inverse transform
End of explanation
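The $\Delta x$ convention above can be verified with numpy alone (no xrft): the Riemann sum over the physical coordinates reproduces np.fft.fft times dx, up to the phase factor $e^{-2\pi i f x[0]}$ carried by the coordinate origin — precisely the phase information discussed above. The grid below mirrors the synthetic signal of this notebook, shortened for speed:

```python
import numpy as np

dx = 0.02
x = np.arange(-4.0, 4.0, dx)                        # centered coordinate grid
y = np.where(np.abs(x) <= 1.0, np.cos(2 * np.pi * x / 0.52), 0.0)

f = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))   # shifted frequency axis

# Riemann-sum DFT evaluated on the true coordinates, with the dx factor
direct = (y * np.exp(-2j * np.pi * np.outer(f, x))).sum(axis=1) * dx

# np.fft only sees indices, so the coordinate origin appears as a phase
fast = np.fft.fftshift(np.fft.fft(y)) * dx * np.exp(-2j * np.pi * f * x[0])
print(np.abs(direct - fast).max())
```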
nshift = 70 # defining a shift
x0 = dx*nshift
nda = da.shift(x=nshift).dropna('x')
fig, ax = plt.subplots(figsize=(12,4))
fig.set_tight_layout(True)
da.plot(ax=ax, label='original (centered) signal')
nda.plot(ax=ax, marker='+', label='shifted signal', alpha=.6)
ax.set_xlim([-8,nda.x.max()])
ax.legend();
Explanation: Although xrft.ifft misses the amplitude scaling (viz. resolution in wavenumber or frequency), since it is the inverse of the Fourier transform uncorrected for $dx$, the result becomes consistent with xrft.idft. In other words, xrft.fft (and npft.fft) misses the $dx$ scaling and xrft.ifft (and npft.ifft) misses the $df\ (=1/(N\times dx))$ scaling. When applying the two operators in conjunction by doing ifft(fft()), there is a $1/N\ (=dx\times df)$ factor missing which is, in fact, arbitrarily included in the ifft definition as a normalization factor.
By incorporating the right scalings in xrft.dft and xrft.idft, there is no more consideration of the number of data points ($N$):
$$\mathcal{F}^{-1}(\mathcal{F}(da))(x) = \int_{-\infty}^{+\infty}\mathcal{F}(da)(f)e^{2\pi ifx} df
\rightarrow
\text{xrft.idft}(\text{xrft.dft}(da))(x[n]) = \sum_m \text{xrft.dft}(da)(f[m]) e^{2\pi i f[m] x[n]} \Delta f$$
Synthetic data not centered around zero
Now let's shift the coordinates so that they are not centered.
This is where the xrft magic happens. With the relevant flags, xrft's dft can preserve information about the data's location in its original space. This information is not preserved in a numpy fourier transform. This section demonstrates how to preserve this information using the true_phase=True, true_amplitude=True flags.
End of explanation
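The $1/N\ (=\Delta x \times \Delta f)$ bookkeeping above can be checked with a bare numpy round trip on arbitrary data (illustrative only; the spacing value is made up):

```python
import numpy as np

np.random.seed(0)
y = np.random.randn(32)
N = y.size
Y = np.fft.fft(y)                          # forward transform, no dx factor

# inverse sum written out with no normalisation at all: off by exactly N
n = np.arange(N)
manual = np.exp(2j * np.pi * np.outer(n, n) / N) @ Y

dx = 0.25                                  # arbitrary grid spacing
df = 1.0 / (N * dx)                        # matching frequency resolution
print(np.allclose(manual / N, y), dx * df, 1.0 / N)
```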
nda_dft = xrft.dft(nda, true_phase=True, true_amplitude=True) # Fourier Transform w/ phase preservation
nda_fft = xrft.fft(nda) # Fourier Transform w/out phase preservation
nda_npft = npft.fft(nda)
nk = nda_dft.freq_x # wavenumber axis
TF_ns = T/2*(np.sinc(T*(nk-k0)) + np.sinc(T*(nk+k0)))*np.exp(-2j*np.pi*nk*x0) # Theoretical FT (Note the additional phase)
fig, (ax1,ax2) = plt.subplots(figsize=(12,8), nrows=2, ncols=1)
fig.set_tight_layout(True)
(nda_dft.real).plot(ax=ax1, linestyle='-', lw=3, c='k', label='phase preservation')
((nda_fft*dx).real).plot(ax=ax1, linestyle='', marker='+',label='no phase preservation')
ax1.plot(nk, (npft.fftshift(nda_npft)*dx).real, linestyle='', marker='x',label='numpy fft')
ax1.plot(nk, TF_ns.real, linestyle='--', label='Theory')
ax1.set_xlim([-10,10])
ax1.set_ylim([-2.,2])
ax1.legend()
ax1.set_title('REAL PART')
(nda_dft.imag).plot(ax=ax2, linestyle='-', lw=3, c='k', label='phase preservation')
((nda_fft*dx).imag).plot(ax=ax2, linestyle='', marker='+', label='no phase preservation')
ax2.plot(nk, (npft.fftshift(nda_npft)*dx).imag, linestyle='', marker='x',label='numpy fft')
ax2.plot(nk, TF_ns.imag, linestyle='--', label='Theory')
ax2.set_xlim([-10,10])
ax2.set_ylim([-2.,2.])
ax2.legend()
ax2.set_title('IMAGINARY PART');
Explanation: We consider again the Fourier transform.
End of explanation
inda_dft = xrft.idft(nda_dft, true_phase=True, true_amplitude=True) # Signal in direct space
inda_fft = xrft.ifft(nda_fft)
fig, ax = plt.subplots(figsize=(12,4))
fig.set_tight_layout(True)
inda_dft.real.plot(ax=ax, linestyle='-', c='k', lw=4, label='phase preservation')
ax.plot(x[:len(inda_fft.real)], inda_fft.real, linestyle='', marker='o', alpha=.7,
label='no phase preservation (w/out shifting)')
ax.plot(x[nshift:], inda_fft.real, linestyle='', marker='+', label='no phase preservation')
nda.plot(ax=ax, ls='--', lw=3, label='original signal')
ax.plot(x[nshift:], npft.ifft(nda_npft).real, ls=':', label='inverse of numpy fft')
ax.set_xlim([nda.x.min(),nda.x.max()])
ax.legend(loc='upper left');
Explanation: The expected additional phase (i.e. the complex term; $e^{-i2\pi kx_0}$) that appears in theory is retrieved with xrft.dft but not with xrft.fft nor npft.fft. This is because in npft.fft, the input data is expected to be centered around zero. In the current version of xrft, the behavior of xrft.dft defaults to xrft.fft so set the flags true_phase=True and true_amplitude=True in order to have the results matching with theory.
Now, let's take the inverse transform.
End of explanation
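The extra complex term discussed above is the discrete Fourier shift theorem, which can be checked with numpy on a circular shift (a sketch only; xrft handles the general, coordinate-aware case):

```python
import numpy as np

N = 64
n = np.arange(N)
y = np.sin(2 * np.pi * 3 * n / N)          # a simple periodic test signal

s = 5                                      # shift by 5 samples
ys = np.roll(y, s)                         # circularly shifted copy

f = np.fft.fftfreq(N)                      # frequencies in cycles per sample
predicted = np.fft.fft(y) * np.exp(-2j * np.pi * f * s)
print(np.allclose(np.fft.fft(ys), predicted))
```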
da = xr.tutorial.open_dataset("air_temperature").air
da
Fda = xrft.dft(da.isel(time=0), dim="lat", true_phase=True, true_amplitude=True)
Fda
Explanation: Note that we are only able to match the inverse transforms of xrft.ifft and npft.ifft to the data nda prior to it being Fourier transformed because we "know" the original data da was shifted by nshift datapoints as we see in x[nshift:] (compare the blue dots and orange crosses where without the knowledge of the shift, we may assume that the data were centered around zero). Using xrft.idft along with xrft.dft with the flags true_phase=True and true_amplitude=True automatically takes care of the information of shifted coordinates.
A case with real data
Load atmospheric temperature from the NMC reanalysis.
End of explanation
Fda_1 = xrft.idft(Fda, dim="freq_lat", true_phase=True, true_amplitude=True, lag=45)
Fda_1
fig, (ax1,ax2) = plt.subplots(figsize=(12,4), nrows=1, ncols=2)
da.isel(time=0).plot(ax=ax1)
Fda_1.real.plot(ax=ax2)
Explanation: The coordinate metadata is lost during the DFT (or any Fourier transform) operation so we need to specify the lag to retrieve the latitudes back in the inverse transform. The original latitudes are centered around 45$^\circ$ so we set the lag to lag=45.
End of explanation |
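The DFT output only carries $N$ and $\Delta f$; everything about the original origin has to be resupplied, which is what lag does. A numpy-only sketch of that bookkeeping (the 25-point, 2.5° grid assumed here mirrors the tutorial latitudes):

```python
import numpy as np

N, spacing, lag = 25, 2.5, 45.0            # 25 latitudes, 2.5 deg apart, centered on 45
freq = np.fft.fftshift(np.fft.fftfreq(N, d=spacing))

# spacing recovered from the frequency resolution, origin supplied via lag
d = 1.0 / (N * (freq[1] - freq[0]))
lat = lag + (np.arange(N) - N // 2) * d
print(lat.min(), lat.max(), lat.mean())
```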
9,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Deep Learning
Project
Step5: Using MNIST dataset
Loading pickled MNIST dataset
Step6: Question 1
What approach did you take in coming up with a solution to this problem?
Answer
Step7: Question 4
Describe how you set up the training and testing data for your model. How does the model perform on a realistic dataset?
Answer
Step8: Question 7
Choose five candidate images of numbers you took from around you and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult?
Answer
Step9: Question 10
How well does your model localize numbers on the testing set from the realistic dataset? Do your classification results change at all with localization included?
Answer | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import random
import os
import sys
import tarfile
import cPickle
import gzip
import theano
import theano.tensor as T
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
Explanation: Machine Learning Engineer Nanodegree
Deep Learning
Project: Build a Digit Recognition Program
In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Step 1: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize sequences of digits. Train the model using synthetic data generated by concatenating character images from notMNIST or MNIST. To produce a synthetic sequence of digits for testing, you can for example limit yourself to sequences up to five digits, and use five classifiers on top of your deep network. You would have to incorporate an additional ‘blank’ character to account for shorter number sequences.
There are various aspects to consider when thinking about this problem:
- Your model can be derived from a deep neural net or a convolutional network.
- You could experiment with sharing or not sharing the weights between the softmax classifiers.
- You can also use a recurrent network in your deep neural net to replace the classification layers and directly emit the sequence of digits one-at-a-time.
You can use Keras to implement your model. Read more at keras.io.
Here is an example of a published baseline model on this problem. (video). You are not expected to model your architecture precisely using this model nor get the same performance levels, but this is more to show an example of an approach used to solve this particular problem. We encourage you to try out different architectures for yourself and see what works best for you. Here is a useful forum post discussing the architecture as described in the paper and here is another one discussing the loss function.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
# Load the dataset (Python 2 cPickle; on Python 3 use pickle.load(f, encoding='latin1'))
f = gzip.open('data/mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
def shared_dataset(data_xy):
"""Function that loads the dataset into shared variables.

The reason we store our dataset in shared variables is to allow
Theano to copy it into the GPU memory (when code is run on GPU).
Since copying data into the GPU is slow, copying a minibatch every time
it is needed (the default behaviour if the data is not in a shared
variable) would lead to a large decrease in performance.
"""
data_x, data_y = data_xy
shared_x = theano.shared(np.asarray(data_x, dtype=theano.config.floatX))
shared_y = theano.shared(np.asarray(data_y, dtype=theano.config.floatX))
# When storing data on the GPU it has to be stored as floats
# therefore we will store the labels as ``floatX`` as well
# (``shared_y`` does exactly that). But during our computations
# we need them as ints (we use labels as index, and if they are
# floats it doesn't make sense) therefore instead of returning
# ``shared_y`` we will have to cast it to int. This little hack
# lets us get around this issue
return shared_x, T.cast(shared_y, 'int32')
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
batch_size = 500 # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
plt.imshow(train_set[0][np.random.randint(train_set[0].shape[0])].reshape(28, 28))  # train_set is an (images, labels) tuple
index = 0
img = np.asarray(test_set[0][index]).reshape(28, 28)  # test_set is an (images, labels) tuple
plt.imshow(img)
print(img.shape)
print(test_set[1][index])  # the matching label
print(train_set)
% MATLAB/Octave snippet for inspecting the raw MNIST idx files:
images = loadMNISTImages('data/train-images-idx3-ubyte');
labels = loadMNISTLabels('data/train-labels-idx1-ubyte');
% We are using display_network from the autoencoder code
display_network(images(:,1:100)); % Show the first 100 images
disp(labels(1:10));
#Functions definition
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 5% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
dest_filename = os.path.join(data_root, filename)
if force or not os.path.exists(dest_filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(dest_filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', dest_filename)
else:
raise Exception(
'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')
return dest_filename
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(data_root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_savez(data_folders, min_num_images_per_class, force=False):
dataset_name = data_folders[0][:-1]+'images.npz'
dataset = {}
if force or not os.path.exists(dataset_name):
for folder in data_folders:
print(folder[-1], end='')
dataset[folder[-1:]]=load_letter(folder, min_num_images_per_class)
try:
np.savez(dataset_name, **dataset)
except Exception as e:
print('Unable to save data to', dataset_name, ':', e)
return dataset_name
def gen_data_dict(dataset):
data_dict = {}
all_data = np.load(dataset)
for letter in all_data.files:
try:
data_dict[letter] = all_data[letter]
except Exception as e:
print('Unable to process data from', dataset, ':', e)
raise
all_data.close()
return data_dict
url = 'http://commondatastorage.googleapis.com/books1000/'
dataset_name = 'notMNIST_data.npz'
num_classes = 10
image_size = 28
pixel_depth = 255.0
random.seed(0)
last_percent_reported = None
data_root = 'data/' # Change me to store data elsewhere
print('Download files')
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
print('Download Complete')
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
print('Extract Complete')
train_datasets = maybe_savez(train_folders, 45000)
test_datasets = maybe_savez(test_folders, 1800)
print('Saving Complete')
train_data = gen_data_dict(train_datasets)
test_data = gen_data_dict(test_datasets)
print('Data Dictionaries Built')
num_classes = 10
np.random.seed(133)
Explanation: Using MNIST dataset
Loading pickled MNIST dataset
End of explanation
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
Explanation: Question 1
What approach did you take in coming up with a solution to this problem?
Answer:
Question 2
What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.)
Answer:
Question 3
How did you train your model? How did you generate your synthetic dataset? Include examples of images from the synthetic data you constructed.
Answer:
Step 2: Train a Model on a Realistic Dataset
Once you have settled on a good architecture, you can train your model on real data. In particular, the Street View House Numbers (SVHN) dataset is a good large-scale dataset collected from house numbers in Google Street View. Training on this more challenging dataset, where the digits are not neatly lined-up and have various skews, fonts and colors, likely means you have to do some hyperparameter exploration to perform well.
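Before exploring hyperparameters, the cropped-digit SVHN .mat files need loading. A hedged sketch (assuming the standard format-2 layout — a dict with keys 'X' and 'y', images stacked along the last axis, and label 10 standing in for the digit 0 — check this against the files you actually download):

```python
import numpy as np

def remap_svhn_labels(y):
    """SVHN encodes the digit 0 as class 10; map it back to 0."""
    y = np.asarray(y).ravel().astype(np.int64)
    y[y == 10] = 0
    return y

def load_svhn(path):
    """Sketch: load one cropped-digit SVHN split into (N, 32, 32, 3) floats."""
    from scipy.io import loadmat
    mat = loadmat(path)
    X = np.transpose(mat['X'], (3, 0, 1, 2)).astype(np.float32) / 255.0
    y = remap_svhn_labels(mat['y'])
    return X, y
```

`load_svhn` is untested here (it needs the downloaded .mat files); the label remapping is the part that most often trips people up.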
Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
Explanation: Question 4
Describe how you set up the training and testing data for your model. How does the model perform on a realistic dataset?
Answer:
Question 5
What changes did you have to make, if any, to achieve "good" results? Were there any options you explored that made the results worse?
Answer:
Question 6
What were your initial and final results with testing on a realistic dataset? Do you believe your model is doing a good enough job at classifying numbers correctly?
Answer:
Step 3: Test a Model on Newly-Captured Images
Take several pictures of numbers that you find around you (at least five), and run them through your classifier on your computer to produce example results. Alternatively (optionally), you can try using OpenCV / SimpleCV / Pygame to capture live images from a webcam and run those through your classifier.
Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
Explanation: Question 7
Choose five candidate images of numbers you took from around you and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult?
Answer:
Question 8
Is your model able to perform equally well on captured pictures or a live camera stream when compared to testing on the realistic dataset?
Answer:
Optional: Question 9
If necessary, provide documentation for how an interface was built for your model to load and classify newly-acquired images.
Answer: Leave blank if you did not complete this part.
Step 4: Explore an Improvement for a Model
There are many things you can do once you have the basic classifier in place. One example would be to also localize where the numbers are on the image. The SVHN dataset provides bounding boxes that you can tune to train a localizer. Train a regression loss to the coordinates of the bounding box, and then test it.
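When tuning the bounding-box regressor, an intersection-over-union (IoU) score is the usual way to compare a predicted box against the SVHN ground truth. A small helper, assuming (x, y, w, h) box tuples (SVHN's digitStruct stores left/top/width/height, so conversion is direct):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

IoU above ~0.5 is a common threshold for counting a localization as correct.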
Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation
### Your optional code implementation goes here.
### Feel free to use as many code cells as needed.
Explanation: Question 10
How well does your model localize numbers on the testing set from the realistic dataset? Do your classification results change at all with localization included?
Answer:
Question 11
Test the localization function on the images you captured in Step 3. Does the model accurately calculate a bounding box for the numbers in the images you found? If you did not use a graphical interface, you may need to investigate the bounding boxes by hand. Provide an example of the localization created on a captured image.
Answer:
Optional Step 5: Build an Application or Program for a Model
Take your project one step further. If you're interested, look to build an Android application or even a more robust Python program that can interface with input images and display the classified numbers and even the bounding boxes. You can for example try to build an augmented reality app by overlaying your answer on the image like the Word Lens app does.
Loading a TensorFlow model into a camera app on Android is demonstrated in the TensorFlow Android demo app, which you can simply modify.
If you decide to explore this optional route, be sure to document your interface and implementation, along with significant results you find. You can see the additional rubric items that you could be evaluated on by following this link.
Optional Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
End of explanation |
9,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this tutorial, you will learn how to use cross-validation for better measures of model performance.
Introduction
Machine learning is an iterative process.
You will face choices about what predictive variables to use, what types of models to use, what arguments to supply to those models, etc. So far, you have made these choices in a data-driven way by measuring model quality with a validation (or holdout) set.
But there are some drawbacks to this approach. To see this, imagine you have a dataset with 5000 rows. You will typically keep about 20% of the data as a validation dataset, or 1000 rows. But this leaves some random chance in determining model scores. That is, a model might do well on one set of 1000 rows, even if it would be inaccurate on a different 1000 rows.
At an extreme, you could imagine having only 1 row of data in the validation set. If you compare alternative models, which one makes the best predictions on a single data point will be mostly a matter of luck!
In general, the larger the validation set, the less randomness (aka "noise") there is in our measure of model quality, and the more reliable it will be. Unfortunately, we can only get a large validation set by removing rows from our training data, and smaller training datasets mean worse models!
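The 'noise' claim can be quantified: if the model's true accuracy is p, the accuracy measured on n held-out rows has standard error sqrt(p(1−p)/n), so a 100-row validation set is about sqrt(10) ≈ 3.2× noisier than a 1000-row one. A quick sketch (the helper name is illustrative):

```python
import math

def accuracy_std_err(p, n):
    """Standard error of an accuracy estimate from n validation rows."""
    return math.sqrt(p * (1 - p) / n)

# noise shrinks as the holdout set grows
for n in (100, 1000, 5000):
    print(n, round(accuracy_std_err(0.9, n), 4))
```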
What is cross-validation?
In cross-validation, we run our modeling process on different subsets of the data to get multiple measures of model quality.
For example, we could begin by dividing the data into 5 pieces, each 20% of the full dataset. In this case, we say that we have broken the data into 5 "folds".
Then, we run one experiment for each fold
Step1: Then, we define a pipeline that uses an imputer to fill in missing values and a random forest model to make predictions.
While it's possible to do cross-validation without pipelines, it is quite difficult! Using a pipeline will make the code remarkably straightforward.
Step2: We obtain the cross-validation scores with the cross_val_score() function from scikit-learn. We set the number of folds with the cv parameter.
Step3: The scoring parameter chooses a measure of model quality to report | Python Code:
#$HIDE$
import pandas as pd
# Read the data
data = pd.read_csv('../input/melbourne-housing-snapshot/melb_data.csv')
# Select subset of predictors
cols_to_use = ['Rooms', 'Distance', 'Landsize', 'BuildingArea', 'YearBuilt']
X = data[cols_to_use]
# Select target
y = data.Price
Explanation: In this tutorial, you will learn how to use cross-validation for better measures of model performance.
Introduction
Machine learning is an iterative process.
You will face choices about what predictive variables to use, what types of models to use, what arguments to supply to those models, etc. So far, you have made these choices in a data-driven way by measuring model quality with a validation (or holdout) set.
But there are some drawbacks to this approach. To see this, imagine you have a dataset with 5000 rows. You will typically keep about 20% of the data as a validation dataset, or 1000 rows. But this leaves some random chance in determining model scores. That is, a model might do well on one set of 1000 rows, even if it would be inaccurate on a different 1000 rows.
At an extreme, you could imagine having only 1 row of data in the validation set. If you compare alternative models, which one makes the best predictions on a single data point will be mostly a matter of luck!
In general, the larger the validation set, the less randomness (aka "noise") there is in our measure of model quality, and the more reliable it will be. Unfortunately, we can only get a large validation set by removing rows from our training data, and smaller training datasets mean worse models!
What is cross-validation?
In cross-validation, we run our modeling process on different subsets of the data to get multiple measures of model quality.
For example, we could begin by dividing the data into 5 pieces, each 20% of the full dataset. In this case, we say that we have broken the data into 5 "folds".
Then, we run one experiment for each fold:
- In Experiment 1, we use the first fold as a validation (or holdout) set and everything else as training data. This gives us a measure of model quality based on a 20% holdout set.
- In Experiment 2, we hold out data from the second fold (and use everything except the second fold for training the model). The holdout set is then used to get a second estimate of model quality.
- We repeat this process, using every fold once as the holdout set. Putting this together, 100% of the data is used as holdout at some point, and we end up with a measure of model quality that is based on all of the rows in the dataset (even if we don't use all rows simultaneously).
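The fold bookkeeping described above amounts to simple index splitting; a minimal hand-rolled sketch (scikit-learn's KFold does exactly this, so this is only to make the mechanics concrete):

```python
import numpy as np

def kfold_indices(n_rows, n_folds=5):
    """Yield (train_idx, val_idx) pairs; each row is held out exactly once."""
    idx = np.arange(n_rows)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        yield train_idx, val_idx

splits = list(kfold_indices(10, n_folds=5))
```

Each row lands in the validation fold exactly once, which is what lets 100% of the data contribute to the final score.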
When should you use cross-validation?
Cross-validation gives a more accurate measure of model quality, which is especially important if you are making a lot of modeling decisions. However, it can take longer to run, because it estimates multiple models (one for each fold).
So, given these tradeoffs, when should you use each approach?
- For small datasets, where extra computational burden isn't a big deal, you should run cross-validation.
- For larger datasets, a single validation set is sufficient. Your code will run faster, and you may have enough data that there's little need to re-use some of it for holdout.
There's no simple threshold for what constitutes a large vs. small dataset. But if your model takes a couple minutes or less to run, it's probably worth switching to cross-validation.
Alternatively, you can run cross-validation and see if the scores for each experiment seem close. If each experiment yields the same results, a single validation set is probably sufficient.
Example
We'll work with the same data as in the previous tutorial. We load the input data in X and the output data in y.
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50,
random_state=0))
])
Explanation: Then, we define a pipeline that uses an imputer to fill in missing values and a random forest model to make predictions.
While it's possible to do cross-validation without pipelines, it is quite difficult! Using a pipeline will make the code remarkably straightforward.
End of explanation
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("MAE scores:\n", scores)
Explanation: We obtain the cross-validation scores with the cross_val_score() function from scikit-learn. We set the number of folds with the cv parameter.
End of explanation
print("Average MAE score (across experiments):")
print(scores.mean())
Explanation: The scoring parameter chooses a measure of model quality to report: in this case, we chose negative mean absolute error (MAE). The docs for scikit-learn show a list of options.
It is a little surprising that we specify negative MAE. Scikit-learn has a convention where all metrics are defined so a high number is better. Using negatives here allows them to be consistent with that convention, though negative MAE is almost unheard of elsewhere.
We typically want a single measure of model quality to compare alternative models. So we take the average across experiments.
End of explanation |
9,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Burst statistics of 5 smFRET samples
Step1: 8-spot paper plot style
Step2: Compute Burst Data
Burst Search Parameters
Step3: Multispot
Correction Factors
Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analysis - Leakage coefficient - Summary)
Step4: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter)
Step5: Multispot Data files
Step6: usALEX
Step7: Load Data
Step8: Correcting na
$$Dir = d_{dirT} \, (n_a + \gamma \, n_d)$$
$$n_a = n_a^* - Lk - Dir = n_{al} - Dir = n_{al} - d_{dirT} \, (n_a + \gamma \, n_d)$$
$$(1 + d_{dirT}) \, n_a = n_{al} - d_{dirT} \gamma \, n_d$$
$$ n_a = \frac{n_{al} - d_{dirT} \gamma \, n_d}{1 + d_{dirT}}$$
Step9: Figures
Step10: Corrected burst size
Step11: Bursts Counts
DexAem Counts
Step12: DexDem Counts
Step13: Bursts Counts (8-spot mean)
Step14: Burst duration
Step15: NOTE
Step16: Peak photon rate, acceptor-ref
Step17: Peak photon rate Aem
Step18: NOTE | Python Code:
from fretbursts import *
sns = init_notebook()
import os
from glob import glob
import pandas as pd
from IPython.display import display
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import lmfit
print('lmfit version:', lmfit.__version__)
figure_size = (5, 4)
default_figure = lambda: plt.subplots(figsize=figure_size)
save_figures = True
def savefig(filename, **kwargs):
if not save_figures:
return
import os
dir_ = 'figures/'
kwargs_ = dict(dpi=300, bbox_inches='tight')
#frameon=True, facecolor='white', transparent=False)
kwargs_.update(kwargs)
plt.savefig(dir_ + filename, **kwargs_)
print('Saved: %s' % (dir_ + filename))
Explanation: Burst statistics of 5 smFRET samples
End of explanation
PLOT_DIR = './figure/'
import matplotlib as mpl
from cycler import cycler
bmap = sns.color_palette("Set1", 9)
colors = np.array(bmap)[(1,0,2,3,4,8,6,7), :]
mpl.rcParams['axes.prop_cycle'] = cycler('color', colors)
colors_labels = ['blue', 'red', 'green', 'violet', 'orange', 'gray', 'brown', 'pink', ]
for c, cl in zip(colors, colors_labels):
locals()[cl] = tuple(c) # assign variables with color names
sns.palplot(colors)
Explanation: 8-spot paper plot style
End of explanation
m = 10
rate_th = 25e3
ph_sel = Ph_sel(Dex='DAem')
bg_kwargs_auto = dict(fun=bg.exp_fit,
time_s = 30,
tail_min_us = 'auto',
F_bg=1.7,)
samples = ('7d', '12d', '17d', '22d', '27d', 'DO')
Explanation: Compute Burst Data
Burst Search Parameters
End of explanation
leakage_coeff_fname = 'results/Multi-spot - leakage coefficient KDE wmean DexDem.csv'
leakageM = np.loadtxt(leakage_coeff_fname, ndmin=1)
print('Multispot Leakage Coefficient:', leakageM)
Explanation: Multispot
Correction Factors
Load the leakage coefficient from disk (computed in Multi-spot 5-Samples analysis - Leakage coefficient - Summary):
End of explanation
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_t beta.csv'
dir_ex_t = np.loadtxt(dir_ex_coeff_fname, ndmin=1)
print('Direct excitation coefficient (dir_ex_t):', dir_ex_t)
gamma_fname = 'results/Multi-spot - gamma factor.csv'
gammaM = np.loadtxt(gamma_fname, ndmin=1)
print('Multispot Gamma Factor (gamma):', gammaM)
Explanation: Load the direct excitation coefficient ($d_{dirT}$) from disk (computed in usALEX - Corrections - Direct excitation physical parameter):
End of explanation
data_dir = './data/multispot/'
file_list = sorted(glob(data_dir + '*.hdf5'))
labels = ['7d', '12d', '17d', '22d', '27d', 'DO']
files_dict = {lab: fname for lab, fname in zip(sorted(labels), file_list)}
files_dict
BurstsM = {}
for data_id in samples:
dx = loader.photon_hdf5(files_dict[data_id])
dx.calc_bg(**bg_kwargs_auto)
dx.leakage = leakageM
dx.burst_search(m=m, min_rate_cps=rate_th, ph_sel=ph_sel)
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='Aem'))
max_rate_Aem = dx.max_rate
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='Dem'))
max_rate_Dem = dx.max_rate
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='DAem'))
bursts = pd.concat([bext.burst_data(dx, ich=ich)
.assign(ich=ich)
.assign(max_rate_Dem=max_rate_Dem[ich])
.assign(max_rate_Aem=max_rate_Aem[ich])
for ich in range(8)])
bursts = bursts.round({'E': 6, 'bg_d': 3, 'bg_a': 3, 'nd': 3, 'na': 3, 'nt': 3,
'width_ms': 4, 'max_rate': 3, 'max_rate_Dem': 3, 'max_rate_Aem': 3})
bursts.to_csv('results/bursts_multispot_{}_m={}_rate_th={}_ph={}.csv'.format(data_id, m, rate_th, ph_sel))
BurstsM[data_id] = bursts
Explanation: Multispot Data files
End of explanation
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakageA = np.loadtxt(leakage_coeff_fname, ndmin=1)
print('usALEX Leakage coefficient:', leakageA)
dir_ex_coeff_aa_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_aa_fname, ndmin=1)
print('usALEX Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'
gammaA = np.loadtxt(gamma_fname, ndmin=1)
print('usALEX Gamma Factorr (gamma):', gammaA)
data_dir = './data/singlespot/'
file_list = sorted(glob(data_dir + '*.hdf5'))
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
BurstsA = {}
for data_id in samples:
if data_id not in files_dict:
continue
dx = loader.photon_hdf5(files_dict[data_id])
loader.usalex_apply_period(dx)
dx.calc_bg(**bg_kwargs_auto)
dx.leakage = leakageA
dx.burst_search(m=m, min_rate_cps=rate_th, ph_sel=ph_sel)
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='Aem'), compact=True)
max_rate_Aem = dx.max_rate
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='Dem'), compact=True)
max_rate_Dem = dx.max_rate
dx.calc_max_rate(m=m, ph_sel=Ph_sel(Dex='DAem'), compact=True)
bursts = (bext.burst_data(dx)
.assign(max_rate_Dem=max_rate_Dem[0])
.assign(max_rate_Aem=max_rate_Aem[0]))
bursts = bursts.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3,
'width_ms': 4, 'max_rate': 3, 'max_rate_Dem': 3, 'max_rate_Aem': 3})
bursts.to_csv('results/bursts_usALEX_{}_m={}_rate_th={}_ph={}.csv'.format(data_id, m, rate_th, ph_sel))
BurstsA[data_id] = bursts
Dex_fraction = bl.get_alex_fraction(dx.D_ON[0], dx.alex_period)
Dex_fraction
#Dex_fraction = 0.4375 # Use this when not loading the files
Explanation: usALEX
End of explanation
import pandas as pd
ph_sel
BurstsM = {}
for data_id in samples:
fname = 'results/bursts_multispot_{}_m={}_rate_th={}_ph={}.csv'.format(data_id, m, rate_th, ph_sel)
bursts = pd.read_csv(fname, index_col=0)
BurstsM[data_id] = bursts
BurstsA = {}
for data_id in samples:
if data_id == 'DO':
continue
fname = 'results/bursts_usALEX_{}_m={}_rate_th={}_ph={}.csv'.format(data_id, m, rate_th, ph_sel)
bursts = pd.read_csv(fname, index_col=0)
BurstsA[data_id] = bursts
bursts.head()
Explanation: Load Data
End of explanation
def corr_na(nal, nd, gamma, dir_ex_t):
return (nal - dir_ex_t * gamma * nd) / (1 + dir_ex_t)
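A quick numeric sanity check (added as a sketch, not part of the original analysis): applying the forward model $n_{al} = n_a + d_{dirT}\,(n_a + \gamma n_d)$ and then corr_na should recover $n_a$ exactly.

```python
def corr_na(nal, nd, gamma, dir_ex_t):  # same definition as above
    return (nal - dir_ex_t * gamma * nd) / (1 + dir_ex_t)

na_true, nd, gamma, d = 30.0, 20.0, 1.0, 0.05
nal = na_true + d * (na_true + gamma * nd)   # forward direct-excitation model
assert abs(corr_na(nal, nd, gamma, d) - na_true) < 1e-9
```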
# for bursts in BurstsA.values():
# bursts['na_corr1'] = corr_na(bursts.na, bursts.nd, gammaA, dir_ex_t)
# for bursts in BurstsA.values():
# bursts['na_corr2'] = bursts.na - dir_ex_aa*bursts.naa
# for bursts in BurstsA.values():
# plt.figure()
# plt.plot((bursts.na - bursts.na_corr1) - (bursts.na - bursts.na_corr2), 'o', alpha=0.2)
# plt.ylim(-3, 3)
Explanation: Correcting na
$$Dir = d_{dirT} \, (n_a + \gamma \, n_d)$$
$$n_a = n_a^* - Lk - Dir = n_{al} - Dir = n_{al} - d_{dirT} \, (n_a + \gamma \, n_d)$$
$$(1 + d_{dirT}) \, n_a = n_{al} - d_{dirT} \gamma \, n_d$$
$$ n_a = \frac{n_{al} - d_{dirT} \gamma \, n_d}{1 + d_{dirT}}$$
End of explanation
sns.set(style='ticks', font_scale=1.4, palette=colors)
lw = 2.5
Explanation: Figures
End of explanation
size_th = 20
fig, ax = plt.subplots(1, 2, figsize=(14, 5), sharey=True)
plt.subplots_adjust(wspace=0.05)
bins = np.arange(0, 200, 2)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
color = colors[i]
mask = bursts.ich == ich
sizes = bursts.na[mask] + gammaM*bursts.nd[mask]
sizes = sizes.loc[sizes > size_th]
counts, bins = np.histogram(sizes, bins, normed=True)
label = s if ich == 0 else ''
ax[1].plot(x, counts, marker='o', ls='', color=color, label=label)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
sizes = sizes.loc[sizes > size_th]
counts, bins = np.histogram(sizes, bins, normed=True)
ax[0].plot(x, counts, marker='o', ls='', color=color, label=label)
plt.yscale('log')
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
a.set_title('Burst Size Distribution')
a.set_xlabel('Corrected Burst Size')
gammaA, gammaM
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(14, 5), sharey=True)
plt.subplots_adjust(wspace=0.05)
bins = np.arange(0, 400, 2)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:1]):
bursts = BurstsM[s]
color = colors[i]
mask = bursts.ich == ich
sizes = bursts.na[mask]/gammaM + bursts.nd[mask]
sizes = sizes.loc[sizes > size_th]
counts, bins = np.histogram(sizes, bins, normed=True)
label = s if ich == 0 else ''
ax[1].plot(x, counts, marker='o', ls='', color=color, label=label)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na/gammaA + bursts.nd
sizes = sizes.loc[sizes > size_th]
counts, bins = np.histogram(sizes, bins, normed=True)
ax[0].plot(x, counts, marker='o', ls='', color=color, label=label)
plt.yscale('log')
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
a.set_title('Burst Size Distribution')
a.set_xlabel('Corrected Burst Size')
Explanation: Corrected burst size
End of explanation
var = 'na'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
#kws = dict(marker='o', ls='')
kws = dict(lw=lw)
var_labels = dict(na='DexAem', nd='DexDem')
bins = np.arange(0, 350, 5)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
mask = (sizes > size_th)
data = bursts.loc[mask, var]
counts, bins = np.histogram(data, bins, normed=True)
if ich == 0:
ax[1].plot([], label=s, **kws) # empty lines for the legend
counts[counts == 0] = np.nan # break lines at zeros in log-scale
ax[1].plot(x, counts, color=color, alpha=0.5, **kws)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
mask = (sizes > size_th)
data = bursts.loc[mask, var]
counts, bins = np.histogram(data, bins, normed=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
if var == 'na':
plt.xlim(0, 140)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
#a.set_title('DexAem Burst Size Distribution')
a.set_xlabel('Photon Counts (%s)' % var_labels[var])
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot, size_th=%d' % (var, size_th))
savefig('%s distribution usALEX vs multispot, size_th=%d.svg' % (var, size_th))
Explanation: Bursts Counts
DexAem Counts
End of explanation
var = 'nd'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
#kws = dict(marker='o', ls='')
kws = dict(lw=lw)
var_labels = dict(na='DexAem', nd='DexDem')
bins = np.arange(0, 350, 5)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
mask = (sizes > size_th)
data = bursts.loc[mask, var]
counts, bins = np.histogram(data, bins, normed=True)
if ich == 0:
ax[1].plot([], label=s, **kws) # empty lines for the legend
counts[counts == 0] = np.nan # break lines at zeros in log-scale
ax[1].plot(x, counts, color=color, alpha=0.5, **kws)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
mask = (sizes > size_th)
data = bursts.loc[mask, var]
            counts, bins = np.histogram(data, bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
            ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
if var == 'na':
plt.xlim(0, 140)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
#a.set_title('DexAem Burst Size Distribution')
a.set_xlabel('Photon Counts (%s)' % var_labels[var])
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot, size_th=%d' % (var, size_th))
size_th = 20
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
#kws = dict(marker='o', ls='')
kws = dict(lw=lw)
bins = np.arange(0, 200, 5)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
burstss = bursts.loc[sizes > size_th]
data = burstss.na + burstss.nd * gammaM
        counts, bins = np.histogram(data, bins, density=True)
if ich == 0:
ax[1].plot([], label=s, **kws) # empty lines for the legend
counts[counts == 0] = np.nan # break lines at zeros
ax[1].plot(x, counts, color=color, alpha=0.5, **kws)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
burstss = bursts.loc[sizes > size_th]
data = burstss.na + burstss.nd * gammaA
            counts, bins = np.histogram(data, bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
            ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
plt.xlim(0, 180)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
#a.set_title('DexAem Burst Size Distribution')
    a.set_xlabel(r'Photon Counts ($n_a + \gamma n_d$)')
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('nt distribution usALEX vs multispot, size_th=%d' % size_th)
Explanation: DexDem Counts
End of explanation
var = 'na'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
#kws = dict(marker='o', ls='')
kws = dict(lw=lw)
bins = np.arange(0, 150, 4)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
#bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
data = bursts.loc[sizes > size_th, var]
    counts, bins = np.histogram(data, bins, density=True)
label = s# if ich == 0 else ''
counts[counts == 0] = np.nan
ax[1].plot(x, counts, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
data = bursts.loc[sizes > size_th, var]
        counts, bins = np.histogram(data, bins, density=True)
counts[counts == 0] = np.nan
ax[0].plot(x, counts, color=color, label=label, **kws)
plt.yscale('log')
plt.ylim(1e-4)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
#a.set_title('DexAem Burst Size Distribution')
a.set_xlabel('Photon Counts')
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot mean, size_th=%d' % (var, size_th))
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
#kws = dict(marker='o', ls='')
kws = dict(lw=lw)
bins = np.arange(0, 300, 4)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
#bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
data = bursts.loc[sizes > size_th, 'nd']
    counts, bins = np.histogram(data, bins, density=True)
label = s# if ich == 0 else ''
counts[counts == 0] = np.nan
ax[1].plot(x, counts, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
data = bursts.nd.loc[sizes > size_th]
        counts, bins = np.histogram(data, bins, density=True)
counts[counts == 0] = np.nan
ax[0].plot(x, counts, color=color, label=label, **kws)
plt.yscale('log')
plt.ylim(1e-4)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
a.set_title('DexDem Burst Size Distribution')
a.set_xlabel('Photon Counts')
Explanation: Bursts Counts (8-spot mean)
End of explanation
var = 'width_ms'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 8, 0.2)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
for ich in range(8):
bursts = BurstsM[s]
color = colors[i]
burstsc = bursts.loc[bursts.ich == ich]
sizes = burstsc.na + burstsc.nd * gammaM
widths = burstsc.loc[sizes > size_th, var]
        counts, bins = np.histogram(widths, bins, density=True)
#label = s if ich == 0 else ''
if ich == 0:
ax[1].plot([], label=s, **kws) # empty lines for the legend
counts[counts == 0] = np.nan # break lines at zeros
ax[1].plot(x, counts, color=color, alpha=0.5,
zorder=5-i, **kws)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
widths = bursts.loc[sizes > size_th, var]
            counts, bins = np.histogram(widths, bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
            ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-3)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
#a.set_title('Burst Duration Distribution')
a.set_xlabel('Burst Duration (ms)')
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot, size_th=%d' % (var, size_th))
var = 'width_ms'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 8, 0.2)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
widths = bursts.loc[sizes > size_th, var]
    counts, bins = np.histogram(widths, bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[1].plot(x, counts, color=color, label=s, **kws)
if 'DO' not in s:
bursts = BurstsA[s]
sizes = bursts.na + bursts.nd * gammaA
widths = bursts.loc[sizes > size_th, var]
        counts, bins = np.histogram(widths, bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
        ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-3)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
#a.set_title('Burst Duration Distribution')
a.set_xlabel('Burst Duration (ms)')
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot mean, size_th=%d' % (var, size_th))
savefig('%s distribution usALEX vs multispot mean, size_th=%d.svg' % (var, size_th))
bins = np.arange(0, 8, 0.1)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in [5]:
for i, s in enumerate(samples[:4]):
bursts = BurstsM[s]
bursts = bursts.loc[bursts.ich == ich]
#color = colors[ich]
sizes = bursts.na + bursts.nd * gammaM
burstsm = bursts.loc[sizes > size_th]
        counts, bins = np.histogram(burstsm.width_ms, bins=bins, density=True)
label = s #if ich == 0 else ''
counts[counts == 0] = np.nan # break lines at zeros in log-scale
plt.plot(x, counts, marker='o', ls='-', alpha=1, label=label)
plt.yscale('log')
plt.legend(title='Sample')
sns.despine()
Explanation: Burst duration
End of explanation
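A note on the `counts[counts == 0] = np.nan` idiom used throughout these cells: on a log-scale axis an empty bin would send the line off to minus infinity, so replacing zeros with NaN makes matplotlib break the line there instead. A minimal self-contained sketch of the trick:

```python
import numpy as np

# Toy histogram with an empty middle bin.
counts = np.array([3.0, 0.0, 5.0])

# NaN values are skipped by matplotlib line plots, so on a log-scale
# axis the curve is broken at empty bins instead of diving off-scale.
counts[counts == 0] = np.nan
print(counts)  # -> [ 3. nan  5.]
```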
size_th = 20
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(wspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 600, 5)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s].fillna(0)
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem + burstsm.max_rate_Dem * gammaM
    counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
label = s# if ich == 0 else ''
counts[counts == 0] = np.nan # break lines at zeros
ax[1].plot(x, counts, alpha=1, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s].fillna(0)
sizes = bursts.na + bursts.nd * gammaA
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem + burstsm.max_rate_Dem * gammaA
        counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
a.set_title('Peak Photon-Rate Distribution (D*g + A)')
a.set_xlabel('Peak Photon Rate (kcps)')
Explanation: NOTE: No effect of reduced diffusion time in 22d sample is visible. FCS shows a 25% reduction instead.
Peak photon rate, donor-ref
End of explanation
size_th = 20
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(wspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 800, 10)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s].fillna(0)
color = colors[i]
sizes = bursts.na/gammaM + bursts.nd
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem/gammaM + burstsm.max_rate_Dem
    counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
label = s# if ich == 0 else ''
counts[counts == 0] = np.nan # break lines at zeros
ax[1].plot(x, counts, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s].fillna(0)
sizes = bursts.na/gammaA + bursts.nd
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem/gammaA + burstsm.max_rate_Dem
        counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
a.set_title('Peak Photon-Rate Distribution (D + A/g)')
a.set_xlabel('Peak Photon Rate (kcps)')
Explanation: Peak photon rate, acceptor-ref
End of explanation
var = 'max_rate_Aem'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(hspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 500, 10)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s].fillna(0)
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem
    counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
label = s# if ich == 0 else ''
ax[1].plot(x, counts, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s].fillna(0)
sizes = bursts.na + bursts.nd * gammaA
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Aem
        counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(1e-4)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
a.set_title('Peak Photon-Rate Distribution (Aem)')
a.set_xlabel('Peak Photon Rate (kcps)')
Dex_fraction
var = 'max_rate_Aem'
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(wspace=0.08)
kws = dict(lw=lw)
bins = np.arange(0, 400, 10)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for ich in range(8):
for i, s in enumerate(samples[:]):
bursts = BurstsM[s].fillna(0)
bursts = bursts.loc[bursts.ich == ich]
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
mask = (sizes > size_th)
data = bursts.loc[mask, var] * 1e-3
        counts, bins = np.histogram(data, bins=bins, density=True)
if ich == 0:
ax[1].plot([], label=s, **kws) # empty lines for the legend
counts[counts == 0] = np.nan # break lines at zeros in log-scale
ax[1].plot(x, counts, color=color, alpha=0.5, **kws)
if ich == 0 and 'DO' not in s:
bursts = BurstsA[s].fillna(0)
sizes = bursts.na + bursts.nd * gammaA
mask = (sizes > size_th)
data = bursts.loc[mask, var] * 1e-3 * Dex_fraction
            counts, bins = np.histogram(data, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
plt.ylim(3e-5)
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a)
#a.set_title('Peak Photon-Rate Distribution (Aem)')
a.set_xlabel('Peak Photon Rate (kcps)')
title_kw = dict(fontdict={'verticalalignment': 'top'}, fontsize=18)
ax[0].set_title('μs-ALEX', **title_kw)
ax[1].set_title('Multispot', **title_kw);
savefig('%s distribution usALEX vs multispot, size_th=%d' % (var, size_th))
savefig('%s distribution usALEX vs multispot, size_th=%d.svg' % (var, size_th))
Explanation: Peak photon rate Aem
End of explanation
size_th = 15
fig, ax = plt.subplots(1, 2, figsize=(11, 4.5), sharey=True, sharex=True)
plt.subplots_adjust(wspace=0.05)
kws = dict(lw=lw)
bins = np.arange(0, 1000, 10)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
for i, s in enumerate(samples[:]):
bursts = BurstsM[s].fillna(0)
color = colors[i]
sizes = bursts.na + bursts.nd * gammaM
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Dem
    counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
label = s# if ich == 0 else ''
ax[1].plot(x, counts, color=color, label=label, **kws)
if 'DO' not in s:
bursts = BurstsA[s].fillna(0)
sizes = bursts.na + bursts.nd * gammaA
burstsm = bursts.loc[sizes > size_th]
max_rates = burstsm.max_rate_Dem
        counts, bins = np.histogram(max_rates*1e-3, bins=bins, density=True)
counts[counts == 0] = np.nan # break lines at zeros
ax[0].plot(x, counts, color=color, label=s, **kws)
plt.yscale('log')
ax[1].legend(title='Sample')
for a in ax:
sns.despine(ax=a, trim=True)
a.set_title('Peak Photon-Rate Distribution (Dem)')
a.set_xlabel('Peak Photon Rate (kcps)')
def gauss(x, sig):
return np.exp(-0.5 * (x/sig)**2)
box = dict(facecolor='y', alpha=0.2, pad=10)
x = np.arange(-10, 10, 0.01)
y = gauss(x, 1)
y2 = 1.19 * gauss(x, 1)
with plt.xkcd():
    plt.plot(x, y, lw=3, label=r'reference, $\sigma$ = 1')
    plt.plot(x, y2, lw=3, label=r'20% higher peak, $\sigma$ = 1')
plt.axhline(0.2, lw=2, ls='--', color='k')
plt.text(4, 0.22, 'burst search threshold')
sns.despine()
plt.legend(loc=(0.65, 0.7))
plt.text(-8.5, 0.95, 'Area Ratio: %.1f' % 1.2, bbox=box, fontsize=18)
x = np.arange(-10, 10, 0.01)
y = gauss(x, 1)
y3 = 1.19 * gauss(x, 5/3)
with plt.xkcd():
    plt.plot(x, y, lw=3, label=r'reference, $\sigma$ = 1')
    plt.plot(x, y3, lw=3, label=r'20%% higher peak and $\sigma$ = %.1f' % (5/3))
plt.axhline(0.2, lw=2, ls='--', color='k')
plt.text(4, 0.22, 'burst search threshold')
sns.despine()
plt.legend(loc=(0.65, 0.7))
plt.text(-8.5, 0.95, 'Area Ratio: %.1f' % (1.2 * (5/3)), bbox=box, fontsize=18)
Explanation: NOTE: The usALEX peak rates are computed after removing the alternation gaps.
The estimated rates are therefore the ones that would be reached with a CW
Dex, i.e. with a higher mean excitation power. To make the
usALEX rates comparable with the multispot ones we need to multiply the former
by Dex_fraction (i.e. ~2).
We observe that the multispot rates on the acceptor channel are ~ 20% larger
than in the usALEX case.
End of explanation |
9,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atomic Charge Prediction
Introduction
In this notebook we will machine-learn the relationship between an atomic descriptor and its electron density using neural networks.
The atomic descriptor is a numerical representation of the chemical environment of the atom. Several choices are available for testing.
Reference Mulliken charges were calculated for 134k molecules at the CCSD level
Step1: Let's pick a descriptor and an atomic type.
Available descriptors are
Step2: Load the databases with the descriptors (input) and the correct charge densities (output). Databases are quite big, so we can decide how many samples to use for training.
Step3: Next we setup a multilayer perceptron of suitable size. Our package of choice is scikit-learn, but more efficient ones are available.<br>
Check the scikit-learn <a href="http
Step4: Training
Now comes the tough part! The idea of training is to evaluate the ANN with the training inputs and measure its error (since we know the correct outputs). It is then possible to compute the derivative (gradient) of the error w.r.t. each parameter (connections and biases). By shifting the parameters in the opposite direction of the gradient, we obtain a better set of parameters, that should give smaller error.
This procedure can be repeated until the error is minimised.<br><br>
It may take a while...
Step5: Check the ANN quality with a regression plot, showing the mismatch between the exact and NN predicted outputs for the validation set.
Step6: Exercises
1. Compare descriptors
Test the accuracy of different descriptors with the same NN size.
Step7: 2. Optimal NN
Find the smallest NN that gives good accuracy.
Step8: 3. Training sample size issues
Check whether the descriptor fails because it does not contain enough information, or because there was not enough training data.
Step9: 4. Combine with Principal Component Analysis - Advanced
Reduce the descriptor size with PCA (check the PCA.ipynb notebook) and train again. Can you get similar accuracy with much smaller networks?
Step10: 5. Putting it all together
After training NNs for each atomic species (MBTR), combine them into one program that predicts charges for all atoms in a molecule.
Compute local MBTR for the molecule below.
Compute all atomic charges with the NNs.
Is the total charge zero? If not, normalise it.
Note | Python Code:
# --- INITIAL DEFINITIONS ---
from sklearn.neural_network import MLPRegressor
import numpy, math, random
from scipy.sparse import load_npz
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from ase import Atoms
from visualise import view
Explanation: Atomic Charge Prediction
Introduction
In this notebook we will machine-learn the relationship between an atomic descriptor and its electron density using neural networks.
The atomic descriptor is a numerical representation of the chemical environment of the atom. Several choices are available for testing.
Reference Mulliken charges were calculated for 134k molecules at the CCSD level: each took 1-4 hours.
The problem is not trivial, even for humans.
<table><tr>
<td>
<img src="./images/complex-CX.png" width="350pt">
</td><td>
<img src="./images/complex-CH3-X.png" width="350pt">
</td></tr>
</table>
On the left we see the distribution of s electron density on C atoms in the database. Different chemical environments are shown with different colours. The stacked histograms on the right show the details of charge density for CH$_3$-CX depending on the environment of CX. The total amount of possible environments for C up to the second order exceeds 100, and the figure suggests the presence of third order effects. This is too complex to treat accurately with human intuition.
Let's see if we can train neural networks to give accurate predictions in milliseconds!
Setup
Here we use the ANN to model the relationship between the descriptors of atoms in molecules and the partial atomic charge density.
End of explanation
# Z is the atom type: allowed values are 1, 6, 7, 8, or 9
Z = 6
# TYPE is the descriptor type
TYPE = "mbtr"
#show descriptor details
print("\nDescriptor details")
desc = open("./data/descriptor."+TYPE+".txt","r").readlines()
for l in desc: print(l.strip())
print(" ")
Explanation: Let's pick a descriptor and an atomic type.
Available descriptors are:
* boba: bag of bonds (per atom)
* boba2: bag of bonds - advanced (per atom)
* acsf: atom centered symmetry functions - 40k atoms max for each type
* gnn: graph based fingerprint from NanoLayers
* soap: smooth overlap of atomic positions (per atom)
* mbtr: manybody tensor representation (per atom)
Possible atom types are:
* 1 = Hydrogen
* 6 = Carbon
* 7 = Nitrogen
* 8 = Oxygen
* 9 = Fluorine
End of explanation
# load input/output data
trainIn = load_npz("./data/charge."+str(Z)+".input."+TYPE+".npz").toarray()
trainOut = numpy.load("./data/charge."+str(Z)+".output.npy")
trainOut = trainOut[0:trainIn.shape[0]]
# decide how many samples to take from the database
samples = min(trainIn.shape[0], 9000) # change the number here if needed!
print("training samples: "+str(samples))
print("validation samples: "+str(trainIn.shape[0]-samples))
print("number of features: {}".format(trainIn.shape[1]))
# split the data: the first `samples` rows for training, the rest for validation
validIn = trainIn[samples:]
validOut = trainOut[samples:]
trainIn = trainIn[0:samples]
trainOut = trainOut[0:samples]
# shift and scale the inputs - OPTIONAL
train_mean = numpy.mean(trainIn, axis=0)
train_std = numpy.std(trainIn, axis=0)
train_std[train_std == 0] = 1
for a in range(trainIn.shape[1]):
trainIn[:,a] -= train_mean[a]
trainIn[:,a] /= train_std[a]
# also for validation set
for a in range(validIn.shape[1]):
validIn[:,a] -= train_mean[a]
validIn[:,a] /= train_std[a]
# show the first few descriptors
print("\nDescriptors for the first 5 atoms:")
print(trainIn[0:5])
Explanation: Load the databases with the descriptors (input) and the correct charge densities (output). Databases are quite big, so we can decide how many samples to use for training.
End of explanation
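The shift-and-scale step above can be sanity-checked in isolation. This hedged sketch uses synthetic data (not the real descriptor arrays) and confirms that the transform yields zero mean and unit standard deviation per feature — the same operation `sklearn.preprocessing.StandardScaler` performs:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))  # synthetic stand-in for trainIn

mean = X.mean(axis=0)
std = X.std(axis=0)
std[std == 0] = 1          # guard against constant (zero-variance) features
X_scaled = (X - mean) / std
```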
# setup the neural network
# alpha is a regularisation parameter, explained later
nn = MLPRegressor(hidden_layer_sizes=(10), activation='tanh', solver='adam', alpha=0.01, learning_rate='adaptive')
Explanation: Next we setup a multilayer perceptron of suitable size. Our package of choice is scikit-learn, but more efficient ones are available.<br>
Check the scikit-learn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html">documentation</a> for a list of parameters.
End of explanation
# use this to change some parameters during training if needed
nn.set_params(solver='lbfgs')
nn.fit(trainIn, trainOut);
Explanation: Training
Now comes the tough part! The idea of training is to evaluate the ANN with the training inputs and measure its error (since we know the correct outputs). It is then possible to compute the derivative (gradient) of the error w.r.t. each parameter (connections and biases). By shifting the parameters in the opposite direction of the gradient, we obtain a better set of parameters, that should give smaller error.
This procedure can be repeated until the error is minimised.<br><br>
It may take a while...
End of explanation
# evaluate the training and validation error
trainMLOut = nn.predict(trainIn)
validMLOut = nn.predict(validIn)
print ("Mean Abs Error (training) : ", (numpy.abs(trainMLOut-trainOut)).mean())
print ("Mean Abs Error (validation): ", (numpy.abs(validMLOut-validOut)).mean())
plt.plot(validOut,validMLOut,'o')
plt.plot([Z-1,Z+1],[Z-1,Z+1]) # perfect fit line
plt.xlabel('correct output')
plt.ylabel('NN output')
plt.show()
# error histogram
plt.hist(validMLOut-validOut,50)
plt.xlabel("Error")
plt.ylabel("Occurrences")
plt.show()
Explanation: Check the ANN quality with a regression plot, showing the mismatch between the exact and NN predicted outputs for the validation set.
End of explanation
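Besides the mean absolute error, the coefficient of determination $R^2$ summarises the regression plot in a single number. A self-contained sketch on toy arrays (not the notebook's `validOut`/`validMLOut`), using the same formula as `sklearn.metrics.r2_score`:

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # perfect fit -> 1.0
```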
# DIY code here...
Explanation: Exercises
1. Compare descriptors
Test the accuracy of different descriptors with the same NN size.
End of explanation
# DIY code here...
Explanation: 2. Optimal NN
Find the smallest NN that gives good accuracy.
End of explanation
# DIY code here...
Explanation: 3. Training sample size issues
Check whether the descriptor fails because it does not contain enough information, or because there was not enough training data.
End of explanation
# DIY code here...
Explanation: 4. Combine with Principal Component Analysis - Advanced
Reduce the descriptor size with PCA (check the PCA.ipynb notebook) and train again. Can you get similar accuracy with much smaller networks?
End of explanation
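As a hedged starting point for this exercise, PCA can be written directly with NumPy's SVD (equivalent to `sklearn.decomposition.PCA` up to component signs). Synthetic data stands in for the real descriptor matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))            # stand-in for the descriptor matrix

Xc = X - X.mean(axis=0)                  # PCA operates on centred data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
n_components = 3
X_reduced = Xc @ Vt[:n_components].T     # projection onto the top 3 principal axes
```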
# atomic positions as matrix
molxyz = numpy.load("./data/molecule.coords.npy")
# atom types
moltyp = numpy.load("./data/molecule.types.npy")
atoms_sys = Atoms(positions=molxyz, numbers=moltyp)
view(atoms_sys)
# compute MBTR descriptor for the molecule
# ...
# compute all atomic charghes using previously trained NNs
# ...
Explanation: 5. Putting it all together
After training NNs for each atomic species (MBTR), combine them into one program that predicts charges for all atoms in a molecule.
Compute local MBTR for the molecule below.
Compute all atomic charges with the NNs.
Is the total charge zero? If not, normalise it.
Note: Careful about the training: if the training data was transformed, the MBTR here should be as well.
End of explanation |
9,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 5
CHE 116
Step1: $$2^{10} \approx 10^3$$
$$2^{20} \approx 10^6$$
$$2^{10n} \approx 10^{3n}$$
1.2 Answer
Step2: 1.3 Answer
Step3: 1.4 Answer
Step4: 2. Watching Youtube with the Geometric Distribution (15 Points)
Write what quantity the question asks for symbolically, write the equation you need to compute symbolically (if necessary) and compute your answer in Python.
You accidentally open youtube while doing your homework. After watching one video, you find another interesting video you must watch with probability 25%. Define the sample space and define success to reflect a geometric distribution.
What's the probability that you will return after watching exactly one video?
What's the probability that you will return after watching exactly two videos?
Your friend wants to know when you will get back to homework. You explain the sample space is unbounded, but you guarantee there is a 99% chance you will return to your homework after how many videos? You must use python tutor on your loop to receive full credit.
What is the expected number of videos you will watch? You may use any method to compute this.
2.1 Answer
The sample space is $[1, \infty]$ and success is not watching another video and returning to your homework.
2.2 Answer
$$P(1) = 0.75$$
2.3 Answer
$$P(2) = 0.75\times 0.25 = 0.1875$$
Step5: 2.4 Answer
$$P(N \leq n) \geq 0.99$$
Step6: You will return after watching the 4th video
2.5 Answer
$$E[N] = \frac{1}{p} = \frac{4}{3}$$
3. Living Expenses (6 Points)
The per-capita (expected) income in the US is \$27,500 and its standard deviation is \$15,000. Assuming the normal distribution, answer the following questions. You may use scipy stats for all of these but you must use Z values.
First, let's check our assumption of normality. What is $P(x < \$0)$? Why does this matter for the normality assumption?
What's the probability of making over the starting salary of a chemical engineer, \$67,000? Does this make you think our assumption of normality is correct?
According to this distribution, how much money would you need to be in the top 1% of incomes?
3.1 Answer
$$\DeclareMathOperator{\erf}{erf}$$
$$Z = \frac{0 - 27,500}{15,000}$$
$$P(-\infty < x < 0)$$
Step7: The assumption is OK, only 3% of our probability is in "impossible" values of negative numbers
3.2 Answer
$$\int_{$67,000}^{\infty} \cal{N}(\$27500, \$15000)$$
Step8: The probability is 0.4%. It appears this is a bad model since income is much more spread than this.
3.3 Answer
We're trying to find $a$ such that $$P(a < x < \infty) = 0.99$$
Step9: The top 1% of earners is anyone above $62,395
4. The Excellent Retirement Simulator - Deterministic (20 Points)
We're going to write a program to discover how much to save in our retirement accounts. You are going to start with $P$ dollars, your principal. You invest in low-cost index funds which have an expected return of 5% (your principal is 105% of last years value). To live after your retirement, you withdraw \$30,000 per year. Complete the following tasks
Step10: 5. The Excellent Retirement Simulator - Stochastic (9 Points)
Rewrite your annual function to sample random numbers. Your investment rate of return (maturation rate) should be sampled from a normal distribution with standard deviation 0.03 and mean 0.05. Your withdrawal should come from a normal distribution with mean \$30,000 and standard deviation \$10,000. Call your new annual, s_annual and your new terminator s_terminator. Answer the following questions | Python Code:
import numpy as np
ints = np.arange(1,21)
pows = 2**ints
print(pows)
print(pows[9], pows[19])
Explanation: Homework 5
CHE 116: Numerical Methods and Statistics
Prof. Andrew White
Version 1.1 (2/9/2016)
1. Python Practice (20 Points)
Answer the following problems in Python
[4 points] Using Numpy, create the first 20 powers of 2. What is 2^10? What about 2^20? What's the pattern?
[4 points] Using a for loop, sum integers from 1 to 100 and break if your sum is greater than 200. Print the integer which causes your sum to exceed 200.
[4 points] Define a function which takes in $n$ and $k$ and returns the number of permutations of $n$ objects in a sequence of length $k$. Don't forget to add a docstring!
[8 points] You're making post-cards for the Rochester Animal Shelter. They have 30 distinct cats and you can arrange them into a line. Each ordering of cats is considered a different post-card. Each post-card as the same number of cats in it. What's the smallest number of cats that should be in each post-card to achieve at least one million unique post-cards? Use a while loop, a break statement, and your function from the previous question. You must use python tutor on this problem to receive full credit
1.1 Answer
End of explanation
total = 0
for i in range(1, 101):
    total += i
    if total > 200:
        print(i, total)
        break
Explanation: $$2^{10} \approx 10^3$$
$$2^{20} \approx 10^6$$
$$2^{10n} \approx 10^{3n}$$
1.2 Answer
End of explanation
from scipy.special import factorial
def fxn(n, k):
'''Computes the number of permutations of n objects in a k-length sequence.
Args:
n: The number of objects
k: The sequence length
    Returns:
The number of permutations of fixed length.
'''
return factorial(n) / factorial(n - k)
Explanation: 1.3 Answer
End of explanation
cats = 1
while fxn(n=30, k=cats) < 10**6:
cats += 1
print(cats)
Explanation: 1.4 Answer
End of explanation
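Since Python 3.8 the standard library offers `math.perm`, which computes the same ${}_nP_k$ as the `fxn` above; it can be used to cross-check the post-card result:

```python
import math

# Smallest number of cats per post-card giving at least one million orderings.
cats = 1
while math.perm(30, cats) < 10**6:
    cats += 1
print(cats, math.perm(30, cats))  # -> 5 17100720
```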
0.75 * 0.25
Explanation: 2. Watching Youtube with the Geometric Distribution (15 Points)
Write what quantity the question asks for symbolically, write the equation you need to compute symbolically (if necessary) and compute your answer in Python.
You accidentally open youtube while doing your homework. After watching one video, you find another interesting video you must watch with probability 25%. Define the sample space and define success to reflect a geometric distribution.
What's the probability that you will return after watching exactly one video?
What's the probability that you will return after watching exactly two videos?
Your friend wants to know when you will get back to homework. You explain the sample space is unbounded, but you guarantee there is a 99% chance you will return to your homework after how many videos? You must use python tutor on your loop to receive full credit.
What is the expected number of videos you will watch? You may use any method to compute this.
2.1 Answer
The sample space is $[1, \infty]$ and success is not watching another video and returning to your homework.
2.2 Answer
$$P(1) = 0.75$$
2.3 Answer
$$P(2) = 0.75\times 0.25 = 0.1875$$
End of explanation
gsum = 0
n = 0
p = 0.75
while gsum < 0.99:
    n += 1
    gsum += (1 - p)**(n - 1) * p
print(n, gsum)
Explanation: 2.4 Answer
$$P(N \leq n) \geq 0.99$$
End of explanation
from scipy import stats as ss
mu = 27500
sig = 15000
Z = (0 - mu) / sig
print(ss.norm.cdf(Z))
Explanation: You will return after watching the 4th video
2.5 Answer
$$E[N] = \frac{1}{p} = \frac{4}{3}$$
3. Living Expenses (6 Points)
The per-capita (expected) income in the US is \$27,500 and its standard deviation is \$15,000. Assuming the normal distribution, answer the following questions. You may use scipy stats for all of these but you must use Z values.
First, let's check our assumption of normality. What is $P(x < \$0)$? Why does this matter for the normality assumption?
What's the probability of making over the starting salary of a chemical engineer, \$67,000? Does this make you think our assumption of normality is correct?
According to this distribution, how much money would you need to be in the top 1% of incomes?
3.1 Answer
$$\DeclareMathOperator{\erf}{erf}$$
$$Z = \frac{0 - 27,500}{15,000}$$
$$P(-\infty < x < 0)$$
End of explanation
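The closed-form expectation quoted in the 2.5 answer above, $E[N] = 1/p = 4/3$, can be cross-checked numerically by truncating the geometric series $E[N]=\sum_{n\ge 1} n(1-p)^{n-1}p$ — a standard-library sketch, not part of the original homework:

```python
p = 0.75
# Truncated expectation of a geometric distribution; terms past n=200
# are negligibly small for p = 0.75.
expected = sum(n * (1 - p) ** (n - 1) * p for n in range(1, 200))
print(expected)  # ~1.3333..., i.e. 1/p = 4/3
```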
Z = (67000 - mu) / sig
print(1 - ss.norm.cdf(Z))
Explanation: The assumption is OK, only 3% of our probability is in "impossible" values of negative numbers
3.2 Answer
$$\int_{67{,}000}^{\infty} \mathcal{N}(x;\, 27{,}500,\ 15{,}000)\, dx$$
End of explanation
ss.norm.ppf(0.99, scale=sig, loc=mu)
Explanation: The probability is 0.4%. It appears this is a bad model since income is much more spread than this.
3.3 Answer
We're trying to find $a$ such that $$P(a < x < \infty) = 0.99$$
End of explanation
def annual(P, r=0.05, W=30000):
    '''Computes the change in principal after one year
    Args:
        P: The principal - amount of money at the beginning of the year
        r: The rate of return from principal
        W: The amount withdrawn
    Returns:
        The new principal'''
    P -= W
    P *= (r + 1)
    return P

def terminator(P, r=0.05, W=30000, upper_limit=50):
    '''Finds the number of years before the principal is exhausted.
    Args:
        P: The principal - amount of money at the beginning of the year
        r: The rate of return from principal
        W: The amount withdrawn
        upper_limit: The maximum iterations before giving up.
    Returns:
        The number of years before the principal is exhausted.'''
    for i in range(upper_limit):
        if P < 0:
            break
        P = annual(P, r, W)
    return i
terminator(250000)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ps = [i / 1000. for i in range(10**5, int(5 * 10**5), 100)]
ys = [terminator(p * 1000) for p in ps]
plt.plot(ps, ys)
plt.xlabel('Thousands of Dollars')
plt.ylabel('Years')
plt.show()
ps = [i / 1000. for i in range(10**5, int(7.5 * 10**5), 100)]
ys = [terminator(p * 1000, upper_limit=1000) for p in ps]
plt.plot(ps, ys)
plt.show()
Explanation: The top 1% of earners is anyone above $62,395
4. The Excellent Retirement Simulator - Deterministic (20 Points)
We're going to write a program to discover how much to save in our retirement accounts. You are going to start with $P$ dollars, your principal. You invest in low-cost index funds which have an expected return of 5% (your principal is 105% of last years value). To live after your retirement, you withdraw \$30,000 per year. Complete the following tasks:
Write a function which takes in a principal, $P$, maturation rate of $r$, and withdrawal amount of $W$. It returns the amount of money remaining. Withdrawal occurs before maturation. Using your function, with a principal of \$250,000 and the numbers given above, how much money remains after 1 year? Call this function annual.
Write a new function called terminator which takes in $P$, $r$ and $W$. It should use your annual function and a for loop. It returns the number of years before your principal is gone and you have no more retirement money. Since you're using a for loop, you should have some upper bound. Let's use 50 for that upper bound. To test your method, you should get 11 years using the numbers from part 1.
Make a graph of principal vs number of years. You should use a for loop, not numpy, since your functions aren't built for numpy arrays. Your principals should run from \$100,000 to \$500,000.
Make your terminator (for this part only) have an upper bound of 1000 years. You are a vampire. Make a plot from \$100,000 to \$750,000. How much money do you need in retirement as a vampire?
End of explanation
from scipy import stats as ss

def s_annual(P, r=0.05, W=30000, sig_r=0.03, sig_W=10000):
    '''Computes the change in principal after one year with stochastic
    withdrawal and rate of return.
    Args:
        P: The principal - amount of money at the beginning of the year
        r: The mean rate of return from principal
        W: The mean amount withdrawn
        sig_r: Standard deviation of the rate of return
        sig_W: Standard deviation of the withdrawal
    Returns:
        The new principal'''
    P -= ss.norm.rvs(size=1, scale=sig_W, loc=W)
    P *= (ss.norm.rvs(size=1, scale=sig_r, loc=r) + 1)
    return P

def s_terminator(P, r=0.05, W=30000, upper_limit=50):
    for i in range(upper_limit):
        if P < 0:
            break
        P = s_annual(P, r, W)
    return i
samples = []
for i in range(1000):
    samples.append(s_terminator(2.5 * 10**5))
plt.hist(samples)
plt.show()
def s_threshold(P, y):
    '''Returns the fraction of times the principal P lasts longer than y'''
    success = 0
    for i in range(1000):
        if s_terminator(P) > y:
            success += 1
    return success / 1000

s_threshold(2.5 * 10**5, 10)
p = np.linspace(1 * 10**5, 10 * 10**5, 200)
for pi in p:
    if s_threshold(pi, 25) > 0.95:
        print(pi)
        break
import random
import scipy.stats
scipy.stats.geom?
Explanation: 5. The Excellent Retirement Simulator - Stochastic (9 Points)
Rewrite your annual function to sample random numbers. Your investment rate of return (maturation rate) should be sampled from a normal distribution with standard deviation 0.03 and mean 0.05. Your withdrawal should come from a normal distribution with mean \$30,000 and standard deviation \$10,000. Call your new annual, s_annual and your new terminator s_terminator. Answer the following questions:
Previously you calculated you can live for 11 years off of a principal of \$250,000. Using your new terminator and annual functions, make a histogram for how many years you can live off of \$250,000. Use 1000 samples.
Create a function which takes in a principal and a target number of years for the principal to last. It should return what fraction of terminator runs succeed in lasting that many years. For example, a principal of \$250,000 should have about 55%-60% success with 10 years.
Using any method you would like (e.g., trial and error or plotting), what should your principal be to ensure retirement for 25 years in 95% of the samples?
End of explanation |
9,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mean field inference for $\mathbb{Z}_2$ Synchronization
We illustrate the $\mathbb{Z}_2$ synchronization inference problem using pyro.
Step1: The model
Our model is
$$
Y_{ij} = \frac{\lambda}{n}\sigma_i\sigma_j+ \frac{W_{ij}}{\sqrt{n}},
$$
with $\sigma_{i}\in\{\pm 1\}$, $i=1,\ldots,n$, where $W_{i>j}\sim \mathcal{N}(0,1)$, with $W_{ij}=W_{ji}$ and $W_{ii}=0$. Thus we need to sample from the distribution
$$
p(\sigma,Y;m) = \prod_i p(\sigma_i;m_i) \prod_{i>j} \sqrt{\frac{n}{2\pi}}\exp\left[-\frac{n\left(Y_{ij} - \lambda \sigma_i \sigma_j/n\right)^2}{2}\right],
$$
where the first factor describes the Bernoulli distributions, parameterized in terms of their expectations $m_i$.
$$
p(\sigma_i=\pm 1;m_i) = \frac{1\pm m_i}{2}.
$$
Actually, we want to obtain $p(\sigma|Y)$, which amounts to determining posterior $m_i(Y)$.
The planted ensemble
First we need to make some observations, using the above model. We will observe the $Y_{i>j}$ with a Gaussian likelihood, with the mean set by the variables $\sigma_j$. This is what is called the planted ensemble in this review.
Step2: Setting it up in Pyro
As per this guide, to do variational inference in Pyro, we need to define a model and a guide. The model consists of
Observations (pyro.observe), in our case $Y_{ij}$
Latent random variables (pyro.sample), $\sigma_j$
Parameters (pyro.param), $m_i$
The guide is the variational distribution. It is also a stochastic function, but without pyro.observe statements. | Python Code:
# import some dependencies
import torch
from torch.autograd import Variable
import numpy as np
import pyro
import pyro.distributions as dist
from pyro.infer import SVI
Explanation: Mean field inference for $\mathbb{Z}_2$ Synchronization
We illustrate the $\mathbb{Z}_2$ synchronization inference problem using pyro.
End of explanation
np.random.standard_normal([2,3])
Explanation: The model
Our model is
$$
Y_{ij} = \frac{\lambda}{n}\sigma_i\sigma_j+ \frac{W_{ij}}{\sqrt{n}},
$$
with $\sigma_{i}\in\{\pm 1\}$, $i=1,\ldots,n$, where $W_{i>j}\sim \mathcal{N}(0,1)$, with $W_{ij}=W_{ji}$ and $W_{ii}=0$. Thus we need to sample from the distribution
$$
p(\sigma,Y;m) = \prod_i p(\sigma_i;m_i) \prod_{i>j} \sqrt{\frac{n}{2\pi}}\exp\left[-\frac{n\left(Y_{ij} - \lambda \sigma_i \sigma_j/n\right)^2}{2}\right],
$$
where the first factor describes the Bernoulli distributions, parameterized in terms of their expectations $m_i$.
$$
p(\sigma_i=\pm 1;m_i) = \frac{1\pm m_i}{2}.
$$
Actually, we want to obtain $p(\sigma|Y)$, which amounts to determining posterior $m_i(Y)$.
The planted ensemble
First we need to make some observations, using the above model. We will observe the $Y_{i>j}$ with a Gaussian likelihood, with the mean set by the variables $\sigma_j$. This is what is called the planted ensemble in this review.
End of explanation
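A fuller sketch of generating the planted data $Y$ described above (the function name `planted_Y`, the seed, and the illustrative choices n=50, λ=2 are our own assumptions, not from the source notebook):

```python
import numpy as np

def planted_Y(n, lam, seed=0):
    """Draw planted signs sigma in {-1, +1} and build
    Y = (lam/n) * sigma sigma^T + W / sqrt(n), with W symmetric
    Gaussian noise and zero diagonal, as in the model above."""
    rng = np.random.default_rng(seed)
    sigma = rng.choice([-1.0, 1.0], size=n)
    W = np.triu(rng.standard_normal((n, n)), k=1)
    W = W + W.T                       # symmetric, W_ii = 0
    Y = (lam / n) * np.outer(sigma, sigma) + W / np.sqrt(n)
    return sigma, Y

sigma, Y = planted_Y(n=50, lam=2.0)
```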
def Z2_model(λ, n, data):
    m_0 = Variable(torch.ones(n) * 0.5)  # 50% success rate
    var = Variable(torch.ones(1)) / np.sqrt(n)  # noise scale (the original referenced an undefined N)
    σ = 2 * pyro.sample('σ', dist.bernoulli, m_0) - 1  # σ variables live in {-1,1}
    for i in range(n):
        for j in range(i):
            pyro.observe(f"obs_{i}{j}",
                         dist.normal, data[i][j], λ * σ[i] * σ[j] / n, var)

def Z2_guide(λ, n, data):
    m_var_0 = Variable(torch.ones(n) * 0.5, requires_grad=True)
    m_var = pyro.param("m_var", m_var_0)
    pyro.sample('σ', dist.bernoulli, m_var)
optimizer = pyro.optim.Adam({"lr": 0.01})  # assumed optimizer choice; the original cell left it undefined
svi = SVI(Z2_model, Z2_guide, optimizer, loss="ELBO")
Explanation: Setting it up in Pyro
As per this guide, to do variational inference in Pyro, we need to define a model and a guide. The model consists of
Observations (pyro.observe), in our case $Y_{ij}$
Latent random variables (pyro.sample), $\sigma_j$
Parameters (pyro.param), $m_i$
The guide is the variational distribution. It is also a stochastic function, but without pyro.observe statements.
End of explanation |
9,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <h1 align="center">Visualizing TensorFlow
Step2: The TensorFlow Execution Graph
If we now launch tensorboard and navigate to http
Step3: After re-running with the loss function summary and re-launching tensorboard, we'll see something like this under the SCALARS tab | Python Code:
%pylab inline
pylab.style.use('ggplot')
import numpy as np
import tensorflow as tf
import os
import shutil
from contextlib import contextmanager
@contextmanager
def event_logger(logdir, session):
    '''Hands out a managed tensorflow summary writer.
    Cleans up the event log directory before every run.
    '''
    if os.path.isdir(logdir):
        shutil.rmtree(logdir)
    os.makedirs(logdir)
    writer = tf.summary.FileWriter(logdir, session.graph)
    yield writer
    writer.flush()
    writer.close()
x1 = np.random.rand(50)
x2 = np.random.rand(50)
y_ = 2*x1 + 3*x2 + 5
X_data = np.column_stack([x1, x2, np.ones(50)])
y_data = np.atleast_2d(y_).T
# Same as before, but this time with TensorBoard output
log_event_dir = r'C:\Temp\ols_logs\run_1'
# This is necessary to avoid appending the tensors in our OLS example
# repeatedly into the default graph each time this cell is re-run.
tf.reset_default_graph()
X = tf.placeholder(shape=[50, 3], dtype=np.float64, name='X')
y = tf.placeholder(shape=[50, 1], dtype=np.float64, name='y')
w = tf.Variable(np.random.rand(3, 1), dtype=np.float64, name='w')
y_hat = tf.matmul(X, w)
loss_func = tf.reduce_mean(tf.squared_difference(y_hat, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_op = optimizer.minimize(loss_func)
with tf.Session() as session:
    with event_logger(log_event_dir, session):
        tensorboard_cmd = 'tensorboard --logdir={}'.format(log_event_dir)
        print('Logging events to {}, use \n\n\t{}\n\n to start a new tensorboard session.'.format(
            log_event_dir, tensorboard_cmd))
        init_op = tf.global_variables_initializer()
        session.run(init_op)
        feed_dict = {X: X_data, y: y_data}
        for step in range(1, 501):
            session.run(train_op, feed_dict=feed_dict)
            if step % 50 == 0:
                current_w = np.squeeze(w.eval(session=session))
                print('Result after {} iterations: {}'.format(step, current_w))
Explanation: <h1 align="center">Visualizing TensorFlow: TensorBoard</h1>
Tensorboard is a suite of visualization tools to simplify analysis of TensorFlow programs.
We can use TensorBoard to
Visualize your TensorFlow graph
Plot quantitative metrics about the execution of your graph
An example snapshot of TensorBoard:
<img src="https://www.tensorflow.org/versions/r0.10/images/mnist_tensorboard.png" />
Let's attach an event logger to our OLS example.
End of explanation
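For reference, since the synthetic data here is noise-free, the closed-form least-squares solution recovers the coefficients exactly — a numpy sketch (our own addition, regenerating data in the same shape as the cell above so it runs standalone):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.random(50), rng.random(50)
X = np.column_stack([x1, x2, np.ones(50)])
y = 2 * x1 + 3 * x2 + 5            # same noise-free linear model as above
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # ≈ [2. 3. 5.] -- the target of the gradient-descent loop
```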
# Same as before, but this time with a TensorBoard output for the loss function
log_event_dir = r'C:\Temp\ols_logs\run_2'
# This is necessary to avoid appending the tensors in our OLS example
# repeatedly into the default graph each time this cell is re-run.
tf.reset_default_graph()
X = tf.placeholder(shape=[50, 3], dtype=np.float64, name='X')
y = tf.placeholder(shape=[50, 1], dtype=np.float64, name='y')
w = tf.Variable(np.random.rand(3, 1), dtype=np.float64, name='w')
y_hat = tf.matmul(X, w)
loss_func = tf.reduce_mean(tf.squared_difference(y_hat, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_op = optimizer.minimize(loss_func)
# Add a tensor with summary of the loss function
loss_summary = tf.summary.scalar('loss', loss_func)
summary_op = tf.summary.merge_all()
with tf.Session() as session:
    with event_logger(log_event_dir, session) as recorder:
        tensorboard_cmd = 'tensorboard --logdir={}'.format(log_event_dir)
        print('Logging events to {}, use \n\n\t{}\n\n to start a new tensorboard session.'.format(
            log_event_dir, tensorboard_cmd))
        init_op = tf.global_variables_initializer()
        session.run(init_op)
        feed_dict = {X: X_data, y: y_data}
        for step in range(1, 501):
            _, summary_result = session.run([train_op, summary_op], feed_dict=feed_dict)
            if step % 10 == 0:
                recorder.add_summary(summary_result, step)
            if step % 50 == 0:
                current_w = np.squeeze(w.eval(session=session))
                print('Result after {} iterations: {}'.format(step, current_w))
Explanation: The TensorFlow Execution Graph
If we now launch tensorboard and navigate to http://localhost:6006, we'll see something like this (under the GRAPHS tab):
<img src="tensorbord_ols_graph.PNG" />
Emitting Custom Events
Since we're using a Gradient Descent based optimizer to minimize the MSE, an important diagnostic information is the value of the loss fuction as a function of number of iterations. So let's add a custom event to record the value of the loss function in the TensorFlow event logger infrastructure that we can later examine with TensorBoard.
End of explanation
# Same as before, but this time with a TensorBoard output for the loss function AND regression coefficients
log_event_dir = r'C:\Temp\ols_logs\run_3'
# This is necessary to avoid appending the tensors in our OLS example
# repeatedly into the default graph each time this cell is re-run.
tf.reset_default_graph()
X = tf.placeholder(shape=[50, 3], dtype=np.float64, name='X')
y = tf.placeholder(shape=[50, 1], dtype=np.float64, name='y')
w = tf.Variable(np.random.rand(3, 1), dtype=np.float64, name='w')
y_hat = tf.matmul(X, w)
loss_func = tf.reduce_mean(tf.squared_difference(y_hat, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5)
train_op = optimizer.minimize(loss_func)
# summary for the loss function
loss_summary = tf.summary.scalar('loss', loss_func)
# summary for w
w_summary = tf.summary.histogram('coefficients', w)
summary_op = tf.summary.merge_all()
with tf.Session() as session:
    with event_logger(log_event_dir, session) as recorder:
        tensorboard_cmd = 'tensorboard --logdir={}'.format(log_event_dir)
        print('Logging events to {}, use \n\n\t{}\n\n to start a new tensorboard session.'.format(
            log_event_dir, tensorboard_cmd))
        init_op = tf.global_variables_initializer()
        session.run(init_op)
        feed_dict = {X: X_data, y: y_data}
        for step in range(1, 501):
            _, summary_result = session.run([train_op, summary_op], feed_dict=feed_dict)
            if step % 10 == 0:
                recorder.add_summary(summary_result, step)
            if step % 50 == 0:
                current_w = np.squeeze(w.eval(session=session))
                print('Result after {} iterations: {}'.format(step, current_w))
Explanation: After re-running with the loss function summary and re-launching tensorboard, we'll see something like this under the SCALARS tab:
<img src="tensorbord_ols_loss_function.PNG" />
End of explanation |
9,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PANDAS
Module 1
Step1: Overview
Two primary data structures of pandas
* Series (1-dimensional) array
* DataFrame (tabular, spreadsheet)
Indexing and slicing of pandas objects
Arithmetic operations and function applications
Series
One-dimensional array-like object containing
Step2: Get index object of the Series via its index attributes, or create our own index
Step3: Operations
Any operations preserve the index-value link
Step4: Indexing
Selecting items from an object by index
Step5: Automatic Alignment
Series automatically aligns differently indexed data in arithmetic operations
Step6: DataFrames
Tabular, spreadsheet-like data structure
Contains an ordered collection of columns, each of which can be a different value type
Step7: Extracting COLUMNS
A column in DataFrame can be retrieved as a Series either by
Step8: Extracting ROWS
Rows can be retrieved by a position or name by a couple of methods
Step9: Modifying Data
Columns can be modified by assignment
Step10: Assigning a Series
If we assign a Series, it will be conformed exactly to the DataFrame’s index, inserting missing values in any hole.
Step11: Transposing a Matrix – .T method
Step12: Values
The values attribute returns the data contained in the DataFrame as a 2D array
Step13: INDEXING and SLICING with pandas objects
Index Objects
Panda’s Index objects are responsible for holding the axis labels and other metadata (like the axis name or names).
Any array or other sequence of labels used when constructing a Series or DataFrame is internally converted to an Index
Step14: ReIndexing Series
ReIndexing creates a new object with the data conformed to a new index
Step15: Forward Filling
The method option ffill forward fills the values
Step16: Indexing Series
Step17: Slicing Series
Step18: Setting works just as with arrays
Step19: INDEXING DataFrames
Indexing a DataFrame retrieves one or more columns either with a single value or sequence
* Retrieve single columns
* Select multiple columns
Step20: Selecting rows
Step21: pandas FUNCTION Applications
How to apply
Step22: Adding Series data objects
Alignment is performed on both the rows and the columns
Adding two Series returns a DataFrame whose index and columns are unions of each Series
Where a value is included in one object, but not the other, will return NaN
Step23: Arithmetic methods with fill values
In arithmetic operations between differently indexed objects, you might want to fill with a special value, like 0, when an axis label is found in one object, but not in the other
We specify add, with fill_value, and the missing value in other Series assumes value of 0
Step24: Operations between DataFrame and Series are similar
By default, arithmetic between a DataFrame and a Series
Step25: Importing from Excel
Pandas support various import and export options, very easy to import file from Excel
For example, suppose we have an Excel file “Excel_table” with contents
Step26: Function Applications
NumPy ufuncs (element-wise array methods) work with pandas object
Use abs() function to obtain absolute values of object d
Returns DF with all positive values
Step27: Function Definition
We can define and then apply a function to find both min and max values in each column
Returns a Series that will contain pairs of min and max values for each of three columns
Use keyword apply and then classify function to apply in parentheses
Step28: Sorting Series object
sort_index() method returns a new object sorted lexicographically by index
Step29: Sorting DataFrame
Sort by index on either axis
With sort_index, rows will be sorted alphabetically, and can specify the axis (axis=0, rows)
if we specify axis = 1, the the columns will be sorted alphabetically
Step30: Sorting by Values
To sort a Series by its values, use its sort_values method
Step31: Sorting with Missing Values
Using sort_values will sort numerical values first
NaN missing values will be included at end of Series with index of their position in original
Step32: Sum a DataFrame
Calling DataFrame’s sum method returns a Series containing column sums
Default returns sum across columns; set axis=1 to sum across rows
Step33: idxmin and idxmax
return the index value where the minimum or maximum values are attained | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: PANDAS
Module 1: Data Structure and Applications
pandas is an open source, BSD-licensed Python library providing fast, flexible, easy-to-use data structures and data analysis tools for working with “relational” or “labeled” data
Pandas is built on top of NumPy and is intended to integrate well within a scientific computing environment with many other 3rd party libraries.
Created by Wes McKinney in 2008 used to manipulate, clean, query data
https://github.com/wesm/pydata-book
Documentation
http://pandas.pydata.org/pandas-docs/stable/
http://pandas.pydata.org/pandas-docs/stable/10min.html
Import: Pandas, NumPy, Matplotlib
End of explanation
from pandas import Series, DataFrame
s = Series([3, -1, 0, 5])
s
Explanation: Overview
Two primary data structures of pandas
* Series (1-dimensional) array
* DataFrame (tabular, spreadsheet)
Indexing and slicing of pandas objects
Arithmetic operations and function applications
Series
One-dimensional array-like object containing:
* Array of data (of any NumPy data type)
* Associated array of data labels, called its index
* Mapping of index values to data values
* Array operations preserve the index-value link
* Series automatically align differently indexed data in arithmetic operations
End of explanation
s.index
s2 = Series([13, -3, 5, 9],
index = ["a", "b","c", "d"])
s2
Explanation: Get index object of the Series via its index attributes, or create our own index:
End of explanation
s + 3
s2 * 3
Explanation: Operations
Any operations preserve the index-value link
End of explanation
s[s > 0]
s2[["b","c"]]
Explanation: Indexing
Selecting items from an object by index
End of explanation
names1 = ["Ann", "Bob", "Carl", "Doris"]
balance1 = [200, 100, 300, 400]
account1 = Series(balance1, index=names1)
account1
names2 = ["Carl", "Doris", "Ann", "Bob"]
balance2 = [20, 10, 30, 40]
account2 = Series(balance2, index=names2)
account2
# Automatic alignment by index
account1 + account2
Explanation: Automatic Alignment
Series automatically aligns differently indexed data in arithmetic operations
End of explanation
# values are equal length lists; keys
data = {"Name": ["Ann", "Bob", "Carl", "Doris"],
"HW1": [ 90, 85, 70, 100],
"HW2": [ 80, 70, 90, 90]}
# Create a data frame
grades = DataFrame(data)
grades
grades = DataFrame(data, columns = ["Name", "HW1", "HW2"])
grades
Explanation: DataFrames
Tabular, spreadsheet-like data structure
Contains an ordered collection of columns, each of which can be a different value type:
Numeric
String
Boolean
DataFrame, has both a row and column index
* Extract rows and columns
* Handle missing data
Easiest way to construct DataFrame is from a dictionary of equal-length lists or NumPy arrays
Example: DataFrame
* First, create a dictionary with 3 key-value pairs
* Then, create a data frame
End of explanation
grades["Name"]
Explanation: Extracting COLUMNS
A column in DataFrame can be retrieved as a Series either by:
* dict-like notation: grades["Name"]
* by attribute: grades.Name
End of explanation
grades.iloc[2]
Explanation: Extracting ROWS
Rows can be retrieved by a position or name by a couple of methods:
* .loc for label based indexing
* .iloc for positional indexing
End of explanation
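For completeness, a label-based counterpart with .loc (a small added sketch; the grades frame is rebuilt inline so it runs standalone, and with the default 0..3 index, .loc[2] and .iloc[2] happen to coincide):

```python
import pandas as pd

grades = pd.DataFrame({"Name": ["Ann", "Bob", "Carl", "Doris"],
                       "HW1": [90, 85, 70, 100],
                       "HW2": [80, 70, 90, 90]})
row = grades.loc[2]   # label-based lookup of the row whose index label is 2
print(row["Name"])    # Carl
```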
grades["HW3"] = 0
grades
Explanation: Modifying Data
Columns can be modified by assignment
End of explanation
HW3 = Series([70, 90], index = [1, 3])
grades["HW3"] = HW3
grades
Explanation: Assigning a Series
If we assign a Series, it will be conformed exactly to the DataFrame’s index, inserting missing values in any hole.
End of explanation
grades.T
Explanation: Transposing a Matrix – .T method
End of explanation
grades.values
Explanation: Values
The values attribute returns the data contained in the DataFrame as a 2D array
End of explanation
grades = Series([ 60, 90, 80, 75],
index= ["a", "b", "c", "d"])
grades
Explanation: INDEXING and SLICING with pandas objects
Index Objects
Panda’s Index objects are responsible for holding the axis labels and other metadata (like the axis name or names).
Any array or other sequence of labels used when constructing a Series or DataFrame is internally converted to an Index
End of explanation
grades = Series([ 60, 90, 80, 75],
index = ["Bob", "Tom", "Ann", "Jane"])
grades
Explanation: ReIndexing Series
ReIndexing creates a new object with the data conformed to a new index
End of explanation
a = Series(['A', 'B', 'C'], index = [0, 3, 5])
a
a.reindex(range(6), method="ffill")
Explanation: Forward Filling
The method option ffill forward fills the values:
End of explanation
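For contrast with ffill, the bfill option fills backward from the next valid label (our own added sketch, reusing the same Series as above):

```python
import pandas as pd

a = pd.Series(['A', 'B', 'C'], index=[0, 3, 5])
filled = a.reindex(range(6), method="bfill")   # backward fill
print(list(filled))  # ['A', 'B', 'B', 'B', 'C', 'C']
```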
s = Series(np.arange(5), index = ["a", "b", "c", "d", "e"])
s
s["c"]
s[3]
Explanation: Indexing Series:
Series indexing works analogously to NumPy array indexing except you can use the Series’ index values instead of only integers
End of explanation
s[1:3]
s[s >= 2]
s[["b", "e", "a"]]
Explanation: Slicing Series:
Count elements in series starting from 0
Slices up to, but not including last item in index
Can specify order of items to return
End of explanation
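One subtlety worth making explicit: positional slices exclude their endpoint, while label-based slices include it — a self-contained sketch (our own addition):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5), index=["a", "b", "c", "d", "e"])
positional = s[1:3]    # endpoint excluded: rows at positions 1 and 2
label = s["b":"d"]     # endpoint included: rows b, c AND d
```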
s["b" : "d"] = 33
s
Explanation: Setting works just as with arrays:
Setting values b through d to 33 inclusive
End of explanation
grades = DataFrame(np.arange(16).reshape((4, 4)),
index = ["Andy", "Brad", "Carla", "Donna"],
columns = ["HW1", "HW2", "HW3", "HW4"])
grades
grades["HW1"]
grades[["HW2", "HW3"]]
Explanation: INDEXING DataFrames
Indexing a DataFrame retrieves one or more columns either with a single value or sequence
* Retrieve single columns
* Select multiple columns
End of explanation
g = DataFrame(np.arange(16).reshape((4, 4)),
index = ["Ann", "Bob", "Carl", "Donna"],
columns = ["HW1", "HW2", "HW3", "HW4"])
g[g["HW3"] > 6]
# Select rows up to, but not including 2
g[:2]
Explanation: Selecting rows:
Return rows in HW3 with values greater than 6
End of explanation
a = Series([5, 4, 0, 7],
index = ["a", "c", "d", "e"])
b = Series([-1, 3, 4, -2, 1],
index = ["a", "c", "e", "f", "g"])
a
b
Explanation: pandas FUNCTION Applications
How to apply:
Arithmetic operations on DataFrames
Arithmetic operations between differently indexed-objects
Operations between a DataFrames and a Series
Applications of NumPy ufuncs (element-wise array methods) to pandas objects
Arithmetic and data alignment
When adding together two objects, if any index pairs are not the same, the respective index in the result will be the union of the index pairs
The internal data alignment introduces NaN values in the indices that don’t overlap
End of explanation
a + b
Explanation: Adding Series data objects
Alignment is performed on both the rows and the columns
Adding two Series returns a DataFrame whose index and columns are unions of each Series
Where a value is included in one object, but not the other, will return NaN
End of explanation
a.add(b, fill_value=0)
Explanation: Arithmetic methods with fill values
In arithmetic operations between differently indexed objects, you might want to fill with a special value, like 0, when an axis label is found in one object, but not in the other
We specify add, with fill_value, and the missing value in other Series assumes value of 0
End of explanation
st0 = Series([0, 1, 2, 3],
index = ["HW1", "HW2", "HW3", "HW4"])
st0
grades + st0
Explanation: Operations between DataFrame and Series are similar
By default, arithmetic between a DataFrame and a Series:
* matches the index of the Series, on the DataFrame’s columns,
* broadcasting down the rows
End of explanation
xls_file = pd.ExcelFile('Excel_table.xlsx')
t = xls_file.parse("Sheet1")  # parse the worksheet holding the data
t
Explanation: Importing from Excel
Pandas support various import and export options, very easy to import file from Excel
For example, suppose we have an Excel file "Excel_table" with contents:
Need to parse the worksheet in which the data is located
End of explanation
d = DataFrame(np.random.randn(3, 3),
columns=list("xyz"),
index = ["A", "B", "C"])
d
np.abs(d)
Explanation: Function Applications
NumPy ufuncs (element-wise array methods) work with pandas object
Use abs() function to obtain absolute values of object d
Returns DF with all positive values
End of explanation
def minmax(t):
    return Series([t.min(), t.max()],
                  index = ["min", "max"])

d.apply(minmax)
Explanation: Function Definition
We can define and then apply a function to find both min and max values in each column
Returns a Series that will contain pairs of min and max values for each of three columns
Use keyword apply and then classify function to apply in parentheses
End of explanation
a = Series(range(5),
index = ["Bob", "john", "Jane", "Ann", "Cathy"])
a.sort_index()
Explanation: Sorting Series object
sort_index() method returns a new object sorted lexicographically by index
End of explanation
df = DataFrame(np.arange(16).reshape((4, 4)),
index = ["B", "Q", "M", "A"],
columns = list("pseb"))
df
# Sort index alphabetically by ROW
df.sort_index(axis=0)
# Sort index alpha by COLUMN
df.sort_index(axis=1)
Explanation: Sorting DataFrame
Sort by index on either axis
With sort_index, rows will be sorted alphabetically, and can specify the axis (axis=0, rows)
if we specify axis = 1, then the columns will be sorted alphabetically
End of explanation
s = Series([7, -2, 0, 8, -1])
s.sort_values()
Explanation: Sorting by Values
To sort a Series by its values, use its sort_values method:
End of explanation
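sort_values also accepts ascending=False for a descending sort (a small added sketch):

```python
import pandas as pd

s = pd.Series([7, -2, 0, 8, -1])
desc = s.sort_values(ascending=False)
print(list(desc))  # [8, 7, 0, -1, -2]
```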
s2 = Series([7, np.nan, -2, 0, np.nan, 8, -1])
s2
s2.sort_values()
Explanation: Sorting with Missing Values
Using sort_values will sort numerical values first
NaN missing values will be included at end of Series with index of their position in original
End of explanation
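By default NaNs go to the end; the na_position parameter of sort_values can move them to the front instead (our own added example, with the same Series as above):

```python
import numpy as np
import pandas as pd

s2 = pd.Series([7, np.nan, -2, 0, np.nan, 8, -1])
first = s2.sort_values(na_position="first")   # NaNs sorted to the front
```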
df = DataFrame(np.arange(9).reshape(3, 3),
columns=list("xyz"),
index = ["A", "B", "C"])
df
df.sum()
df.sum(axis=1)
Explanation: Sum a DataFrame
Calling DataFrame’s sum method returns a Series containing column sums
Default returns sum across columns; set axis=1 to sum across rows
End of explanation
df.idxmax()
df.idxmin()
Explanation: idxmin and idxmax
return the index value where the minimum or maximum values are attained:
End of explanation |
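idxmax also works row-wise with axis=1, returning the column label of each row's maximum (an added sketch using the same df layout as above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(9).reshape(3, 3),
                  columns=list("xyz"), index=["A", "B", "C"])
col_of_max = df.idxmax(axis=1)   # per-row column label of the max
print(list(col_of_max))  # ['z', 'z', 'z']
```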
9,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
===========================================================
Plot single trial activity, grouped by ROI and sorted by RT
===========================================================
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file - containing an experiment with button press responses
to simple visual stimuli - is read in and response times are calculated.
ROIs are determined by the channel types (in 10/20 channel notation,
even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorted by response time.
Step1: Load EEGLAB example data (a small EEG dataset)
Step2: Create Epochs
Step3: Plot | Python Code:
# Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.datasets import testing
from mne import Epochs, io, pick_types
from mne.event import define_target_events
print(__doc__)
Explanation: ===========================================================
Plot single trial activity, grouped by ROI and sorted by RT
===========================================================
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file - containing an experiment with button press responses
to simple visual stimuli - is read in and response times are calculated.
ROIs are determined by the channel types (in 10/20 channel notation,
even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorted by response time.
End of explanation
data_path = testing.data_path()
fname = data_path + "/EEGLAB/test_raw.set"
montage = data_path + "/EEGLAB/test_chans.locs"
event_id = {"rt": 1, "square": 2} # must be specified for str events
eog = {"FPz", "EOG1", "EOG2"}
raw = io.eeglab.read_raw_eeglab(fname, eog=eog, montage=montage,
event_id=event_id)
picks = pick_types(raw.info, eeg=True)
events = mne.find_events(raw)
Explanation: Load EEGLAB example data (a small EEG dataset)
End of explanation
# define target events:
# 1. find response times: distance between "square" and "rt" events
# 2. extract A. "square" events B. followed by a button press within 700 msec
tmax = .7
sfreq = raw.info["sfreq"]
reference_id, target_id = 2, 1
new_events, rts = define_target_events(events, reference_id, target_id, sfreq,
tmin=0., tmax=tmax, new_id=2)
epochs = Epochs(raw, events=new_events, tmax=tmax + .1,
event_id={"square": 2}, picks=picks)
Explanation: Create Epochs
End of explanation
# Parameters for plotting
order = rts.argsort() # sorting from fast to slow trials
rois = dict()
for pick, channel in enumerate(epochs.ch_names):
last_char = channel[-1] # for 10/20, last letter codes the hemisphere
roi = ("Midline" if last_char == "z" else
("Left" if int(last_char) % 2 else "Right"))
rois[roi] = rois.get(roi, list()) + [pick]
# The actual plots
for combine_measures in ('gfp', 'median'):
epochs.plot_image(group_by=rois, order=order, overlay_times=rts / 1000.,
sigma=1.5, combine=combine_measures,
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
Explanation: Plot
End of explanation |
9,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!-- dom
Step1: Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
Step2: We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
Step3: Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
number array elements from $0$. This means that a vector with $n$ elements has entries $x_0, x_1, x_2, \dots, x_{n-1}$. We could also (and this is recommended) let Numpy compute the logarithms of a specific array as
Step4: In the last example we used Numpy's unary function $np.log$. This function is
highly tuned to compute array elements since the code is vectorized
and does not require explicit looping; the looping is done implicitly by the
np.log function itself. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding log function
from Python's math module. The alternative, and slower, way to compute the
logarithms of a vector would be to write
Step5: We note that our code is much longer already and we need to import the log function from the math module.
The attentive reader will also notice that the output is $[1, 1, 2]$. Python automagically interprets our numbers as integers (much like the auto keyword in C++). To change this we could define our array elements to be double precision numbers as
Step6: or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
Step7: To check the number of bytes (remember that one byte contains eight bits for double precision variables), you can simply use the itemsize functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as
Step8: Matrices in Python
Having defined vectors, we are now ready to try out matrices. We can
define a $3 \times 3$ real matrix $\hat{A}$ as (recall that we use
lowercase letters for vectors and uppercase letters for matrices)
Step9: If we use the shape function we would get $(3, 3)$ as output, verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print for example the first column (Numpy organizes matrix elements in row-major order, see below) as
Step10: We can continue this way by printing out other columns or rows. The example here prints out the second column
Step11: Numpy contains many other functionalities that allow us to slice, subdivide etc etc arrays. We strongly recommend that you look up the Numpy website for more details. Useful functions when defining a matrix are the np.zeros function which declares a matrix of a given dimension and sets all elements to zero
Step12: or initializing all elements to one
Step13: or as uniformly distributed random numbers (see the material on random number generators in the statistics part)
Step14: As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\hat{x}, \hat{y}, \hat{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\hat{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values.
The following simple function uses the np.vstack function which takes each vector of dimension $1\times n$ and produces a $3\times n$ matrix $\hat{W}$
$$
\hat{W} = \begin{bmatrix} x_0 & x_1 & x_2 & \dots & x_{n-2} & x_{n-1} \\
y_0 & y_1 & y_2 & \dots & y_{n-2} & y_{n-1} \\
z_0 & z_1 & z_2 & \dots & z_{n-2} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into the $3\times 3$ covariance matrix
$\hat{\Sigma}$ via the Numpy function np.cov(). We note that we can also calculate
the mean value of each set of samples $\hat{x}$ etc using the Numpy
function np.mean(x). We can also extract the eigenvalues of the
covariance matrix through the np.linalg.eig() function.
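The construction just described can be sketched as follows (sample size and distributions are arbitrary choices here, purely for illustration):

```python
import numpy as np

n = 100
x = np.random.normal(size=n)
y = np.random.normal(size=n)
z = np.random.normal(size=n)

# Stack the three 1 x n vectors into a 3 x n matrix W
W = np.vstack((x, y, z))

# np.cov treats each row of W as one variable and returns the
# 3 x 3 covariance matrix (with the 1/(n-1) normalization)
Sigma = np.cov(W)
print(Sigma.shape)  # (3, 3)

# Sample means and the eigenvalues of the covariance matrix
print(np.mean(x), np.mean(y), np.mean(z))
eigenvalues, eigenvectors = np.linalg.eig(Sigma)
print(eigenvalues)
```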
Step15: Meet the Pandas
<!-- dom
Step16: In the above we have imported pandas with the shorthand pd, which has become the standard way to import pandas. We then make a list of various variables
and reorganize the above lists into a DataFrame, and print out a neat table with specific column labels such as Name, place of birth and date of birth.
Displaying these results, we see that the indices are given by the default numbers from zero to three.
pandas is extremely flexible and we can easily change the above indices by defining a new type of indexing as
Step17: Thereafter we display the content of the row which begins with the index Aragorn
Step18: We can easily append data to this, for example
Step19: Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix
of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements and many other operations.
Step20: Thereafter we can select specific columns only and plot final results
Step21: We can produce a $4\times 4$ matrix
Step22: and many other operations.
The Series class is another important class included in
pandas. You can view it as a specialization of DataFrame but where
we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame,
most operations are vectorized, achieving thereby a high performance when dealing with computations of arrays, in particular labeled arrays.
As we will see below, it also leads to very concise code, close to the mathematical operations we may be interested in.
For multidimensional arrays, we recommend strongly xarray. xarray has much of the same flexibility as pandas, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both pandas and xarray.
Reading Data and fitting
In order to study various Machine Learning algorithms, we need to
access data. Accessing data is an essential step in all machine
learning algorithms. In particular, setting up the so-called design
matrix (to be defined below) is often the first element we need in
order to perform our calculations. To set up the design matrix means
reading (and later, when the calculations are done, writing) data
in various formats, The formats span from reading files from disk,
loading data from databases and interacting with online sources
like web application programming interfaces (APIs).
In handling various input formats, as discussed above, we will mainly stay with pandas,
a Python package which allows us, in a seamless and painless way, to
deal with a multitude of formats, from standard csv (comma separated
values) files, via excel, html to hdf5 formats. With pandas
and the DataFrame and Series functionalities we are able to convert text data
into the calculational formats we need for a specific algorithm. And our code is going to be
pretty close the basic mathematical expressions.
Our first data set is going to be a classic from nuclear physics, namely all
available data on binding energies. Don't be intimidated if you are not familiar with nuclear physics. It serves simply as an example here of a data set.
We will show some of the
strengths of packages like Scikit-Learn in fitting nuclear binding energies to
specific functions using linear regression first. Then, as a teaser, we will show you how
you can easily implement other algorithms like decision trees and random forests and neural networks.
But before we really start with nuclear physics data, let's just look at some simpler polynomial fitting cases, such as
(don't be offended) fitting straight lines!
Simple linear regression model using scikit-learn
We start with perhaps our simplest possible example, using Scikit-Learn to perform linear regression analysis on a data set produced by us.
What follows is a simple Python code where we have defined a function
$y$ in terms of the variable $x$. Both are defined as vectors with $100$ entries.
The numbers in the vector $\hat{x}$ are given
by random numbers generated with a uniform distribution with entries
$x_i \in [0,1]$ (more about probability distribution functions
later). These values are then used to define a function $y(x)$
(tabulated again as a vector) with a linear dependence on $x$ plus a
random noise added via the normal distribution.
The Numpy functions are imported using the import numpy as np
statement and the random number generator for the uniform distribution
is called using the function np.random.rand(), where we specify
that we want $100$ random variables. Using Numpy we define
automatically an array with the specified number of elements, $100$ in
our case. With the Numpy function randn() we can compute random
numbers with the normal distribution (mean value $\mu$ equal to zero and
variance $\sigma^2$ set to one) and produce the values of $y$ assuming a linear
dependence as function of $x$
$$
y = 2x+N(0,1),
$$
where $N(0,1)$ represents random numbers generated by the normal
distribution. From Scikit-Learn we import then the
LinearRegression functionality and make a prediction $\tilde{y} =
\alpha + \beta x$ using the function fit(x,y). We call the set of
data $(\hat{x},\hat{y})$ for our training data. The Python package
scikit-learn has also a functionality which extracts the above
fitting parameters $\alpha$ and $\beta$ (see below). Later we will
distinguish between training data and test data.
For plotting we use the Python package
matplotlib which produces publication
quality figures. Feel free to explore the extensive
gallery of examples. In
this example we plot our original values of $x$ and $y$ as well as the
prediction ypredict ($\tilde{y}$), which attempts at fitting our
data with a straight line.
The Python code follows here.
Step23: This example serves several aims. It allows us to demonstrate several
aspects of data analysis and later machine learning algorithms. The
immediate visualization shows that our linear fit is not
impressive. It goes through the data points, but there are many
outliers which are not reproduced by our linear regression. We could
now play around with this small program and change for example the
factor in front of $x$ and the normal distribution. Try to change the
function $y$ to
$$
y = 10x+0.01 \times N(0,1),
$$
where $x$ is defined as before. Does the fit look better? Indeed, by
reducing the role of the noise given by the normal distribution we see immediately that
our linear prediction seemingly reproduces better the training
set. However, this testing 'by the eye' is obviously not satisfactory in the
long run. Here we have only defined the training data and our model, and
have not discussed a more rigorous approach to the cost function.
We need more rigorous criteria in defining whether we have succeeded or
not in modeling our training data. You will be surprised to see that
many scientists seldom venture beyond this 'by the eye' approach. A
standard approach for the cost function is the so-called $\chi^2$
function (a variant of the mean-squared error (MSE))
$$
\chi^2 = \frac{1}{n}
\sum_{i=0}^{n-1}\frac{(y_i-\tilde{y}_i)^2}{\sigma_i^2},
$$
where $\sigma_i^2$ is the variance (to be defined later) of the entry
$y_i$. We may not know the explicit value of $\sigma_i^2$, it serves
however the aim of scaling the equations and make the cost function
dimensionless.
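A small numpy sketch of this cost function (assuming, for illustration, that all $\sigma_i=1$, in which case it reduces to the MSE):

```python
import numpy as np

def chi2(y, ytilde, sigma=1.0):
    # chi-squared cost; with sigma = 1 this is just the mean squared error
    return np.mean(((y - ytilde) / sigma)**2)

# Made-up data and model predictions, purely for illustration
y = np.array([1.0, 2.0, 3.0])
ytilde = np.array([1.1, 1.9, 3.2])
print(chi2(y, ytilde))  # approximately 0.02
```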
Minimizing the cost function is a central aspect of
our discussions to come. Finding its minima as function of the model
parameters ($\alpha$ and $\beta$ in our case) will be a recurring
theme in these series of lectures. Essentially all machine learning
algorithms we will discuss center around the minimization of the
chosen cost function. This depends in turn on our specific
model for describing the data, a typical situation in supervised
learning. Automatizing the search for the minima of the cost function is a
central ingredient in all algorithms. Typical methods which are
employed are various variants of gradient methods. These will be
discussed in more detail later. Again, you'll be surprised to hear that
many practitioners minimize the above function 'by the eye', popularly dubbed as
'chi by the eye'. That is, change a parameter and see (visually and numerically) that
the $\chi^2$ function becomes smaller.
There are many ways to define the cost function. A simpler approach is to look at the relative difference between the training data and the predicted data, that is we define
the relative error (why would we prefer the MSE instead of the relative error?) as
$$
\epsilon_{\mathrm{relative}}= \frac{\vert \hat{y} -\hat{\tilde{y}}\vert}{\vert \hat{y}\vert}.
$$
The squared cost function results in an arithmetic mean-unbiased
estimator, and the absolute-value cost function results in a
median-unbiased estimator (in the one-dimensional case, and a
geometric median-unbiased estimator for the multi-dimensional
case). The squared cost function has the disadvantage that it has the tendency
to be dominated by outliers.
We can modify easily the above Python code and plot the relative error instead
Step24: Depending on the parameter in front of the normal distribution, we may
have a smaller or a larger relative error. Try to play around with
different training data sets and study (graphically) the value of the
relative error.
As mentioned above, Scikit-Learn has an impressive functionality.
We can for example extract the values of $\alpha$ and $\beta$ and
their error estimates, or the variance and standard deviation and many
other properties from the statistical data analysis.
Here we show an
example of the functionality of Scikit-Learn.
Step25: The attribute coef_ gives us the parameter $\beta$ of our fit while intercept_ yields
$\alpha$. Depending on the constant in front of the normal distribution, we get values near or far from $\alpha =2$ and $\beta =5$. Try to play around with different parameters in front of the normal distribution. The function mean_squared_error gives us the mean squared error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss defined as
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
The smaller the value, the better the fit. Ideally we would like to
have an MSE equal zero. The attentive reader has probably recognized
this function as being similar to the $\chi^2$ function defined above.
The r2_score function computes $R^2$, the coefficient of
determination. It provides a measure of how well future samples are
likely to be predicted by the model. Best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A
constant model that always predicts the expected value of $\hat{y}$,
disregarding the input features, would get a $R^2$ score of $0.0$.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
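A sketch computing $R^2$ directly from this definition, with made-up numbers (Scikit-Learn's r2_score computes the same quantity):

```python
import numpy as np

def R2(y, ytilde):
    # 1 minus residual sum of squares over total sum of squares
    return 1.0 - np.sum((y - ytilde)**2) / np.sum((y - np.mean(y))**2)

y = np.array([1.0, 2.0, 3.0, 4.0])
ytilde = np.array([1.1, 1.9, 3.2, 3.8])
print(R2(y, ytilde))  # 0.98 for these made-up numbers
```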
Another quantity that we will meet again in our discussions of regression analysis is
the mean absolute error (MAE), a risk metric corresponding to the expected value of the absolute error loss, or what we call the $l1$-norm loss. In our discussion above we presented the relative error.
The MAE is defined as follows
$$
\text{MAE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \tilde{y}_i \right|.
$$
We also present the
mean squared logarithmic (quadratic) error
$$
\text{MSLE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n - 1} (\log_e (1 + y_i) - \log_e (1 + \tilde{y}_i) )^2,
$$
where $\log_e (x)$ stands for the natural logarithm of $x$. This error
estimate is best to use when the targets exhibit exponential growth, such
as population counts, average sales of a commodity over a span of
years etc.
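Both measures are one-liners in numpy; here is a sketch with made-up, non-negative targets (np.log1p computes $\log_e(1+x)$):

```python
import numpy as np

def MAE(y, ytilde):
    return np.mean(np.abs(y - ytilde))

def MSLE(y, ytilde):
    # assumes non-negative targets, as required by the logarithm
    return np.mean((np.log1p(y) - np.log1p(ytilde))**2)

y = np.array([3.0, 5.0, 2.5, 7.0])
ytilde = np.array([2.5, 5.0, 4.0, 8.0])
print(MAE(y, ytilde))   # 0.75 for these made-up numbers
print(MSLE(y, ytilde))
```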
Finally, another cost function is the Huber cost function used in robust regression.
The rationale behind this possible cost function is its reduced
sensitivity to outliers in the data set. In our discussions on
dimensionality reduction and normalization of data we will meet other
ways of dealing with outliers.
The Huber cost function is defined as
$$
H_{\delta}(a)=\begin{cases} \frac{1}{2}a^{2} & \text{for } |a|\leq \delta, \\ \delta\left(|a|-\frac{1}{2}\delta\right) & \text{otherwise.} \end{cases}
$$
Here $a=\boldsymbol{y} - \boldsymbol{\tilde{y}}$.
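A sketch of the Huber function in numpy, using np.where to switch between the quadratic and linear branches:

```python
import numpy as np

def huber(a, delta=1.0):
    # quadratic for |a| <= delta, linear (and hence outlier-robust) beyond
    return np.where(np.abs(a) <= delta,
                    0.5 * a**2,
                    delta * (np.abs(a) - 0.5 * delta))

a = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(huber(a))  # [1.5, 0.125, 0., 0.125, 1.5]
```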
We will discuss in more
detail these and other functions in the various lectures. We conclude this part with another example. Instead of
a linear $x$-dependence we study now a cubic polynomial and use the polynomial regression analysis tools of scikit-learn.
Step26: To our real data
Step27: Before we proceed, we define also a function for making our plots. You can obviously avoid this and simply set up various matplotlib commands every time you need them. You may however find it convenient to collect all such commands in one function and simply call this function.
Step29: Our next step is to read the data on experimental binding energies and
reorganize them as functions of the mass number $A$, the number of
protons $Z$ and neutrons $N$ using pandas. Before we do this it is
always useful (unless you have a binary file or other types of compressed
data) to actually open the file and simply take a look at it!
In particular, the program that outputs the final nuclear masses is written in Fortran with a specific format. It means that we need to figure out the format and which columns contain the data we are interested in. Pandas comes with a function that reads formatted output. After having admired the file, we are now ready to start massaging it with pandas. The file begins with some basic format information.
Step30: The data we are interested in are in columns 2, 3, 4 and 11, giving us
the number of neutrons, protons, mass numbers and binding energies,
respectively. We add also for the sake of completeness the element name. The data are in fixed-width formatted lines and we will
convert them into the pandas DataFrame structure.
Step31: We have now read in the data, grouped them according to the variables we are interested in.
We see how easy it is to reorganize the data using pandas. If we
were to do these operations in C/C++ or Fortran, we would have had to
write various functions/subroutines which perform the above
reorganizations for us. Having reorganized the data, we can now start
to make some simple fits using both the functionalities in numpy and
Scikit-Learn afterwards.
Now we define five variables which contain
the number of nucleons $A$, the number of protons $Z$ and the number of neutrons $N$, the element name and finally the energies themselves.
Step32: The next step, and we will define this mathematically later, is to set up the so-called design matrix. We will throughout call this matrix $\boldsymbol{X}$.
It has dimensionality $n\times p$, where $n$ is the number of data points and $p$ is the number of so-called predictors. In our case here they are given by the number of polynomials in $A$ we wish to include in the fit.
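A sketch of how such a design matrix could be set up (the mass-number array and the number of polynomial terms are arbitrary choices here, just for illustration):

```python
import numpy as np

A = np.arange(1, 171)   # hypothetical stand-in for the mass numbers
p = 5                   # number of polynomial terms, chosen for illustration
n = len(A)

# Design matrix with columns 1, A, A^2, ..., A^(p-1)
X = np.zeros((n, p))
for i in range(p):
    X[:, i] = A**i
print(X.shape)  # (170, 5)
```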
Step33: With scikitlearn we are now ready to use linear regression and fit our data.
Step34: Pretty simple!
Now we can print measures of how our fit is doing, the coefficients from the fits and plot the final fit together with our data.
Step35: Seeing the wood for the trees
As a teaser, let us now see how we can do this with decision trees using scikit-learn. Later we will switch to so-called random forests!
Step36: And what about using neural networks?
The seaborn package allows us to visualize data in an efficient way. Note that we use scikit-learn's multi-layer perceptron (or feed forward neural network)
functionality. | Python Code:
import numpy as np
Explanation: <!-- dom:TITLE: Data Analysis and Machine Learning: Getting started, our first data and Machine Learning encounters -->
Data Analysis and Machine Learning: Getting started, our first data and Machine Learning encounters
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Dec 25, 2019
Copyright 1999-2019, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Introduction
Our emphasis throughout this series of lectures
is on understanding the mathematical aspects of
different algorithms used in the fields of data analysis and machine learning.
However, where possible we will emphasize the
importance of using available software. We start thus with a hands-on
and top-down approach to machine learning. The aim is thus to start with
relevant data or data we have produced
and use these to introduce statistical data analysis
concepts and machine learning algorithms before we delve into the
algorithms themselves. The examples we will use in the beginning, start with simple
polynomials with random noise added. We will use the Python
software package Scikit-Learn and
introduce various machine learning algorithms to make fits of
the data and predictions. We move thereafter to more interesting
cases such as data from say experiments (below we will look at experimental nuclear binding energies as an example).
These are examples where we can easily set up the data and
then use machine learning algorithms included in for example
Scikit-Learn.
These examples will serve us the purpose of getting
started. Furthermore, they allow us to catch more than two birds with
a stone. They will allow us to bring in some programming specific
topics and tools as well as showing the power of various Python
libraries for machine learning and statistical data analysis.
Here, we will mainly focus on two
specific Python packages for Machine Learning, Scikit-Learn and
Tensorflow (see below for links etc). Moreover, the examples we
introduce will serve as inputs to many of our discussions later, as
well as allowing you to set up models and produce your own data and
get started with programming.
What is Machine Learning?
Statistics, data science and machine learning form important fields of
research in modern science. They describe how to learn and make
predictions from data, as well as allowing us to extract important
correlations about physical process and the underlying laws of motion
in large data sets. The latter, big data sets, appear frequently in
essentially all disciplines, from the traditional Science, Technology,
Mathematics and Engineering fields to Life Science, Law, education
research, the Humanities and the Social Sciences.
It has become more
and more common to see research projects on big data in for example
the Social Sciences where extracting patterns from complicated survey
data is one of many research directions. Having a solid grasp of data
analysis and machine learning is thus becoming central to scientific
computing in many fields, and competences and skills within the fields
of machine learning and scientific computing are nowadays strongly
requested by many potential employers. The latter cannot be
overstated, familiarity with machine learning has almost become a
prerequisite for many of the most exciting employment opportunities,
whether they are in bioinformatics, life science, physics or finance,
in the private or the public sector. This author has had several
students or met students who have been hired recently based on their
skills and competences in scientific computing and data science, often
with marginal knowledge of machine learning.
Machine learning is a subfield of computer science, and is closely
related to computational statistics. It evolved from the study of
pattern recognition in artificial intelligence (AI) research, and has
made contributions to AI tasks like computer vision, natural language
processing and speech recognition. Many of the methods we will study are also
strongly rooted in basic mathematics and physics research.
Ideally, machine learning represents the science of giving computers
the ability to learn without being explicitly programmed. The idea is
that there exist generic algorithms which can be used to find patterns
in a broad class of data sets without having to write code
specifically for each problem. The algorithm will build its own logic
based on the data. You should however always keep in mind that
machines and algorithms are to a large extent developed by humans. The
insights and knowledge we have about a specific system play a central
role when we develop a specific machine learning algorithm.
Machine learning is an extremely rich field, in spite of its young
age. The increases we have seen during the last three decades in
computational capabilities have been followed by developments of
methods and techniques for analyzing and handling large date sets,
relying heavily on statistics, computer science and mathematics. The
field is rather new and developing rapidly. Popular software packages
written in Python for machine learning like
Scikit-learn,
Tensorflow,
PyTorch and Keras, all
freely available at their respective GitHub sites, encompass
communities of developers in the thousands or more. And the number of
code developers and contributors keeps increasing. Not all the
algorithms and methods can be given a rigorous mathematical
justification, opening up thereby large rooms for experimenting and
trial and error and thereby exciting new developments. However, a
solid command of linear algebra, multivariate theory, probability
theory, statistical data analysis, understanding errors and Monte
Carlo methods are central elements in a proper understanding of many
of algorithms and methods we will discuss.
Types of Machine Learning
The approaches to machine learning are many, but are often split into
two main categories. In supervised learning we know the answer to a
problem, and let the computer deduce the logic behind it. On the other
hand, unsupervised learning is a method for finding patterns and
relationships in data sets without any prior knowledge of the system.
Some authors also operate with a third category, namely
reinforcement learning. This is a paradigm of learning inspired by
behavioral psychology, where learning is achieved by trial-and-error,
solely from rewards and punishment.
Another way to categorize machine learning tasks is to consider the
desired output of a system. Some of the most common tasks are:
Classification: Outputs are divided into two or more classes. The goal is to produce a model that assigns inputs into one of these classes. An example is to identify digits based on pictures of hand-written ones. Classification is typically supervised learning.
Regression: Finding a functional relationship between an input data set and a reference data set. The goal is to construct a function that maps input data to continuous output values.
Clustering: Data are divided into groups with certain common traits, without knowing the different groups beforehand. It is thus a form of unsupervised learning.
The methods we cover have three main topics in common, irrespective of
whether we deal with supervised or unsupervised learning. The first
ingredient is normally our data set (which can be subdivided into
training and test data), the second item is a model which is normally a
function of some parameters. The model reflects our knowledge of the system (or lack thereof). As an example, if we know that our data show a behavior similar to what would be predicted by a polynomial, fitting our data to a polynomial of some degree would then determine our model.
The last ingredient is a so-called cost
function which allows us to present an estimate on how good our model
is in reproducing the data it is supposed to train.
At the heart of basically all ML algorithms there are so-called minimization algorithms; often we end up with various variants of gradient methods.
Software and needed installations
We will make extensive use of Python as programming language and its
myriad of available libraries. You will find
Jupyter notebooks invaluable in your work. You can run R
codes in the Jupyter/IPython notebooks, with the immediate benefit of
visualizing your data. You can also use compiled languages like C++,
Rust, Julia, Fortran etc if you prefer. The focus in these lectures will be
on Python.
If you have Python installed (we strongly recommend Python3) and you feel
pretty familiar with installing different packages, we recommend that
you install the following Python packages via pip as
pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow
For Python3, replace pip with pip3.
For OSX users we recommend, after having installed Xcode, to
install brew. Brew allows for a seamless installation of additional
software via for example
brew install python3
For Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,
you can use pip as well and simply install Python as
sudo apt-get install python3 (or python for python2.7)
etc etc.
Python installers
If you don't want to perform these operations separately and venture
into the hassle of exploring how to set up dependencies and paths, we
recommend two widely used distributions which set up all relevant
dependencies for Python, namely
Anaconda,
which is an open source
distribution of the Python and R programming languages for large-scale
data processing, predictive analytics, and scientific computing, that
aims to simplify package management and deployment. Package versions
are managed by the package management system conda.
Enthought canopy
is a Python
distribution for scientific and analytic computing distribution and
analysis environment, available for free and under a commercial
license.
Furthermore, Google's Colab is a free Jupyter notebook environment that requires
no setup and runs entirely in the cloud. Try it out!
Useful Python libraries
Here we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)
NumPy is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
The pandas library provides high-performance, easy-to-use data structures and data analysis tools
Xarray is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!
Scipy (pronounced “Sigh Pie”) is a Python-based ecosystem of open-source software for mathematics, science, and engineering.
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.
Autograd can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives
SymPy is a Python library for symbolic mathematics.
scikit-learn has simple and efficient tools for machine learning, data mining and data analysis
TensorFlow is a Python library for fast numerical computing created and released by Google
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano
And many more such as pytorch, Theano etc
Installing R, C++, cython or Julia
You will also find it convenient to utilize R. We will mainly
use Python during our lectures and in various projects and exercises.
Those of you
already familiar with R should feel free to continue using R, keeping
however an eye on the parallel Python set ups. Similarly, if you are a
Python aficionado, feel free to explore R as well. The Jupyter/IPython
notebook allows you to run R codes interactively in your
browser. The software library R is really tailored for statistical data analysis
and allows for an easy usage of the tools and algorithms we will discuss in these
lectures.
To install R with Jupyter notebook
follow the link here
Installing R, C++, cython, Numba etc
For the C++ aficionados, Jupyter/IPython notebook allows you also to
install C++ and run codes written in this language interactively in
the browser. Since we will emphasize writing many of the algorithms
yourself, you can thus opt for either Python or C++ (or Fortran or other compiled languages) as programming
languages.
To add more entropy, cython can also be used when running your
notebooks. It means that Python with the jupyter notebook
setup allows you to integrate widely popular software and tools for
scientific computing. Similarly, the
Numba Python package delivers increased performance
capabilities with minimal rewrites of your codes. With its
versatility, including symbolic operations, Python offers a unique
computational environment. Your jupyter notebook can easily be
converted into a nicely rendered PDF file or a LaTeX file for
further processing. For example, convert to LaTeX as
jupyter nbconvert filename.ipynb --to latex
And to add more versatility, the Python package SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python.
Finally, if you wish to use the light mark-up language
doconce you can convert a standard ascii text file into various HTML
formats, ipython notebooks, latex files, pdf files etc with minimal edits. These lectures were generated using doconce.
Numpy examples and Important Matrix and vector handling packages
There are several central software libraries for linear algebra and eigenvalue problems. Several of the more
popular ones have been wrapped into other software packages, like those from the widely used text Numerical Recipes. The original source codes in many of the available packages are often taken from the widely used
software package LAPACK, which follows two other popular packages
developed in the 1970s, namely EISPACK and LINPACK. We describe them shortly here.
LINPACK: package for linear equations and least square problems.
LAPACK:package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website http://www.netlib.org it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.
BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Blas I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from http://www.netlib.org.
Basic Matrix Features
Matrix properties reminder.
$$
\mathbf{A} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}\qquad
\mathbf{I} =
\begin{bmatrix} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
The inverse of a matrix is defined by
$$
\mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{I}
$$
<table border="1">
<thead>
<tr><th align="center"> Relations </th> <th align="center"> Name </th> <th align="center"> matrix elements </th> </tr>
</thead>
<tbody>
<tr><td align="center"> $A = A^{T}$ </td> <td align="center"> symmetric </td> <td align="center"> $a_{ij} = a_{ji}$ </td> </tr>
<tr><td align="center"> $A = \left (A^{T} \right )^{-1}$ </td> <td align="center"> real orthogonal </td> <td align="center"> $\sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij}$ </td> </tr>
<tr><td align="center"> $A = A^{ * }$ </td> <td align="center"> real matrix </td> <td align="center"> $a_{ij} = a_{ij}^{ * }$ </td> </tr>
<tr><td align="center"> $A = A^{\dagger}$ </td> <td align="center"> hermitian </td> <td align="center"> $a_{ij} = a_{ji}^{ * }$ </td> </tr>
<tr><td align="center"> $A = \left (A^{\dagger} \right )^{-1}$ </td> <td align="center"> unitary </td> <td align="center"> $\sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij}$ </td> </tr>
</tbody>
</table>
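As a small numerical sketch (with made-up example matrices), the relations in the table above can be verified directly with Numpy:

```python
import numpy as np

# A real symmetric matrix: A equals its transpose
A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.allclose(A, A.T))  # True

# A rotation matrix is real orthogonal: A A^T = I
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R @ R.T, np.eye(2)))  # True

# A Hermitian matrix equals its conjugate transpose
H = np.array([[1.0, 1j], [-1j, 2.0]])
print(np.allclose(H, H.conj().T))  # True
```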
Some famous Matrices
Diagonal if $a_{ij}=0$ for $i\ne j$
Upper triangular if $a_{ij}=0$ for $i > j$
Lower triangular if $a_{ij}=0$ for $i < j$
Upper Hessenberg if $a_{ij}=0$ for $i > j+1$
Lower Hessenberg if $a_{ij}=0$ for $i < j+1$
Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$
Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$
Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j+p$
Banded, block upper triangular, block lower triangular....
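A few of these zero patterns can be tested numerically. The helper functions below are a sketch (they are not part of Numpy itself), checking the defining conditions directly:

```python
import numpy as np

def is_upper_triangular(A):
    # a_ij = 0 for i > j
    return np.allclose(A, np.triu(A))

def is_tridiagonal(A):
    # a_ij = 0 for |i - j| > 1
    n = A.shape[0]
    i, j = np.indices((n, n))
    return np.allclose(A[np.abs(i - j) > 1], 0.0)

# A classic tridiagonal matrix (second-derivative stencil)
T = np.diag([2.0]*4) + np.diag([-1.0]*3, k=1) + np.diag([-1.0]*3, k=-1)
print(is_tridiagonal(T))                              # True
print(is_upper_triangular(T))                         # False
print(is_upper_triangular(np.triu(np.ones((4, 4)))))  # True
```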
More Basic Matrix Features
Some Equivalent Statements.
For an $N\times N$ matrix $\mathbf{A}$ the following properties are all equivalent
If the inverse of $\mathbf{A}$ exists, $\mathbf{A}$ is nonsingular.
The equation $\mathbf{Ax}=0$ implies $\mathbf{x}=0$.
The rows of $\mathbf{A}$ form a basis of $R^N$.
The columns of $\mathbf{A}$ form a basis of $R^N$.
$\mathbf{A}$ is a product of elementary matrices.
$0$ is not an eigenvalue of $\mathbf{A}$.
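We can illustrate some of these equivalent statements with a small (made-up) nonsingular matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # a nonsingular matrix

# 0 is not an eigenvalue of A
eigvals = np.linalg.eigvals(A)
print(np.all(np.abs(eigvals) > 1e-12))  # True

# Ax = 0 implies x = 0: the homogeneous system has only the trivial solution
x = np.linalg.solve(A, np.zeros(2))
print(np.allclose(x, 0.0))  # True

# The inverse of A exists
print(np.allclose(np.linalg.inv(A) @ A, np.eye(2)))  # True
```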
Numpy and arrays
Numpy provides an easy way to handle arrays in Python. The standard way to import this library is as
End of explanation
import numpy as np
n = 10
x = np.random.normal(size=n)
print(x)
Explanation: Here follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,
End of explanation
import numpy as np
x = np.array([1, 2, 3])
print(x)
Explanation: We defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.
Another alternative is to declare a vector as follows
End of explanation
import numpy as np
x = np.log(np.array([4, 7, 8]))
print(x)
Explanation: Here we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++
start numbering array elements from $0$. This means that a vector with $n$ elements has the sequence of entries $x_0, x_1, x_2, \dots, x_{n-1}$. We could also let Numpy (recommended) compute the logarithms of a specific array as
End of explanation
import numpy as np
from math import log
x = np.array([4, 7, 8])
for i in range(0, len(x)):
    x[i] = log(x[i])
print(x)
Explanation: In the last example we used Numpy's unary function $np.log$. This function is
highly tuned to compute array elements since the code is vectorized
and does not require looping. We normally recommend that you use the
Numpy intrinsic functions instead of the corresponding log function
from Python's math module. The looping over the array elements is done
implicitly inside the np.log function. The alternative, and slower, way to
compute the logarithms of a vector would be to write
End of explanation
import numpy as np
x = np.log(np.array([4, 7, 8], dtype = np.float64))
print(x)
Explanation: We note that our code is much longer already and we need to import the log function from the math module.
The attentive reader will also notice that the output is $[1, 1, 2]$. Python automagically interprets our numbers as integers (much like type deduction with the auto keyword in C++). To change this we could define our array elements to be double precision numbers as
End of explanation
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x)
Explanation: or simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is
End of explanation
import numpy as np
x = np.log(np.array([4.0, 7.0, 8.0]))
print(x.itemsize)
Explanation: To check the number of bytes (one byte contains eight bits; a double precision number occupies eight bytes), you can simply use the itemsize functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
print(A)
Explanation: Matrices in Python
Having defined vectors, we are now ready to try out matrices. We can
define a $3 \times 3 $ real matrix $\hat{A}$ as (recall that we use
lowercase letters for vectors and uppercase letters for matrices)
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the first column, row-major order and elements start with 0
print(A[:,0])
Explanation: If we use the shape function we would get $(3, 3)$ as output, that is verifying that our matrix is a $3\times 3$ matrix. We can slice the matrix and print for example the first column (Python organizes matrix elements in a row-major order, see below) as
End of explanation
import numpy as np
A = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))
# print the second row, row-major order and elements start with 0
print(A[1,:])
Explanation: We can continue this way by printing out other columns or rows. The example here prints out the second row
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to zero
A = np.zeros( (n, n) )
print(A)
Explanation: Numpy contains many other functionalities that allow us to slice, subdivide etc etc arrays. We strongly recommend that you look up the Numpy website for more details. Useful functions when defining a matrix are the np.zeros function which declares a matrix of a given dimension and sets all elements to zero
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to one
A = np.ones( (n, n) )
print(A)
Explanation: or the np.ones function, which initializes all elements to one
End of explanation
import numpy as np
n = 10
# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \in [0, 1]
A = np.random.rand(n, n)
print(A)
Explanation: or as uniformly distributed random numbers (see the material on random number generators in the statistics part)
End of explanation
# Importing various packages
import numpy as np
n = 100
x = np.random.normal(size=n)
print(np.mean(x))
y = 4+3*x+np.random.normal(size=n)
print(np.mean(y))
z = x**3+np.random.normal(size=n)
print(np.mean(z))
W = np.vstack((x, y, z))
Sigma = np.cov(W)
print(Sigma)
Eigvals, Eigvecs = np.linalg.eig(Sigma)
print(Eigvals)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
eye = np.eye(4)
print(eye)
sparse_mtx = sparse.csr_matrix(eye)
print(sparse_mtx)
x = np.linspace(-10,10,100)
y = np.sin(x)
plt.plot(x,y,marker='x')
plt.show()
Explanation: As we will see throughout these lectures, there are several extremely useful functionalities in Numpy.
As an example, consider the discussion of the covariance matrix. Suppose we have defined three vectors
$\hat{x}, \hat{y}, \hat{z}$ with $n$ elements each. The covariance matrix is defined as
$$
\hat{\Sigma} = \begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{zx} & \sigma_{zy} & \sigma_{zz}
\end{bmatrix},
$$
where for example
$$
\sigma_{xy} =\frac{1}{n} \sum_{i=0}^{n-1}(x_i- \overline{x})(y_i- \overline{y}).
$$
The Numpy function np.cov calculates the covariance elements using the factor $1/(n-1)$ instead of $1/n$ since it assumes we do not have the exact mean values.
The following simple function uses the np.vstack function which takes each vector of dimension $1\times n$ and produces a $3\times n$ matrix $\hat{W}$
$$
\hat{W} = \begin{bmatrix} x_0 & y_0 & z_0 \\
x_1 & y_1 & z_1 \\
x_2 & y_2 & z_2 \\
\dots & \dots & \dots \\
x_{n-2} & y_{n-2} & z_{n-2} \\
x_{n-1} & y_{n-1} & z_{n-1}
\end{bmatrix},
$$
which in turn is converted into the $3\times 3$ covariance matrix
$\hat{\Sigma}$ via the Numpy function np.cov(). We note that we can also calculate
the mean value of each set of samples $\hat{x}$ etc using the Numpy
function np.mean(x). We can also extract the eigenvalues of the
covariance matrix through the np.linalg.eig() function.
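As a small check of the normalization factor mentioned above, we can compare np.cov with a manual computation; np.cov uses the $1/(n-1)$ factor by default, and its bias=True option switches to $1/n$ (a sketch with made-up data):

```python
import numpy as np

np.random.seed(0)
n = 1000
x = np.random.normal(size=n)
y = 2*x + np.random.normal(size=n)
W = np.vstack((x, y))

# default: np.cov divides by n-1 (sample covariance)
Sigma = np.cov(W)
# manual computation with the same 1/(n-1) factor
xm, ym = x - x.mean(), y - y.mean()
sigma_xy = np.sum(xm*ym)/(n - 1)
print(np.allclose(Sigma[0, 1], sigma_xy))  # True
# bias=True switches to the 1/n factor
print(np.allclose(np.cov(W, bias=True)[0, 1], np.sum(xm*ym)/n))  # True
```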
End of explanation
import pandas as pd
from IPython.display import display
data = {'First Name': ["Frodo", "Bilbo", "Aragorn II", "Samwise"],
'Last Name': ["Baggins", "Baggins","Elessar","Gamgee"],
'Place of birth': ["Shire", "Shire", "Eriador", "Shire"],
'Date of Birth T.A.': [2968, 2890, 2931, 2980]
}
data_pandas = pd.DataFrame(data)
display(data_pandas)
Explanation: Meet the Pandas
<!-- dom:FIGURE: [fig/pandas.jpg, width=600 frac=0.8] -->
<!-- begin figure -->
<p></p>
<img src="fig/pandas.jpg" width=600>
<!-- end figure -->
Another useful Python package is
pandas, which is an open source library
providing high-performance, easy-to-use data structures and data
analysis tools for Python. pandas stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.
pandas has two major classes, the DataFrame class with two-dimensional data objects and tabular data organized in columns and the class Series with a focus on one-dimensional data objects. Both classes allow you to index data easily as we will see in the examples below.
pandas allows you also to perform mathematical operations on the data, spanning from simple reshapings of vectors and matrices to statistical operations.
The following simple example shows how we can, in an easy way make tables of our data. Here we define a data set which includes names, place of birth and date of birth, and displays the data in an easy to read way. We will see repeated use of pandas, in particular in connection with classification of data.
End of explanation
data_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])
display(data_pandas)
Explanation: In the above we have imported pandas with the shorthand pd, which has become the standard way to import pandas. We then make a list of various variables
and reorganize the above lists into a DataFrame, printing out a neat table with specific column labels such as name, place of birth and date of birth.
Displaying these results, we see that the indices are given by the default numbers from zero to three.
pandas is extremely flexible and we can easily change the above indices by defining a new type of indexing as
End of explanation
display(data_pandas.loc['Aragorn'])
Explanation: Thereafter we display the content of the row which begins with the index Aragorn
End of explanation
new_hobbit = {'First Name': ["Peregrin"],
'Last Name': ["Took"],
'Place of birth': ["Shire"],
'Date of Birth T.A.': [2990]
}
data_pandas = pd.concat([data_pandas, pd.DataFrame(new_hobbit, index=['Pippin'])])
display(data_pandas)
Explanation: We can easily append data to this, for example
End of explanation
import numpy as np
import pandas as pd
from IPython.display import display
np.random.seed(100)
# setting up a 10 x 5 matrix
rows = 10
cols = 5
a = np.random.randn(rows,cols)
df = pd.DataFrame(a)
display(df)
print(df.mean())
print(df.std())
display(df**2)
Explanation: Here are other examples where we use the DataFrame functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix
of dimensionality $10\times 5$ and compute the mean value and standard deviation of each column. Similarly, we can perform mathematical operations like squaring the matrix elements, and many other operations.
End of explanation
df.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']
df.index = np.arange(10)
display(df)
print(df['Second'].mean() )
print(df.info())
print(df.describe())
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
df.cumsum().plot(lw=2.0, figsize=(10,6))
plt.show()
df.plot.bar(figsize=(10,6), rot=15)
plt.show()
Explanation: Thereafter we can select specific columns only and plot final results
End of explanation
b = np.arange(16).reshape((4,4))
print(b)
df1 = pd.DataFrame(b)
print(df1)
Explanation: We can produce a $4\times 4$ matrix
End of explanation
# Importing various packages
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 2*x+np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
xnew = np.array([[0],[1]])
ypredict = linreg.predict(xnew)
plt.plot(xnew, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0,1.0,0, 5.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Simple Linear Regression')
plt.show()
Explanation: and many other operations.
The Series class is another important class included in
pandas. You can view it as a specialization of DataFrame but where
we have just a single column of data. It shares many of the same features as DataFrame. As with DataFrame,
most operations are vectorized, thereby achieving high performance when dealing with computations of arrays, in particular labeled arrays.
As we will see below, it also leads to very concise code, close to the mathematical operations we may be interested in.
For multidimensional arrays, we recommend strongly xarray. xarray has much of the same flexibility as pandas, but allows for the extension to higher dimensions than two. We will see examples later of the usage of both pandas and xarray.
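A minimal Series sketch (with made-up data) illustrating the labeled indexing and vectorized operations mentioned above:

```python
import pandas as pd

# a one-dimensional labeled array
s = pd.Series([4.0, 7.0, 8.0], index=['a', 'b', 'c'])
print(s['b'])           # index by label: 7.0
print((2*s).tolist())   # vectorized arithmetic: [8.0, 14.0, 16.0]
```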
Reading Data and fitting
In order to study various Machine Learning algorithms, we need to
access data. Acccessing data is an essential step in all machine
learning algorithms. In particular, setting up the so-called design
matrix (to be defined below) is often the first element we need in
order to perform our calculations. To set up the design matrix means
reading (and later, when the calculations are done, writing) data
in various formats. The formats span from reading files from disk,
loading data from databases and interacting with online sources
like web application programming interfaces (APIs).
In handling various input formats, as discussed above, we will mainly stay with pandas,
a Python package which allows us, in a seamless and painless way, to
deal with a multitude of formats, from standard csv (comma separated
values) files, via excel, html to hdf5 formats. With pandas
and the DataFrame and Series functionalities we are able to convert text data
into the calculational formats we need for a specific algorithm. And our code is going to be
pretty close to the basic mathematical expressions.
Our first data set is going to be a classic from nuclear physics, namely all
available data on binding energies. Don't be intimidated if you are not familiar with nuclear physics. It serves simply as an example here of a data set.
We will show some of the
strengths of packages like Scikit-Learn in fitting nuclear binding energies to
specific functions using linear regression first. Then, as a teaser, we will show you how
you can easily implement other algorithms like decision trees and random forests and neural networks.
But before we really start with nuclear physics data, let's just look at some simpler polynomial fitting cases, such as,
(don't be offended) fitting straight lines!
Simple linear regression model using scikit-learn
We start with perhaps our simplest possible example, using Scikit-Learn to perform linear regression analysis on a data set produced by us.
What follows is a simple Python code where we have defined a function
$y$ in terms of the variable $x$. Both are defined as vectors with $100$ entries.
The numbers in the vector $\hat{x}$ are given
by random numbers generated with a uniform distribution with entries
$x_i \in [0,1]$ (more about probability distribution functions
later). These values are then used to define a function $y(x)$
(tabulated again as a vector) with a linear dependence on $x$ plus a
random noise added via the normal distribution.
The Numpy functions are imported using the import numpy as np
statement and the random number generator for the uniform distribution
is called using the function np.random.rand(), where we specify
that we want $100$ random variables. Using Numpy we define
automatically an array with the specified number of elements, $100$ in
our case. With the Numpy function randn() we can compute random
numbers with the normal distribution (mean value $\mu$ equal to zero and
variance $\sigma^2$ set to one) and produce the values of $y$ assuming a linear
dependence as function of $x$
$$
y = 2x+N(0,1),
$$
where $N(0,1)$ represents random numbers generated by the normal
distribution. From Scikit-Learn we import then the
LinearRegression functionality and make a prediction $\tilde{y} =
\alpha + \beta x$ using the function fit(x,y). We call the set of
data $(\hat{x},\hat{y})$ for our training data. The Python package
scikit-learn has also a functionality which extracts the above
fitting parameters $\alpha$ and $\beta$ (see below). Later we will
distinguish between training data and test data.
For plotting we use the Python package
matplotlib which produces publication
quality figures. Feel free to explore the extensive
gallery of examples. In
this example we plot our original values of $x$ and $y$ as well as the
prediction ypredict ($\tilde{y}$), which attempts to fit our
data with a straight line.
The Python code follows here.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
x = np.random.rand(100,1)
y = 5*x+0.01*np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
plt.plot(x, np.abs(ypredict-y)/abs(y), "ro")
plt.axis([0,1.0,0.0, 0.5])
plt.xlabel(r'$x$')
plt.ylabel(r'$\epsilon_{\mathrm{relative}}$')
plt.title(r'Relative error')
plt.show()
Explanation: This example serves several aims. It allows us to demonstrate several
aspects of data analysis and later machine learning algorithms. The
immediate visualization shows that our linear fit is not
impressive. It goes through the data points, but there are many
outliers which are not reproduced by our linear regression. We could
now play around with this small program and change for example the
factor in front of $x$ and the normal distribution. Try to change the
function $y$ to
$$
y = 5x+0.01 \times N(0,1),
$$
where $x$ is defined as before. Does the fit look better? Indeed, by
reducing the role of the noise given by the normal distribution we see immediately that
our linear prediction seemingly reproduces better the training
set. However, this testing 'by the eye' is obviously not satisfactory in the
long run. Here we have only defined the training data and our model, and
have not discussed a more rigorous approach to the cost function.
We need more rigorous criteria in defining whether we have succeeded or
not in modeling our training data. You will be surprised to see that
many scientists seldomly venture beyond this 'by the eye' approach. A
standard approach for the cost function is the so-called $\chi^2$
function (a variant of the mean-squared error (MSE))
$$
\chi^2 = \frac{1}{n}
\sum_{i=0}^{n-1}\frac{(y_i-\tilde{y}_i)^2}{\sigma_i^2},
$$
where $\sigma_i^2$ is the variance (to be defined later) of the entry
$y_i$. We may not know the explicit value of $\sigma_i^2$, it serves
however the aim of scaling the equations and make the cost function
dimensionless.
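With all $\sigma_i$ set to one, the $\chi^2$ above reduces to the plain mean squared error. A minimal sketch, with made-up data and guessed model parameters $\alpha=0$ and $\beta=2$:

```python
import numpy as np

np.random.seed(1)
n = 100
x = np.random.rand(n)
y = 2*x + np.random.normal(size=n)

# a simple linear model ytilde = alpha + beta*x with guessed parameters
alpha, beta = 0.0, 2.0
ytilde = alpha + beta*x

# chi^2 with all sigma_i = 1 is just the mean squared error
chi2 = np.mean((y - ytilde)**2)
print(chi2)
```

Since the residual here is pure $N(0,1)$ noise, the value printed should be close to one; changing $\alpha$ and $\beta$ away from the true values makes it grow.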
Minimizing the cost function is a central aspect of
our discussions to come. Finding its minima as function of the model
parameters ($\alpha$ and $\beta$ in our case) will be a recurring
theme in these series of lectures. Essentially all machine learning
algorithms we will discuss center around the minimization of the
chosen cost function. This depends in turn on our specific
model for describing the data, a typical situation in supervised
learning. Automatizing the search for the minima of the cost function is a
central ingredient in all algorithms. Typical methods which are
employed are various variants of gradient methods. These will be
discussed in more detail later. Again, you'll be surprised to hear that
many practitioners minimize the above function 'by the eye', popularly dubbed as
'chi by the eye'. That is, one changes a parameter and sees (visually and numerically) that
the $\chi^2$ function becomes smaller.
There are many ways to define the cost function. A simpler approach is to look at the relative difference between the training data and the predicted data, that is we define
the relative error (why would we prefer the MSE instead of the relative error?) as
$$
\epsilon_{\mathrm{relative}}= \frac{\vert \hat{y} -\hat{\tilde{y}}\vert}{\vert \hat{y}\vert}.
$$
The squared cost function results in an arithmetic mean-unbiased
estimator, and the absolute-value cost function results in a
median-unbiased estimator (in the one-dimensional case, and a
geometric median-unbiased estimator for the multi-dimensional
case). The squared cost function has the disadvantage that it has the tendency
to be dominated by outliers.
We can modify easily the above Python code and plot the relative error instead
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score, mean_squared_log_error, mean_absolute_error
x = np.random.rand(100,1)
y = 2.0+ 5*x+0.5*np.random.randn(100,1)
linreg = LinearRegression()
linreg.fit(x,y)
ypredict = linreg.predict(x)
print('The intercept alpha: \n', linreg.intercept_)
print('Coefficient beta : \n', linreg.coef_)
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(y, ypredict))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(y, ypredict))
# Mean squared log error
print('Mean squared log error: %.2f' % mean_squared_log_error(y, ypredict) )
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(y, ypredict))
plt.plot(x, ypredict, "r-")
plt.plot(x, y ,'ro')
plt.axis([0.0,1.0,1.5, 7.0])
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title(r'Linear Regression fit ')
plt.show()
Explanation: Depending on the parameter in front of the normal distribution, we may
have a small or larger relative error. Try to play around with
different training data sets and study (graphically) the value of the
relative error.
As mentioned above, Scikit-Learn has an impressive functionality.
We can for example extract the values of $\alpha$ and $\beta$ and
their error estimates, or the variance and standard deviation and many
other properties from the statistical data analysis.
Here we show an
example of the functionality of Scikit-Learn.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import random
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression
x=np.linspace(0.02,0.98,200)
noise = np.asarray(random.sample(range(200), 200))
y=x**3*noise
yn=x**3*100
poly3 = PolynomialFeatures(degree=3)
X = poly3.fit_transform(x[:,np.newaxis])
clf3 = LinearRegression()
clf3.fit(X,y)
Xplot=poly3.fit_transform(x[:,np.newaxis])
poly3_plot=plt.plot(x, clf3.predict(Xplot), label='Cubic Fit')
plt.plot(x,yn, color='red', label="True Cubic")
plt.scatter(x, y, label='Data', color='orange', s=15)
plt.legend()
plt.show()
def error(y, yn):
    # average relative deviation between the noisy data and the noise-free cubic
    err = (y - yn)/yn
    return abs(np.sum(err))/len(err)
print(error(y, yn))
Explanation: The attribute coef_ gives us the parameter $\beta$ of our fit while intercept_ yields
$\alpha$. Depending on the constant in front of the normal distribution, we get values near or far from $\alpha = 2$ and $\beta = 5$. Try to play around with different parameters in front of the normal distribution. The function mean_squared_error gives us the mean squared error, a risk metric corresponding to the expected value of the squared (quadratic) error or loss, defined as
$$
MSE(\hat{y},\hat{\tilde{y}}) = \frac{1}{n}
\sum_{i=0}^{n-1}(y_i-\tilde{y}_i)^2,
$$
The smaller the value, the better the fit. Ideally we would like to
have an MSE equal zero. The attentive reader has probably recognized
this function as being similar to the $\chi^2$ function defined above.
The r2_score function computes $R^2$, the coefficient of
determination. It provides a measure of how well future samples are
likely to be predicted by the model. Best possible score is 1.0 and it
can be negative (because the model can be arbitrarily worse). A
constant model that always predicts the expected value of $\hat{y}$,
disregarding the input features, would get a $R^2$ score of $0.0$.
If $\tilde{\hat{y}}_i$ is the predicted value of the $i-th$ sample and $y_i$ is the corresponding true value, then the score $R^2$ is defined as
$$
R^2(\hat{y}, \tilde{\hat{y}}) = 1 - \frac{\sum_{i=0}^{n - 1} (y_i - \tilde{y}_i)^2}{\sum_{i=0}^{n - 1} (y_i - \bar{y})^2},
$$
where we have defined the mean value of $\hat{y}$ as
$$
\bar{y} = \frac{1}{n} \sum_{i=0}^{n - 1} y_i.
$$
Another quantity that we will meet again in our discussions of regression analysis is
the mean absolute error (MAE), a risk metric corresponding to the expected value of the absolute error loss or what we call the $l1$-norm loss. In our discussion above we presented the relative error.
The MAE is defined as follows
$$
\text{MAE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n-1} \left| y_i - \tilde{y}_i \right|.
$$
We present the
squared logarithmic (quadratic) error
$$
\text{MSLE}(\hat{y}, \hat{\tilde{y}}) = \frac{1}{n} \sum_{i=0}^{n - 1} (\log_e (1 + y_i) - \log_e (1 + \tilde{y}_i) )^2,
$$
where $\log_e (x)$ stands for the natural logarithm of $x$. This error
estimate is best to use when targets have exponential growth, such
as population counts, average sales of a commodity over a span of
years etc.
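The error measures defined above can also be written out directly in numpy. The following is a small illustrative sketch (the two arrays are made up) that mirrors the formulas, with y_true playing the role of $\hat{y}$ and y_pred the role of $\hat{\tilde{y}}$:

```python
import numpy as np

# Direct numpy versions of the error measures defined above
def mse(y, yt):
    return np.mean((y - yt) ** 2)

def mae(y, yt):
    return np.mean(np.abs(y - yt))

def msle(y, yt):
    return np.mean((np.log1p(y) - np.log1p(yt)) ** 2)

def r2(y, yt):
    return 1.0 - np.sum((y - yt) ** 2) / np.sum((y - np.mean(y)) ** 2)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(mse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```

For these toy arrays the values come out to roughly $0.025$, $0.15$ and $0.98$, and a perfect fit would give MSE $=$ MAE $= 0$ and $R^2 = 1$.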
Finally, another cost function is the Huber cost function used in robust regression.
The rationale behind this possible cost function is its reduced
sensitivity to outliers in the data set. In our discussions on
dimensionality reduction and normalization of data we will meet other
ways of dealing with outliers.
The Huber cost function is defined as
$$
H_{\delta}(a)=\begin{cases} \frac{1}{2}a^{2} & \text{for } |a|\leq \delta, \\ \delta\left(|a|-\frac{1}{2}\delta\right) & \text{otherwise.} \end{cases}
$$
Here $a=\boldsymbol{y} - \boldsymbol{\tilde{y}}$.
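A direct numpy implementation of the Huber cost function makes its two regimes explicit. This is just an illustrative sketch, with the residuals and $\delta=1$ chosen arbitrarily:

```python
import numpy as np

def huber(a, delta=1.0):
    # quadratic for |a| <= delta, linear (outlier-robust) beyond it
    return np.where(np.abs(a) <= delta,
                    0.5 * a ** 2,
                    delta * (np.abs(a) - 0.5 * delta))

residuals = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(huber(residuals))  # [2.5, 0.125, 0., 0.125, 2.5]
```

Note how the two large residuals at $\pm 3$ contribute $2.5$ each rather than the $4.5$ a purely quadratic loss would give, which is the sense in which Huber is less sensitive to outliers.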
We will discuss in more
detail these and other functions in the various lectures. We conclude this part with another example. Instead of
a linear $x$-dependence we study now a cubic polynomial and use the polynomial regression analysis tools of scikit-learn.
End of explanation
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
Explanation: To our real data: nuclear binding energies. Brief reminder on masses and binding energies
Let us now dive into nuclear physics and remind ourselves briefly about some basic features about binding
energies. A basic quantity which can be measured for the ground
states of nuclei is the atomic mass $M(N, Z)$ of the neutral atom with
atomic mass number $A$ and charge $Z$. The number of neutrons is $N$. There are indeed several sophisticated experiments worldwide which allow us to measure this quantity to high precision (parts per million even).
Atomic masses are usually tabulated in terms of the mass excess defined by
$$
\Delta M(N, Z) = M(N, Z) - uA,
$$
where $u$ is the Atomic Mass Unit
$$
u = M(^{12}\mathrm{C})/12 = 931.4940954(57) \hspace{0.1cm} \mathrm{MeV}/c^2.
$$
The nucleon masses are
$$
m_p = 1.00727646693(9)u,
$$
and
$$
m_n = 939.56536(8)\hspace{0.1cm} \mathrm{MeV}/c^2 = 1.0086649156(6)u.
$$
In the 2016 mass evaluation by W.J. Huang, G. Audi, M. Wang, F.G. Kondev, S. Naimi and X. Xu
there are data on masses and decays of 3437 nuclei.
The nuclear binding energy is defined as the energy required to break
up a given nucleus into its constituent parts of $N$ neutrons and $Z$
protons. In terms of the atomic masses $M(N, Z)$ the binding energy is
defined by
$$
BE(N, Z) = ZM_H c^2 + Nm_n c^2 - M(N, Z)c^2 ,
$$
where $M_H$ is the mass of the hydrogen atom and $m_n$ is the mass of the neutron.
In terms of the mass excess the binding energy is given by
$$
BE(N, Z) = Z\Delta_H c^2 + N\Delta_n c^2 -\Delta(N, Z)c^2 ,
$$
where $\Delta_H c^2 = 7.2890$ MeV and $\Delta_n c^2 = 8.0713$ MeV.
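As a quick numerical check of the last relation, here is a sketch that computes $BE$ from tabulated mass excesses. The $^4$He mass excess of about $2.4249$ MeV used below is quoted from memory and is only meant as an illustration:

```python
# All values in MeV
DELTA_H = 7.2890   # mass excess of the hydrogen atom, Delta_H c^2
DELTA_N = 8.0713   # mass excess of the neutron, Delta_n c^2

def binding_energy(Z, N, delta_nz):
    # BE(N, Z) = Z*Delta_H c^2 + N*Delta_n c^2 - Delta(N, Z) c^2
    return Z * DELTA_H + N * DELTA_N - delta_nz

# Helium-4 (Z = 2, N = 2), mass excess roughly 2.4249 MeV
print(binding_energy(Z=2, N=2, delta_nz=2.4249))  # about 28.3 MeV
```

The result, close to $28.3$ MeV, agrees with the well-known total binding energy of $^4$He.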
A popular and physically intuitive model which can be used to parametrize
the experimental binding energies as function of $A$, is the so-called
liquid drop model. The ansatz is based on the following expression
$$
BE(N,Z) = a_1A-a_2A^{2/3}-a_3\frac{Z^2}{A^{1/3}}-a_4\frac{(N-Z)^2}{A},
$$
where $A$ stands for the number of nucleons and the $a_i$s are parameters which are determined by a fit
to the experimental data.
To arrive at the above expression we have assumed that we can make the following assumptions:
There is a volume term $a_1A$ proportional with the number of nucleons (the energy is also an extensive quantity). When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. This contribution is proportional to the volume.
There is a surface energy term $a_2A^{2/3}$. The assumption here is that a nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area.
There is a Coulomb energy term $a_3\frac{Z^2}{A^{1/3}}$. The electric repulsion between each pair of protons in a nucleus yields less binding.
There is an asymmetry term $a_4\frac{(N-Z)^2}{A}$. This term is associated with the Pauli exclusion principle and reflects the fact that the proton-neutron interaction is more attractive on the average than the neutron-neutron and proton-proton interactions.
We could also add a so-called pairing term, which is a correction term that
arises from the tendency of proton pairs and neutron pairs to
occur. An even number of particles is more stable than an odd number.
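To get a feel for the formula, the liquid drop model can be evaluated directly. The coefficients below are typical textbook-scale values chosen purely for illustration, not the ones obtained from the fit we set up later:

```python
import numpy as np

def liquid_drop(N, Z, a1=15.5, a2=17.2, a3=0.70, a4=23.0):
    # BE(N, Z) = a1*A - a2*A^(2/3) - a3*Z^2/A^(1/3) - a4*(N - Z)^2/A  (MeV)
    A = N + Z
    return (a1 * A - a2 * A**(2.0/3.0)
            - a3 * Z**2 / A**(1.0/3.0)
            - a4 * (N - Z)**2 / A)

# Binding energy per nucleon of iron-56 (N = 30, Z = 26)
print(liquid_drop(30, 26) / 56)
```

With these illustrative coefficients the binding energy per nucleon of $^{56}$Fe comes out in the $8$-$9$ MeV range, which is the right ballpark for nuclei near the peak of the binding-energy curve.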
Organizing our data
Let us start with reading and organizing our data.
We start with the compilation of masses and binding energies from 2016.
After having downloaded this file to our own computer, we are now ready to read the file and start structuring our data.
We start with preparing folders for storing our calculations and the data file over masses and binding energies. We import also various modules that we will find useful in order to present various Machine Learning methods. Here we focus mainly on the functionality of scikit-learn.
End of explanation
from pylab import plt, mpl
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
def MakePlot(x,y, styles, labels, axlabels):
plt.figure(figsize=(10,6))
for i in range(len(x)):
plt.plot(x[i], y[i], styles[i], label = labels[i])
plt.xlabel(axlabels[0])
plt.ylabel(axlabels[1])
plt.legend(loc=0)
Explanation: Before we proceed, we define also a function for making our plots. You can obviously avoid this and simply set up various matplotlib commands every time you need them. You may however find it convenient to collect all such commands in one function and simply call this function.
End of explanation
This is taken from the data file of the mass 2016 evaluation.
All files are 3436 lines long with 124 character per line.
Headers are 39 lines long.
col 1 : Fortran character control: 1 = page feed 0 = line feed
format : a1,i3,i5,i5,i5,1x,a3,a4,1x,f13.5,f11.5,f11.3,f9.3,1x,a2,f11.3,f9.3,1x,i3,1x,f12.5,f11.5
These formats are reflected in the pandas widths variable below, see the statement
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
Pandas has also a variable header, with length 39 in this case.
Explanation: Our next step is to read the data on experimental binding energies and
reorganize them as functions of the mass number $A$, the number of
protons $Z$ and neutrons $N$ using pandas. Before we do this it is
always useful (unless you have a binary file or other types of compressed
data) to actually open the file and simply take a look at it!
In particular, the program that outputs the final nuclear masses is written in Fortran with a specific format. It means that we need to figure out the format and which columns contain the data we are interested in. Pandas comes with a function that reads formatted output. After having admired the file, we are now ready to start massaging it with pandas. The file begins with some basic format information.
End of explanation
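To see how pd.read_fwf works on fixed-width records before tackling the real file, here is a tiny self-contained example with a made-up two-line layout (the real data file of course has different widths and a 39-line header):

```python
import io
import pandas as pd

# Two made-up fixed-width records: N, Z, A, element, mass excess (keV)
raw = ("  1  0  1  n  8071.3\n"
       "  0  1  1  H  7289.0\n")
df = pd.read_fwf(io.StringIO(raw), widths=(3, 3, 3, 3, 8),
                 names=("N", "Z", "A", "Element", "MassExcess"))
print(df)
```

Each entry in widths gives the number of characters belonging to that column, and pandas strips the padding whitespace for us, exactly the mechanism we rely on below.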
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
Explanation: The data we are interested in are in columns 2, 3, 4 and 11, giving us
the number of neutrons, protons, mass numbers and binding energies,
respectively. We add also for the sake of completeness the element name. The data are in fixed-width formatted lines and we will
convert them into the pandas DataFrame structure.
End of explanation
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
print(Masses)
Explanation: We have now read in the data, grouped them according to the variables we are interested in.
We see how easy it is to reorganize the data using pandas. If we
were to do these operations in C/C++ or Fortran, we would have had to
write various functions/subroutines which perform the above
reorganizations for us. Having reorganized the data, we can now start
to make some simple fits using both the functionalities in numpy and
Scikit-Learn afterwards.
Now we define five variables which contain
the number of nucleons $A$, the number of protons $Z$ and the number of neutrons $N$, the element name and finally the energies themselves.
End of explanation
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
Explanation: The next step, and we will define this mathematically later, is to set up the so-called design matrix. We will throughout call this matrix $\boldsymbol{X}$.
It has dimensionality $n\times p$, where $n$ is the number of data points and $p$ is the number of so-called predictors. In our case here they are given by the number of polynomial terms in $A$ we wish to include in the fit.
End of explanation
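Given such a design matrix, the least-squares coefficients can also be obtained directly with numpy before reaching for scikit-learn. This sketch uses synthetic "energies" built from known coefficients, so the answer can be checked:

```python
import numpy as np

rng = np.random.default_rng(2021)
A = np.sort(rng.uniform(10.0, 250.0, size=100))
# synthetic binding energies: only the A and A^(2/3) terms are non-zero
y = 8.0 * A - 0.5 * A**(2.0/3.0)
X = np.column_stack([np.ones_like(A), A, A**(2.0/3.0),
                     A**(-1.0/3.0), A**(-1.0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # close to [0, 8, -0.5, 0, 0]
```

Because the synthetic data contain no noise, the least-squares solution recovers the coefficients essentially exactly; with real data the recovered values of course carry uncertainty.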
clf = skl.LinearRegression().fit(X, Energies)
fity = clf.predict(X)
Explanation: With scikit-learn we are now ready to use linear regression and fit our data.
End of explanation
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, fity))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, fity))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, fity))
print(clf.coef_, clf.intercept_)
Masses['Eapprox'] = fity
# Generate a plot comparing the experimental with the fitted values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016")
plt.show()
Explanation: Pretty simple!
Now we can print measures of how our fit is doing, the coefficients from the fits and plot the final fit together with our data.
End of explanation
#Decision Tree Regression
from sklearn.tree import DecisionTreeRegressor
regr_1=DecisionTreeRegressor(max_depth=5)
regr_2=DecisionTreeRegressor(max_depth=7)
regr_3=DecisionTreeRegressor(max_depth=9)
regr_1.fit(X, Energies)
regr_2.fit(X, Energies)
regr_3.fit(X, Energies)
y_1 = regr_1.predict(X)
y_2 = regr_2.predict(X)
y_3=regr_3.predict(X)
Masses['Eapprox'] = y_3
# Plot the results
plt.figure()
plt.plot(A, Energies, color="blue", label="Data", linewidth=2)
plt.plot(A, y_1, color="red", label="max_depth=5", linewidth=2)
plt.plot(A, y_2, color="green", label="max_depth=7", linewidth=2)
plt.plot(A, y_3, color="m", label="max_depth=9", linewidth=2)
plt.xlabel("$A$")
plt.ylabel("$E$[MeV]")
plt.title("Decision Tree Regression")
plt.legend()
save_fig("Masses2016Trees")
plt.show()
print(Masses)
print(np.mean( (Energies-y_1)**2))
Explanation: Seeing the wood for the trees
As a teaser, let us now see how we can do this with decision trees using scikit-learn. Later we will switch to so-called random forests!
End of explanation
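As a preview of that switch, the decision-tree cell below has a near drop-in random-forest analogue. This sketch runs on its own synthetic cubic data rather than the binding energies, so it is self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(0.0, 3.0, 200))[:, None]
y = X.ravel() ** 3 + rng.normal(0.0, 0.1, 200)

# An ensemble of trees averages away much of the single-tree variance
forest = RandomForestRegressor(n_estimators=100, max_depth=6, random_state=0)
forest.fit(X, y)
print(forest.score(X, y))  # training R^2, close to 1
```

The interface is identical to DecisionTreeRegressor, which is why swapping one for the other later requires almost no changes.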
from sklearn.neural_network import MLPRegressor
import seaborn as sns
X_train = X
Y_train = Energies
n_hidden_neurons = 100
epochs = 100
# grids of learning rates (eta) and regularization parameters (lambda) to scan
eta_vals = np.logspace(-5, 1, 7)
lmbd_vals = np.logspace(-5, 1, 7)
# store the models for later use
DNN_scikit = np.zeros((len(eta_vals), len(lmbd_vals)), dtype=object)
train_accuracy = np.zeros((len(eta_vals), len(lmbd_vals)))
sns.set()
for i, eta in enumerate(eta_vals):
for j, lmbd in enumerate(lmbd_vals):
dnn = MLPRegressor(hidden_layer_sizes=(n_hidden_neurons), activation='logistic',
alpha=lmbd, learning_rate_init=eta, max_iter=epochs)
dnn.fit(X_train, Y_train)
DNN_scikit[i][j] = dnn
train_accuracy[i][j] = dnn.score(X_train, Y_train)
fig, ax = plt.subplots(figsize = (10, 10))
sns.heatmap(train_accuracy, annot=True, ax=ax, cmap="viridis")
ax.set_title("Training Accuracy")
ax.set_ylabel("$\eta$")
ax.set_xlabel("$\lambda$")
plt.show()
Explanation: And what about using neural networks?
The seaborn package allows us to visualize data in an efficient way. Note that we use scikit-learn's multi-layer perceptron (or feed forward neural network)
functionality.
End of explanation |
9,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Regression
Gaussian Process regression is a non-parametric approach to regression or data fitting that assumes that observed data points $y$ are generated by some unknown latent function $f(x)$. The latent function $f(x)$ is modeled as being multivariate normally distributed (a Gaussian Process), and is commonly denoted
\begin{equation}
f(x) \sim \mathcal{GP}(m(x;\theta), \, k(x, x';\theta)) \,.
\end{equation}
$m(x ; \theta)$ is the mean function, and $k(x, x' ;\theta)$ is the covariance function. In many applications, the mean function is set to $0$ because the data can still be fit well using just covariances.
$\theta$ is the set of hyperparameters for either the mean or covariance function. These are the unknown variables. They are usually found by maximizing the marginal likelihood. This approach is much faster computationally than MCMC, but produces a point estimate, $\theta_{\mathrm{MAP}}$.
The data in the next two examples is generated by a GP with noise that is also gaussian distributed. In sampling notation this is,
\begin{equation}
\begin{aligned}
y & = f(x) + \epsilon \\
f(x) & \sim \mathcal{GP}(0, \, k(x, x'; \theta)) \\
\epsilon & \sim \mathcal{N}(0, \sigma^2) \\
\sigma^2 & \sim \mathrm{Prior} \\
\theta & \sim \mathrm{Prior} \,.
\end{aligned}
\end{equation}
With Theano as a backend, PyMC3 is an excellent environment for developing fully Bayesian Gaussian Process models, particularly when a GP is a component in a larger model. The GP functionality of PyMC3 is meant to be lightweight, highly composable, and have a clear syntax. This example is meant to give an introduction to how to specify a GP in PyMC3.
Step1: Example 1
Step2: Since there isn't much data, there will likely be a lot of uncertainty in the hyperparameter values.
We assign prior distributions that are uniform in log space, suitable for variance-type parameters. Each hyperparameter must at least be constrained to be positive valued by its prior.
None of the covariance function objects have a scaling coefficient built in. This is because random variables, such as s2_f, can be multiplied directly with a covariance function object, gp.cov.ExpQuad.
The last line is the marginal likelihood. Since the observed data $y$ is also assumed to be multivariate normally distributed, the marginal likelihood is also multivariate normal. It is obtained by integrating out $f(x)$ from the product of the data likelihood $p(y \mid f, X)$ and the GP prior $p(f \mid X)$,
\begin{equation}
p(y \mid X) = \int p(y \mid f, X) p(f \mid X) df
\end{equation}
The call in the last line f_cov.K(X) evaluates the covariance function across the inputs X. The result is a matrix. The sum of this matrix and the diagonal noise term are used as the covariance matrix for the marginal likelihood.
Step3: The results show that the hyperparameters were recovered pretty well, but definitely with a high degree of uncertainty. Lets look at the predicted fits and uncertainty next using samples from the full posterior.
Step4: The sample_gp function draws realizations of the GP from the predictive distribution.
Step5: Example 2
Step6: In the plot of the observed data, the periodic component is barely distinguishable by eye. It is plausible that there isn't a periodic component, and the observed data is just the drift component and white noise.
Step7: Lets see if we can infer the correct values of the hyperparameters.
Step8: Some large samples make the histogram of s2_p hard to read. Below is a zoomed in histogram.
Step9: Comparing the histograms of the results to the true values, we can see that the PyMC3's MCMC methods did a good job estimating the true GP hyperparameters. Although the periodic component is faintly apparent in the observed data, the GP model is able to extract it with high accuracy. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cmap
cm = cmap.inferno
import numpy as np
import scipy as sp
import theano
import theano.tensor as tt
import theano.tensor.nlinalg
import sys
sys.path.insert(0, "../../..")
import pymc3 as pm
Explanation: Gaussian Process Regression
Gaussian Process regression is a non-parametric approach to regression or data fitting that assumes that observed data points $y$ are generated by some unknown latent function $f(x)$. The latent function $f(x)$ is modeled as being multivariate normally distributed (a Gaussian Process), and is commonly denoted
\begin{equation}
f(x) \sim \mathcal{GP}(m(x;\theta), \, k(x, x';\theta)) \,.
\end{equation}
$m(x ; \theta)$ is the mean function, and $k(x, x' ;\theta)$ is the covariance function. In many applications, the mean function is set to $0$ because the data can still be fit well using just covariances.
$\theta$ is the set of hyperparameters for either the mean or covariance function. These are the unknown variables. They are usually found by maximizing the marginal likelihood. This approach is much faster computationally than MCMC, but produces a point estimate, $\theta_{\mathrm{MAP}}$.
The data in the next two examples is generated by a GP with noise that is also gaussian distributed. In sampling notation this is,
\begin{equation}
\begin{aligned}
y & = f(x) + \epsilon \\
f(x) & \sim \mathcal{GP}(0, \, k(x, x'; \theta)) \\
\epsilon & \sim \mathcal{N}(0, \sigma^2) \\
\sigma^2 & \sim \mathrm{Prior} \\
\theta & \sim \mathrm{Prior} \,.
\end{aligned}
\end{equation}
With Theano as a backend, PyMC3 is an excellent environment for developing fully Bayesian Gaussian Process models, particularly when a GP is a component in a larger model. The GP functionality of PyMC3 is meant to be lightweight, highly composable, and have a clear syntax. This example is meant to give an introduction to how to specify a GP in PyMC3.
End of explanation
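Before generating data through PyMC3, it can help to see what the ExpQuad covariance computes. This numpy sketch reproduces the standard squared-exponential form; PyMC3's lengthscale parametrization is assumed to match this convention:

```python
import numpy as np

def exp_quad(x, xp, lengthscale=1.0, variance=1.0):
    # k(x, x') = s2 * exp(-(x - x')^2 / (2 * l^2))
    d = x[:, None] - xp[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(0.0, 1.0, 5)
K = exp_quad(x, x, lengthscale=0.3)
print(K.shape, np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))
```

The resulting matrix is symmetric with the signal variance on the diagonal, and nearby inputs (relative to the lengthscale) are strongly correlated while distant ones are nearly independent.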
np.random.seed(20090425)
n = 20
X = np.sort(3*np.random.rand(n))[:,None]
with pm.Model() as model:
# f(x)
l_true = 0.3
s2_f_true = 1.0
cov = s2_f_true * pm.gp.cov.ExpQuad(1, l_true)
# noise, epsilon
s2_n_true = 0.1
K_noise = s2_n_true**2 * tt.eye(n)
K = cov(X) + K_noise
# evaluate the covariance with the given hyperparameters
K = theano.function([], cov(X) + K_noise)()
# generate fake data from GP with white noise (with variance sigma2)
y = np.random.multivariate_normal(np.zeros(n), K)
fig = plt.figure(figsize=(14,5)); ax = fig.add_subplot(111)
ax.plot(X, y, 'ok', ms=10);
ax.set_xlabel("x");
ax.set_ylabel("f(x)");
Explanation: Example 1: Non-Linear Regression
This is an example of a non-linear fit in a situation where there isn't much data. Using optimization to find hyperparameters in this situation will greatly underestimate the amount of uncertainty if using the GP for prediction. In PyMC3 it is easy to be fully Bayesian and use MCMC methods.
We generate 20 data points at random x values between 0 and 3. The true values of the hyperparameters are hardcoded in this temporary model.
End of explanation
Z = np.linspace(0,3,100)[:,None]
with pm.Model() as model:
# priors on the covariance function hyperparameters
l = pm.Uniform('l', 0, 10)
# uninformative prior on the function variance
log_s2_f = pm.Uniform('log_s2_f', lower=-10, upper=5)
s2_f = pm.Deterministic('s2_f', tt.exp(log_s2_f))
# uninformative prior on the noise variance
log_s2_n = pm.Uniform('log_s2_n', lower=-10, upper=5)
s2_n = pm.Deterministic('s2_n', tt.exp(log_s2_n))
# covariance functions for the function f and the noise
f_cov = s2_f * pm.gp.cov.ExpQuad(1, l)
y_obs = pm.gp.GP('y_obs', cov_func=f_cov, sigma=s2_n, observed={'X':X, 'Y':y})
with model:
trace = pm.sample(2000)
Explanation: Since there isn't much data, there will likely be a lot of uncertainty in the hyperparameter values.
We assign prior distributions that are uniform in log space, suitable for variance-type parameters. Each hyperparameter must at least be constrained to be positive valued by its prior.
None of the covariance function objects have a scaling coefficient built in. This is because random variables, such as s2_f, can be multiplied directly with a covariance function object, gp.cov.ExpQuad.
The last line is the marginal likelihood. Since the observed data $y$ is also assumed to be multivariate normally distributed, the marginal likelihood is also multivariate normal. It is obtained by integrating out $f(x)$ from the product of the data likelihood $p(y \mid f, X)$ and the GP prior $p(f \mid X)$,
\begin{equation}
p(y \mid X) = \int p(y \mid f, X) p(f \mid X) df
\end{equation}
The call in the last line f_cov.K(X) evaluates the covariance function across the inputs X. The result is a matrix. The sum of this matrix and the diagonal noise term are used as the covariance matrix for the marginal likelihood.
End of explanation
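The multivariate-normal marginal likelihood that the model above evaluates internally can be sketched in a few lines of numpy, using a Cholesky factor for numerical stability. The small K and y here are made up:

```python
import numpy as np

def log_marginal_likelihood(y, K):
    # log N(y | 0, K): -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(y) * np.log(2.0 * np.pi))

K = np.array([[1.0, 0.5],
              [0.5, 1.0]])
y = np.array([0.2, -0.1])
print(log_marginal_likelihood(y, K))
```

Maximizing this quantity over the hyperparameters that enter K is exactly the point-estimate alternative to the MCMC sampling used in this notebook.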
pm.traceplot(trace[1000:], varnames=['l', 's2_f', 's2_n'],
lines={"l": l_true,
"s2_f": s2_f_true,
"s2_n": s2_n_true});
Explanation: The results show that the hyperparameters were recovered pretty well, but definitely with a high degree of uncertainty. Lets look at the predicted fits and uncertainty next using samples from the full posterior.
End of explanation
with model:
gp_samples = pm.gp.sample_gp(trace[1000:], y_obs, Z, samples=50, random_seed=42)
fig, ax = plt.subplots(figsize=(14,5))
[ax.plot(Z, x, color=cm(0.3), alpha=0.3) for x in gp_samples]
# overlay the observed data
ax.plot(X, y, 'ok', ms=10);
ax.set_xlabel("x");
ax.set_ylabel("f(x)");
ax.set_title("Posterior predictive distribution");
Explanation: The sample_gp function draws realizations of the GP from the predictive distribution.
End of explanation
np.random.seed(200)
n = 150
X = np.sort(40*np.random.rand(n))[:,None]
# define gp, true parameter values
with pm.Model() as model:
l_per_true = 2
cov_per = pm.gp.cov.Cosine(1, l_per_true)
l_drift_true = 4
cov_drift = pm.gp.cov.Matern52(1, l_drift_true)
s2_p_true = 0.3
s2_d_true = 1.5
s2_w_true = 0.3
periodic_cov = s2_p_true * cov_per
drift_cov = s2_d_true * cov_drift
signal_cov = periodic_cov + drift_cov
noise_cov = s2_w_true**2 * tt.eye(n)
K = theano.function([], signal_cov(X, X) + noise_cov)()
y = np.random.multivariate_normal(np.zeros(n), K)
Explanation: Example 2: A periodic signal in non-white noise
This time let's pretend we have some more complex data that we would like to decompose. For the sake of example, we simulate some data points from a function that
1. has a fainter periodic component
2. has a lower frequency drift away from periodicity
3. has additive white noise
As before, we generate the data using a throwaway PyMC3 model. We consider the sum of the drift term and the white noise to be "noise", while the periodic component is "signal". In GP regression, the treatment of signal and noise covariance functions is identical, so the distinction between signal and noise is somewhat arbitrary.
End of explanation
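The Cosine covariance used for the periodic component can be pictured with a small numpy sketch. The simple form assumed here is one common convention; the exact lengthscale scaling in pm.gp.cov.Cosine may differ:

```python
import numpy as np

def cosine_cov(x, xp, period=2.0, variance=1.0):
    # periodic covariance: k(x, x') = s2 * cos(2*pi*(x - x') / period)
    d = x[:, None] - xp[None, :]
    return variance * np.cos(2.0 * np.pi * d / period)

x = np.linspace(0.0, 4.0, 9)   # spacing 0.5
K = cosine_cov(x, x, period=2.0)
print(np.round(K[0, :5], 3))   # oscillates: 1, 0, -1, 0, 1
```

Points separated by a full period are perfectly correlated while points half a period apart are perfectly anti-correlated, which is what lets the GP pick a periodic signal out of the drift and noise.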
fig = plt.figure(figsize=(12,5)); ax = fig.add_subplot(111)
ax.plot(X, y, '--', color=cm(0.4))
ax.plot(X, y, 'o', color="k", ms=10);
ax.set_xlabel("x");
ax.set_ylabel("f(x)");
Explanation: In the plot of the observed data, the periodic component is barely distinguishable by eye. It is plausible that there isn't a periodic component, and the observed data is just the drift component and white noise.
End of explanation
with pm.Model() as model:
# prior for periodic lengthscale, or frequency
l_per = pm.Uniform('l_per', lower=1e-5, upper=10)
# prior for the drift lengthscale hyperparameter
l_drift = pm.Uniform('l_drift', lower=1e-5, upper=10)
# uninformative prior on the periodic amplitude
log_s2_p = pm.Uniform('log_s2_p', lower=-10, upper=5)
s2_p = pm.Deterministic('s2_p', tt.exp(log_s2_p))
# uninformative prior on the drift amplitude
log_s2_d = pm.Uniform('log_s2_d', lower=-10, upper=5)
s2_d = pm.Deterministic('s2_d', tt.exp(log_s2_d))
# uninformative prior on the white noise variance
log_s2_w = pm.Uniform('log_s2_w', lower=-10, upper=5)
s2_w = pm.Deterministic('s2_w', tt.exp(log_s2_w))
# the periodic "signal" covariance
signal_cov = s2_p * pm.gp.cov.Cosine(1, l_per)
# the "noise" covariance
drift_cov = s2_d * pm.gp.cov.Matern52(1, l_drift)
y_obs = pm.gp.GP('y_obs', cov_func=signal_cov + drift_cov, sigma=s2_w, observed={'X':X, 'Y':y})
with model:
trace = pm.sample(2000, step=pm.NUTS(integrator="two-stage"), init=None)
pm.traceplot(trace[1000:], varnames=['l_per', 'l_drift', 's2_d', 's2_p', 's2_w'],
lines={"l_per": l_per_true,
"l_drift": l_drift_true,
"s2_d": s2_d_true,
"s2_p": s2_p_true,
"s2_w": s2_w_true});
Explanation: Lets see if we can infer the correct values of the hyperparameters.
End of explanation
fig = plt.figure(figsize=(12,6)); ax = fig.add_subplot(111)
ax.hist(trace['s2_p', 1000:], 100, range=(0,4), color=cm(0.3), ec='none');
ax.plot([0.3, 0.3], [0, ax.get_ybound()[1]], "k", lw=2);
ax.set_title("Histogram of s2_p");
ax.set_ylabel("Number of samples");
ax.set_xlabel("s2_p");
Explanation: Some large samples make the histogram of s2_p hard to read. Below is a zoomed in histogram.
End of explanation
Z = np.linspace(0, 40, 100).reshape(-1, 1)
with model:
gp_samples = pm.gp.sample_gp(trace[1000:], y_obs, Z, samples=50, random_seed=42, progressbar=False)
fig, ax = plt.subplots(figsize=(14,5))
[ax.plot(Z, x, color=cm(0.3), alpha=0.3) for x in gp_samples]
# overlay the observed data
ax.plot(X, y, 'o', color="k", ms=10);
ax.set_xlabel("x");
ax.set_ylabel("f(x)");
ax.set_title("Posterior predictive distribution");
Explanation: Comparing the histograms of the results to the true values, we can see that the PyMC3's MCMC methods did a good job estimating the true GP hyperparameters. Although the periodic component is faintly apparent in the observed data, the GP model is able to extract it with high accuracy.
End of explanation |
9,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 26
Step1: Define a log likelihood function for the vertex degree distribution
Step2: Define a vectorized log-factorial function, since it doesn't seem to be a builtin in python
Step3: Define a log likelihood function for a single vertex, based on Theorem 1 in the 1992 article in Machine Learning by Cooper & Herskovits. Note
Step4: Define a log-posterior-probability function for the whole graph, using the per-vertex likelihood and the network structure prior
Step5: Define an adjacency matrix for the "real" network shown in Fig. 3A of the Sachs et al. article (not including the "missed" edges which are the dotted arcs).
Step6: Make an igraph network out of the adjacency matrix that you just created, and print the network summary and plot the network.
Step7: Compute the log posterior probability of the real network from Sachs et al. Figure 3A.
Step8: Generate 10000 random rewirings of the network -- eliminating any rewired digraphs that contain cycles -- and for each randomly rewired DAG, histogram the log ratio of the "real" network's posterior probability to the posterior probabilities of each of the random networks. Does it appear that the published network is pretty close to the maximum a posteriori (MAP) estimate? | Python Code:
import pandas
g_discret_data = pandas.read_csv("shared/sachs_data_discretized.txt",
sep="\t")
g_discret_data.head(n=6)
Explanation: Class 26: Bayesian Networks
Infer a Bayesian network from a matrix of discretized phospho-flow cytometry data.
Based on supplementary data from the 2005 article by Karen Sachs et al. (Science v308, 2005).
In this class exercise, we will use the fundamental theorem for the likelihood of a Bayesian network structure for categorical variables, in order to score the posterior probability of the network shown in the Sachs et al. article (Figure 3A) vs. the phospho-flow cytometry data that the same authors provided in their supplementary data. The phospho-flow cytometry data have been already discretized for you (see "class26_bayesnet_dataprep_R.ipynb"). We will need to implement a single-vertex log-likelihood function using Theorem 1 from the article by Cooper & Herskovits in Machine Learning (volume 9, pages 309-347, 1992).
Load the tab-delimited data file of discretized phosphoprotein expression data (12 columns; first 11 columns are the expression levels -- "low", "medium", "high"; last column is the experiment identifier for the row; there are nine experiments). Print out the first six lines of the data frame, so you can see what it looks like.
End of explanation
import numpy
def log_prob_network_prior(network):
degarray = numpy.sum(network, axis=0) + numpy.sum(network, axis=1)
degarray_float = numpy.zeros(len(degarray))
degarray_float[:] = degarray
return numpy.sum(numpy.log(numpy.power(1.0 + degarray_float, -2)))
Explanation: Define a log likelihood function for the vertex degree distribution
End of explanation
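Here is a standalone restatement of the prior above, checked on two tiny illustrative graphs. Vertices with higher total degree are penalized, so sparser DAGs score higher:

```python
import numpy as np

def log_prob_network_prior(network):
    # each vertex contributes log((1 + in-degree + out-degree)^(-2))
    deg = np.sum(network, axis=0) + np.sum(network, axis=1)
    return np.sum(np.log(np.power(1.0 + deg.astype(float), -2)))

empty = np.zeros((3, 3), dtype=int)
chain = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]])
print(log_prob_network_prior(empty), log_prob_network_prior(chain))
```

The empty graph scores exactly zero (every degree is zero), while the three-vertex chain pays a penalty for each of its two edges.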
import scipy.special
def lfactorial(n):
return scipy.special.gammaln(n+1)
Explanation: Define a vectorized log-factorial function, since Python doesn't provide one as a built-in:
End of explanation
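A quick sanity check of the log-factorial helper: gammaln(n+1) should agree with log(n!), and it also accepts numpy arrays, which is why we use it instead of math.factorial:

```python
import math
import numpy as np
import scipy.special

def lfactorial(n):
    # log(n!) via the log-gamma function; vectorizes over numpy arrays
    return scipy.special.gammaln(n + 1)

print(lfactorial(5), math.log(math.factorial(5)))  # both ~4.7875
print(lfactorial(np.array([0, 1, 2, 3])))          # [0, 0, log 2, log 6]
```

Working in log space this way avoids the overflow that plain factorials would hit for the thousands of cell counts used below.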
import math
def log_likelihood_network_vertex(network, vertex, discret_data):
# network is a NxN numpy matrix (N = 11 vertices)
# vertex is an integer
# discret_data is "g_discret_data" (N = 11 vertices, M = 7466 samples)
parents_vertex = numpy.where(network[:,vertex]==1)[0].tolist()
all_vertices = parents_vertex.copy()
all_vertices.append(vertex)
df1 = discret_data[all_vertices]
# change the name of the vertex column to "vertex"
df1_column_names = df1.columns.tolist()
df1_column_names[len(parents_vertex)] = "vertex"
df1.columns = df1_column_names
# count the data, grouped by all columns (parents & vertex)
df1 = df1.groupby(df1.columns.values.tolist()).size().reset_index(name='count')
# make a new column called "count factorial" that is the log of the factorial of the count
df1["countfactorial"] = lfactorial(df1["count"].values)
# drop the "count" column, as we no longer need it
    df1 = df1.drop("count", axis=1)
if len(parents_vertex) > 0:
# sum up log-factorial-counts values for all possible states of "vertex" and its parent vertices,
# for each possible combination of parent vertices
nijkdf = df1.groupby(by=df1.columns[list(range(0,len(parents_vertex)))].tolist(),
as_index=False).sum()
# count number of cells with each possible combination of states for its parent vertices
df3 = discret_data[parents_vertex]
nijdf = df3.groupby(df3.columns.values.tolist(), as_index=False).size().reset_index()
nijdf_col_names = nijdf.columns.values
nijdf_col_names[len(nijdf_col_names)-1] = "count"
nijdf.columns = nijdf_col_names
# compute the log factorial of the counts
nijdf["countfactorial"] = math.log(2) - lfactorial(2 + nijdf["count"])
# drop the "count" column as we no longer need it
        nijdf = nijdf.drop("count", axis=1)
# merge the two log-factorial-count values from nijdf and nijkdf, into two columns in a single dataframe
nmerge = nijdf.merge(nijkdf, how="outer",
on=nijkdf.columns[0:(len(nijkdf.columns)-1)].values.tolist(),
copy=False)
# sum the log-factorial-count values from nijdf and nijkdf
llh_res = numpy.sum(nmerge["countfactorial_x"]+nmerge["countfactorial_y"])
else:
# we handle the case of no parent vertices specially, to simplify the code
M = discret_data.shape[0]
llh_res = math.log(2) - lfactorial(M + 2) + numpy.sum(df1["countfactorial"].values)
return llh_res
Explanation: Define a log likelihood function for a single vertex, based on Theorem 1 in the 1992 article in Machine Learning by Cooper & Herskovits. Note: we are using igraph's adjacency matrix format which is the transpose of Newman's adjacency matrix definition!
End of explanation
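To see where the math.log(2) - lfactorial(2 + count) terms in the function above come from: with three states (r = 3), Theorem 1's per-family factor is (r-1)! * prod_k N_k! / (N + r - 1)!, and (r-1)! = 2. A self-contained check for one parentless vertex:

```python
import math

from scipy.special import gammaln

def lfactorial(n):
    return gammaln(n + 1)

counts = [2, 1, 1]  # observations of the 3 states (low/medium/high)
N = sum(counts)
# log[ (r-1)! * prod_k N_k! / (N + r - 1)! ] with r = 3
llh = math.log(2) - lfactorial(N + 2) + sum(lfactorial(c) for c in counts)
print(llh)  # log(2 * 2!*1!*1! / 6!) = log(1/180) ~= -5.1930
```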
def log_posterior_prob_network(network, discret_data):
Nvert = network.shape[1]
lpvert_values = []
for i in range(0, Nvert):
lpvert_value = log_likelihood_network_vertex(network, i, discret_data)
lpvert_values.append(lpvert_value)
return log_prob_network_prior(network) + numpy.sum(numpy.array(lpvert_values))
Explanation: Define a log-posterior-probability function for the whole graph, using the per-vertex likelihood and the network structure prior:
End of explanation
real_network_adj = numpy.zeros(shape=[11,11])
molec_names = g_discret_data[list(range(0,11))].columns.values
real_network_adj = pandas.DataFrame(real_network_adj, index=molec_names, columns=molec_names)
# .loc[row, col] is equivalent to the chained df[col][row] pattern but assigns reliably
real_network_adj.loc["PKC", "PKA"] = 1
real_network_adj.loc["PKC", "praf"] = 1
real_network_adj.loc["PKC", "pjnk"] = 1
real_network_adj.loc["PKC", "P38"] = 1
real_network_adj.loc["PKA", "pjnk"] = 1
real_network_adj.loc["PKA", "P38"] = 1
real_network_adj.loc["PKA", "praf"] = 1
real_network_adj.loc["PKA", "pmek"] = 1
real_network_adj.loc["PKA", "p44.42"] = 1 # p44.42 = ERK
real_network_adj.loc["PKA", "pakts473"] = 1
real_network_adj.loc["p44.42", "pakts473"] = 1
real_network_adj.loc["PKC", "pmek"] = 1
real_network_adj.loc["praf", "pmek"] = 1
real_network_adj.loc["pmek", "p44.42"] = 1
real_network_adj.loc["plcg", "PIP2"] = 1
real_network_adj.loc["plcg", "PIP3"] = 1
real_network_adj.loc["PIP3", "PIP2"] = 1
print(real_network_adj)
real_network_adj = real_network_adj.to_numpy()
Explanation: Define an adjacency matrix for the "real" network shown in Fig. 3A of the Sachs et al. article (not including the "missed" edges which are the dotted arcs).
End of explanation
import igraph
real_network_igraph = igraph.Graph.Adjacency(real_network_adj.tolist())
real_network_igraph.summary()
Explanation: Make an igraph network out of the adjacency matrix that you just created, and print the network summary and plot the network.
End of explanation
lp_real = log_posterior_prob_network(real_network_adj, g_discret_data)
print(lp_real)
Explanation: Compute the log posterior probability of the real network from Sachs et al. Figure 3A.
End of explanation
import itertools
lprobs_rand = []
for _ in itertools.repeat(None, 10000):
graphcopy = real_network_igraph.copy()
graphcopy.rewire_edges(prob=1, loops=False, multiple=False)
if graphcopy.is_dag():
lprobs_rand.append(log_posterior_prob_network(numpy.array(graphcopy.get_adjacency().data),
g_discret_data))
import matplotlib.pyplot
matplotlib.pyplot.hist(lp_real - numpy.array(lprobs_rand))
matplotlib.pyplot.xlabel("log(P(G_real|D)/P(G_rand|D))")
matplotlib.pyplot.ylabel("Frequency")
matplotlib.pyplot.show()
Explanation: Generate 10000 random rewirings of the network -- eliminating any rewired digraphs that contain cycles -- and for each randomly rewired DAG, histogram the log ratio of the "real" network's posterior probability to the posterior probabilities of each of the random networks. Does it appear that the published network is pretty close to the maximum a posteriori (MAP) estimate?
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Denoising Autoencoder
Sticking with the MNIST dataset, let's add noise to our data and see if we can define and train an autoencoder to de-noise the images.
<img src='notebook_ims/autoencoder_denoise.png' width=70%/>
Let's get started by importing our libraries and getting the dataset.
Step1: Visualize the Data
Step2: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1.
We'll use noisy images as input and the original, clean images as targets.
Below is an example of some of the noisy images I generated and the associated, denoised images.
<img src='notebook_ims/denoising.png' />
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here; layers with more feature maps. You might also consider adding additional layers. I suggest starting with a depth of 32 for the convolutional layers in the encoder, and the same depths going backward through the decoder.
TODO
Step3: Training
We are only concerned with the training images, which we can get from the train_loader.
In this case, we are actually adding some noise to these images and we'll feed these noisy_imgs to our model. The model will produce reconstructed images based on the noisy input. But, we want it to produce normal un-noisy images, and so, when we calculate the loss, we will still compare the reconstructed outputs to the original images!
Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows
Step4: Checking out the results
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
Python Code:
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
Explanation: Denoising Autoencoder
Sticking with the MNIST dataset, let's add noise to our data and see if we can define and train an autoencoder to de-noise the images.
<img src='notebook_ims/autoencoder_denoise.png' width=70%/>
Let's get started by importing our libraries and getting the dataset.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
Explanation: Visualize the Data
End of explanation
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvDenoiser(nn.Module):
def __init__(self):
super(ConvDenoiser, self).__init__()
## encoder layers ##
## decoder layers ##
## a kernel of 2 and a stride of 2 will increase the spatial dims by 2
def forward(self, x):
## encode ##
## decode ##
return x
# initialize the NN
model = ConvDenoiser()
print(model)
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1.
We'll use noisy images as input and the original, clean images as targets.
Below is an example of some of the noisy images I generated and the associated, denoised images.
<img src='notebook_ims/denoising.png' />
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here; layers with more feature maps. You might also consider adding additional layers. I suggest starting with a depth of 32 for the convolutional layers in the encoder, and the same depths going backward through the decoder.
TODO: Build the network for the denoising autoencoder. Add deeper and/or additional layers compared to the model above.
End of explanation
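One possible completion of the TODO above (a sketch, not the only valid architecture — the layer depths follow the 32-feature-map suggestion from the text):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvDenoiser(nn.Module):
    def __init__(self):
        super(ConvDenoiser, self).__init__()
        ## encoder layers ##
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 16, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)  # halves the spatial dims
        ## decoder layers ##
        # a kernel of 2 and a stride of 2 will increase the spatial dims by 2
        self.t_conv1 = nn.ConvTranspose2d(16, 32, 2, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(32, 1, 2, stride=2)

    def forward(self, x):
        ## encode ##
        x = self.pool(F.relu(self.conv1(x)))   # -> 32 x 14 x 14
        x = self.pool(F.relu(self.conv2(x)))   # -> 16 x 7 x 7
        ## decode ##
        x = F.relu(self.t_conv1(x))            # -> 32 x 14 x 14
        x = torch.sigmoid(self.t_conv2(x))     # -> 1 x 28 x 28, values in (0, 1)
        return x

model = ConvDenoiser()
out = model(torch.zeros(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 1, 28, 28])
```

The final sigmoid keeps the output in the same (0, 1) range as the normalized input pixels, which is what the MSE target expects.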
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 20
# for adding noise to images
noise_factor=0.5
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
## add random noise to the input images
noisy_imgs = images + noise_factor * torch.randn(*images.shape)
# Clip the images to be between 0 and 1
        noisy_imgs = torch.clamp(noisy_imgs, 0., 1.)
# clear the gradients of all optimized variables
optimizer.zero_grad()
## forward pass: compute predicted outputs by passing *noisy* images to the model
outputs = model(noisy_imgs)
# calculate the loss
# the "target" is still the original, not-noisy images
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
train_loss = train_loss/len(train_loader)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
Explanation: Training
We are only concerned with the training images, which we can get from the train_loader.
In this case, we are actually adding some noise to these images and we'll feed these noisy_imgs to our model. The model will produce reconstructed images based on the noisy input. But, we want it to produce normal un-noisy images, and so, when we calculate the loss, we will still compare the reconstructed outputs to the original images!
Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss. And compare output images and input images as follows:
loss = criterion(outputs, images)
End of explanation
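The pixel-wise mean squared error that MSELoss computes (with its default 'mean' reduction) can be sketched in plain NumPy as a sanity check:

```python
import numpy as np

reconstructed = np.array([0.2, 0.8, 0.5])
target = np.array([0.0, 1.0, 0.5])

mse = np.mean((reconstructed - target) ** 2)  # average of squared differences
print(mse)  # (0.04 + 0.04 + 0.0) / 3 ~= 0.0267
```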
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# add noise to the test images
noisy_imgs = images + noise_factor * torch.randn(*images.shape)
noisy_imgs = torch.clamp(noisy_imgs, 0., 1.)
# get sample outputs
output = model(noisy_imgs)
# prep images for display
noisy_imgs = noisy_imgs.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for imgs, row in zip([noisy_imgs, output], axes):
    for img, ax in zip(imgs, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
Explanation: Checking out the results
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Template matching
Let's first download an image and a template to search for. The template is a smaller part of the original image.
Step2: Both the image used for processing and the template are converted to grayscale images to boost efficiency.
Step3: Change the code above to plot grayscale images.
The OpenCV package has a function for template matching, so let's call it and display the result. The matchTemplate function supports six different matching formulas. With TM_CCOEFF_NORMED it calculates a normalized correlation coefficient, where a perfect match gives the value 1.
Step4: Change the code above and try other methods, TM_CCORR_NORMED, TM_SQDIFF_NORMED, for instance.
Image transformation
If the pattern is rotated or scaled, the pattern might not match the image. This issue can be fixed by using homology matrix. For more details see
Step5: Let's try to find the template on the rotated image.
Step7: Let's transform the image back to the perpendicular plan.
Step8: Recognition of ArUco markers
"An ArUco marker is a synthetic square marker composed by a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix. For instance a marker size of 4x4 is composed by 16 bits." (from OpenCV documentation)
There is a contrib package in OpenCV to detect ArUco markers called aruco.
Let's find six ArUco markers on a simple image.
Step9: Calibration
Low-cost cameras might have significant distortions (either radial or tangential). Therefore, we have to calibrate cameras before using them in deformation and movement analysis.
Radial distortion
$$ x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
$$ y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
Tangential distortion
$$ x' = x + (2 p_1 x y + p_2 (r^2 + 2 x^2)) $$
$$ y' = y + (p_1 (r^2+2 y^2) + 2 p_2 x y) $$
Camera matrix
<table>
<tr><td>f<sub>x</sub></td><td>0</td><td>c<sub>x</sub></td></tr>
<tr><td>0</td><td>f<sub>y</sub></td><td>c<sub>y</sub></td></tr>
<tr><td>0</td><td>0</td><td>1</td></tr></table>
Distortion parameters are ($ k_1, k_2, k_3, p_1, p_2 $). Camera matrix contains focal length ($ f_x, f_y $) and optical centers ($ c_x, c_y $).
For the calibration we need a chessboard like figure and more than ten photos from different directions.
Let's download the images for calibration.
Step10: The first 6 images for calibration
Step11: Using the ArUco calibration, let's find the camera matrix and the associated radial and tangential distortion parameters.
Step12: Plot undistorted image and the one corrected by calibration parameters.
Step13: Complex example
We have a video of a moving object with an ArUco marker. Let's process the video frame by frame and make a plot of movements. During the process images are corrected by the calibration data.
Click here to watch video.
Python Code:
import glob # to extend file name pattern to list
import cv2 # OpenCV for image processing
from cv2 import aruco # to find ArUco markers
import numpy as np # for matrices
import matplotlib.pyplot as plt # to show images
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/img_def.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Movement and deformation analysis from images
Principles
Images/videos are made by a stable camera; in other words, the camera does not move during the observations
Calibrated camera/system is necessary
Image resolution is enhanced by geodetic telescope
Methods
Template matching
Pattern recognition
Template matching characteristics
Pros
There is always a match
Simple algorithm
Special marker is not necessary
Cons
The chance of false match is higher
No or minimal rotation
No or minimal scale change
Pattern recognition characteristics
Pros
Marker can rotate
Marker scale can change
Normal of the marker can be estimated
Cons
Special markers have to be fitted to the target
More sensitive to light conditions
First off, let's import the necessary Python packages.
End of explanation
!wget -q -O sample_data/monalisa.jpg https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/monalisa.jpg
!wget -q -O sample_data/mona_temp4.png https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/mona_temp4.png
Explanation: Template matching
Let's first download an image and a template to search for. The template is a smaller part of the original image.
End of explanation
img = cv2.imread('sample_data/monalisa.jpg') # load image
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert image to grayscale
templ = cv2.imread('sample_data/mona_temp4.png') # load template
templ_gray = cv2.cvtColor(templ, cv2.COLOR_BGR2GRAY) # convert template to grayscale
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5)) # show image and template
ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
ax1.set_title('image to scan')
ax2.imshow(cv2.cvtColor(templ, cv2.COLOR_BGR2RGB)) # BGR vs. RGB
ax2.set_title('template to find')
ax2.set_xlim(ax1.get_xlim()) # set same scale
ax2.set_ylim(ax1.get_ylim())
print(f'image sizes: {img_gray.shape} template sizes: {templ_gray.shape}')
Explanation: Both the image used for processing and the template are converted to grayscale images to boost efficiency.
End of explanation
result = cv2.matchTemplate(img_gray, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4] # get position of best match
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
result_uint = ((result - np.min(result)) / (np.max(result) - np.min(result)) * 256).astype('uint8')
result_img = cv2.cvtColor(result_uint, cv2.COLOR_GRAY2BGR)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
ax1.set_title('Match on original image')
ax1.plot(fr[:,0], fr[:,1], 'r')
ax1.plot([max[0]],[max[1]], 'r*')
ax2.imshow(result_img)
ax2.plot(fr[:,0], fr[:,1], 'r')
ax2.plot([max[0]],[max[1]], 'r*')
ax2.set_title('Normalized coefficients')
ax2.set_xlim(ax1.get_xlim()) # set same scale
ax2.set_ylim(ax1.get_ylim())
print(f'best match at {max} value {val:.6f}')
Explanation: Change the code above to plot grayscale images.
The OpenCV package has a function for template matching, so let's call it and display the result. The matchTemplate function supports six different matching formulas. With TM_CCOEFF_NORMED it calculates a normalized correlation coefficient, where a perfect match gives the value 1.
End of explanation
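For intuition, the coefficient behind TM_CCOEFF_NORMED is essentially a mean-centered, normalized cross-correlation between the template and each template-sized window of the image; a self-contained sketch for a single window:

```python
import numpy as np

def ccoeff_normed(patch, template):
    # subtract the means, then take the normalized dot product
    p = patch - patch.mean()
    t = template - template.mean()
    return float(np.sum(p * t) / np.sqrt(np.sum(p * p) * np.sum(t * t)))

template = np.array([[0., 1.], [2., 3.]])
print(ccoeff_normed(template, template))        # identical patch -> 1.0
print(ccoeff_normed(5.0 - template, template))  # inverted patch -> -1.0
```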
!wget -q -O sample_data/monalisa_tilt.jpg https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/monalisa_tilt.jpg
Explanation: Change the code above and try other methods, TM_CCORR_NORMED, TM_SQDIFF_NORMED, for instance.
Image transformation
If the pattern is rotated or scaled, the pattern might not match the image. This issue can be fixed by using a homography matrix. For more details see: source
Let's download another image with a rotated Mona Lisa.
End of explanation
img = cv2.imread('sample_data/monalisa_tilt.jpg', cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(img, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4]
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
plt.imshow(img, cmap="gray")
plt.plot(fr[:,0], fr[:,1], 'r')
plt.plot([max[0]],[max[1]], 'r*')
print(f'best match at {max} value {val:.6f} BUT FALSE!')
Explanation: Let's try to find the template on the rotated image.
End of explanation
def project_img(image, a_src, a_dst):
    """Warp image so that the a_src quadrilateral is mapped onto a_dst."""
    # get parameters of transformation
    projective_matrix = cv2.getPerspectiveTransform(a_src, a_dst)
    # transform image; dsize is given as (width, height)
    transformed = cv2.warpPerspective(image, projective_matrix,
                                      (image.shape[1], image.shape[0]))
    # cut destination area
    transformed = transformed[0:int(np.max(a_dst[:, 1])), 0:int(np.max(a_dst[:, 0]))]
    return transformed
# frame on warped image
src = [(240, 44), (700, 116), (703, 815), (243, 903)]
# frame on original
s = img_gray.shape
dst = [(0, 0), (s[1], 0), (s[1], s[0]), (0,s[0])]
a_src = np.float32(src)
a_dst = np.float32(dst)
# image transformation
img_dst = project_img(img, a_src, a_dst)
# template match
result = cv2.matchTemplate(img_dst, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4]
# frame around template on transformed image
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
fig, ax = plt.subplots(1,2, figsize=(13,8))
ax[0].imshow(img, cmap="gray");
ax[0].plot(a_src[:,0], a_src[:,1], 'r--')
ax[0].set_title('Original Image')
ax[1].imshow(img_dst, cmap="gray")
ax[1].plot(a_dst[:,0], a_dst[:,1], 'r--')
ax[1].set_title('Warped Image')
ax[1].plot(fr[:,0], fr[:,1], 'r')
ax[1].plot([max[0]],[max[1]], 'r*')
print(f'best match at {max} value {val:.2f}')
Explanation: Let's transform the image back to the perpendicular plane.
End of explanation
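Under the hood, cv2.getPerspectiveTransform solves for a 3x3 homography H, and cv2.warpPerspective maps pixels through it in homogeneous coordinates; a minimal sketch of applying such a matrix to one point:

```python
import numpy as np

def apply_homography(H, x, y):
    # map (x, y) through H in homogeneous coordinates, then de-homogenize
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# a pure scaling homography: doubles both coordinates
H = np.diag([2.0, 2.0, 1.0])
print(apply_homography(H, 3.0, 4.0))  # (6.0, 8.0)
```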
!wget -q -O sample_data/markers.png https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/markers.png
img = cv2.imread('sample_data/markers.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
aruco_dict = aruco.Dictionary_get(cv2.aruco.DICT_4X4_100)
params = aruco.DetectorParameters_create()
corners, ids, _ = aruco.detectMarkers(img_gray, aruco_dict, parameters=params)
x = np.zeros(ids.size)
y = np.zeros(ids.size)
img1 = img.copy()
for j in range(ids.size):
x[j] = int(round(np.average(corners[j][0][:, 0])))
y[j] = int(round(np.average(corners[j][0][:, 1])))
cv2.putText(img1, str(ids[j][0]), (int(x[j]+2), int(y[j])), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 3)
fig, ax = plt.subplots(1,2, figsize=(10,5))
ax[0].imshow(img)
ax[1].imshow(img1)
ax[1].plot(x, y, "ro")
print(list(zip(list(x), list(y))))
Explanation: Recognition of ArUco markers
"An ArUco marker is a synthetic square marker composed by a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix. For instance a marker size of 4x4 is composed by 16 bits." (from OpenCV documentation)
There is a contrib package in OpenCV to detect ArUco markers called aruco.
Let's find six ArUco markers on a simple image.
End of explanation
!wget -q -O sample_data/cal.zip https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/cal.zip
!unzip -q -o sample_data/cal.zip -d sample_data
width = 5 # Charuco board size
height = 7
board = cv2.aruco.CharucoBoard_create(width, height, .025, .0125, aruco_dict) # generate board in memory
img = board.draw((500, 700))
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
_ = plt.title('Charuco board')
Explanation: Calibration
Low-cost cameras might have significant distortions (either radial or tangential). Therefore, we have to calibrate cameras before using them in deformation and movement analysis.
Radial distortion
$$ x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
$$ y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
Tangential distortion
$$ x' = x + (2 p_1 x y + p_2 (r^2 + 2 x^2)) $$
$$ y' = y + (p_1 (r^2+2 y^2) + 2 p_2 x y) $$
Camera matrix
<table>
<tr><td>f<sub>x</sub></td><td>0</td><td>c<sub>x</sub></td></tr>
<tr><td>0</td><td>f<sub>y</sub></td><td>c<sub>y</sub></td></tr>
<tr><td>0</td><td>0</td><td>1</td></tr></table>
Distortion parameters are ($ k_1, k_2, k_3, p_1, p_2 $). Camera matrix contains focal length ($ f_x, f_y $) and optical centers ($ c_x, c_y $).
For the calibration we need a chessboard like figure and more than ten photos from different directions.
Let's download the images for calibration.
End of explanation
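The radial model above is easy to check numerically (a sketch; the coefficient value is made up, not taken from this calibration):

```python
def radial_distort(x, y, k1, k2=0.0, k3=0.0):
    # normalized image coordinates; r is the distance from the optical axis
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# at (1, 0) we have r = 1, so x' = 1 + k1
print(radial_distort(1.0, 0.0, k1=0.1))  # (1.1, 0.0)
```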
fig, ax = plt.subplots(1, 6, figsize=(15, 2))
for i in range(6):
im = cv2.imread('sample_data/cal{:d}.jpg'.format(i+1))
ax[i].imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
ax[i].set_title('cal{:d}.jpg'.format(i+1))
Explanation: The first 6 images for calibration:
End of explanation
allCorners = []
allIds = []
decimator = 0
for name in glob.glob("sample_data/cal*.jpg"):
frame = cv2.imread(name)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
ret, corners1, ids1 = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
allCorners.append(corners1)
allIds.append(ids1)
decimator += 1
ret, mtx, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(allCorners, allIds, board, gray.shape, None, None)
print("Camera matrix [pixels]")
for i in range(mtx.shape[0]):
print(f'{mtx[i][0]:8.1f} {mtx[i][1]:8.1f} {mtx[i][2]:8.1f}')
print('Radial components')
print(30 * '-')
print(f'{dist[0][0]:10.5f} {dist[0][1]:10.5f} {dist[0][2]:10.5f}')
print(30 * '-')
print('Tangential components')
print(f'{dist[0][3]:10.5f} {dist[0][4]:10.5f}')
Explanation: Using the ArUco calibration, let's find the camera matrix and the associated radial and tangential distortion parameters.
End of explanation
gray = cv2.imread('sample_data/cal1.jpg', cv2.IMREAD_GRAYSCALE)
fig, ax = plt.subplots(1, 2, figsize=(10,5))
ax[0].imshow(gray, cmap='gray')
ax[0].set_title('distorted image')
ax[1].imshow(cv2.undistort(gray, mtx, dist, None), cmap='gray')
_ = ax[1].set_title('undistorted image')
Explanation: Plot undistorted image and the one corrected by calibration parameters.
End of explanation
!wget -q -O sample_data/demo.mp4 https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/demo.mp4
cap = cv2.VideoCapture('sample_data/demo.mp4')
frame = 0 # frame counter
xc = [] # for pixel coordinates of marker
yc = []
frames = []
while True:
ret, img = cap.read() # get next frame from video
if ret:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert image to grayscale
img_gray = cv2.undistort(gray, mtx, dist, None) # remove camera distortion using calibration
corners, ids, _ = aruco.detectMarkers(img_gray, aruco_dict, parameters=params) # find ArUco markers
        if ids is not None: # marker found?
            yc.append(img_gray.shape[0] - int(round(np.average(corners[0][0][:, 1])))) # change y direction (flip by image height)
frames.append(frame)
frame += 1 # frame count
else:
break # no more images
plt.plot(frames, yc)
plt.title('Vertical positions of ArUco marker from video frames')
plt.xlabel('frame count')
plt.grid()
_ = plt.ylabel('vertical position [pixel]')
Explanation: Complex example
We have a video of a moving object with an ArUco marker. Let's process the video frame by frame and make a plot of movements. During the process images are corrected by the calibration data.
Click here to watch video.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Likelihood of a coin being fair
$$
P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)}
$$
Here, $P(X|\theta)$ is the likelihood, $P(\theta)$ is the prior on theta, $P(X)$ is the evidence, while $P(\theta|X)$ is the posterior.
Now the probability of observing $k$ heads out of $n$ trials, given that the probability of heads is $\theta$, is given as follows
Step1: Plot for multiple data
Python Code:
# imports required by the cells below (not included in this excerpt)
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import comb
from bokeh.io import show
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure
def get_likelihood(theta, n, k, normed=False):
ll = (theta**k)*((1-theta)**(n-k))
if normed:
num_combs = comb(n, k)
ll = num_combs*ll
return ll
get_likelihood(0.5, 2, 2, normed=True)
get_likelihood(0.5, 10, np.arange(10), normed=True)
N = 100
plt.plot(
np.arange(N),
get_likelihood(0.5, N, np.arange(N), normed=True),
color='k', markeredgecolor='none', marker='o',
linestyle="--", ms=3, markerfacecolor='r', lw=0.5
)
n, k = 10, 6
theta=np.arange(0,1,0.01)
ll = get_likelihood(theta, n, k, normed=True)
source = ColumnDataSource(data=dict(
theta=theta,
ll=ll,
))
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
tools=[hover], title="Likelihood of fair coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood'
p1.line('theta', 'll', color='#A6CEE3', source=source)
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
Explanation: Likelihood of a coin being fair
$$
P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)}
$$
Here, $P(X|\theta)$ is the likelihood, $P(\theta)$ is the prior on theta, $P(X)$ is the evidence, while $P(\theta|X)$ is the posterior.
Now the probability of observing $k$ heads out of $n$ trials, given that the probability of heads is $\theta$, is given as follows:
$P(N_{heads}=k|n,\theta) = \frac{n!}{k!(n-k)!}\theta^{k}(1-\theta)^{(n-k)}$
Consider $n=10$ and $k=6$. Now we can plot our likelihood as follows:
End of explanation
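As a quick sanity check (a sketch that duplicates the function for self-containment): the binomial likelihood peaks at the maximum-likelihood estimate $\theta = k/n$, so for $n=10$, $k=6$ the curve should peak near 0.6:

```python
import numpy as np
from scipy.special import comb

def get_likelihood(theta, n, k, normed=False):
    ll = (theta ** k) * ((1 - theta) ** (n - k))
    if normed:
        ll = comb(n, k) * ll
    return ll

theta = np.arange(0.0, 1.0, 0.01)
ll = get_likelihood(theta, 10, 6, normed=True)
print(theta[np.argmax(ll)])  # ~0.6, the maximum-likelihood estimate k/n
```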
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
#y_axis_type="log",
tools=[hover], title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'theta'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'
theta=np.arange(0,1,0.01)
for n, k, color in zip(
[10, 100, 500],
[6, 60, 300],
["red", "blue", "black"]
):
ll_unbiased = get_likelihood(0.5, n, k, normed=False)
ll = get_likelihood(theta, n, k, normed=False)
ll_ratio = ll / ll_unbiased
source = ColumnDataSource(data=dict(
theta=theta,
ll_ratio=ll_ratio,
))
p1.line('theta', 'll_ratio',
color=color, source=source,
            legend_label="n={}, k={}".format(n, k))
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
hover = HoverTool(tooltips=[
("index", "$index"),
("theta", "$x"),
("ll_ratio", "$y"),
])
p1 = figure(plot_width=600, plot_height=400,
y_axis_type="log",
tools=[hover], title="Likelihood ratio compared to unbiased coin")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'n'
p1.yaxis.axis_label = 'Likelihood ratio wrt theta = 0.5'
n = 10**np.arange(0,6)
k = (n*0.6).astype(int)
theta=0.6
ll_unbiased = get_likelihood(0.5, n, k, normed=False)
ll = get_likelihood(theta, n, k, normed=False)
ll_ratio = ll / ll_unbiased
source = ColumnDataSource(data=dict(
n=n,
ll_ratio=ll_ratio,
))
p1.line('n', 'll_ratio',
        color="blue", source=source,
legend="theta={:.2f}".format(theta))
# get a handle to update the shown cell with
handle = show(p1, notebook_handle=True)
handle
Explanation: Plot for multiple data
End of explanation |
9,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Searching,-Hashing,-Sorting" data-toc-modified-id="Searching,-Hashing,-Sorting-1"><span class="toc-item-num">1 </span>Searching, Hashing, Sorting</a></span><ul class="toc-item"><li><span><a href="#Searching" data-toc-modified-id="Searching-1.1"><span class="toc-item-num">1.1 </span>Searching</a></span></li><li><span><a href="#Hashing" data-toc-modified-id="Hashing-1.2"><span class="toc-item-num">1.2 </span>Hashing</a></span></li><li><span><a href="#Sorting" data-toc-modified-id="Sorting-1.3"><span class="toc-item-num">1.3 </span>Sorting</a></span></li></ul></li></ul></div>
Step1: Searching, Hashing, Sorting
Following the online book, Problem Solving with Algorithms and Data Structures. Chapter 6 introduces methods for searching and sorting numbers.
Searching
We can take advantage of an ordered list by doing a binary search. We start by searching in the middle; if it is not the item that we're searching for, we can use the ordered nature of the list to eliminate half of the remaining items.
Step4: Keep in mind that this approach requires sorting the list, which may not be ideal if we're simply going to search for 1 number on the very large list (since we have to first sort the list, which is not a cheap operation).
Hashing
Quick notes
Step5: Sorting
Merge Sort
Step6: Quick Sort | Python Code:
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[1])
%load_ext watermark
%watermark -a 'Ethen' -d -t -v -p jupyterthemes
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Searching,-Hashing,-Sorting" data-toc-modified-id="Searching,-Hashing,-Sorting-1"><span class="toc-item-num">1 </span>Searching, Hashing, Sorting</a></span><ul class="toc-item"><li><span><a href="#Searching" data-toc-modified-id="Searching-1.1"><span class="toc-item-num">1.1 </span>Searching</a></span></li><li><span><a href="#Hashing" data-toc-modified-id="Hashing-1.2"><span class="toc-item-num">1.2 </span>Hashing</a></span></li><li><span><a href="#Sorting" data-toc-modified-id="Sorting-1.3"><span class="toc-item-num">1.3 </span>Sorting</a></span></li></ul></li></ul></div>
End of explanation
def binary_search(testlist, query):
if not testlist:
return False
else:
mid_idx = len(testlist) // 2
mid = testlist[mid_idx]
if mid == query:
return True
elif query < mid:
return binary_search(testlist[:mid_idx], query)
else:
return binary_search(testlist[mid_idx + 1:], query)
testlist = [0, 1, 2, 8, 13, 17, 19, 32, 42]
query = 19
print(binary_search(testlist, query))
query = 3
print(binary_search(testlist, query))
Explanation: Searching, Hashing, Sorting
Following the online book, Problem Solving with Algorithms and Data Structures. Chapter 6 introduces methods for searching and sorting numbers.
Searching
We can take advantage of an ordered list by doing a binary search. We start by searching in the middle; if it is not the item that we're searching for, we can use the ordered nature of the list to eliminate half of the remaining items.
End of explanation
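As a side note (not from the book): each recursive call above slices the list, which copies up to half the elements per step. An iterative, index-based version keeps the O(log n) comparison count without any copying:

```python
def binary_search_iter(testlist, query):
    # track a shrinking [low, high] window of indices instead of slicing
    low, high = 0, len(testlist) - 1
    while low <= high:
        mid = (low + high) // 2
        if testlist[mid] == query:
            return True
        elif query < testlist[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return False

print(binary_search_iter([0, 1, 2, 8, 13, 17, 19, 32, 42], 19))  # True
print(binary_search_iter([0, 1, 2, 8, 13, 17, 19, 32, 42], 3))  # False
```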
class HashTable:
    """a.k.a. Python's dictionary

    the initial size of the table has been chosen to
    be 11; although this number is arbitrary, it's important
    for it to be a prime number so that collision resolution
    will be efficient; this implementation does not handle
    resizing the hashtable when it runs out of the original size
    """
def __init__(self):
# slot will hold the key and data will hold the value
self.size = 11
self.slot = [None] * self.size
self.data = [None] * self.size
def _put(self, key, value):
hash_value = self._hash(key)
if self.slot[hash_value] == None:
self.slot[hash_value] = key
self.data[hash_value] = value
elif self.slot[hash_value] == key:
# replace the original key value
self.data[hash_value] = value
else:
# rehash to get the next location possible
# if a collision is to occurr
next_slot = self._rehash(hash_value)
slot_value = self.slot[next_slot]
while slot_value != None and slot_value != key:
next_slot = self._rehash(next_slot)
slot_value = self.slot[next_slot]
if self.slot[next_slot] == None:
self.slot[next_slot] = key
self.data[next_slot] = value
else:
self.data[next_slot] = value
def _get(self, key):
data = None
stop = False
found = False
start_slot = self._hash(key)
next_slot = start_slot
while self.slot[next_slot] != None and not found and not stop:
if self.slot[next_slot] == key:
data = self.data[next_slot]
found = True
else:
# if we rehash to the starting value
# then it means the data is not here
next_slot = self._rehash(next_slot)
if next_slot == start_slot:
stop = True
return data
def _hash(self, key):
return key % self.size
def _rehash(self, oldhash):
        """a simple plus 1 rehash, where we add 1 to
        the original value and hash it again to
        see if the slot is empty (None)
        """
return (oldhash + 1) % self.size
def __getitem__(self, key):
# allow access using``[]`` syntax
return self._get(key)
def __setitem__(self, key, value):
self._put(key, value)
H = HashTable()
H[54] = 'cat'
H[26] = 'dog'
H[93] = 'lion'
H[17] = 'tiger'
H[77] = 'bird'
H[44] = 'goat'
H[55] = 'pig'
print(H.slot)
print(H.data)
print(H[55])
print(H[20])
Explanation: Keep in mind that this approach requires sorting the list, which may not be ideal if we're simply going to search for 1 number on the very large list (since we have to first sort the list, which is not a cheap operation).
Hashing
Quick notes:
Dictionaries or maps that let us store key-value pairs are typically implemented using a hash table.
An element that we wish to add is converted into an integer by using a hash function. hash = hash_function(key)
The resulting hash is independent of the underlying array size, and it is then reduced to an index by using the modulo operator. index = hash % array_size
Python has a built in hash function that can calculate hash values for arbitrarily large objects in a fast manner. A hash function used for a dictionary or map should be deterministic so we can look up the values afterwards, fast to compute (else the overhead of hashing would offset the benefit it brings for fast lookup), and uniformly distributed (this one is related to avoiding hash collisions).
A hash collision happens when two different inputs produce the same hash value. To avoid this, a well implemented hash table should be able to resolve hash collisions; common techniques include linear probing and separate chaining. Also, when our hash table is almost saturated (the number of elements is close to the array size we've defined), a.k.a. the load factor is larger than some threshold, it should be able to dynamically resize the hash table to maintain our dictionary or map's performance.
Real Python: Build a Hash Table in Python With TDD has a much in depth introduction to this topic.
End of explanation
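As a concrete illustration (an aside, not part of the book's text), three of the keys inserted above collide at the same initial slot of the size-11 table, which is exactly the case the _rehash linear probing resolves:

```python
size = 11  # same table size as the HashTable above
for key in (77, 44, 55):
    print(key, '->', key % size)  # all three map to slot 0
```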
def merge_sort(alist):
if len(alist) > 1:
mid = len(alist) // 2
left_half = alist[:mid]
right_half = alist[mid:]
merge_sort(left_half)
merge_sort(right_half)
# loop through the left and right half,
# compare the value and fill them back
# to the original list
i, j, k = 0, 0, 0
while i < len(left_half) and j < len(right_half):
if left_half[i] < right_half[j]:
alist[k] = left_half[i]
i += 1
else:
alist[k] = right_half[j]
j += 1
k += 1
# after filling in the sorted value,
# there will be left-over values on
# either the left or right half, simply
# append all the left-over values back
while i < len(left_half):
alist[k] = left_half[i]
i += 1
k += 1
while j < len(right_half):
alist[k] = right_half[j]
j += 1
k += 1
return alist
alist = [54, 26, 93, 17, 77]
merge_sort(alist)
Explanation: Sorting
Merge Sort
End of explanation
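The heart of merge sort is the merge step. Pulled out on its own (a sketch, not from the book), merging two already-sorted lists looks like this:

```python
def merge(left, right):
    # repeatedly take the smaller front element, then append any leftovers
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge([17, 26, 54], [77, 93]))  # [17, 26, 54, 77, 93]
```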
def quick_sort(alist):
_sort(alist, 0, len(alist) - 1)
def _sort(alist, first, last):
if first < last:
split_point = _partition(alist, first, last)
_sort(alist, first, split_point - 1)
_sort(alist, split_point + 1, last)
def _partition(alist, first, last):
right = last
left = first + 1
pivot_value = alist[first]
# find the split point of the pivot and move all other
# items to the appropriate side of the list (i.e. if
# the item is greater than pivot, then it should be
# on the right hand side and vice versa)
done = False
while not done:
while left <= right and alist[left] <= pivot_value:
left += 1
while alist[right] >= pivot_value and right >= left:
right -= 1
if right <= left:
done = True
else:
alist[right], alist[left] = alist[left], alist[right]
# swap pivot value to split point
alist[first], alist[right] = alist[right], alist[first]
return right
# list sorted in place
alist = [54, 26, 93, 17, 77, 50]
quick_sort(alist)
alist
Explanation: Quick Sort
End of explanation |
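For contrast (an aside, not from the book), the same partitioning idea can be written as a short, non-in-place version that trades the in-place swaps for O(n) extra space per level:

```python
def quick_sort_copy(alist):
    # returns a new sorted list instead of sorting in place
    if len(alist) <= 1:
        return list(alist)
    pivot, rest = alist[0], alist[1:]
    left = [x for x in rest if x <= pivot]
    right = [x for x in rest if x > pivot]
    return quick_sort_copy(left) + [pivot] + quick_sort_copy(right)

print(quick_sort_copy([54, 26, 93, 17, 77, 50]))  # [17, 26, 50, 54, 77, 93]
```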
9,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1
Step1: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
Step2: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
Step3: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order)
Step4: Problem set #2
Step5: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output
Step6: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output
Step8: Problem set #3
Step9: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above
Step10: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output
Step11: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page | Python Code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")  # read all the page's HTML with Python and parse it with Beautiful Soup
Explanation: Homework assignment #3
These problem sets focus on using the Beautiful Soup library to scrape web pages.
Problem Set #1: Basic scraping
I've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.
End of explanation
h3_tag=document.find_all('h3')
len(h3_tag)
Explanation: Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.
End of explanation
a_tag=document.find('a', attrs={'class':'tel'})
print(a_tag.string)
Explanation: Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
End of explanation
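An equivalent lookup (not required by the assignment) uses Beautiful Soup's CSS-selector interface, select_one. Since the live page isn't reproduced in this excerpt, the snippet and phone number below are hypothetical stand-ins:

```python
from bs4 import BeautifulSoup

snippet = '<h1>Widget Catalog</h1><a class="tel">555-0123</a>'  # hypothetical sample markup
doc = BeautifulSoup(snippet, 'html.parser')
print(doc.select_one('a.tel').string)  # 555-0123
```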
td_tags = document.find_all('td', attrs={'class':'wname'})
for widget_name in td_tags:
print(widget_name.string)
Explanation: In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
End of explanation
widgets = []
winfo_info = document.find_all('tr', attrs={'class':'winfo'})
for winfo in winfo_info:  # take each winfo row out of the winfo_info list; only the row itself holds the values
partno_tag=winfo.find('td', attrs={'class':'partno'})
price_tag=winfo.find('td',attrs={'class':'price'})
quantity_tag=winfo.find('td',attrs={'class':'quantity'})
wname_tag=winfo.find('td', attrs={'class':'wname'})
widgets_dictionary = {}
widgets_dictionary['partno'] = partno_tag.string
widgets_dictionary['price'] = price_tag.string
widgets_dictionary['quantity'] = quantity_tag.string
widgets_dictionary['wname'] = wname_tag.string
widgets.append(widgets_dictionary)
widgets
widgets = []
winfo_info = document.find_all('tr', attrs={'class':'winfo'})
for winfo in winfo_info:
partno_tag=winfo.find('td', attrs={'class':'partno'})
price_tag=winfo.find('td',attrs={'class':'price'})
quantity_tag=winfo.find('td',attrs={'class':'quantity'})
wname_tag=winfo.find('td', attrs={'class':'wname'})
for price in price_tag:
price=float(price[1:])
for quantity in quantity_tag:
quantity=int(quantity)
widgets_dictionary = {}
widgets_dictionary['partno'] = partno_tag.string
widgets_dictionary['price'] = price
widgets_dictionary['quantity'] = quantity
widgets_dictionary['wname'] = wname_tag.string
widgets.append(widgets_dictionary)
widgets
Explanation: Problem set #2: Widget dictionaries
For this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:
[{'partno': 'C1-9476',
'price': '$2.70',
'quantity': u'512',
'wname': 'Skinner Widget'},
{'partno': 'JDJ-32/V',
'price': '$9.36',
'quantity': '967',
'wname': u'Widget For Furtiveness'},
...several items omitted...
{'partno': '5B-941/F',
'price': '$13.26',
'quantity': '919',
'wname': 'Widget For Cinema'}]
And this expression:
widgets[5]['partno']
... should evaluate to:
LH-74/O
End of explanation
# sum() needs a list []
total_widgets = []
for widget in widgets:
    total_widgets.append(widget['quantity'])  # .append adds each widget['quantity'] value to the total_widgets list
sum(total_widgets)
Explanation: Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.
Expected output: 7928
End of explanation
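As an aside, the loop above collapses into a single generator expression. Illustrated on two stand-in rows copied from the sample output shown earlier:

```python
sample_widgets = [
    {'wname': 'Skinner Widget', 'quantity': 512},
    {'wname': 'Widget For Furtiveness', 'quantity': 967},
]
print(sum(w['quantity'] for w in sample_widgets))  # 1479
```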
for widget in widgets:
if widget['price'] > 9.30:
print(widget['wname'])
Explanation: In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.
Expected output:
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
End of explanation
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
Explanation: Problem set #3: Sibling rivalries
In the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:
Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):
End of explanation
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
Explanation: If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
End of explanation
h3_tags = document.find_all('h3')
for item in h3_tags:
    if item.string.lower() == "hallowed widgets":  # compare case-insensitively; heading capitalization varies in the page text
        hallowed_widgets_table = item.find_next_sibling('table', attrs={'class': 'widgetlist'})
        td_tags = hallowed_widgets_table.find_all('td', attrs={'class': 'partno'})
        for partno in td_tags:
            print(partno.string)
Explanation: With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header "Hallowed Widgets."
Expected output:
MZ-556/B
QV-730
T1-9731
5B-941/F
End of explanation
category_counts = {}
# your code here
# end your code
category_counts
Explanation: Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!
In the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the <h3> tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:
{'Forensic Widgets': 3,
'Hallowed widgets': 4,
'Mood widgets': 2,
'Wondrous widgets': 3}
End of explanation |
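One possible solution for the final task, shown here on a hand-made miniature of the page (the real widgets2016.html isn't reproduced in this excerpt, so the mini HTML below is an assumption about its structure: each <h3> heading followed by a widgetlist table of winfo rows):

```python
from bs4 import BeautifulSoup

mini_html = """
<h3>Forensic Widgets</h3>
<table class="widgetlist">
<tr class="winfo"><td class="wname">A</td></tr>
<tr class="winfo"><td class="wname">B</td></tr>
</table>
<h3>Mood widgets</h3>
<table class="widgetlist">
<tr class="winfo"><td class="wname">C</td></tr>
</table>
"""

def count_widgets_per_category(doc):
    # map each <h3> heading to the number of winfo rows in the table after it
    counts = {}
    for h3_tag in doc.find_all('h3'):
        table = h3_tag.find_next_sibling('table', attrs={'class': 'widgetlist'})
        counts[str(h3_tag.string)] = len(table.find_all('tr', attrs={'class': 'winfo'}))
    return counts

doc = BeautifulSoup(mini_html, 'html.parser')
print(count_widgets_per_category(doc))  # {'Forensic Widgets': 2, 'Mood widgets': 1}
```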
9,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bombora Topic Interest Datasets
Explaining Bombora topic interest score datasets.
0. Surge vs Interest?
As a matter of clarification, topic surge as a product is generated from topic interest models. In technical discussions, we'll refer to both the product and the models by the latter, providing a conceptually more meaningful mapping. Bombora's user-facing topic interest scores (and the origin of the data you currently have) are currently generated from a 3rd party topic interest model and service. As mentioned previously, we are also are developing an internal topic interest model.
1. End User Datasets (Overview)
In general, down the topic interest line, there exists primarily three datasets that are consumed by end users (ordering from raw to aggregate)
Step1: Note the file Input_AllDomainsAllTopics_20150719-reduced.csv is tagged with reduce to indicate it's not the complete record set. This is due to 2GB file limiations with git LFS. The orginal compressed Input_AllDomainsAllTopics_20150719.csv.gz file weighed in at 2.62 GB.
This file was generated via
Step2: As we're interested in understanding the data schema we'll consider a smaller (non-statistically significant) sample for both files.
Step3: ADAT
The ADAT file contains topic interest scores across both global and metro resolutions, which are model aggregate values produced at both keys (domain, topic)
Step4: Master Surge
Filter
For end users who only wish to consider the surging topics—(domain, topic) and (domain, topic, metro) keys whose topic interest score meet surge criteria (i.e., when score is > 50)—we filter the ADAT dataset to only consider scores greater than 50.
Transform
In producing this filtered result, instead of leaving the schema intact, the 3rd-party also performs a tranformation of the topic interest score(s) representation. The schema is the same intitally, like | Python Code:
!ls -lh ../../data/topic-interest-score/
Explanation: Bombora Topic Interest Datasets
Explaining Bombora topic interest score datasets.
0. Surge vs Interest?
As a matter of clarification, topic surge as a product is generated from topic interest models. In technical discussions, we'll refer to both the product and the models by the latter, providing a conceptually more meaningful mapping. Bombora's user-facing topic interest scores (and the origin of the data you currently have) are currently generated from a 3rd party topic interest model and service. As mentioned previously, we are also are developing an internal topic interest model.
1. End User Datasets (Overview)
In general, down the topic interest line, there exists primarily three datasets that are consumed by end users (ordering from raw to aggregate):
Firehose (FH): the raw content consumption data, which contains event level resolution. Size is thousands of GBs/week, only a handful of clients actually consume this data. Bombora Data Science team refers to this as the raw event data.
All Domains All Topic (ADAT): an aggregate topic interest score on keys of interest in the Firehose data. Size is tens of GBs/week. Bombora Data Science team refers to this as the topic interest score data.
Master Surge (MS): A filtering and transformation of the ADAT dataset to consider only those topic keys whose scores meet some surge score criteria (explained below). Size is GBs/week. Bombora Data Science team refer to this as surging topic interest score data.
While dataset naming convention might be a little confusing, the simple explanation is that the topic interest model ingests Firehose data, and outputs both the ADAT and MasterSurge.
2. End User Dataset (Details)
As you're interested in the aggregate topic interest score, we'll only consider ADAT and MasterSurge. While similar, each has their own schema. To understand better, we consider representative topic interest result files for both ADAT and MasterSurge that are output from the current topic surge batch jobs for the week starting 2016-07-19:
End of explanation
!gzip -dc ../../data/topic-interest-score/Output_MasterSurgeFile_20150719.csv.gz | wc -l
!gzip -dc ../../data/topic-interest-score/Input_AllDomainsAllTopics_20150719-reduced.csv.gz | wc -l
Explanation: Note the file Input_AllDomainsAllTopics_20150719-reduced.csv is tagged with reduce to indicate it's not the complete record set. This is due to 2GB file limiations with git LFS. The orginal compressed Input_AllDomainsAllTopics_20150719.csv.gz file weighed in at 2.62 GB.
This file was generated via:
head -n 166434659 Input_AllDomainsAllTopics_20150719.csv > Input_AllDomainsAllTopics_20150719-reduced.csv
To get an idea of record count, count the number of lines in both files:
End of explanation
path_to_data = '../../data/topic-interest-score/'
data_files = !ls {path_to_data}
data_files
n = 10000
#cl_cmd_args = '{cmd} -n {n} ../sample_data/{data_file} >> {data_file_root}-sample.csv'
cl_cmd_args = 'gzip -cd {path_to_data}{data_file} | {cmd} -n {n} >> {data_file_out}'
for data_file in data_files:
data_file_out = data_file.strip('.csv.gz') + '-sample.csv'
print('rm -f {data_file_out}'.format(data_file_out=data_file_out))
!rm -f {data_file_out}
print('touch {data_file_out}'.format(data_file_out=data_file_out))
!touch {data_file_out}
final_cl_cmd = cl_cmd_args.format(cmd='head', n=n,
path_to_data=path_to_data,
data_file=data_file,
data_file_out=data_file_out)
print(final_cl_cmd)
!{final_cl_cmd}
Explanation: As we're interested in understanding the data schema we'll consider a smaller (non-statistically significant) sample for both files.
End of explanation
! head -n 15 Input_AllDomainsAllTopics_20150719-reduced-sample.csv
! tail -n 15 Input_AllDomainsAllTopics_20150719-reduced-sample.csv
Explanation: ADAT
The ADAT file contains topic interest scores across both global and metro resolutions, which are model aggregate values produced at both keys (domain, topic): and (domain, topic, metro) keys. Note that
The schema of the data is:
Company Name, Domain, Size, Industry, Category, Topic, Composite Score, Bucket Code, Metro Area, Metro Composite Score, Metro Bucket Code, Domain Origin Country
Note that in the schema above, the:
Composite Score is the topic interest score from the (domain, topic) key.
Metro Composite Score is the topic interest score from the (domain, topic, metro) key.
Additionally, we note that the format of data in the ADAT file topic interest scores a denormalized / flattened schema, as show below
End of explanation
! head -n 15 Output_MasterSurgeFile_20150719-sample.csv
! tail -n 15 Output_MasterSurgeFile_20150719-sample.csv
Explanation: Master Surge
Filter
For end users who only wish to consider the surging topics—(domain, topic) and (domain, topic, metro) keys whose topic interest score meet surge criteria (i.e., when score is > 50)—we filter the ADAT dataset to only consider scores greater than 50.
Transform
In producing this filtered result, instead of leaving the schema intact, the 3rd-party also performs a tranformation of the topic interest score(s) representation. The schema is the same intitally, like:
Company Name, Domain, Size, Industry, Category, Topic, Composite Score,
however, the metro resolution scores is now collapsed into an array (of sorts), unique to each (domain, topic) key. The metro name and score is formatted as metro name[metro score], and each line can contain multiple results, formatted together like:
metro_1[metro_1 score]|metro_2[metro_2 score]|metro_3[metro_3 score],
and finally, again, ending with the domain origin country, which would collectively look like:
Company Name, Domain, Size, Industry, Category, Topic, Composite Score,vmetro_1[metro_1 score]|metro_2[metro_2 score]|metro_3[metro_3 score], Domain Country Origin
Example output, below:
End of explanation |
9,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Errors, or bugs, in your software
Today we'll cover dealing with errors in your Python code, an important aspect of writing software.
What is a software bug?
According to Wikipedia (accessed 16 Oct 2018), a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or behave in unintended ways.
Where did the terminology come from?
Engineers have used the term well before electronic computers and software. Sometimes Thomas Edison is credited with the first recorded use of bug in that fashion. [Wikipedia]
If incorrect code is never executed, is it a bug?
This is the software equivalent to "If a tree falls and no one hears it, does it make a sound?".
Three classes of bugs
Let's discuss three major types of bugs in your code, from easiest to most difficult to diagnose
Step1: Syntax errors
Step2: Runtime errors
Step3: Semantic errors
Say we're trying to confirm that a trigonometric identity holds. Let's use the basic relationship between sine and cosine, given by the Pythagorean identity:
$$
\sin^2 \theta + \cos^2 \theta = 1
$$
We can write a function to check this
Step6: Is our code correct?
How to find and resolve bugs?
Debugging has the following steps
Step7: Next steps
Step8: This can be a more convenient way to debug programs and step through the actual execution. | Python Code:
import numpy as np
Explanation: Errors, or bugs, in your software
Today we'll cover dealing with errors in your Python code, an important aspect of writing software.
What is a software bug?
According to Wikipedia (accessed 16 Oct 2018), a software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or behave in unintended ways.
Where did the terminology come from?
Engineers have used the term well before electronic computers and software. Sometimes Thomas Edison is credited with the first recorded use of bug in that fashion. [Wikipedia]
If incorrect code is never executed, is it a bug?
This is the software equivalent to "If a tree falls and no one hears it, does it make a sound?".
Three classes of bugs
Let's discuss three major types of bugs in your code, from easiest to most difficult to diagnose:
Syntax errors: Errors where the code is not written in a valid way. (Generally easiest to fix.)
Runtime errors: Errors where code is syntactically valid, but fails to execute. Often throwing exceptions here. (Sometimes easy to fix, harder when in other's code.)
Semantic errors: Errors where code is syntactically valid, but contain errors in logic. (Can be difficult to fix.)
End of explanation
print "This should only work in Python 2.x, not 3.x used in this class."
x = 1; y = 2
b = x == y # Boolean variable that is true when x & y have the same value
b = 1 = 2
Explanation: Syntax errors
End of explanation
# invalid operation
a = 0
5/a # Division by zero
# invalid operation
input = '40'
input/11 # Incompatible types for the operation
str(21).index("1")
Explanation: Runtime errors
End of explanation
import math
import numpy as np
def check_pythagorean_identity(theta):
    '''Checks that Pythagorean identity holds for one input, theta'''
    return math.sin(theta)**2 + math.cos(theta)**2 == 1
check_pythagorean_identity(0)
check_pythagorean_identity(np.pi)
Explanation: Semantic errors
Say we're trying to confirm that a trigonometric identity holds. Let's use the basic relationship between sine and cosine, given by the Pythagorean identity:
$$
\sin^2 \theta + \cos^2 \theta = 1
$$
We can write a function to check this:
End of explanation
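The semantic error above is the use of exact == comparison on floating-point numbers: sin²θ + cos²θ lands within round-off of 1 but not always exactly at it. A tolerance-based check (a suggested fix, not from the lecture) removes the false negatives:

```python
import math

def check_identity_isclose(theta):
    # compare within floating-point tolerance instead of exact equality
    return math.isclose(math.sin(theta)**2 + math.cos(theta)**2, 1.0)

print(all(check_identity_isclose(t / 100) for t in range(628)))  # True
```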
import numpy as np
def entropy(p):
    """arg p: list of float"""
items = p * np.log(p)
return -np.sum(items)
entropy([0.5, 0.5])
import numpy as np
def improved_entropy(p):
    """arg p: list of float"""
items = p * np.log(p)
new_items = []
print("1 " + str(new_items))
for item in items:
if np.isnan(item):
pass
else:
new_items.append(item)
print("2 " + str(new_items))
return -np.sum(new_items)
# Detailed examination of codes
p = [1, 0.0]
p * np.log(p)
improved_entropy([1, 0.0])
Explanation: Is our code correct?
How to find and resolve bugs?
Debugging has the following steps:
Detection of an exception or invalid results.
Isolation of where the program causes the error. This is often the most difficult step.
Resolution of how to change the code to eliminate the error. Mostly, it's not too bad, but sometimes this can require major revisions to the code.
Detection of Bugs
The detection of bugs is too often done by chance. While running your Python code, you encounter unexpected functionality, exceptions, or syntax errors. While we'll focus on this in today's lecture, you should never leave this up to chance in the future.
Software testing practices allow for thoughtful detection of bugs in software. We'll discuss more in the lecture on testing.
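As a preview of that lecture, even a bare assert run on purpose — rather than stumbling over a NaN by chance — flags the bug in the entropy function shown above. A sketch (the entropy definition is repeated here so the snippet is self-contained; expected values follow from H = -Σᵢ pᵢ log pᵢ):

```python
import math
import numpy as np

def entropy(p):
    '''arg p: list of float'''
    items = p * np.log(p)
    return -np.sum(items)

# Deliberate checks instead of chance discovery:
uniform_ok = math.isclose(entropy([0.5, 0.5]), math.log(2))  # True: matches -2*(0.5*log(0.5))
certain_ok = entropy([1.0, 0.0]) == 0.0                      # False: 0*log(0) yields NaN
print(uniform_ok, certain_ok)
```

The failing second check is exactly the kind of signal a test suite surfaces automatically on every run.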
Isolation of Bugs
There are three main methods commonly used for bug isolation:
1. The "thought" method. Think about how your code is structured and which part of your code would most likely lead to the exception or invalid result.
2. Inserting print statements (or other logging techniques)
3. Using a line-by-line debugger like pdb.
Typically, all three are used in combination, often repeatedly.
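Method 2 can be upgraded from ad hoc print calls to the standard-library logging module, which lets you switch diagnostics on and off globally instead of deleting prints by hand. A sketch (the logger name is arbitrary):

```python
import logging
import numpy as np

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("entropy-debug")

def logged_entropy(p):
    items = p * np.log(p)
    log.debug("items before summing: %s", items)  # emitted only when DEBUG is enabled
    return -np.sum(items)

logged_entropy([0.5, 0.5])
```

Raising the level to logging.WARNING silences all the debug output without touching the function body.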
Using print statements
Say we're trying to compute the entropy of a set of probabilities. The
form of the equation is
$$
H = -\sum_i p_i \log(p_i)
$$
We can write the function like this:
End of explanation
def debugging_entropy(p):
    items = p * np.log(p)
    if any([np.isnan(v) for v in items]):
        import pdb; pdb.set_trace()
    return -np.sum(items)
debugging_entropy([0.5, 0.5])
Explanation: Next steps:
- Other inputs
- Determine the reason for errors by looking at the details of the code
Using Python's debugger, pdb
Python comes with a built-in debugger called pdb. It allows you to step line-by-line through a computation and examine what's happening at each step. Note that this should probably be your last resort in tracing down a bug. I've probably used it a dozen times or so in five years of coding. But it can be a useful tool to have in your toolbelt.
You can use the debugger by inserting the line
python
import pdb; pdb.set_trace()
within your script. To leave the debugger, type "exit()". To see the commands you can use, type "help".
Let's try this out:
End of explanation
p = [.1, -.2, .3]
debugging_entropy(p)
Explanation: This can be a more convenient way to debug programs and step through the actual execution.
End of explanation |
9,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Vectorize an example sentence
Consider the following sentence
Step3: Create a vocabulary to save mappings from tokens to integer indices
Step4: Create an inverse vocabulary to save mappings from integer indices to tokens
Step5: Vectorize your sentence
Step6: Generate skip-grams from one sentence
The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for word2vec. You can use the tf.keras.preprocessing.sequence.skipgrams to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).
Note
Step7: Print a few positive skip-grams
Step8: Negative sampling for one skip-gram
The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns number of negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point
Step9: Construct one training example
For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labeled as 1) and negative samples (labeled as 0) for each target word.
Step10: Check out the context and the corresponding labels for the target word from the skip-gram example above
Step11: A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling word2vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)
Step12: Summary
This diagram summarizes the procedure of generating a training example from a sentence
Step13: sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
Key point
Step14: Prepare training data for word2vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based word2vec model, you can proceed to generate training examples from a larger list of sentences!
Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
Step15: Read the text from the file and print the first few lines
Step16: Use the non empty lines to construct a tf.data.TextLineDataset object for the next steps
Step17: Vectorize sentences from the corpus
You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.
Step18: Call TextVectorization.adapt on the text dataset to create vocabulary.
Step19: Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with TextVectorization.get_vocabulary. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
Step20: The vectorize_layer can now be used to generate vectors for each element in the text_ds (a tf.data.Dataset). Apply Dataset.batch, Dataset.prefetch, Dataset.map, and Dataset.unbatch.
Step21: Obtain sequences from the dataset
You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a word2vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note
Step22: Inspect a few examples from sequences
Step23: Generate training examples from sequences
sequences is now a list of int-encoded sentences. Just call the generate_training_data function defined earlier to generate training examples for the word2vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts, and labels should be the same, representing the total number of training examples.
Step24: Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your word2vec model!
Step25: Apply Dataset.cache and Dataset.prefetch to improve performance
Step26: Model and training
The word2vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product multiplication between the embeddings of target and context words to obtain predictions for labels and compute the loss function against true labels in the dataset.
Subclassed word2vec model
Use the Keras Subclassing API to define your word2vec model with the following layers
Step27: Define loss function and compile model
For simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows
Step28: Also define a callback to log training statistics for TensorBoard
Step29: Train the model on the dataset for some number of epochs
Step30: TensorBoard now shows the word2vec model's accuracy and loss
Step31: <!-- <img class="tfo-display-only-on-site" src="images/word2vec_tensorboard.png"/> -->
Embedding lookup and analysis
Obtain the weights from the model using Model.get_layer and Layer.get_weights. The TextVectorization.get_vocabulary function provides the vocabulary to build a metadata file with one token per line.
Step32: Create and save the vectors and metadata files
Step33: Download the vectors.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import io
import re
import string
import tqdm
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
# Load the TensorBoard notebook extension
%load_ext tensorboard
SEED = 42
AUTOTUNE = tf.data.AUTOTUNE
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/word2vec">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/word2vec.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/word2vec.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/word2vec.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
word2vec
word2vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through word2vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This tutorial is based on Efficient estimation of word representations in vector space and Distributed representations of words and phrases and their compositionality. It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
Continuous bag-of-words model: predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
Continuous skip-gram model: predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this tutorial. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own word2vec model on a small dataset. This tutorial also contains code to export the trained embeddings and visualize them in the TensorFlow Embedding Projector.
Skip-gram and negative sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of (target_word, context_word) where context_word appears in the neighboring context of target_word.
Consider the following sentence of eight words:
The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a target_word that can be considered a context word. Below is a table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of n implies n words on each side with a total window span of 2*n+1 words across a word.
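The pairs in such a table can be enumerated with a few lines of plain Python. This is a sketch for a window size of 2 (tf.keras.preprocessing.sequence.skipgrams, used later in the tutorial, does this for you):

```python
sentence = "the wide road shimmered in the hot sun".split()
window_size = 2

pairs = []
for i, target in enumerate(sentence):
  # Context words sit within `window_size` positions on either side of the target.
  for j in range(max(0, i - window_size), min(len(sentence), i + window_size + 1)):
    if j != i:
      pairs.append((target, sentence[j]))

print(pairs[:4])  # [('the', 'wide'), ('the', 'road'), ('wide', 'the'), ('wide', 'road')]
```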
The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>, the objective can be written as the average log probability

$$
\frac{1}{T} \sum_{t=1}^{T} \sum_{\substack{-c \le j \le c \\ j \ne 0}} \log p(w_{t+j} \mid w_t)
$$

where c is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function

$$
p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)}
$$

where v and v<sup>'</sup> are target and context vector representations of words and W is vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary words, which are often large (10<sup>5</sup>-10<sup>7</sup>) terms.
The noise contrastive estimation (NCE) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modeling the word distribution, the NCE loss can be simplified to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from num_ns negative samples drawn from noise distribution P<sub>n</sub>(w) of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and num_ns negative samples.
A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the window_size neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when window_size is 2).
(hot, shimmered)
(wide, hot)
(wide, sun)
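A minimal sketch of drawing such negatives in plain Python (illustrative only: it excludes just the pair itself, whereas the definition above excludes the whole window, and random.sample stands in for the log-uniform sampler used later):

```python
import random

random.seed(0)
vocab = ["the", "wide", "road", "shimmered", "in", "hot", "sun"]

def draw_negatives(target_word, context_word, num_ns):
  # A negative sample must differ from the target and its true context word.
  candidates = [w for w in vocab if w not in (target_word, context_word)]
  return random.sample(candidates, num_ns)

negatives = draw_negatives("hot", "sun", num_ns=2)
print(negatives)
```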
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
Setup
End of explanation
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
Explanation: Vectorize an example sentence
Consider the following sentence:
The wide road shimmered in the hot sun.
Tokenize the sentence:
End of explanation
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
  if token not in vocab:
    vocab[token] = index
    index += 1
vocab_size = len(vocab)
print(vocab)
Explanation: Create a vocabulary to save mappings from tokens to integer indices:
End of explanation
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
Explanation: Create an inverse vocabulary to save mappings from integer indices to tokens:
End of explanation
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
Explanation: Vectorize your sentence:
End of explanation
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
    example_sequence,
    vocabulary_size=vocab_size,
    window_size=window_size,
    negative_samples=0)
print(len(positive_skip_grams))
Explanation: Generate skip-grams from one sentence
The tf.keras.preprocessing.sequence module provides useful functions that simplify data preparation for word2vec. You can use the tf.keras.preprocessing.sequence.skipgrams to generate skip-gram pairs from the example_sequence with a given window_size from tokens in the range [0, vocab_size).
Note: negative_samples is set to 0 here, as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
End of explanation
for target, context in positive_skip_grams[:5]:
  print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
Explanation: Print a few positive skip-grams:
End of explanation
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
    true_classes=context_class,  # class that should be sampled as 'positive'
    num_true=1,  # each positive skip-gram has 1 positive context class
    num_sampled=num_ns,  # number of negative context words to sample
    unique=True,  # all the negative samples should be unique
    range_max=vocab_size,  # pick index of the samples from [0, vocab_size]
    seed=SEED,  # seed for reproducibility
    name="negative_sampling"  # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
Explanation: Negative sampling for one skip-gram
The skipgrams function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the tf.random.log_uniform_candidate_sampler function to sample num_ns number of negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: num_ns (the number of negative samples per a positive context word) in the [5, 20] range is shown to work best for smaller datasets, while num_ns in the [2, 5] range suffices for larger datasets.
End of explanation
# Add a dimension so you can use concatenation (in the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concatenate a positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label the first context word as `1` (positive) followed by `num_ns` `0`s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape the target to shape `(1,)` and context and label to `(num_ns+1,)`.
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
Explanation: Construct one training example
For a given positive (target_word, context_word) skip-gram, you now also have num_ns negative sampled context words that do not appear in the window size neighborhood of target_word. Batch the 1 positive context_word and num_ns negative context words into one tensor. This produces a set of positive skip-grams (labeled as 1) and negative samples (labeled as 0) for each target word.
End of explanation
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
Explanation: Check out the context and the corresponding labels for the target word from the skip-gram example above:
End of explanation
print("target :", target)
print("context :", context)
print("label :", label)
Explanation: A tuple of (target, context, label) tensors constitutes one training example for training your skip-gram negative sampling word2vec model. Notice that the target is of shape (1,) while the context and label are of shape (1+num_ns,)
End of explanation
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
Explanation: Summary
This diagram summarizes the procedure of generating a training example from a sentence:
Notice that the words temperature and code are not part of the input sentence. They belong to the vocabulary like certain other indices used in the diagram above.
Compile all steps into one function
Skip-gram sampling table
A large dataset means larger vocabulary with higher number of more frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as the, is, on) don't add much useful information for the model to learn from. Mikolov et al. suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The tf.keras.preprocessing.sequence.skipgrams function accepts a sampling table argument to encode probabilities of sampling any token. You can use the tf.keras.preprocessing.sequence.make_sampling_table to generate a word-frequency rank based probabilistic sampling table and pass it to the skipgrams function. Inspect the sampling probabilities for a vocab_size of 10.
End of explanation
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
  # Elements of each training example are appended to these lists.
  targets, contexts, labels = [], [], []

  # Build the sampling table for `vocab_size` tokens.
  sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)

  # Iterate over all sequences (sentences) in the dataset.
  for sequence in tqdm.tqdm(sequences):

    # Generate positive skip-gram pairs for a sequence (sentence).
    positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
        sequence,
        vocabulary_size=vocab_size,
        sampling_table=sampling_table,
        window_size=window_size,
        negative_samples=0)

    # Iterate over each positive skip-gram pair to produce training examples
    # with a positive context word and negative samples.
    for target_word, context_word in positive_skip_grams:
      context_class = tf.expand_dims(
          tf.constant([context_word], dtype="int64"), 1)
      negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
          true_classes=context_class,
          num_true=1,
          num_sampled=num_ns,
          unique=True,
          range_max=vocab_size,
          seed=seed,
          name="negative_sampling")

      # Build context and label vectors (for one target word)
      negative_sampling_candidates = tf.expand_dims(
          negative_sampling_candidates, 1)

      context = tf.concat([context_class, negative_sampling_candidates], 0)
      label = tf.constant([1] + [0]*num_ns, dtype="int64")

      # Append each element from the training example to global lists.
      targets.append(target_word)
      contexts.append(context)
      labels.append(label)

  return targets, contexts, labels
Explanation: sampling_table[i] denotes the probability of sampling the i-th most common word in a dataset. The function assumes a Zipf's distribution of the word frequencies for sampling.
Key point: The tf.random.log_uniform_candidate_sampler already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using this distribution-weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
End of explanation
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Explanation: Prepare training data for word2vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based word2vec model, you can proceed to generate training examples from a larger list of sentences!
Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
End of explanation
with open(path_to_file) as f:
  lines = f.read().splitlines()
for line in lines[:20]:
  print(line)
Explanation: Read the text from the file and print the first few lines:
End of explanation
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
Explanation: Use the non-empty lines to construct a tf.data.TextLineDataset object for the next steps:
End of explanation
# Now, create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
  lowercase = tf.strings.lower(input_data)
  return tf.strings.regex_replace(lowercase,
                                  '[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and the number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the `TextVectorization` layer to normalize, split, and map strings to
# integers. Set the `output_sequence_length` length to pad all samples to the
# same length.
vectorize_layer = layers.TextVectorization(
    standardize=custom_standardization,
    max_tokens=vocab_size,
    output_mode='int',
    output_sequence_length=sequence_length)
Explanation: Vectorize sentences from the corpus
You can use the TextVectorization layer to vectorize sentences from the corpus. Learn more about using this layer in this Text classification tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a custom_standardization function that can be used in the TextVectorization layer.
End of explanation
vectorize_layer.adapt(text_ds.batch(1024))
Explanation: Call TextVectorization.adapt on the text dataset to create vocabulary.
End of explanation
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
Explanation: Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with TextVectorization.get_vocabulary. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
End of explanation
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
Explanation: The vectorize_layer can now be used to generate vectors for each element in the text_ds (a tf.data.Dataset). Apply Dataset.batch, Dataset.prefetch, Dataset.map, and Dataset.unbatch.
End of explanation
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
Explanation: Obtain sequences from the dataset
You now have a tf.data.Dataset of integer encoded sentences. To prepare the dataset for training a word2vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the generate_training_data() defined earlier uses non-TensorFlow Python/NumPy functions, you could also use a tf.py_function or tf.numpy_function with tf.data.Dataset.map.
End of explanation
for seq in sequences[:5]:
  print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
Explanation: Inspect a few examples from sequences:
End of explanation
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
targets = np.array(targets)
contexts = np.array(contexts)[:,:,0]
labels = np.array(labels)
print('\n')
print(f"targets.shape: {targets.shape}")
print(f"contexts.shape: {contexts.shape}")
print(f"labels.shape: {labels.shape}")
Explanation: Generate training examples from sequences
sequences is now a list of int-encoded sentences. Just call the generate_training_data function defined earlier to generate training examples for the word2vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of targets, contexts, and labels should be the same, representing the total number of training examples.
End of explanation
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
Explanation: Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the tf.data.Dataset API. After this step, you would have a tf.data.Dataset object of (target_word, context_word), (label) elements to train your word2vec model!
End of explanation
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
Explanation: Apply Dataset.cache and Dataset.prefetch to improve performance:
End of explanation
class Word2Vec(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim):
    super(Word2Vec, self).__init__()
    self.target_embedding = layers.Embedding(vocab_size,
                                             embedding_dim,
                                             input_length=1,
                                             name="w2v_embedding")
    self.context_embedding = layers.Embedding(vocab_size,
                                              embedding_dim,
                                              input_length=num_ns+1)

  def call(self, pair):
    target, context = pair
    # target: (batch, dummy?)  # The dummy axis doesn't exist in TF2.7+
    # context: (batch, context)
    if len(target.shape) == 2:
      target = tf.squeeze(target, axis=1)
    # target: (batch,)
    word_emb = self.target_embedding(target)
    # word_emb: (batch, embed)
    context_emb = self.context_embedding(context)
    # context_emb: (batch, context, embed)
    dots = tf.einsum('be,bce->bc', word_emb, context_emb)
    # dots: (batch, context)
    return dots
Explanation: Model and training
The word2vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product multiplication between the embeddings of target and context words to obtain predictions for labels and compute the loss function against true labels in the dataset.
Subclassed word2vec model
Use the Keras Subclassing API to define your word2vec model with the following layers:
target_embedding: A tf.keras.layers.Embedding layer, which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are (vocab_size * embedding_dim).
context_embedding: Another tf.keras.layers.Embedding layer, which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in target_embedding, i.e. (vocab_size * embedding_dim).
dots: A tf.keras.layers.Dot layer that computes the dot product of target and context embeddings from a training pair.
flatten: A tf.keras.layers.Flatten layer to flatten the results of dots layer into logits.
With the subclassed model, you can define the call() function that accepts (target, context) pairs which can then be passed into their corresponding embedding layer. Reshape the context_embedding to perform a dot product with target_embedding and return the flattened result.
Key point: The target_embedding and context_embedding layers can be shared as well. You could also use a concatenation of both embeddings as the final word2vec embedding.
End of explanation
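The einsum in call() computes one dot product per (target, context) pair across the batch. A standalone shape check with NumPy and random data (a sketch; the dimension names are illustrative):

```python
import numpy as np

batch, num_context, embed = 3, 5, 8
word_emb = np.random.rand(batch, embed)                  # target embeddings: (batch, embed)
context_emb = np.random.rand(batch, num_context, embed)  # context embeddings: (batch, context, embed)

# 'be,bce->bc': for each batch item, dot the target vector with every context vector.
dots = np.einsum('be,bce->bc', word_emb, context_emb)
print(dots.shape)  # (3, 5)
```

Each entry dots[b, c] is the logit for context word c of batch item b, which is what the loss below consumes.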
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
                 loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
                 metrics=['accuracy'])
Explanation: Define loss function and compile model
For simplicity, you can use tf.keras.losses.CategoricalCrossentropy as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
python
def custom_loss(x_logit, y_true):
return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
It's time to build your model! Instantiate your word2vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the tf.keras.optimizers.Adam optimizer.
End of explanation
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
Explanation: Also define a callback to log training statistics for TensorBoard:
End of explanation
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
Explanation: Train the model on the dataset for some number of epochs:
End of explanation
#docs_infra: no_execute
%tensorboard --logdir logs
Explanation: TensorBoard now shows the word2vec model's accuracy and loss:
End of explanation
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
Explanation: <!-- <img class="tfo-display-only-on-site" src="images/word2vec_tensorboard.png"/> -->
Embedding lookup and analysis
Obtain the weights from the model using Model.get_layer and Layer.get_weights. The TextVectorization.get_vocabulary function provides the vocabulary to build a metadata file with one token per line.
End of explanation
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0:
continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
Explanation: Create and save the vectors and metadata files:
End of explanation
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception:
pass
Explanation: Download the vectors.tsv and metadata.tsv to analyze the obtained embeddings in the Embedding Projector:
End of explanation |
9,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
plot_sky_brightness_model
Visualizing the machine-learned (PTF) sky brightness model
Step1: Let's look at the range of the data
Step2: So there are clearly outliers. Let's look at histograms to find good fiducial values
Step3: Let's look at the dependence of FWHM on altitude
Step4: g-band
Step7: i-band
Step8: Calculate some limiting mag values in bright and dark time at zenith
Step9: Now let's walk through the various parameters and look at the model behavior.
Vary moon illumination fraction at 45 degree altitude, R-band
Step10: Not as smooth as we might expect. Let's look at the input data
Step11: low dec fields | Python Code:
# hack to get the path right
import sys
sys.path.append('..')
from ztf_sim.SkyBrightness import SkyBrightness
from ztf_sim.magnitudes import limiting_mag
import astropy.units as u
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_style('ticks')
sns.set_context('talk')
# load the training data
df = pd.read_csv('../data/ptf-iptf_diq.csv.gz')
df.head()
Explanation: plot_sky_brightness_model
Visualizing the machine-learned (PTF) sky brightness model
End of explanation
for col in df.columns:
print('{}: {}-{}'.format(col, df[col].min(), df[col].max()))
for filterkey in [1,2,4]:
wbright = (df.filterkey==filterkey) & (df.moonillf > 0.95) & (df.moonalt > 70)
print(filterkey, np.sum(wbright))
plt.hist(df.loc[wbright,'limiting_mag'],normed=True,label=f'{filterkey}')
plt.legend()
for filterkey in [1,2,4]:
w = (df.filterkey==filterkey)
print(filterkey, np.percentile(df.loc[w,'limiting_mag'],[5,95]))
Explanation: Let's look at the range of the data:
End of explanation
plt.hist(np.abs(df.moonillf))
plt.xlabel('Moon Illumination Fraction')
plt.ylabel('Number of Exposures')
sns.despine()
plt.hist(df.moonalt)
plt.xlabel('Moon Altitude (degrees)')
plt.ylabel('Number of Exposures')
sns.despine()
plt.hist(df.moon_dist)
plt.xlabel('Moon Distance (degrees)')
plt.ylabel('Number of Exposures')
sns.despine()
plt.hexbin(df['moonalt'],df['moon_dist'])
plt.xlabel('Moon Altitude (deg)')
plt.ylabel('Moon Distance (deg)')
plt.hist(df.azimuth)
plt.xlabel('Telescope Azimuth (degrees)')
plt.ylabel('Number of Exposures')
plt.xlim(0,360)
sns.despine()
plt.hist(df.altitude)
plt.xlabel('Telescope Altitude (degrees)')
plt.ylabel('Number of Exposures')
plt.xlim(0,90)
sns.despine()
plt.hist(df.sunalt)
plt.xlabel('Sun Altitude (degrees)')
plt.ylabel('Number of Exposures')
plt.xlim(-90,-10)
sns.despine()
w = (df['filterkey'] == 2)
wdark = (df['filterkey'] == 2) & (np.abs(df['moonillf']) < .1) & (df['moonalt'] < -10.)
wbright = (df['filterkey'] == 2) & (np.abs(df['moonillf']) > .9) & (df['moonalt'] > 70.)
print(df[w]['limiting_mag'].median())
normed=True
plt.hist(df[w]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
plt.hist(df[wdark]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
plt.hist(df[wbright]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
plt.xlabel('Limiting Magnitude (R-band mag arcsec$^{-2}$)')
plt.ylabel('Number of Exposures')
sns.despine()
for filterkey, filtername in {1:'g',2:'R',4:'i'}.items():
plt.figure()
w = (df['filterkey'] == filterkey)
wdark = (df['filterkey'] == filterkey) & (np.abs(df['moonillf']) < .1) & (df['moonalt'] < -10.)
wbright = (df['filterkey'] == filterkey) & (np.abs(df['moonillf']) > .9) & (df['moonalt'] > 70.)
normed=True
plt.hist(df[w]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
plt.hist(df[wdark]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
plt.hist(df[wbright]['limiting_mag'],bins=np.linspace(18,23,100),normed=normed,alpha=0.5)
print(filterkey,df.loc[wdark,'limiting_mag'].median(),df.loc[wbright,'limiting_mag'].median())
plt.xlabel(f'Limiting Magnitude ({filtername}-band mag arcsec$^{{-2}}$)')
plt.ylabel('Number of Exposures')
sns.despine()
w = (df['filterkey'] == 1)
wdark = (df['filterkey'] == 1) & (np.abs(df['moonillf']) < .1) & (df['moonalt'] < -10.)
print(df[w]['limiting_mag'].median())
plt.hist(df[w]['limiting_mag'],bins=np.linspace(18,23,100))
plt.hist(df[wdark]['limiting_mag'],bins=np.linspace(18,23,100))
plt.xlabel('Limiting Magnitude (g-band mag arcsec$^{-2}$)')
plt.ylabel('Number of Exposures')
sns.despine()
w = (df['filterkey'] == 4)
wdark = (df['filterkey'] == 4) & (np.abs(df['moonillf']) < .1) & (df['moonalt'] < -10.)
print(df[w]['limiting_mag'].median())
plt.hist(df[w]['limiting_mag'],bins=np.linspace(18,23,100))
plt.hist(df[wdark]['limiting_mag'],bins=np.linspace(18,23,100))
plt.xlabel('Limiting Magnitude (i-band mag arcsec$^{-2}$)')
plt.ylabel('Number of Exposures')
sns.despine()
Explanation: So there are clearly outliers. Let's look at histograms to find good fiducial values:
End of explanation
w = (df['filterkey'] == 2)
w &= (df['altitude'] > 20) & (df['fwhmsex'] < 5)
plt.hexbin(df[w]['altitude'],df[w]['fwhmsex'])
plt.xlabel('Altitude (deg)')
plt.ylabel('FWHM (arcsec)')
alts = np.linspace(20,90,100)
airmasses = 1./np.cos(np.radians(90.-alts))
plt.plot(alts,2*airmasses**(3./5))
Explanation: Let's look at the dependence of FWHM on altitude:
End of explanation
w = (df['filterkey'] == 1)
w &= (df['altitude'] > 20) & (df['fwhmsex'] < 5)
plt.hexbin(df[w]['altitude'],df[w]['fwhmsex'])
plt.xlabel('Altitude (deg)')
plt.ylabel('FWHM (arcsec)')
plt.plot(alts,2.4*airmasses**(3./5))
Explanation: g-band
End of explanation
w = (df['filterkey'] == 4)
w &= (df['altitude'] > 20) & (df['fwhmsex'] < 5)
plt.hexbin(df[w]['altitude'],df[w]['fwhmsex'])
plt.xlabel('Altitude (deg)')
plt.ylabel('FWHM (arcsec)')
plt.plot(alts,2.0*airmasses**(3./5))
# load our model
Sky = SkyBrightness()
def test_conditions(moonillf=0,moonalt=40,moon_dist=60.,azimuth=120,altitude=60.,sunalt=-35,filter_id=2):
"""Convenience function for building inputs to SkyBrightness.
Inputs should be scalars except for one and only one vector.
Note that manual inputs can yield unphysical moonalt/moon_dist pairs, but the real simulator will
calculate these correctly."""
moonillf = np.atleast_1d(moonillf)
moonalt = np.atleast_1d(moonalt)
moon_dist = np.atleast_1d(moon_dist)
azimuth = np.atleast_1d(azimuth)
altitude = np.atleast_1d(altitude)
sunalt = np.atleast_1d(sunalt)
filter_id = np.atleast_1d(filter_id)
maxlen = np.max([len(moonillf), len(moonalt), len(moon_dist), len(azimuth), len(altitude), len(sunalt),
len(filter_id)])
def blow_up_array(arr,maxlen=maxlen):
if (len(arr) == maxlen):
return arr
elif (len(arr) == 1):
return np.ones(maxlen)*arr[0]
else:
raise ValueError
moonillf = blow_up_array(moonillf)
moonalt = blow_up_array(moonalt)
moon_dist = blow_up_array(moon_dist)
azimuth = blow_up_array(azimuth)
altitude = blow_up_array(altitude)
sunalt = blow_up_array(sunalt)
filter_id = blow_up_array(filter_id)
pars = {'moonillf':moonillf, 'moonalt':moonalt, 'moon_dist':moon_dist, 'azimuth':azimuth, 'altitude':altitude,
'sunalt':sunalt,'filter_id':filter_id}
return pd.DataFrame(pars,index=np.arange(maxlen))
def select_data(df, moonillf=0,moonalt=40,moon_dist=60.,azimuth=120,altitude=60,sunalt=-35,filter_id=2):
"""Return a boolean mask selecting data around the fiducial inputs.
Set a parameter to None to skip that comparison."""
# start with boolean True array
w = df['azimuth'] == df['azimuth']
def anding(df,par,value,frac=0.2):
if value is None:
return True
else:
lima = value*(1-frac)
limb = value*(1+frac)
llim = np.atleast_1d(np.where(lima < limb, lima, limb))[0]
ulim = np.atleast_1d(np.where(lima > limb, lima, limb))[0]
# figure out which is smaller
return (df[par] >= llim) & (df[par] <= ulim)
w &= anding(df, 'moonillf', moonillf)
w &= anding(df, 'moonalt', moonalt)
w &= anding(df, 'moon_dist', moon_dist)
w &= anding(df, 'azimuth', azimuth)
w &= anding(df, 'altitude', altitude)
w &= anding(df, 'sunalt', sunalt)
# dataframe uses filterkey
w &= (df['filterkey'] == filter_id)
return w
# show that it works!
test_conditions(altitude=np.linspace(0,90,5))
Explanation: i-band
End of explanation
from ztf_sim.magnitudes import limiting_mag
fids = [1,2,3]
dark_condition = test_conditions(moonillf=0,moonalt=0,altitude=90,sunalt=-80,filter_id=fids)
sky_brightness = Sky.predict(dark_condition)
for fid, sky in zip(fids,sky_brightness.values):
print(fid, sky, limiting_mag(30, 2.0, sky, filter_id = fid, altitude=90, SNR=5.))
bright_condition = test_conditions(moonillf=1.,moonalt=70.,moon_dist=20.,altitude=90,sunalt=-80,filter_id=[1,2,3])
sky_brightness = Sky.predict(bright_condition)
for fid, sky in zip(fids,sky_brightness.values):
print(fid, sky, limiting_mag(30, 2.0, sky, filter_id = fid, altitude=90, SNR=5.))
Explanation: Calculate some limiting mag values in bright and dark time at zenith:
End of explanation
moonillfs = np.linspace(0,1,20)
dft = test_conditions(moonillf=moonillfs)
skies = Sky.predict(dft)
plt.plot(moonillfs,skies)
plt.gca().invert_yaxis()
plt.xlabel('Moon Illumination Fraction')
plt.ylabel('Sky Brightness (R-band mag arcsec$^{-2}$)')
sns.despine()
w = select_data(df,moonillf=None)#,moonalt=45, moon_dist=45)
plt.scatter(np.abs(df[w]['moonillf']),df[w]['limiting_mag'],alpha=0.3)
plt.gca().invert_yaxis()
plt.xlabel('Moon Illumination Fraction')
plt.ylabel('Limiting Mag (R-band mag)')
sns.despine()
moonillfs = np.linspace(0,1,20)
dft = test_conditions(moonillf=moonillfs,filter_id=1)
skies = Sky.predict(dft)
plt.plot(moonillfs,skies)
plt.gca().invert_yaxis()
plt.xlabel('Moon Illumination Fraction')
plt.ylabel('Sky Brightness (g-band mag arcsec$^{-2}$)')
sns.despine()
Explanation: Now let's walk through the various parameters and look at the model behavior.
Vary moon illumination fraction at 45 degree altitude, R-band:
TOFIX
forgot to calculate limiting mag from the sky brightness through here
End of explanation
moonillfs = np.linspace(0,1,20)
dft = test_conditions(moonillf=moonillfs,filter_id=3)
skies = Sky.predict(dft)
plt.plot(moonillfs,skies)
plt.gca().invert_yaxis()
plt.xlabel('Moon Illumination Fraction')
plt.ylabel('Sky Brightness (i-band mag arcsec$^{-2}$)')
sns.despine()
moon_dists = np.linspace(0,180,30)
dft = test_conditions(moonillf=0.8,moon_dist=moon_dists)
skies = Sky.predict(dft)
plt.plot(moon_dists,skies)
plt.gca().invert_yaxis()
plt.xlabel('Moon Distance (deg)')
plt.ylabel('Sky Brightness (R-band mag arcsec$^{-2}$)')
sns.despine()
azimuths = np.linspace(0,360,30)
dft = test_conditions(azimuth=azimuths)
skies = Sky.predict(dft)
plt.plot(azimuths,skies)
plt.gca().invert_yaxis()
plt.xlabel('Telescope Azimuth (deg)')
plt.ylabel('Sky brightness (R-band mag arcsec$^{-2}$)')
sns.despine()
altitudes = np.linspace(0,90,30)
dft = test_conditions(altitude=altitudes)
skies = Sky.predict(dft)
plt.plot(altitudes,skies)
plt.gca().invert_yaxis()
plt.xlabel('Telescope Altitude (deg)')
plt.ylabel('Sky Brightness (R-band mag arcsec$^{-2}$)')
sns.despine()
sunalts = np.linspace(-10,-80,30)
dft = test_conditions(sunalt=sunalts)
skies = Sky.predict(dft)
plt.plot(sunalts,skies)
plt.gca().invert_yaxis()
plt.xlabel('Sun Altitude (deg)')
plt.ylabel('Limiting Magnitude (R-band mag arcsec$^{-2}$)')
sns.despine()
Explanation: Not as smooth as we might expect. Let's look at the input data:
These data are surprisingly sparse!
End of explanation
filter_id = 2
seeing=2.0
alts = np.linspace(25,90,30)
dft = test_conditions(altitude=alts, azimuth=0, filter_id=filter_id)
skies = Sky.predict(dft)
limmag = limiting_mag(30*u.second, seeing, skies, filter_id*np.ones(len(alts)), alts )
#plt.plot(alts,skies)
plt.plot(alts, limmag)
plt.gca().invert_yaxis()
plt.xlabel('Altitude (deg)')
plt.ylabel('Limiting Magnitude (mag)')
sns.despine()
from ztf_sim.utils import altitude_to_airmass
altitude_to_airmass(25)
def slot_metric(limiting_mag):
metric = 10.**(0.6 * (limiting_mag - 21))
# lock out -99 limiting mags even more aggressively
return metric.where(limiting_mag > 0, -0.99)
metric = slot_metric(limmag)
plt.plot(alts, metric)
plt.xlabel('Altitude (deg)')
plt.ylabel('Metric value')
sns.despine()
seeing=2.0
alts = np.linspace(25,90,30)
for filter_id in [1,2,3]:
dft = test_conditions(altitude=alts, azimuth=0, filter_id=filter_id)
skies = Sky.predict(dft)
limmag = limiting_mag(30*u.second, seeing, skies, filter_id*np.ones(len(alts)), alts )
metric = slot_metric(limmag)
#airmass = altitude_to_airmass(alts)
plt.plot(alts, metric/metric.iloc[-1], label=filter_id)
plt.plot(alts,-1e-4*(alts-90)**2.+1)
plt.legend()
plt.xlabel('Altitude (deg)')
plt.ylabel('Metric value')
sns.despine()
seeing=2.0
alts = np.linspace(25,90,30)
for filter_id in [1,2,3]:
dft = test_conditions(altitude=alts, azimuth=0, filter_id=filter_id)
skies = Sky.predict(dft)
limmag = limiting_mag(30*u.second, seeing, skies, filter_id*np.ones(len(alts)), alts )
metric = slot_metric(limmag)
#airmass = altitude_to_airmass(alts)
plt.plot(alts, metric/(-1e-4*(alts-90)**2.+1), label=filter_id)
plt.legend()
plt.xlabel('Altitude (deg)')
plt.ylabel('Metric value')
sns.despine()
Explanation: low dec fields
End of explanation |
9,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Introduction to Variables
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Creating a variable
To create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.
Step3: A variable looks and acts like a tensor, and, in fact, is a data structure backed by tf.Tensor. Like tensors, they have a dtype and a shape, and can be exported to NumPy.
Step4: Most tensor operations work on variables as expected, although variables cannot be reshaped.
Step5: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
Step6: If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Creating new variables from existing variables duplicates the backing tensors. Two variables will not share the same memory.
Step7: Lifecycles, naming, and watching
In Python-based TensorFlow, tf.Variable instances have the same lifecycle as other Python objects: when there are no references to a variable, it is automatically deallocated.
Variables can also be named, which can help you track and debug them. You can give two variables the same name.
Step8: Variable names are preserved when saving and loading models. By default, variables in models acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
Although variables are important for differentiation, some variables will not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that would not need gradients is a training step counter.
Step9: Placing variables and tensors
For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with their dtype. This means most variables are placed on a GPU if one is available.
However, you can override this. In this snippet, a float tensor and a variable are placed on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.
Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
Running this notebook on different backends with and without a GPU will show different logging. Note that logging device placement must be turned on at the start of the session.
Step10: It's possible to set the location of a variable or tensor on one device and do the computation on another device. This will introduce delay, as data needs to be copied between the devices.
You might do this, however, if you had multiple GPU workers but only want one copy of the variables. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
Explanation: Introduction to Variables
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/variable"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/variable.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/variable.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/variable.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
A TensorFlow variable is the recommended way to represent shared, persistent state your program manipulates. This guide covers how to create, update, and manage instances of tf.Variable in TensorFlow.
Variables are created and tracked via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it. Specific ops allow you to read and modify the values of this tensor. Higher-level libraries like tf.keras use tf.Variable to store model parameters.
Setup
This notebook discusses variable placement. If you want to see on what device your variables are placed, uncomment this line.
End of explanation
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
Explanation: Creating a variable
To create a variable, provide an initial value. The tf.Variable will have the same dtype as the initialization value.
End of explanation
print("Shape: ", my_variable.shape)
print("DType: ", my_variable.dtype)
print("As NumPy: ", my_variable.numpy())
Explanation: A variable looks and acts like a tensor, and, in fact, is a data structure backed by tf.Tensor. Like tensors, they have a dtype and a shape, and can be exported to NumPy.
End of explanation
print("A variable:", my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, [1,4]))
Explanation: Most tensor operations work on variables as expected, although variables cannot be reshaped.
End of explanation
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e:
print(f"{type(e).__name__}: {e}")
Explanation: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
End of explanation
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
Explanation: If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Creating new variables from existing variables duplicates the backing tensors. Two variables will not share the same memory.
End of explanation
# Create a and b; they will have the same name but will be backed by
# different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
Explanation: Lifecycles, naming, and watching
In Python-based TensorFlow, tf.Variable instances have the same lifecycle as other Python objects: when there are no references to a variable, it is automatically deallocated.
Variables can also be named, which can help you track and debug them. You can give two variables the same name.
End of explanation
step_counter = tf.Variable(1, trainable=False)
Explanation: Variable names are preserved when saving and loading models. By default, variables in models acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
Although variables are important for differentiation, some variables will not need to be differentiated. You can turn off gradients for a variable by setting trainable to false at creation. An example of a variable that would not need gradients is a training step counter.
End of explanation
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
Explanation: Placing variables and tensors
For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with their dtype. This means most variables are placed on a GPU if one is available.
However, you can override this. In this snippet, a float tensor and a variable are placed on the CPU, even if a GPU is available. By turning on device placement logging (see Setup), you can see where the variable is placed.
Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
Running this notebook on different backends with and without a GPU will show different logging. Note that logging device placement must be turned on at the start of the session.
End of explanation
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
Explanation: It's possible to set the location of a variable or tensor on one device and do the computation on another device. This will introduce delay, as data needs to be copied between the devices.
You might do this, however, if you had multiple GPU workers but only want one copy of the variables.
End of explanation |
9,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Labeled Data
Labeled Data - contains the features and target attribute with correct answer
Training Set
Part of labeled data that is used for training the model. 60-70% of the labeled data is used for training
Evaluation Set
30-40% of the labeled data is reserved for checking prediction quality with known correct answer
Training Example
Training Example or a row. Contains one complete observation of features and the correct answer
Features
Known by different names
Step1: <h2>Binary Classification</h2>
Predict a binary class as output based on given features
<p>Examples
Step2: <h2>Multiclass Classification</h2>
Predict a class as output based on given features
Examples
Step3: Data Types | Python Code:
# read the bike train csv file
df = pd.read_csv(regression_example)
df.head()
df.corr()
df['count'].describe()
df.season.value_counts()
df.holiday.value_counts()
df.workingday.value_counts()
df.weather.value_counts()
df.temp.describe()
Explanation: Labeled Data
Labeled Data - contains the features and target attribute with correct answer
Training Set
Part of labeled data that is used for training the model. 60-70% of the labeled data is used for training
Evaluation Set
30-40% of the labeled data is reserved for checking prediction quality with known correct answer
Training Example
Training Example or a row. Contains one complete observation of features and the correct answer
Features
Known by different names: Columns, Features, variables, attributes. These are values that define a particular example. Values for some of the features could be missing or invalid in real-world datasets. So, some cleaning may have to be done before feeding to Machine Learning
Input Feature - Variables that are provided as input to the model
Target Attribute - Variable that model needs to predict
<h2>AWS ML Data Types</h2>
<h5>Data from training set needs to be mapped to one of these data types</h5>
<h4>1. Binary. Can contain only two states, 1/0.</h4>
<ul>
<li>Positive Values: 1, y, yes, t, true</li>
<li>Negative Values: 0, n, no, f, false</li>
<li>Values are case-insensitive and AWS ML converts the above values to 1 or 0</li>
</ul>
<h4>2. Categorical. Qualitative attribute that describes something about an observation.</h4>
<h5>Example</h5>
<ul>
<li>Day of the week: Sunday, Monday, Tuesday,...</li>
<li>Size: XL, L, M, S </li>
<li>Month: 1, 2, 3, 4,...12</li>
<li>Season: 1, 2, 3, 4</li>
</ul>
<h4>3. Numeric. Measurements, Counts are represented as numeric types</h4>
<ul>
<li>Discrete: 20 cars, 30 trees, 2 ships</li>
<li>Continous: 98.7 degree F, 32.5 KMPH, 2.6 liters </li>
</ul>
<h4>4. Text. String of words. AWS ML automatically tokenizes at white space boundary</h4>
<h5>Example</h5>
<ul>
<li>product description, comment, reviews</li>
<li>Tokenized: 'Wildlife Photography for beginners' => {‘Wildlife’, ‘Photography’, ‘for’, ‘beginners’}</li>
</ul>
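As a rough sketch of the case-insensitive binary mapping described above (the helper name and the error handling are our own, not part of AWS ML):

```python
# Value lists come from the text above; anything else is rejected.
POSITIVE = {"1", "y", "yes", "t", "true"}
NEGATIVE = {"0", "n", "no", "f", "false"}

def to_aws_binary(value):
    """Map a case-insensitive binary token to 1 or 0."""
    v = str(value).strip().lower()
    if v in POSITIVE:
        return 1
    if v in NEGATIVE:
        return 0
    raise ValueError(f"not a recognized binary value: {value!r}")

print([to_aws_binary(v) for v in ["Yes", "F", "t", 0]])  # [1, 0, 1, 0]
```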
<h1>Algorithms</h1>
<h2>Linear Regression</h2>
Predict a numeric value as output based on given features
<p>Examples:
What is the market value of a car?
What is the current value of a House?
For a product, how many units can we sell?</p>
<h5>Concrete Example</h5>
Kaggle Bike Rentals - Predict the number of bike rentals every hour. The total should include both casual
rentals and rentals by registered users
Input Columns/Features = ['datetime', 'season', 'holiday', 'workingday', 'weather', 'temp',
'atemp', 'humidity', 'windspeed']
Output Column/Target Attribute = 'count'<br>
count = casual + registered <br>
Option 1: Predict casual and registered counts separately and then sum it up <br>
Option 2: Predict count directly
End of explanation
df = pd.read_csv(binary_class_example)
df.head()
df.columns
df.corr()
df.age.value_counts().head()
df.diabetes_class.value_counts()
Explanation: <h2>Binary Classification</h2>
Predict a binary class as output based on given features
<p>Examples: Do we need to follow up on a customer review?
Is this transaction fraudulent or a valid one?
Are there signs of onset of a medical condition or disease?
Is this considered junk food or not?
</p>
<h5>Concrete Example</h5>
pima-indians-diabetes - Predict if a given patient has a risk of getting diabetes<br>
https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes
Input Columns/Features = ['preg_count', 'glucose_concentration', 'diastolic_bp',
'triceps_skin_fold_thickness', 'two_hr_serum_insulin', 'bmi',
'diabetes_pedi', 'age']
Output Column/Target Attribute = 'diabetes_class'. 1 = diabetic, 0 = normal
End of explanation
df = pd.read_csv(multi_class_example)
np.random.seed(5)
# print 10 random rows
df.iloc[np.random.randint(0, df.shape[0], 10)]
df.columns
df['class'].value_counts()
df.sepal_length.describe()
Explanation: <h2>Multiclass Classification</h2>
Predict a class as output based on given features
Examples:
1. How healthy is the food based on given ingredients?
Classes: Healthy, Moderate, Occasional, Avoid.
2. Identify type of mushroom based on features
3. What type of advertisement can be placed for this search?
<h5>Concrete Example</h5>
Iris Classification - Predict the type of Iris plant based on flower measurements<br>
https://archive.ics.uci.edu/ml/datasets/Iris
Input Columns / Features = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
Output Column / Target Attribute = 'class'.
Class: Iris-setosa, Iris-virginica, Iris-versicolor
End of explanation
df = pd.read_csv(data_type_example)
df.columns
df[['description', 'favourites_count', 'favorited', 'text', 'screen_name', 'trainingLabel']].head()
Explanation: Data Types
End of explanation |
9,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<hr>
Patrick BROCKMANN - LSCE (Climate and Environment Sciences Laboratory)<br>
<img align="left" width="40%" src="http
Step1: "Classic" use with cell magic
Step2: Explore interactive widgets
Step3: Another example with a map | Python Code:
%load_ext ferretmagic
Explanation: <hr>
Patrick BROCKMANN - LSCE (Climate and Environment Sciences Laboratory)<br>
<img align="left" width="40%" src="http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png" ><br><br>
<hr>
Updated: 2019/11/13
Load the ferret extension
End of explanation
%%ferret -s 600,400
set text/font=arial
use monthly_navy_winds.cdf
show data/full
plot uwnd[i=@ave,j=@ave,l=@sbx:12]
Explanation: "Classic" use with cell magic
End of explanation
from ipywidgets import interact
@interact(var=['uwnd','vwnd'], smooth=(1, 20), vrange=(0.5,5,0.5))
def plot(var='uwnd', smooth=5, vrange=1) :
%ferret_run -s 600,400 'ppl color 6, 70, 70, 70; plot/grat=(dash,color=6)/vlim=-%(vrange)s:%(vrange)s %(var)s[i=@ave,j=@ave], %(var)s[i=@ave,j=@ave,l=@sbx:%(smooth)s]' % locals()
Explanation: Explore interactive widgets
End of explanation
# The line of code to make interactive
%ferret_run -q -s 600,400 'cancel mode logo; \
ppl color 6, 70, 70, 70; \
shade/grat=(dash,color=6) %(var)s[l=%(lstep)s] ; \
go land' % {'var':'uwnd','lstep':'3'}
import ipywidgets as widgets
from ipywidgets import interact
play = widgets.Play(
value=1,
min=1,
max=10,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider(
min=1,
max=10
)
widgets.jslink((play, 'value'), (slider, 'value'))
a=widgets.HBox([play, slider])
@interact(var=['uwnd','vwnd'], lstep=slider, lstep1=play)
def plot(var='uwnd', lstep=1, lstep1=1) :
%ferret_run -q -s 600,400 'cancel mode logo; \
ppl color 6, 70, 70, 70; \
shade/grat=(dash,color=6)/lev=(-inf)(-10,10,2)(inf)/pal=mpl_Div_PRGn.spk %(var)s[l=%(lstep)s] ; \
go land' % locals()
Explanation: Another example with a map
End of explanation |
9,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommender Engine
Perhaps the most famous example of a recommender engine in the Data Science world was the Netflix competition started in 2006, in which teams from all around the world competed to improve on Netflix's recommendation algorithm. The final prize of $1,000,000 was awarded to a team which developed a solution which had about a 10% increase in accuracy over Netflix's. In fact, this competition resulted in the development of some new techniques which are still in use. For more reading on this topic, see Simon Funk's blog post.
In this exercise, you will build a collaborative-filter recommendation engine using both a cosine similarity approach and SVD (singular value decomposition). Before proceeding, download the MovieLens dataset.
Importing and Pre-processing the Data
First familiarize yourself with the data you downloaded, and then import the u.data file and take a look at the first few rows.
Step1: Before building any recommendation engines, we'll have to get the data into a useful form. Do this by first splitting the data into testing and training sets, and then by constructing two new dataframes whose columns are each unique movie and rows are each unique user, filling in 0 for missing values.
Step7: Now split the data into a training and test set, using a ratio 80/20 for train/test.
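One way this pre-processing could look as a sketch (the inline four-row sample stands in for the tab-separated u.data file, and the column names are our own choice):

```python
import io
import pandas as pd

# Tiny stand-in for MovieLens u.data (tab-separated columns:
# user_id, item_id, rating, timestamp); the real file has 100k rows.
raw = io.StringIO("1\t1\t5\t0\n1\t2\t3\t0\n2\t1\t4\t0\n3\t2\t1\t0\n")
ratings = pd.read_csv(raw, sep="\t",
                      names=["user_id", "item_id", "rating", "timestamp"])

# 80/20 train/test split, then a users-by-movies matrix with 0 for missing.
train = ratings.sample(frac=0.8, random_state=0)
test = ratings.drop(train.index)
train_matrix = train.pivot_table(index="user_id", columns="item_id",
                                 values="rating", fill_value=0)
print(train_matrix.shape)
```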
Cosine Similarity
Building a recommendation engine can be thought of as "filling in the holes" in the sparse matrix you made above. For example, take a look at the MovieLens data. You'll see that the matrix is mostly zeros. Our task here is to predict what a given user will rate a given movie depending on the user's tastes. To determine a user's taste, we can use cosine similarity, which is given by $$s_u^{cos}(u_k,u_a)
= \frac{ u_k \cdot u_a }{ \left \| u_k \right \| \left \| u_a \right \| }
= \frac{ \sum x_{k,m}x_{a,m} }{ \sqrt{\sum x_{k,m}^2\sum x_{a,m}^2} }$$
for users $u_k$ and $u_a$ on ratings given by $x_{k,m}$ and $x_{a,m}$. This is just the cosine of the angle between the two vectors. Likewise, this can also be calculated for the similarity between two items, $i_m$ and $i_b$, given by $$s_u^{cos}(i_m,i_b)
= \frac{ i_m \cdot i_b }{ \left \| i_m \right \| \left \| i_b \right \| }
= \frac{ \sum x_{a,m} x_{a,b} }{ \sqrt{ \sum x_{a,m}^2 \sum x_{a,b}^2 } }$$
Then, the predicted rating for user $u_k$ on item $m$ is given by $$\hat{x}_{k,m} = \bar{x}_{k} + \frac{\sum\limits_{u_a} s_u^{cos}(u_k,u_a) (x_{a,m})}{\sum\limits_{u_a}|s_u^{cos}(u_k,u_a)|}$$ and for items given by $$\hat{x}_{k,m} = \frac{\sum\limits_{i_b} s_u^{cos}(i_m,i_b) (x_{k,b}) }{\sum\limits_{i_b}|s_u^{cos}(i_m,i_b)|}$$
Use these ideas to construct a class cos_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus
Step13: SVD
Above we used Cosine Similarity to fill the holes in our sparse matrix. Another, and much more popular, method for matrix completion is called the Singular Value Decomposition. SVD factors our data matrix into three smaller matrices, given by $$\textbf{M} = \textbf{U} \bf{\Sigma} \textbf{V}^*$$ where $\textbf{M}$ is our data matrix, $\textbf{U}$ is a unitary matrix containing the latent variables in the user space, $\bf{\Sigma}$ is a diagonal matrix containing the singular values of $\textbf{M}$, and $\textbf{V}$ is a unitary matrix containing the latent variables in the item space. For more information on the SVD see the Wikipedia article.
SciPy contains a routine (scipy.sparse.linalg.svds) to estimate a truncated SVD of a sparse matrix. By making estimates of the matrices $\textbf{U}$, $\bf{\Sigma}$, and $\textbf{V}$, and then by multiplying them together, we can reconstruct an estimate for the matrix $\textbf{M}$ with all the holes filled in.
Use these ideas to construct a class svd_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus
Step14: Overall RMSE of about 0.98
Step15: 7 is the optimal value of k in this case. Note that no cross-validation was performed!
Now we'll build the best recommender and recommend 5 movies to each user. | Python Code:
# Importing the data
import pandas as pd
import numpy as np
header = ['user_id', 'item_id', 'rating', 'timestamp']
data_movie_raw = pd.read_csv('../data/ml-100k/u.data', sep='\t', names=header)
data_movie_raw.head()
Explanation: Recommender Engine
Perhaps the most famous example of a recommender engine in the Data Science world was the Netflix competition started in 2006, in which teams from all around the world competed to improve on Netflix's reccomendation algorithm. The final prize of $1,000,000 was awarded to a team which developed a solution which had about a 10% increase in accuracy over Netflix's. In fact, this competition resulted in the development of some new techniques which are still in use. For more reading on this topic, see Simon Funk's blog post
In this exercise, you will build a collaborative-filter recommendatin engine using both a cosine similarity approach and SVD (singular value decomposition). Before proceding download the MovieLens dataset.
Importing and Pre-processing the Data
First familiarize yourself with the data you downloaded, and then import the u.data file and take a look at the first few rows.
End of explanation
from sklearn.model_selection import train_test_split
# First split into train and test sets
data_train_raw, data_test_raw = train_test_split(data_movie_raw, train_size = 0.8)
# Turning to pivot tables
data_train = data_train_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
data_test = data_test_raw.pivot_table(index = 'user_id', columns = 'item_id', values = 'rating').fillna(0)
# Print the first few rows
data_train.head()
Explanation: Before building any recommendation engines, we'll have to get the data into a useful form. Do this by first splitting the data into testing and training sets, and then by constructing two new dataframes whose columns are each unique movie and rows are each unique user, filling in 0 for missing values.
End of explanation
# Libraries
import pandas as pd
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
class cos_engine:
def __init__(self, data_all):
"""Constructor for cos_engine class

Args:
    data_all: Raw dataset containing all movies, used to build
        a list of movies already seen by each user.
"""
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each user has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
"""Performs cosine similarity on a sparse matrix data_train

Args:
    data_train: A pandas data frame used to estimate cosine similarity
"""
# Create a copy of the dataframe
self.data_train = data_train.copy()
# Save the indices and column names
self.users = self.data_train.index
self.items = self.data_train.columns
# Compute mean vectors
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
# Get similarity matrices and compute sums for normalization
# For non adjusted cosine similarity, neglect subtracting the means.
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
self.user_cos = cosine_similarity(self.data_train_adj)
self.item_cos = cosine_similarity(self.data_train_adj.T)
self.user_cos_sum = np.abs(self.user_cos).sum(axis = 1)
self.item_cos_sum = np.abs(self.item_cos).sum(axis = 1)
self.user_cos_sum = self.user_cos_sum.reshape(self.user_cos_sum.shape[0], 1)
self.item_cos_sum = self.item_cos_sum.reshape(self.item_cos_sum.shape[0], 1)
def predict(self, method = 'user'):
"""Predicts using Cosine Similarity

Args:
    method: A string indicating what method to use, user or item.
        Default user.

Returns:
    A pandas dataframe containing the prediction values
"""
# Store prediction locally and turn to dataframe
if method == 'user':
self.pred = self.user_means[:, np.newaxis] + ((self.user_cos @ self.data_train_adj) / self.user_cos_sum)
self.pred = pd.DataFrame(self.pred, index = data_train.index, columns = data_train.columns)
elif method == 'item':
self.pred = self.item_means[:, np.newaxis] + ((self.data_train @ self.item_cos) / self.item_cos_sum.T).T
self.pred = pd.DataFrame(self.pred, columns = data_train.index.values, index = data_train.columns)
return(self.pred)
def test(self, data_test, root = False):
"""Tests fit given test data in data_test

Args:
    data_test: A pandas dataframe containing test data
    root: A boolean indicating whether to return the RMSE.
        Default False

Returns:
    The Mean Squared Error of the fit if root = False, the Root
    Mean Squared Error otherwise.
"""
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count predictions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
"""Recommends movies for a given user

Args:
    user: The user id to recommend movies for
    num_recs: The number of recommendations to make

Returns:
    The item ids of the top num_recs recommended movies.
"""
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
cos_en = cos_engine(data_movie_raw)
cos_en.fit(data_train)
# Predict using user similarity
pred1 = cos_en.predict(method = 'user')
err = cos_en.test(data_test, root = True)
rec1 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1:", rec1.values)
# And now with item
pred2 = cos_en.predict(method = 'item')
err = cos_en.test(data_test, root = True)
rec2 = cos_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1 (item-based):", rec2.values)
Explanation: Now split the data into a training and test set, using a ratio 80/20 for train/test.
Cosine Similarity
Building a recommendation engine can be thought of as "filling in the holes" in the sparse matrix you made above. For example, take a look at the MovieLense data. You'll see that that matrix is mostly zeros. Our task here is to predict what a given user will rate a given movie depending on the users tastes. To determine a users taste, we can use cosine similarity which is given by $$s_u^{cos}(u_k,u_a)
= \frac{ u_k \cdot u_a }{ \left \| u_k \right \| \left \| u_a \right \| }
= \frac{ \sum x_{k,m}x_{a,m} }{ \sqrt{\sum x_{k,m}^2\sum x_{a,m}^2} }$$
for users $u_k$ and $u_a$ on ratings given by $x_{k,m}$ and $x_{a,m}$. This is just the cosine of the angle between the two vectors. Likewise, this can also be calculated for the similarity between two items, $i_m$ and $i_b$, given by $$s_u^{cos}(i_m,i_b)
= \frac{ i_m \cdot i_b }{ \left \| i_m \right \| \left \| i_b \right \| }
= \frac{ \sum x_{a,m} x_{a,b} }{ \sqrt{ \sum x_{a,m}^2 \sum x_{a,b}^2 } }$$
Then, the predicted rating for user $u_k$ on item $m$ is given by $$\hat{x}_{k,m} = \bar{x}_{k} + \frac{\sum\limits_{u_a} s_u^{cos}(u_k,u_a) (x_{a,m})}{\sum\limits_{u_a}|s_u^{cos}(u_k,u_a)|}$$ and for items given by $$\hat{x}_{k,m} = \frac{\sum\limits_{i_b} s_u^{cos}(i_m,i_b) (x_{k,b}) }{\sum\limits_{i_b}|s_u^{cos}(i_m,i_b)|}$$
Use these ideas to construct a class cos_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Use adjusted cosine similarity.
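As a quick sanity check on the formulas above, here is a minimal sketch of user-user cosine similarity on a toy ratings matrix (the matrix values below are made up for illustration, not taken from MovieLens):

```python
import numpy as np

# Toy ratings matrix: 3 users x 4 movies, 0 = unrated (illustrative values only)
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
])

def cosine_sim(u, v):
    # Cosine of the angle between two rating vectors
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# Users 0 and 1 rate similarly; user 2 has roughly opposite tastes,
# so the first similarity should come out larger than the second.
print(cosine_sim(R[0], R[1]))
print(cosine_sim(R[0], R[2]))
```

Subtracting each user's mean rating from their row before taking the cosine gives the adjusted cosine similarity mentioned in the bonus.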
End of explanation
# Libraries
import pandas as pd
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds
class svd_engine:
def __init__(self, data_all, k = 6):
"""Constructor for svd_engine class

Args:
    data_all: Raw dataset containing all movies, used to build
        a list of movies already seen by each user.
    k: The number of latent variables to fit
"""
self.k = k
# Create copy of data
self.data_all = data_all.copy()
# Now build a list of movies each user has seen
self.seen = []
for user in data_all.user_id.unique():
cur_seen = {}
cur_seen["user"] = user
cur_seen["seen"] = self.data_all[data_all.user_id == user].item_id
self.seen.append(cur_seen)
def fit(self, data_train):
"""Performs SVD on a sparse matrix data_train

Args:
    data_train: A pandas data frame used to estimate the SVD

Returns:
    Matrices u, s, and vt of the SVD
"""
# Save local copy of data
self.data_train = data_train.copy()
# Compute adjusted matrix
self.user_means = self.data_train.replace(0, np.nan).mean(axis = 1)
self.item_means = self.data_train.T.replace(0, np.nan).mean(axis = 1)
self.data_train_adj = (self.data_train.replace(0, np.nan).T - self.user_means).fillna(0).T
# Save the indices and column names
self.users = data_train.index
self.items = data_train.columns
# Train the model
self.u, self.s, self.vt = svds(self.data_train_adj, k = self.k)
return(self.u, np.diag(self.s), self.vt)
def predict(self):
"""Predicts using SVD

Returns:
    A pandas dataframe containing the prediction values
"""
# Store prediction locally and turn to dataframe, adding the mean back
self.pred = pd.DataFrame(self.u @ np.diag(self.s) @ self.vt,
index = self.users,
columns = self.items)
self.pred = self.user_means[:, np.newaxis] + self.pred
return(self.pred)
def test(self, data_test, root = False):
"""Tests fit given test data in data_test

Args:
    data_test: A pandas dataframe containing test data
    root: A boolean indicating whether to return the RMSE.
        Default False

Returns:
    The Mean Squared Error of the fit if root = False, the Root
    Mean Squared Error otherwise.
"""
# Build a list of common indices (users) in the train and test set
row_idx = list(set(self.pred.index) & set(data_test.index))
# Prime the variables for loop
err = [] # To hold the Sum of Squared Errors
N = 0 # To count predictions for MSE calculation
for row in row_idx:
# Get the rows
test_row = data_test.loc[row, :]
pred_row = self.pred.loc[row, :]
# Get indices of nonzero elements in the test set
idx = test_row.index[test_row.nonzero()[0]]
# Get only common movies
temp_test = test_row[idx]
temp_pred = pred_row[idx]
# Compute error and count
temp_err = ((temp_pred - temp_test)**2).sum()
N = N + len(idx)
err.append(temp_err)
mse = np.sum(err) / N
# Switch for RMSE
if root:
err = np.sqrt(mse)
else:
err = mse
return(err)
def recommend(self, user, num_recs):
"""Recommends movies for a given user

Args:
    user: The user id to recommend movies for
    num_recs: The number of recommendations to make

Returns:
    The item ids of the top num_recs recommended movies.
"""
# Get list of already seen movies for this user
idx_seen = next(item for item in self.seen if item["user"] == user)["seen"]
# Remove already seen movies and recommend
rec = self.pred.loc[user, :].drop(idx_seen).nlargest(num_recs)
return(rec.index)
# Testing
svd_en = svd_engine(data_movie_raw, k = 20)
svd_en.fit(data_train)
svd_en.predict()
err = svd_en.test(data_test, root = True)
rec = svd_en.recommend(1, 5)
print("RMSE:", err)
print("Recommendations for user 1:", rec.values)
Explanation: SVD
Above we used Cosine Similarity to fill the holes in our sparse matrix. Another, and much more popular, method for matrix completion is called the Singular Value Decomposition. SVD factors our data matrix into three smaller matrices, given by $$\textbf{M} = \textbf{U} \bf{\Sigma} \textbf{V}^*$$ where $\textbf{M}$ is our data matrix, $\textbf{U}$ is a unitary matrix containing the latent variables in the user space, $\bf{\Sigma}$ is a diagonal matrix containing the singular values of $\textbf{M}$, and $\textbf{V}$ is a unitary matrix containing the latent variables in the item space. For more information on the SVD see the Wikipedia article.
SciPy contains a routine (scipy.sparse.linalg.svds) to estimate a truncated SVD of a sparse matrix. By making estimates of the matrices $\textbf{U}$, $\bf{\Sigma}$, and $\textbf{V}$, and then by multiplying them together, we can reconstruct an estimate for the matrix $\textbf{M}$ with all the holes filled in.
Use these ideas to construct a class svd_engine which can be used to recommend movies for a given user. Be sure to also test your algorithm, reporting its accuracy. Bonus: Tune any parameters.
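To make the reconstruction idea concrete, here is a minimal sketch on a tiny dense matrix (values are illustrative only). It uses numpy's full SVD and then truncates to the top-k singular values, which is equivalent in spirit to what svds computes directly:

```python
import numpy as np

# Tiny "ratings" matrix (illustrative values only)
M = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 4.0],
])

# Full SVD, then keep only the k largest singular values (rank-k approximation)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(M_hat, 2))  # same shape as M, with the zeros 'filled in'
```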
End of explanation
# Parameter tuning
import matplotlib.pyplot as plt
err = []
for cur_k in range(1, 50):
svd_en = svd_engine(data_movie_raw, k = cur_k)
svd_en.fit(data_train)
svd_en.predict()
    err.append(svd_en.test(data_test, root = True))
plt.plot(range(1, 50), err)
plt.title('RMSE versus k')
plt.xlabel('k')
plt.ylabel('RMSE')
plt.show()
err.index(min(err)) + 1  # add 1 since the range of k starts at 1
Explanation: Overall RMSE of about 0.98
End of explanation
# Build the engine
svd_en = svd_engine(data_movie_raw, k = 7)
svd_en.fit(data_train)
svd_en.predict()
# Now make recommendations
recs = []
for user in data_movie_raw.user_id.unique():
temp_rec = svd_en.recommend(user, 5)
recs.append(temp_rec)
recs[0]
Explanation: 7 is the optimal value of k in this case. Note that no cross-validation was performed!
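A cross-validated version would repeat the fit/score cycle over several folds before picking k. A minimal sketch of the fold bookkeeping in pure numpy (the svd_engine fit/score calls are only indicated in comments, and the counts and seed are arbitrary):

```python
import numpy as np

# Sketch of 5-fold splitting over rating records (indices only)
rng = np.random.default_rng(0)
n_ratings, n_folds = 100, 5
folds = np.array_split(rng.permutation(n_ratings), n_folds)

for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # here one would pivot the train rows, fit svd_engine for each candidate k,
    # and record the RMSE on the held-out rows, averaging over folds at the end
    assert len(train_idx) + len(test_idx) == n_ratings
```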
Now we'll build the best recommender and recommend 5 movies to each user.
End of explanation |
9,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Geo Query Dataset Analysis
Step1: Do some cleanup
Step2: Query Frequency Analysis
Let's take a look
Step3: The frequency of queries drops off pretty quickly, suggesting a long tail of low frequency queries. Let's get a sense of this by looking at the cumulative coverage of queries with frequencies between 1 and 10.
While we're at it, we can plot the cumulative coverage up until a frequency of 200 (in ascending order of frequency).
Step4: ie queries with a frequency of 1 account for about 30% of queries, queries with frequency of 2 or less account for 48%, 3 or less account for 58%, etc.
Looking at the graph it seems like coverate rates drops off exponentially. Plotting a log-log graph of the query frequencies (y-axis) against the descending rank of the query frequency (x-axis) shows a linear-ish trend, suggesting that it does indeed look like some kind of inverse power function situation.
Annotator Results for Pilot Annotation Round
The pilot annotation round consisted of 50 queries sampled randomly from the total 84011 query instances. Below is a summary of the annotator's results.
Annotation Codes Map
Q2.
'YY' = Yes -- with place name
'YN' = Yes -- without place name
'NY' = No (but still a place)
'NN' = Not applicable (ie not explicit location and not a place)
Q3.
'IAD' = INFORMATIONAL_ADVICE
'IDC' = INFORMATIONAL_DIRECTED_CLOSED
'IDO' = INFORMATIONAL_DIRECTED_OPEN
'ILI' = INFORMATIONAL_LIST
'ILO' = INFORMATIONAL_LOCATE
'IUN' = INFORMATIONAL_UNDIRECTED
'NAV' = NAVIGATIONAL
'RDE' = RESOURCE_ENTERTAINMENT
'RDO' = RESOURCE_DOWNLOAD
'RIN' = RESOURCE_INTERACT
'ROB' = RESOURCE_OBTAIN
Step5: Comments
It looks like Martin leant substantially more towards annotating queries as being geographical, ie is_geo = True for Q1, compared to both annotators.
for Q3, Annotator 1 was biased slightly towards INFORMATIONAL_UNDIRECTED, whereas Annotator 2 was biased towards INFORMATIONAL_LOCATE. Martin, on the other hand, favoured INFORMATIONAL_UNDIRECTED, INFORMATIONAL_LOCATE, and NAVIGATIONAL, compared to the remaining categories.
what should we do about URLs? Annotator 2 skipped the one URL. Martin and Annotator 1 labelled it as Web Navigational but disagreed regarding location explicit. Martin said 'YN', Annotator 1 said 'NN'.
Inter-annotator Agreement Scores for Pilot Annotation
The following results present inter-annotator agreement for the pilot round using Fleiss' kappa.
Super handwavy consensus guide to interpreting kappa scores for annotation exercises in computational linguistics (Artstein and Poesio 2008
Step6: These scores are not particularly high. We're struggling to get into even 'tentative' reliability land. We're probably going to need to do some disagreement analysis to work out what's going on.
We can, however, look at agreement for Q2 and Q3 using a coarser level of agreement. For Q2, this is whether annotators agreed that a location was explicit in the query (but ignoring whether the query included a place name).
For Q3, this is whether they agreed that the query was navigational, informational, or pertaining to a resource.
Step7: Agreement has improved, especially for Q2. Q3, however, is still a bit on the low side.
Disagreements | Python Code:
import os
import sys
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import utils
%matplotlib inline
%load_ext autoreload
%autoreload 2
CSV_PATH = '../../data/unique_counts_semi.csv'
# load data
initial_df = utils.load_queries(CSV_PATH)
Explanation: Geo Query Dataset Analysis
End of explanation
# filter out queries with length less than 2 characters long
start_num = len(initial_df)
df = utils.clean_queries(initial_df)
print("{} distinct queries after stripping {} queries of length 1".format(len(df), start_num-len(df)))
print("Yielding a total of {} query occurrences.".format(df['countqstring'].sum()))
Explanation: Do some cleanup
End of explanation
df.head(10)
Explanation: Query Frequency Analysis
Let's take a look
End of explanation
total = df['countqstring'].sum()
fig, ax = plt.subplots(ncols=2, figsize=(20, 8))
cum_coverage = pd.Series(range(1,200)).apply(lambda n: df[df['countqstring'] <= n]['countqstring'].sum())/total
cum_coverage = cum_coverage*100
cum_coverage = cum_coverage.round(2)
# plot the cumulative coverage
cum_coverage.plot(ax=ax[0])
ax[0].set_xlabel('Query Frequency')
ax[0].set_ylabel('Cumulative Coverage (%)')
# see if it looks Zipfian. ie plot a log-log graph of query frequency against query rank
df.plot(ax=ax[1], y='countqstring', use_index=True, logx=True, logy=True)
ax[1].set_xlabel('Rank of Query (ie most frequent to least frequent)')
ax[1].set_ylabel('Query Frequency');
print("Freq Cumulative Coverage")
for i, val in enumerate(cum_coverage[:10].get_values()):
print("{:>2} {:0<5}%".format(i+1, val))
Explanation: The frequency of queries drops off pretty quickly, suggesting a long tail of low frequency queries. Let's get a sense of this by looking at the cumulative coverage of queries with frequencies between 1 and 10.
While we're at it, we can plot the cumulative coverage up until a frequency of 200 (in ascending order of frequency).
End of explanation
print(utils.get_user_results('annotator1'))
print('\n')
print(utils.get_user_results('annotator2'))
print('\n')
print(utils.get_user_results('martin'))
Explanation: ie queries with a frequency of 1 account for about 30% of queries, queries with frequency of 2 or less account for 48%, 3 or less account for 58%, etc.
Looking at the graph it seems like coverate rates drops off exponentially. Plotting a log-log graph of the query frequencies (y-axis) against the descending rank of the query frequency (x-axis) shows a linear-ish trend, suggesting that it does indeed look like some kind of inverse power function situation.
Annotator Results for Pilot Annotation Round
The pilot annotation round consisted of 50 queries sampled randomly from the total 84011 query instances. Below is a summary of the annotator's results.
Annotation Codes Map
Q2.
'YY' = Yes -- with place name
'YN' = Yes -- without place name
'NY' = No (but still a place)
'NN' = Not applicable (ie not explicit location and not a place)
Q3.
'IAD' = INFORMATIONAL_ADVICE
'IDC' = INFORMATIONAL_DIRECTED_CLOSED
'IDO' = INFORMATIONAL_DIRECTED_OPEN
'ILI' = INFORMATIONAL_LIST
'ILO' = INFORMATIONAL_LOCATE
'IUN' = INFORMATIONAL_UNDIRECTED
'NAV' = NAVIGATIONAL
'RDE' = RESOURCE_ENTERTAINMENT
'RDO' = RESOURCE_DOWNLOAD
'RIN' = RESOURCE_INTERACT
'ROB' = RESOURCE_OBTAIN
End of explanation
user_pairs = [
['annotator1', 'annotator2'],
['martin', 'annotator1'],
['martin', 'annotator2'],
]
results = utils.do_iaa_pairs(user_pairs)
utils.print_iaa_pairs(results, user_pairs)
Explanation: Comments
It looks like Martin leant substantially more towards annotating queries as being geographical, ie is_geo = True for Q1, compared to both annotators.
for Q3, Annotator 1 was biased slightly towards INFORMATIONAL_UNDIRECTED, whereas Annotator 2 was biased towards INFORMATIONAL_LOCATE. Martin, on the other hand, favoured INFORMATIONAL_UNDIRECTED, INFORMATIONAL_LOCATE, and NAVIGATIONAL, compared to the remaining categories.
what should we do about URLs? Annotator 2 skipped the one URL. Martin and Annotator 1 labelled it as Web Navigational but disagreed regarding location explicit. Martin said 'YN', Annotator 1 said 'NN'.
Inter-annotator Agreement Scores for Pilot Annotation
The following results present inter-annotater agreement for the pilot round using Fleiss' kappa.
Super handwavy concensus guide to interpreting kappa scores for annotation exercies in computation linguistics (Artstein and Poesio 2008:576):
* kappa > 0.8 = good reliabiility
* 0.67 < kappa < 0.8 = tentative conclusions may be drawn regarding the reliability of the data
End of explanation
results = utils.do_iaa_pairs(user_pairs, questions=(2,3), level='coarse')
utils.print_iaa_pairs(results, user_pairs)
Explanation: These scores are not particularly high. We're struggling to get into even 'tentative' reliability land. We're probably going to need to do some disagreement analysis to work out what's going on.
We can, however, look at agreement for Q2 and Q3 using a coarser level of agreement. For Q2, this is whether annotators agreed that a location was explicit in the query (but ignoring whether the query included a place name).
For Q3, this is whether they agreed that the query was navigational, informational, or pertaining to a resource.
End of explanation
for question in (1,2,3):
print(utils.show_agreement(question, ['annotator1', 'annotator2', 'martin']))
print('\n')
Explanation: Agreement has improved, especially for Q2. Q3, however, is still a bit on the low side.
Disagreements
End of explanation |
9,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISCCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'pcmdi', 'pcmdi-test-1-0', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: PCMDI
Source ID: PCMDI-TEST-1-0
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:36
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
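Note that BOOLEAN and INTEGER properties such as 30.3 and 30.4 take unquoted Python values (`DOC.set_value(value)`), in contrast to the quoted strings used for STRING and ENUM properties. A small hedged sketch of coercing raw questionnaire entries to the declared type (the `coerce_value` helper is illustrative only, not part of the ES-DOC API):

```python
def coerce_value(raw, expected_type):
    """Coerce a raw questionnaire entry to the property's declared type.
    Illustrative helper only; the actual notebook simply expects the
    caller to pass a value of the right Python type."""
    if expected_type == "BOOLEAN":
        if isinstance(raw, bool):
            return raw
        # Accept common textual spellings of a boolean.
        return str(raw).strip().lower() in ("true", "yes", "1")
    if expected_type == "INTEGER":
        return int(raw)
    # STRING and ENUM entries are kept as strings.
    return str(raw)

# e.g. for 30.4 Counter Gradient (BOOLEAN) one would write:
# DOC.set_value(coerce_value("True", "BOOLEAN"))
```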
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
9,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
When life was easy
At some point in my calculus education I developed a simple rule: when in doubt, set the derivative equal to zero and solve for x. You might recall doing this, and the reason for doing it is that for a smooth function the local maxima and minima are found at places where the derivative is zero. Imagine you have a curve shaped like a hill: if you go up the hill and at some point go over the top (the max), then as you keep going you will start traveling downward. If you went from increasing to decreasing, the calculus argument is that at some point your rate of increase was zero, and that point was the top. Largely this works very well. It leads nicely to many optimization rules, but it breaks down a little bit when we don't have a curve. In particular it breaks down when our domain is the set of integers.
Enter the real world, everything is a model
Often times the problems are stated as find the "best" route, or find the "best" fitting function. For us to solve these problems, we need to model what "best" means. We need to mathematically describe a quantity to either maximize or minimize. For example, we might select the route that minimizes the total distance traveled by a salesman (traveling salesman). Or if we were trying to fit a curve to some data, we might minimize the error between the predicted curve and the data (regression). We might try to select paths in a distribution network that maximize flow (network flow problems), etc.
When we talk about programs, we mean a set of variables, a function of those variables to maximize or minimize, and some constraints that those variables must satisfy.
formally
Step1: let's try a small problem we can easily solve by hand
$$minimize\
Step2: 2. Setup the variables
Step3: 3. Setup the objective
Step4: what does this create?
Step5: It is an LpAffineExpression. You can actually print LpAffineExpressions to see what you have programmed. Be careful with this on larger problems
Step6: 4. Setup the constraints
Step7: 5. stuff the objective and the constraint into the problem
To add constraints and objectives to the problem, we literally just add them to it
Step8: Like the LpAffineExpression class, we can print the problem to see what PuLP has generated. This is very useful for small problems, but can print thousands of lines for large problems. It's always a good idea to start small.
Step9: 6. Solve it!
Pulp comes packaged with an okay-ish solver. The really fast solvers like CPLEX and Gurobi are either not free, or not free for non-academic use. I personally like GLPK, the GNU Linear Programming Kit, except it is for *nix platforms.
Step10: 7. Get the results | Python Code:
#load all the things!
from pulp import *
Explanation: Introduction
When life was easy
At some point in my calculus education I developed a simple rule: when in doubt, set the derivative equal to zero and solve for x. You might recall doing this, and the reason for doing it is that for a smooth function the local maxima and minima are found at places where the derivative is zero. Imagine you have a curve shaped like a hill: if you go up the hill and at some point go over the top (the max), then as you keep going you will start traveling downward. If you went from increasing to decreasing, the calculus argument is that at some point your rate of increase was zero, and that point was the top. Largely this works very well. It leads nicely to many optimization rules, but it breaks down a little bit when we don't have a curve. In particular it breaks down when our domain is the set of integers.
Enter the real world, everything is a model
Often times the problems are stated as find the "best" route, or find the "best" fitting function. For us to solve these problems, we need to model what "best" means. We need to mathematically describe a quantity to either maximize or minimize. For example, we might select the route that minimizes the total distance traveled by a salesman (traveling salesman). Or if we were trying to fit a curve to some data, we might minimize the error between the predicted curve and the data (regression). We might try to select paths in a distribution network that maximize flow (network flow problems), etc.
When we talk about programs, we mean a set of variables, a function of those variables to maximize or minimize, and some constraints that those variables must satisfy.
formally:
$$minimize\: f(x) \
subject\, to\: g(x) < 0 $$
We will see that the form of this optimization metric $f(x)$ plays a huge role in the difficulty. The first best case is when $f(x)$ and the constraints $g(x)$ are linear.
Linear Programs
A linear program is one where the metric or objective to minimize is linear and the constraints are linear, i.e.:
$$f(x) = a_{0}x_{0} + a_{1}x_{1}+ \dots + a_{n}x_{n}$$
continuous vs integer (and mixed integer!)
If all the variables are continuous, life is good. If however the variables must take on integer values, for example 1, 2, 3..., then life can be very hectic. Problems with integer-only solutions are integer programming problems and they can be very difficult to solve, if a solution exists at all! When you have a mix of variable types, you have a mixed integer problem. While many algorithms exist to solve all of these types, the most common for continuous programs is the simplex algorithm, and for mixed integer problems the insanely clever branch-and-bound family of methods (including branch-and-cut) works very well. We will talk about these later.
How do you know if the program is linear? Just make sure the objective expression and the constraints are combinations of constants multiplied by single variables. If any of the variables are multiplied or divided by one another, you are in trouble and you have a harder problem. It is also important that the variables be continuous. Most interesting problems are not linear; however, there are a number of approximations and tricks that can turn non-linear programs into linear ones. We will explore this later. For now, let's solve our first program using PuLP.
End of explanation
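A tiny numeric aside (my own illustration, not from the original notebook) of why the zero-derivative rule fails on integer domains: minimising f(x) = (x - 2.4)^2 over the reals gives x = 2.4, but over the integers there is no derivative condition to exploit, so we compare candidates directly:

```python
def f(x):
    # A smooth bowl, minimised over the reals at x = 2.4.
    return (x - 2.4) ** 2

# Calculus answer over the reals: f'(x) = 2*(x - 2.4) = 0  ->  x = 2.4.
# Over the integers we just evaluate the feasible candidates.
best_int = min(range(0, 6), key=f)
print(best_int)  # 2, the integer candidate with the smallest f(x)
```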
prob = LpProblem("Hello-Mathematical-Programming-World!",LpMinimize)
Explanation: let's try a small problem we can easily solve by hand
$$minimize\: f(x,y,z)=5x+10y+6z \
subject\, to\
x+y+z \geq 20 \
0\leq x,y,z \leq 10
$$
In school we may have learned how to solve these types of problems by writing them in canonical form and throwing some linear algebra at them. PuLP is a library that removes this need: we can code our problem almost exactly as stated above in PuLP and it will do the hard work for us. What PuLP actually does is format the problem into a standard language that is used by many numerical solvers.
1. Setup the problem
End of explanation
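Before handing the problem to PuLP, a quick brute-force sanity check (my own addition; this particular LP happens to attain its optimum at integer values, so an integer grid search finds the same answer):

```python
# Enumerate every integer point of the box [0, 10]^3 that satisfies
# x + y + z >= 20 and take the cheapest objective value.
candidates = [
    5 * x + 10 * y + 6 * z
    for x in range(11) for y in range(11) for z in range(11)
    if x + y + z >= 20
]
best = min(candidates)
print(best)  # 110, attained at e.g. x=10, y=0, z=10
```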
x = LpVariable('x',lowBound=0, upBound=10, cat='Continuous')
y = LpVariable('y',lowBound=0, upBound=10, cat='Continuous')
z = LpVariable('z',lowBound=0, upBound=10, cat='Continuous')
Explanation: 2. Setup the variables:
for now we will make them manually, but there are convenience methods for when you need to make millions at a time
End of explanation
objective = 5*x+10*y+6*z
Explanation: 3. Setup the objective
End of explanation
print(type(objective))
Explanation: what does this create?
End of explanation
print(objective)
Explanation: It is an LpAffineExpression. You can actually print LpAffineExpressions to see what you have programmed. Be careful with this on larger problems
End of explanation
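The reason arithmetic on LpVariable objects yields an expression object is Python operator overloading. A toy stand-in (my own sketch, not PuLP's actual implementation) showing the mechanism:

```python
class Var:
    def __init__(self, name):
        self.name = name

    def __rmul__(self, coeff):  # supports 5 * x
        return Expr({self.name: coeff})

class Expr:
    def __init__(self, terms):
        self.terms = terms      # maps variable name -> coefficient

    def __add__(self, other):   # supports expr + expr
        merged = dict(self.terms)
        for name, coeff in other.terms.items():
            merged[name] = merged.get(name, 0) + coeff
        return Expr(merged)

x, y, z = Var('x'), Var('y'), Var('z')
objective = 5 * x + 10 * y + 6 * z
print(objective.terms)  # {'x': 5, 'y': 10, 'z': 6}
```

PuLP's real classes track much more (bounds, categories, constraint senses), but building expressions through overloaded operators is the same basic idea.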
constraint = x + y + z >= 20
Explanation: 4. Setup the constraints
End of explanation
#add the objective
prob+= objective
#add the constraints
prob+=constraint
Explanation: 5. stuff the objective and the constraint into the problem
To add constraints and objectives to the problem, we literally just add them to it
End of explanation
print(prob)
Explanation: Like the LpAffineExpression class, we can print the problem to see what PuLP has generated. This is very useful for small problems, but can print thousands of lines for large problems. It's always a good idea to start small.
End of explanation
%time prob.solve()
print(LpStatus[prob.status])
Explanation: 6. Solve it!
Pulp comes packaged with an okay-ish solver. The really fast solvers like CPLEX and Gurobi are either not free, or not free for non-academic use. I personally like GLPK, the GNU Linear Programming Kit, except it is for *nix platforms.
End of explanation
#get a single variables value
print(x.varValue)
#or get all the variables
for v in prob.variables():
print(v, v.varValue)
Explanation: 7. Get the results
End of explanation |
9,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build local cache file from Argo data sources - first in a series of Notebooks
Execute commands to pull data from the Internet into a local HDF cache file so that we can better interact with the data
Import the ArgoData class and instantiate an ArgoData object (ad) with verbosity set to 2 so that we get INFO messages.
Step1: You can now explore what methods the ad object has by typing "ad." in a cell and pressing the tab key. One of the methods is get_oxy_floats(); to see what it does, select it and press shift-tab with the cursor in the parentheses of "ad.get_oxy_floats()". Let's get a list of all the floats that have been out for at least 340 days and print the length of that list.
Step2: If this is the first time you've executed the cell, it will take a minute or so to read the Argo status information from the Internet (the PerformanceWarning can be ignored - for this small table it doesn't matter much).
Once the status information is read it is cached locally and further calls to get_oxy_floats_from_status() will execute much faster. To demonstrate, let's count all the oxygen labeled floats that have been out for at least 2 years.
Step3: Now let's find the Data Assembly Center URL for each of the floats in our list. (The returned dictionary of URLs is also locally cached.)
Step4: Now, whenever we need to get profile data our lookups for status and Data Assembly Centers will be serviced from the local cache. Let's get a Pandas DataFrame (df) of 20 profiles from the float with WMO number 1900650.
Step5: Profile data is also cached locally. To demonstrate, perform the same command as in the previous cell and note the time difference.
Step6: Examine the first 5 records of the float data.
Step7: There's a lot that can be done with the profile data in this DataFrame structure. We can construct a time_range string and query for all the data values from pressures less than 10 decibars
Step8: In one command we can take the mean of all the values from the upper 10 decibars
Step9: We can plot the profiles
Step10: We can plot the location of these profiles on a map | Python Code:
from biofloat import ArgoData
ad = ArgoData(verbosity=2)
Explanation: Build local cache file from Argo data sources - first in a series of Notebooks
Execute commands to pull data from the Internet into a local HDF cache file so that we can better interact with the data
Import the ArgoData class and instantiate an ArgoData object (ad) with verbosity set to 2 so that we get INFO messages.
End of explanation
%%time
floats340 = ad.get_oxy_floats_from_status(age_gte=340)
print('{} floats at least 340 days old'.format(len(floats340)))
Explanation: You can now explore what methods the ad object has by typing "ad." in a cell and pressing the tab key. One of the methods is get_oxy_floats(); to see what it does, select it and press shift-tab with the cursor in the parentheses of "ad.get_oxy_floats()". Let's get a list of all the floats that have been out for at least 340 days and print the length of that list.
End of explanation
%%time
floats730 = ad.get_oxy_floats_from_status(age_gte=730)
print('{} floats at least 730 days old'.format(len(floats730)))
Explanation: If this is the first time you've executed the cell, it will take a minute or so to read the Argo status information from the Internet (the PerformanceWarning can be ignored - for this small table it doesn't matter much).
Once the status information is read it is cached locally and further calls to get_oxy_floats_from_status() will execute much faster. To demonstrate, let's count all the oxygen labeled floats that have been out for at least 2 years.
End of explanation
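The speed-up on repeated calls comes from caching. As a generic stdlib illustration of the idea (my own sketch; biofloat's actual cache is the local HDF file mentioned above, not an in-memory decorator):

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def fetch_status():
    time.sleep(0.1)  # stand-in for the slow read from the Internet
    return ('float-a', 'float-b')

t0 = time.perf_counter(); fetch_status(); first = time.perf_counter() - t0
t0 = time.perf_counter(); fetch_status(); second = time.perf_counter() - t0
print(second < first)  # True: the second call is served from the cache
```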
%%time
dac_urls = ad.get_dac_urls(floats340)
print(len(dac_urls))
Explanation: Now let's find the Data Assembly Center URL for each of the floats in our list. (The returned dictionary of URLs is also locally cached.)
End of explanation
%%time
wmo_list = ['1900650']
ad.set_verbosity(0)
df = ad.get_float_dataframe(wmo_list, max_profiles=20)
Explanation: Now, whenever we need to get profile data our lookups for status and Data Assembly Centers will be serviced from the local cache. Let's get a Pandas DataFrame (df) of 20 profiles from the float with WMO number 1900650.
End of explanation
%%time
df = ad.get_float_dataframe(wmo_list, max_profiles=20)
Explanation: Profile data is also cached locally. To demonstrate, perform the same command as in the previous cell and note the time difference.
End of explanation
df.head()
Explanation: Examine the first 5 records of the float data.
End of explanation
time_range = '{} to {}'.format(df.index.get_level_values('time').min(),
df.index.get_level_values('time').max())
df.query('pressure < 10')
Explanation: There's a lot that can be done with the profile data in this DataFrame structure. We can construct a time_range string and query for all the data values from pressures less than 10 decibars:
End of explanation
df.query('pressure < 10').groupby(level=['wmo', 'time']).mean()
Explanation: In one command we can take the mean of all the values from the upper 10 decibars:
End of explanation
%pylab inline
import pylab as plt
# Parameter long_name and units copied from attributes in NetCDF files
parms = {'TEMP_ADJUSTED': 'SEA TEMPERATURE IN SITU ITS-90 SCALE (degree_Celsius)',
'PSAL_ADJUSTED': 'PRACTICAL SALINITY (psu)',
'DOXY_ADJUSTED': 'DISSOLVED OXYGEN (micromole/kg)'}
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, len(parms), sharey=True)
ax[0].invert_yaxis()
ax[0].set_ylabel('SEA PRESSURE (decibar)')
for i, (p, label) in enumerate(parms.items()):  # .items(): Python 3 dicts have no iteritems()
ax[i].set_xlabel(label)
ax[i].plot(df[p], df.index.get_level_values('pressure'), '.')
plt.suptitle('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)
Explanation: We can plot the profiles:
End of explanation
from mpl_toolkits.basemap import Basemap
m = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl')
m.fillcontinents(color='0.8')
m.scatter(df.index.get_level_values('lon'), df.index.get_level_values('lat'), latlon=True)
plt.title('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)
Explanation: We can plot the location of these profiles on a map:
End of explanation |
9,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Data Analysis, 3rd ed
Chapter 4, demo 1
Normal approximation for Bioassay model.
Step1: The following demonstrates an alternative "bad" way of calculating the posterior density p in a for loop. The vectorised statement above is numerically more efficient. In this small example however, it would not matter that much.
p = np.empty((len(B),len(A))) # allocate space
for i in range(len(A)):
Step2: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize. | Python Code:
import numpy as np
from scipy import optimize, stats
%matplotlib inline
import matplotlib.pyplot as plt
import os, sys
# add utilities directory to path
util_path = os.path.abspath(os.path.join(os.path.pardir, 'utilities_and_data'))
if util_path not in sys.path and os.path.exists(util_path):
sys.path.insert(0, util_path)
# import from utilities
import plot_tools
# edit default plot settings
plt.rc('font', size=12)
# apply custom background plotting style
plt.style.use(plot_tools.custom_styles['gray_background'])
# Bioassay data, (BDA3 page 86)
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])
# compute the posterior density in grid
# - usually should be computed in logarithms!
# - with alternative prior, check that range and spacing of A and B
# are sensible
ngrid = 100
A = np.linspace(-4, 8, ngrid)
B = np.linspace(-10, 40, ngrid)
ilogit_abx = 1 / (np.exp(-(A[:,None] + B[:,None,None] * x)) + 1)
p = np.prod(ilogit_abx**y * (1 - ilogit_abx)**(n - y), axis=2)
Explanation: Bayesian Data Analysis, 3rd ed
Chapter 4, demo 1
Normal approximation for Bioassay model.
End of explanation
# sample from the grid
nsamp = 1000
samp_indices = np.unravel_index(
np.random.choice(p.size, size=nsamp, p=p.ravel()/np.sum(p)),
p.shape
)
samp_A = A[samp_indices[1]]
samp_B = B[samp_indices[0]]
# add random jitter, see BDA3 p. 76
samp_A += (np.random.rand(nsamp) - 0.5) * (A[1]-A[0])
samp_B += (np.random.rand(nsamp) - 0.5) * (B[1]-B[0])
# samples of LD50
samp_ld50 = - samp_A / samp_B
Explanation: The following demonstrates an alternative "bad" way of calculating the posterior density p in a for loop. The vectorised statement above is numerically more efficient. In this small example however, it would not matter that much.
p = np.empty((len(B),len(A))) # allocate space
for i in range(len(A)):
for j in range(len(B)):
ilogit_abx_ij = (1 / (np.exp(-(A[i] + B[j] * x)) + 1))
p[j,i] = np.prod(ilogit_abx_ij**y * (1 - ilogit_abx_ij)**(n - y))
N.B. the vectorised expression can be made even more efficient, e.g. by optimising memory usage with in-place statements, but it would result in less readable code.
End of explanation
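To see that the loop and the vectorised expression really compute the same grid, here is a small self-contained check (with a coarser grid than above so it runs instantly):

```python
import numpy as np

# Bioassay data (BDA3 p. 86), as above
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])

ngrid = 20
A = np.linspace(-4, 8, ngrid)
B = np.linspace(-10, 40, ngrid)

# vectorised version: ilogit_abx[j, i, k] = sigmoid(A[i] + B[j] * x[k])
ilogit_abx = 1 / (np.exp(-(A[:, None] + B[:, None, None] * x)) + 1)
p_vec = np.prod(ilogit_abx**y * (1 - ilogit_abx)**(n - y), axis=2)

# loop version, one grid point at a time
p_loop = np.empty((len(B), len(A)))
for i in range(len(A)):
    for j in range(len(B)):
        z = 1 / (np.exp(-(A[i] + B[j] * x)) + 1)
        p_loop[j, i] = np.prod(z**y * (1 - z)**(n - y))

print(np.allclose(p_vec, p_loop))  # the two grids agree
```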
# define the optimised function
def bioassayfun(w):
a = w[0]
b = w[1]
et = np.exp(a + b * x)
z = et / (1 + et)
e = - np.sum(y * np.log(z) + (n - y) * np.log(1 - z))
return e
# initial guess
w0 = np.array([0.0, 0.0])
# optimise
optim_res = optimize.minimize(bioassayfun, w0)
# extract desired results
w = optim_res['x']
S = optim_res['hess_inv']
# compute the normal approximation density in grid
# this is just for the illustration
# Construct a grid array of shape (ngrid, ngrid, 2) from A and B. Although
# Numpy's concatenation functions do not support broadcasting, a clever trick
# can be applied to overcome this without unnecessary memory copies
# (see Numpy's documentation for strides for more information):
A_broadcasted = np.lib.stride_tricks.as_strided(
A, shape=(ngrid,ngrid), strides=(0, A.strides[0]))
B_broadcasted = np.lib.stride_tricks.as_strided(
B, shape=(ngrid,ngrid), strides=(B.strides[0], 0))
grid = np.dstack((A_broadcasted, B_broadcasted))
p_norm = stats.multivariate_normal.pdf(x=grid, mean=w, cov=S)
# draw samples from the distribution
samp_norm = stats.multivariate_normal.rvs(mean=w, cov=S, size=1000)
# create figure
fig, axes = plt.subplots(3, 2, figsize=(9, 10))
# plot the posterior density
ax = axes[0, 0]
ax.imshow(
p,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples
ax = axes[1, 0]
ax.scatter(samp_A, samp_B, 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_B>0)))
# plot the histogram of LD50
ax = axes[2, 0]
ax.hist(samp_ld50, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
# plot the posterior density for normal approx.
ax = axes[0, 1]
ax.imshow(
p_norm,
origin='lower',
aspect='auto',
extent=(A[0], A[-1], B[0], B[-1])
)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
ax.grid('off')
# plot the samples from the normal approx.
ax = axes[1, 1]
ax.scatter(samp_norm[:,0], samp_norm[:,1], 5)
ax.set_xlim([-2, 6])
ax.set_ylim([-10, 30])
ax.set_xlabel(r'$\alpha$')
ax.set_ylabel(r'$\beta$')
# Normal approximation does not take into account that the posterior
# is not symmetric and that there is very low density for negative
# beta values. Based on the samples from the normal approximation
# it is estimated that there is about 4% probability that beta is negative!
ax.text(0, -7, 'p(beta>0)={:.2f}'.format(np.mean(samp_norm[:,1]>0)))
# Plot the histogram of LD50
ax = axes[2, 1]
# Since we have strong prior belief that beta should not be negative we can
# improve our normal approximation by conditioning on beta>0.
bpi = samp_norm[:,1] > 0
samp_ld50_norm = - samp_norm[bpi,0] / samp_norm[bpi,1]
ax.hist(samp_ld50_norm, np.linspace(-0.8, 0.8, 31))
ax.set_xlim([-0.8, 0.8])
ax.set_xlabel(r'LD50 = -$\alpha/\beta$')
ax.set_yticks(())
ax.set_xticks(np.linspace(-0.8, 0.8, 5))
fig.tight_layout()
Explanation: Find the mode by minimising negative log posterior. Compute gradients and Hessian analytically, and use Newton's method for optimisation. You may use optimisation routines below for checking your results. See help for scipy.optimize.minimize.
End of explanation |
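One possible sketch of the requested exercise — Newton's method with the analytic gradient and Hessian of the negative log posterior (uniform prior, so this is the negative log likelihood). It is a worked attempt to be checked against the scipy call above, not the book's reference solution:

```python
import numpy as np

# Bioassay data (BDA3 p. 86)
x = np.array([-0.86, -0.30, -0.05, 0.73])
n = np.array([5, 5, 5, 5])
y = np.array([0, 1, 3, 5])

def neg_log_post(w):
    z = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
    return -np.sum(y * np.log(z) + (n - y) * np.log(1 - z))

def grad(w):
    z = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
    r = y - n * z                      # residuals on the probability scale
    return -np.array([np.sum(r), np.sum(x * r)])

def hess(w):
    z = 1 / (1 + np.exp(-(w[0] + w[1] * x)))
    v = n * z * (1 - z)                # observed information weights
    return np.array([[np.sum(v), np.sum(x * v)],
                     [np.sum(x * v), np.sum(x**2 * v)]])

w = np.zeros(2)
for _ in range(20):                    # Newton iteration: w <- w - H^{-1} g
    step = np.linalg.solve(hess(w), grad(w))
    w = w - step
    if np.max(np.abs(step)) < 1e-10:
        break

print('mode:', w, 'nlp at mode:', neg_log_post(w))
```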
9,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flopy MODFLOW Boundary Conditions
Flopy has a new way to enter boundary conditions for some MODFLOW packages. These changes are substantial. Boundary conditions can now be entered as a list of boundaries, as a numpy recarray, or as a dictionary. These different styles are described in this notebook.
Flopy also now requires zero-based input. This means that all boundaries are entered in zero-based layer, row, and column indices. As a result, older Flopy scripts will need to be modified to account for this change. If you are familiar with Python, this should be natural, but if not, then it may take some time to get used to zero-based numbering. Flopy users submit all information in zero-based form, and Flopy converts this to the one-based form required by MODFLOW.
The following MODFLOW packages are affected by this change
Step1: List of Boundaries
Boundary condition information is passed to a package constructor as stress_period_data. In its simplest form, stress_period_data can be a list of individual boundaries, which themselves are lists. The following shows a simple example for a MODFLOW River Package boundary
Step2: If we look at the River Package created here, you see that the layer, row, and column numbers have been increased by one.
Step3: If this model had more than one stress period, then Flopy will assume that this boundary condition information applies until the end of the simulation
Step4: Recarray of Boundaries
Numpy allows the use of recarrays, which are numpy arrays in which each column of the array may be given a different type. Boundary conditions can be entered as recarrays. Information on the structure of the recarray for a boundary condition package can be obtained from that particular package. The structure of the recarray is contained in the dtype.
Step5: Now that we know the structure of the recarray that we want to create, we can create a new one as follows.
Step6: We can then fill the recarray with our boundary conditions.
Step7: As before, if we have multiple stress periods, then this recarray will apply to all of them.
Step8: Dictionary of Boundaries
The power of the new functionality in Flopy3 is the ability to specify a dictionary for stress_period_data. If specified as a dictionary, the key is the stress period number (as a zero-based number), and the value is either a nested list, an integer value of 0 or -1, or a recarray for that stress period.
Let's say that we want to use the following schedule for our rivers
Step9: MODFLOW Auxiliary Variables
Flopy works with MODFLOW auxiliary variables by allowing the recarray to contain additional columns of information. The auxiliary variables must be specified as package options as shown in the example below.
In this example, we also add a string in the last column of the list in order to name each boundary condition. In this case, however, we do not include boundname as an auxiliary variable as MODFLOW would try to read it as a floating point number.
Step10: Working with Unstructured Grids
Flopy can create an unstructured grid boundary condition package for MODFLOW-USG. This can be done by specifying a custom dtype for the recarray. The following shows an example of how that can be done. | Python Code:
#begin by importing flopy
import os
import sys
import numpy as np
#flopypath = '../..'
#if flopypath not in sys.path:
# sys.path.append(flopypath)
import flopy
workspace = os.path.join('data')
#make sure workspace directory exists
if not os.path.exists(workspace):
os.makedirs(workspace)
Explanation: Flopy MODFLOW Boundary Conditions
Flopy has a new way to enter boundary conditions for some MODFLOW packages. These changes are substantial. Boundary conditions can now be entered as a list of boundaries, as a numpy recarray, or as a dictionary. These different styles are described in this notebook.
Flopy also now requires zero-based input. This means that all boundaries are entered in zero-based layer, row, and column indices. As a result, older Flopy scripts will need to be modified to account for this change. If you are familiar with Python, this should be natural, but if not, then it may take some time to get used to zero-based numbering. Flopy users submit all information in zero-based form, and Flopy converts this to the one-based form required by MODFLOW.
The following MODFLOW packages are affected by this change:
Well
Drain
River
General-Head Boundary
Time-Variant Constant Head
This notebook explains the different ways to enter these types of boundary conditions.
End of explanation
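The zero- to one-based conversion Flopy performs when writing the file can be sketched in a toy function (an illustration of the bookkeeping only, not Flopy's actual writer code):

```python
# Hypothetical sketch: a zero-based (layer, row, column) cell id, as given
# to Flopy, is shifted by one when written to the MODFLOW input file.
def to_one_based(k, i, j):
    return k + 1, i + 1, j + 1

# the (2, 3, 4) boundary entered below shows up as "3 4 5" in test.riv
print(to_one_based(2, 3, 4))
```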
stress_period_data = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
Explanation: List of Boundaries
Boundary condition information is passed to a package constructor as stress_period_data. In its simplest form, stress_period_data can be a list of individual boundaries, which themselves are lists. The following shows a simple example for a MODFLOW River Package boundary:
End of explanation
!more 'data/test.riv'
Explanation: If we look at the River Package created here, you see that the layer, row, and column numbers have been increased by one.
End of explanation
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
Explanation: If this model had more than one stress period, then Flopy will assume that this boundary condition information applies until the end of the simulation
End of explanation
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
print(riv_dtype)
Explanation: Recarray of Boundaries
Numpy allows the use of recarrays, which are numpy arrays in which each column of the array may be given a different type. Boundary conditions can be entered as recarrays. Information on the structure of the recarray for a boundary condition package can be obtained from that particular package. The structure of the recarray is contained in the dtype.
End of explanation
stress_period_data = np.zeros((3), dtype=riv_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
Explanation: Now that we know the structure of the recarray that we want to create, we can create a new one as follows.
End of explanation
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7)
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7)
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
Explanation: We can then fill the recarray with our boundary conditions.
End of explanation
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=3)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data)
m.write_input()
!more 'data/test.riv'
Explanation: As before, if we have multiple stress periods, then this recarray will apply to all of them.
End of explanation
sp1 = [
[2, 3, 4, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 5, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
[2, 3, 6, 10.7, 5000., -5.7], #layer, row, column, stage, conductance, river bottom
]
print(sp1)
riv_dtype = flopy.modflow.ModflowRiv.get_default_dtype()
sp5 = np.zeros((3), dtype=riv_dtype)
sp5 = sp5.view(np.recarray)
sp5[0] = (2, 3, 4, 20.7, 5000., -5.7)
sp5[1] = (2, 3, 5, 20.7, 5000., -5.7)
sp5[2] = (2, 3, 6, 20.7, 5000., -5.7)
print(sp5)
sp_dict = {0:0, 1:sp1, 2:0, 5:sp5}
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
dis = flopy.modflow.ModflowDis(m, nper=8)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=sp_dict)
m.write_input()
!more 'data/test.riv'
Explanation: Dictionary of Boundaries
The power of the new functionality in Flopy3 is the ability to specify a dictionary for stress_period_data. If specified as a dictionary, the key is the stress period number (as a zero-based number), and the value is either a nested list, an integer value of 0 or -1, or a recarray for that stress period.
Let's say that we want to use the following schedule for our rivers:
0. No rivers in stress period zero
1. Rivers specified by a list in stress period 1
2. No rivers
3. No rivers
4. No rivers
5. Rivers specified by a recarray
6. Same recarray rivers
7. Same recarray rivers
8. Same recarray rivers
End of explanation
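The per-period bookkeeping can be sketched without Flopy at all (a toy model of the rule, not Flopy's implementation): a key with value 0 means no boundaries for that period, and a missing key means "reuse the previous period's data"; the -1 convention mentioned above could be modelled the same way.

```python
def expand_schedule(sp_dict, nper):
    """Toy expansion of a stress_period_data dict into one entry per period."""
    expanded, current = [], 0          # start with no boundaries
    for per in range(nper):
        if per in sp_dict:             # new data (or 0) given for this period
            current = sp_dict[per]
        expanded.append(current)       # otherwise reuse the previous data
    return expanded

sp1 = ['riv_a', 'riv_b', 'riv_c']      # stand-ins for the boundary lists
sp5 = ['riv_d', 'riv_e', 'riv_f']
schedule = expand_schedule({0: 0, 1: sp1, 2: 0, 5: sp5}, nper=8)
print(schedule)
```

With the dictionary from the schedule above, period 1 gets sp1, periods 2–4 have no rivers, and sp5 carries through periods 5–7.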
#create an empty array with an iface auxiliary variable at the end
riva_dtype = [('k', '<i8'), ('i', '<i8'), ('j', '<i8'),
('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4'),
('iface', '<i4'), ('boundname', object)]
riva_dtype = np.dtype(riva_dtype)
stress_period_data = np.zeros((3), dtype=riva_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (2, 3, 4, 10.7, 5000., -5.7, 1, 'riv1')
stress_period_data[1] = (2, 3, 5, 10.7, 5000., -5.7, 2, 'riv2')
stress_period_data[2] = (2, 3, 6, 10.7, 5000., -5.7, 3, 'riv3')
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=riva_dtype, options=['aux iface'])
m.write_input()
!more 'data/test.riv'
Explanation: MODFLOW Auxiliary Variables
Flopy works with MODFLOW auxiliary variables by allowing the recarray to contain additional columns of information. The auxiliary variables must be specified as package options as shown in the example below.
In this example, we also add a string in the last column of the list in order to name each boundary condition. In this case, however, we do not include boundname as an auxiliary variable as MODFLOW would try to read it as a floating point number.
End of explanation
#create an empty array based on nodenumber instead of layer, row, and column
rivu_dtype = [('nodenumber', '<i8'), ('stage', '<f4'), ('cond', '<f4'), ('rbot', '<f4')]
rivu_dtype = np.dtype(rivu_dtype)
stress_period_data = np.zeros((3), dtype=rivu_dtype)
stress_period_data = stress_period_data.view(np.recarray)
print('stress_period_data: ', stress_period_data)
print('type is: ', type(stress_period_data))
stress_period_data[0] = (77, 10.7, 5000., -5.7)
stress_period_data[1] = (245, 10.7, 5000., -5.7)
stress_period_data[2] = (450034, 10.7, 5000., -5.7)
print(stress_period_data)
m = flopy.modflow.Modflow(modelname='test', model_ws=workspace)
riv = flopy.modflow.ModflowRiv(m, stress_period_data=stress_period_data, dtype=rivu_dtype)
m.write_input()
print(workspace)
!more 'data/test.riv'
Explanation: Working with Unstructured Grids
Flopy can create an unstructured grid boundary condition package for MODFLOW-USG. This can be done by specifying a custom dtype for the recarray. The following shows an example of how that can be done.
End of explanation |
9,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming with Escher
This notebook is a collection of preliminary notes about a "code camp" (or a series of lectures) aimed at young students inspired by the fascinating Functional Geometry paper of Peter Henderson.
In such work the Square Limit woodcut by Maurits Cornelis Escher is reconstructed from a set of primitive graphical objects suitably composed by means of a functional language.
Here the approach will be somewhat different
Step1: Observe that the dotted line is not part of the tile but serves only as a hint for the tile boundary, moreover tiles are implemented in a way that the notebook automatically draws them (without the need of explicit drawing instructions).
Transforming tiles
We are now ready to introduce some transformations, namely rot and flip that, respectively, rotate counterclockwise (by 90°) a tile, and flip it around its vertical center.
Step2: Observe that the notation is the usual one for function application, which should be immediately clear even to students new to programming (but with a basic understanding of mathematical notation). Of course one can compose such transformations inasmuch as functions can be composed, that is, performed one on the result of another.
The first observation is that the order in which such transformations are performed can make a difference. Consider for instance
Step3: The second observation is that some choices, that can seem at first restrictive, can become less of a limitation thanks to composition
Step4: this can stimulate a discussion about the expressiveness or completeness of a set of primitives with respect to an assigned set of tasks.
Then a few binary transformations (that is, transformations that operate on two tiles) can be introduced, such as
Step5: Again one can observe that the order of the arguments is relevant in the case of the first two transformations, while it is not in the case of the latter.
Step6: Basic algebraic facts about transformations
Such transformations can be also implemented as binary operators (thanks to Python ability to define classes that emulate numeric types); one can for instance use | and / respectively for beside and above, and + for over.
Step7: This will allow us to investigate (with a natural syntax) basic algebraic structures such as (abelian or not) semigroups, and even monoids, once a blank tile has been introduced, as well as simpler concepts such as associativity and commutativity,
Step8: Recursion
It can be quite natural to introduce functions, initially presented as a sort of macros, to build derived operators. For instance
Step9: is a new transformation, defined in terms of the previous one.
Step10: Then recursion can be introduced quite naturally to build tiles having self-similar parts. Let's use a triangle to obtain a more pleasing result
Step11: Let's build a tile where the upper left quarter is a (rotated) triangle surrounded by three tiles similarly defined
Step12: developing the first four levels gives
Step13: One can even push things further and show how to use recursion instead of iteration, emphasizing how expressive a simple set of basic transformations, endowed with composition and recursion, can become.
Extending the basic transformations
What if we want to write nonet, a version of quartet that puts together nine tiles in a $3\times 3$ arrangement? The given beside and above transformations halve the width and height of the tiles they operate on, so, as it is easy to convince oneself, there is no way to use them to implement nonet.
To overcome such limitation, one can extend those transformations so that one can specify also the relative sizes of the combined tiles. For instance, in
Step14: the flipped f takes $2/5$ of the final tile, whereas f takes the other $3/5$. Using such extended transformations one can define
Step15: to obtain the desired result
Step16: of course, according to the way one decomposes the $3\times 3$ tile as a combination of two sub-tiles, there are many alternative ways to define nonet that students can experiment with.
Another possible approach will be to have above, beside (and over) accept a variable number of arguments (thanks to the way functions are defined and called in Python). In such case, obtaining the nonet will be trivial.
Decomposing the woodcut
The basic building block of the woodcut is a fish
Step17: even if it is not completely contained in a tile (the unit square), Escher chose (we'll discuss such magic in the following) a shape able to fit with its own rotation
Step18: But this is not the only magic. Let's define a new transformation that is a 45° rotation and a rescaling (by a $\sqrt 2$ factor) that, somehow, will "lift up" half tile; using it on triangle should clarify its definition
Step19: Well, the fish is so magic that if we transform it with such new rotation and flip it
Step20: we obtain a new tile that will fit with the original fish, even if rotated again
Step21: and will moreover fit with itself however rotated
Step22: The t and u tiles just defined are the building blocks of the woodcut; a recursive definition of the side, based just on t is given by
Step23: Expanding the first two levels gives
Step24: Similarly, a recursive definition of the corner, based on the side and u is given by
Step25: As before, the first two levels are
Step26: We now can use a nonet to put together the (suitably rotated) sides and corners, as follows
Step27: Expanding the first three levels gives
Step28: The magic fish
What is actually even more magic is that the outline of the fish, which makes it fit with itself in so many ways, can be obtained just from a simple line
Step29: this can be duplicated and transformed with rot45 to obtain the left side of the fish
Step30: to obtain the other side of the fish we need to rotate the edge and to translate it outside of the boundary of the tile… this can't be accomplished with the basic transformations we introduced above, but more directly as
Step31: we are now ready to put together the three edges to obtain the fish
Step32: the role of the basic edge can become more clear if we add the triangle tile
Step33: By drawing the squarelimit using the triangle as a basic tile helps understanding the magic of how the tiles fit
Step34: Perhaps even better if we use the outline | Python Code:
f
Explanation: Programming with Escher
This notebook is a collection of preliminary notes about a "code camp" (or a series of lectures) aimed at young students inspired by the fascinating Functional Geometry paper of Peter Henderson.
In such work the Square Limit woodcut by Maurits Cornelis Escher is reconstructed from a set of primitive graphical objects suitably composed by means of a functional language.
Here the approach will be somewhat different: first of all because our recipients will be students new to computer science (instead of fellow researchers), but also because besides recalling the fundamental concepts of abstraction levels (and barriers), primitives and composition, present in the original paper, we will here also take the opportunity to introduce some (albeit to some extent elementary) considerations on algebra and geometry, programming and recursion (and perhaps discuss some implementation details).
This work is to be considered very preliminary, it is not yet structured in a series of lectures, nor it is worked out the level at which every topic is to be presented, according to the age (or previous knowledge) of the students. The language and detail level used here is intended for instructors and teachers, and the various topics will be listed as mere hints, not yet as a viable and ready to use syllabus.
As a last remark, before actually beginning with the notes, the code of this notebook is very loosely derived from previous "implementations" of Functional Geometry such as Shashi Gowda's Julia version and Micah Hahn's Haskell version (containing the Bézier curve description of the Escher fish used here). I decided to rewrite such code in a Jupyter notebook written in Python 3, a simple and widespread language, to make it easier for instructors to adopt it.
The source notebook is available on GitHub (under GPL v3), feel free to use issues to point out errors, or to fork it to suggest edits.
Square Limit and tiles
Looking at the original artwork it is evident that it is obtained by the repetition of a basic element (a fish) that is suitably oriented, scaled, and colored.
This suggests starting our journey from a tile (a set of lines enclosed in the unit square), that is, a drawing building block that we will manipulate to obtain more complex drawings.
Note that, if one wants to follow an "unplugged" approach, tiles can actually be printed as objects so that students will be able to experiment with them in the physical world to better familiarize themselves with the manipulations that will follow.
It is a good idea to start with an asymmetric tile, which will make it easier to grasp the effect of the transformations that will be presented.
End of explanation
rot(f)
Explanation: Observe that the dotted line is not part of the tile but serves only as a hint for the tile boundary, moreover tiles are implemented in a way that the notebook automatically draws them (without the need of explicit drawing instructions).
Transforming tiles
We are now ready to introduce some transformations, namely rot and flip that, respectively, rotate counterclockwise (by 90°) a tile, and flip it around its vertical center.
End of explanation
flip(rot(f))
rot(flip(f))
Explanation: Observe that the notation is the usual one for function application, which should be immediately clear even to students new to programming (but with a basic understanding of mathematical notation). Of course one can compose such transformations inasmuch as functions can be composed, that is, performed one on the result of another.
The first observation is that the order in which such transformations are performed can make a difference. Consider for instance
End of explanation
rot(rot(rot(f)))
Explanation: The second observation is that some choices, that can seem at first restrictive, can become less of a limitation thanks to composition: we can obtain clockwise rotations by applying three counterclockwise rotations
End of explanation
beside(f, rot(f))
above(flip(f), f)
Explanation: this can stimulate a discussion about the expressiveness or completeness of a set of primitives with respect to an assigned set of tasks.
Then a few binary transformations (that is, transformations that operate on two tiles) can be introduced, such as: above, beside and over. The first two combine two tiles by juxtaposition (rescaling the final result so that it will again fit in a unit square), while the latter just lays one tile over another.
End of explanation
over(f, flip(f))
Explanation: Again one can observe that the order of the arguments is relevant in the case of the first two transformations, while it is not in the case of the latter.
End of explanation
class TileWithOperations(Tile):
@staticmethod
def addop(tile):
t = TileWithOperations()
t.path = tile.path
return t
def __add__(self, other):
return TileWithOperations.addop(over(self, other))
def __truediv__(self, other):
return TileWithOperations.addop(above(self, other))
def __or__(self, other):
return TileWithOperations.addop(beside(self, other))
f = TileWithOperations.addop(f)
f / ( f | f )
Explanation: Basic algebraic facts about transformations
Such transformations can be also implemented as binary operators (thanks to Python ability to define classes that emulate numeric types); one can for instance use | and / respectively for beside and above, and + for over.
End of explanation
f + blank
Explanation: This will allow us to investigate (with a natural syntax) basic algebraic structures such as (abelian or not) semigroups, and even monoids, once a blank tile has been introduced, as well as simpler concepts such as associativity and commutativity,
End of explanation
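The operator-overloading mechanism itself can be shown in isolation with a toy "tile" that merely records how things were combined, mimicking the `+` and `|` overloads above (names and representation here are invented for the illustration):

```python
class ToyTile:
    """Records how tiles were combined, mimicking the over/beside overloads."""
    def __init__(self, paths):
        self.paths = frozenset(paths)
    def __add__(self, other):            # 'over': just merge the drawings
        return ToyTile(self.paths | other.paths)
    def __or__(self, other):             # 'beside': order matters, so tag sides
        return ToyTile({('L', p) for p in self.paths} |
                       {('R', p) for p in other.paths})
    def __eq__(self, other):
        return self.paths == other.paths

a, b = ToyTile({'a'}), ToyTile({'b'})
print(a + b == b + a)   # over is commutative ...
print(a | b == b | a)   # ... while beside is not
```

This gives students a concrete object on which the commutativity discussion can be verified mechanically.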
def quartet(p, q, r, s):
return above(beside(p, q), beside(r, s))
Explanation: Recursion
It can be quite natural to introduce functions, initially presented as a sort of macros, to build derived operators. For instance
End of explanation
quartet(flip(rot(rot(rot(f)))), rot(rot(rot(f))), rot(f), flip(rot(f)))
Explanation: is a new transformation, defined in terms of the previous one.
End of explanation
triangle
Explanation: Then recursion can be introduced quite naturally to build tiles having self-similar parts. Let's use a triangle to obtain a more pleasing result
End of explanation
def rectri(n):
if n == 0:
return blank
else:
return quartet(rot(triangle), rectri(n - 1), rectri(n - 1), rectri(n - 1))
Explanation: Let's build a tile where the upper left quarter is a (rotated) triangle surrounded by three tiles similarly defined
End of explanation
rectri(4)
Explanation: developing the first four levels gives
End of explanation
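The self-similar structure can also be checked by counting: rectri(n) contains one triangle plus three recursive copies, so T(n) = 1 + 3·T(n−1) with T(0) = 0, i.e. T(n) = (3^n − 1)/2 triangles. A quick check (pure arithmetic, no drawing; the counter mirrors rectri's shape):

```python
def count_triangles(n):
    # mirrors rectri: one triangle in the upper-left quarter,
    # plus three smaller copies of the whole construction
    if n == 0:
        return 0
    return 1 + 3 * count_triangles(n - 1)

for n in range(5):
    print(n, count_triangles(n), (3**n - 1) // 2)
```

So the rectri(4) drawn above contains 40 triangles at four different scales.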
beside(flip(f), f, 2, 3)
Explanation: One can even push things further and show how to use recursion instead of iteration, emphasizing how expressive a simple set of basic transformations, endowed with composition and recursion, can become.
Extending the basic transformations
What if we want to write nonet, a version of quartet that puts together nine tiles in a $3\times 3$ arrangement? The given beside and above transformations are halving the width and height of the tiles they operate on, as it is easy to convince oneself, there is no way to use them to implement nonet.
To overcome such limitation, one can extend those transformations so that one can specify also the relative sizes of the combined tiles. For instance, in
End of explanation
def nonet(p, q, r, s, t, u, v, w, x):
return above(
beside(p, beside(q, r), 1, 2),
above(
beside(s, beside(t, u), 1, 2),
beside(v, beside(w, x), 1, 2),
),
1, 2
)
Explanation: the flipped f takes $2/5$ of the final tile, whereas f takes the other $3/5$. Using such extended transformations one can define
End of explanation
nonet(
f, f, f,
f, blank, f,
f, f, f
)
Explanation: to obtain the desired result
End of explanation
rcParams['figure.figsize'] = 4, 4
fish
Explanation: of course, according to the way one decomposes the $3\times 3$ tile as a combination of two sub-tiles, there are many alternative ways to define nonet that students can experiment with.
Another possible approach would be to have above, beside (and over) accept a variable number of arguments (thanks to the way functions are defined and called in Python). In that case, obtaining the nonet becomes trivial.
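A sketch of that variadic approach, built on the weighted binary combinator described above (`beside(p, q, m, n)` gives p a width share of $m/(m+n)$). A toy stand-in for `beside` that records the structure is used here so the sketch is self-contained; in the notebook the real combinator would be in scope instead.

```python
def beside(p, q, m=1, n=1):
    # toy stand-in recording the structure; drop this stub when the
    # real weighted `beside` from the tiles library is available
    return ("beside", p, q, m, n)

def beside_many(*tiles):
    # give every one of the k tiles an equal 1/k share of the width
    if len(tiles) == 1:
        return tiles[0]
    head, *rest = tiles
    return beside(head, beside_many(*rest), 1, len(rest))

# a 3-wide row: the head takes 1/3, the remaining pair shares 2/3
row = beside_many("p", "q", "r")
assert row == ("beside", "p", ("beside", "q", "r", 1, 1), 1, 2)
```

With an analogous `above_many`, the nonet is just `above_many(beside_many(p, q, r), beside_many(s, t, u), beside_many(v, w, x))`.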
Decomposing the woodcut
The basic building block of the woodcut is a fish
End of explanation
over(fish,rot(rot(fish)))
Explanation: even if it is not completely contained in a tile (the unit square), Escher chose (we'll discuss this magic in the following) a shape able to fit with its own rotation
End of explanation
rot45(triangle)
Explanation: But this is not the only magic. Let's define a new transformation that is a 45° rotation and a rescaling (by a $\sqrt 2$ factor) that, somehow, will "lift up" half a tile; using it on triangle should clarify its definition
End of explanation
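The linear part of such a transformation can be sketched with a rotation matrix and a scale factor (an assumption about conventions: the notebook's rot45 presumably also translates the result into position, which is omitted here; the rescaling is taken as a shrink by $\sqrt 2$, matching the smaller fish obtained below).

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotate by 45 degrees
S = R / np.sqrt(2)                                # then shrink by sqrt(2)

# the unit square's diagonal (1, 1), of length sqrt(2),
# is sent to the unit-length vertical segment (0, 1)
assert np.allclose(S @ np.array([1.0, 1.0]), [0.0, 1.0])
assert np.isclose(np.linalg.norm(S @ np.array([1.0, 1.0])), 1.0)
```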
smallfish = flip(rot45(fish))
smallfish
Explanation: Well, the fish is so magic that if we transform it with this new rotation and flip it
End of explanation
t = over(fish, over(smallfish, rot(rot(rot(smallfish)))))
t
Explanation: we obtain a new tile that will fit with the original fish, even if rotated again
End of explanation
u = over(over(over(smallfish, rot(smallfish)), rot(rot(smallfish))), rot(rot(rot(smallfish))))
u
Explanation: and will moreover fit with itself however rotated
End of explanation
def side(n):
if n == 0:
return blank
else:
return quartet(side(n-1), side(n-1), rot(t), t)
Explanation: The t and u tiles just defined are the building blocks of the woodcut; a recursive definition of the side, based just on t is given by
End of explanation
side(2)
Explanation: Expanding the first two levels gives
End of explanation
def corner(n):
if n == 0:
return blank
else:
return quartet(corner(n-1), side(n-1), rot(side(n-1)), u)
Explanation: Similarly, a recursive definition of the corner, based on the side and u is given by
End of explanation
corner(2)
Explanation: As before, the first two levels are
End of explanation
def squarelimit(n):
return nonet(
corner(n),
side(n),
rot(rot(rot(corner(n)))),
rot(side(n)),
u,
rot(rot(rot(side(n)))),
rot(corner(n)),
rot(rot(side(n))),
rot(rot(corner(n)))
)
Explanation: We now can use a nonet to put together the (suitably rotated) sides and corners, as follows
End of explanation
rcParams['figure.figsize'] = 20, 20
squarelimit(3)
Explanation: Expanding the first three levels gives
End of explanation
rcParams['figure.figsize'] = 4, 4
edge
Explanation: The magic fish
What is actually even more magic is that the outline of the fish, which makes it fit with itself in so many ways, can be obtained just from a simple line
End of explanation
outline2 = over(
rot45(flip(rot(edge))),
rot(rot(rot45(flip(rot(edge)))))
)
outline2
Explanation: this can be duplicated and transformed with rot45 to obtain the left side of the fish
End of explanation
outline3 = Tile.transform(rot(edge), T().translate(-1,0))
outline3
Explanation: to obtain the other side of the fish we need to rotate the edge and translate it outside of the boundary of the tile… this can't be accomplished with the basic transformations we introduced above, so we do it more directly as
End of explanation
outline = over(edge, Tile.union(outline3, outline2))
outline
Explanation: we are now ready to put together the three edges to obtain the fish
End of explanation
over(triangle, outline)
Explanation: the role of the basic edge can become more clear if we add the triangle tile
End of explanation
def _t(base):
t2 = flip(rot45(base))
t3 = rot(rot(rot(t2)))
return over(base, over(t2, t3))
def _u(base):
t2 = flip(rot45(base))
return over(over(t2, rot(t2)), over(rot(rot(t2)), rot(rot(rot(t2)))))
t = _t(triangle)
u = _u(triangle)
squarelimit(3)
Explanation: Drawing the squarelimit using the triangle as the basic tile helps in understanding the magic of how the tiles fit
End of explanation
t = _t(outline)
u = _u(outline)
squarelimit(3)
Explanation: Perhaps it is even clearer if we use the outline
End of explanation |
9,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
learn the link between two datasets
Step1: 2 Linear regression
linear models
Step2: shrinkage
Step3: An example of the bias/variance tradeoff: the larger the ridge $\alpha$ parameter, the higher the bias and the lower the variance
3 Classification
$y = \mathrm{sigmoid}(X\beta - \mathrm{offset}) + \epsilon = \frac{1}{1+\exp(-X\beta+\mathrm{offset})} + \epsilon$
Step4: 4 SVM
SVMs can be used in regression (SVR, Support Vector Regression) or in classification (SVC, Support Vector Classification).
Warning | Python Code:
import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
iris_X = iris.data
iris_y = iris.target
np.random.seed(0)
indices = np.random.permutation(len(iris_X))
iris_X_train = iris_X[indices[:-10]]
iris_y_train = iris_y[indices[:-10]]
iris_X_test = iris_X[indices[-10:]]
iris_y_test = iris_y[indices[-10:]]
# create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(iris_X_train, iris_y_train)
knn.predict(iris_X_test)
iris_y_test
Explanation: Supervised Learning
learn the link between two datasets: the observed data x and an external variable y that we try to predict.
All supervised estimators implement a $fit(x, y)$ method and a $predict(x)$ method: given unlabeled observations x, the estimator returns the predicted labels y
1 K-nearest neighbors classifier
End of explanation
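The manual permutation-based split above can also be done with scikit-learn's own splitting helper; a quick sketch (the `random_state` value is chosen arbitrarily):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
# hold out 10 observations, as done by hand with np.random.permutation above
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=10, random_state=0)

knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
print(knn.predict(X_test))  # one predicted class label per held-out sample
```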
diabetes = datasets.load_diabetes()
diabetes_X_train = diabetes.data[:-20]
diabetes_y_train = diabetes.target[:-20]
diabetes_X_test = diabetes.data[-20:]
diabetes_y_test = diabetes.target[-20:]
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(diabetes_X_train, diabetes_y_train)
regr.coef_
# mean squared error
np.mean((regr.predict(diabetes_X_test)-diabetes_y_test)**2)
regr.score(diabetes_X_test,diabetes_y_test)
Explanation: 2 Linear regression
linear models: $y=X\beta + \epsilon$ where
+ $X$: data
+ $y$: target variables
+ $\beta$: coefficients
+ $\epsilon$: observation noise
End of explanation
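The coefficients that LinearRegression estimates are the ordinary least-squares solution of $y = X\beta + \epsilon$, which can be sketched directly with NumPy (synthetic data with arbitrary true coefficients):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 2)                      # 50 samples, 2 features
true_beta = np.array([1.5, -3.0])
y = X @ true_beta + 0.01 * rng.randn(50)  # y = X beta + small noise

# least-squares fit; lstsq is numerically safer than inverting X^T X
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, true_beta, atol=0.05)
```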
X = np.c_[0.5, 1].T
y = [0.5, 1]
test = np.c_[0, 2].T
regr = linear_model.LinearRegression()
import matplotlib.pyplot as plt
plt.figure()
np.random.seed(0)
for _ in range(6):
this_X = .1 * np.random.normal(size=(2,1)) + X
regr.fit(this_X, y)
plt.plot(test, regr.predict(test))
plt.scatter(this_X, y, s=3)
plt.show()
regr = linear_model.Ridge(alpha=0.1)
plt.figure()
np.random.seed(0)
for _ in range(6):
this_X = 0.1 * np.random.normal(size=(2,1)) + X
regr.fit(this_X, y)
plt.plot(test, regr.predict(test))
plt.scatter(this_X, y, s=3)
plt.show()
Explanation: shrinkage
End of explanation
# take the first two features
X = iris.data[:,:2]
Y = iris.target
h = .02
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(X,Y)
x_min, x_max = X[:, 0].min() - .5, X[:,0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4,3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
plt.scatter(X[:,0], X[:, 1], c=Y, edgecolors ='k', cmap=plt.cm.Paired)
plt.show()
Explanation: An example of the bias/variance tradeoff: the larger the ridge $\alpha$ parameter, the higher the bias and the lower the variance
3 Classification
$y = \mathrm{sigmoid}(X\beta - \mathrm{offset}) + \epsilon = \frac{1}{1+\exp(-X\beta+\mathrm{offset})} + \epsilon$
End of explanation
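The link function written above is easy to implement directly; a small sketch independent of scikit-learn:

```python
import numpy as np

def sigmoid(t):
    # maps any real-valued score into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-t))

assert sigmoid(0.0) == 0.5                          # the decision boundary
assert np.isclose(sigmoid(35.0), 1.0)               # large scores saturate
assert np.isclose(sigmoid(-35.0), 0.0, atol=1e-12)  # and vice versa
```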
from sklearn import svm, datasets
X = iris.data[:,:2]
y = iris.target
h = .02
C = 1.0
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)
lin_svc = svm.LinearSVC(C=C).fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:,0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
titles = ['SVC with linear kernel',
'LinearSVC(Linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial(degree 3) kernel']
for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
plt.subplot(2, 2, i+1)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.coolwarm)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
Explanation: 4 SVM
SVMs can be used in regression (SVR, Support Vector Regression) or in classification (SVC, Support Vector Classification).
Warning: For many estimators, including SVMs, having datasets with unit standard deviation for each feature is important to get good predictions
End of explanation |
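A sketch of the standardisation the warning refers to: rescaling each feature to zero mean and unit standard deviation, typically before fitting an SVM (the data values here are arbitrary):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# two features on wildly different scales
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

X_std = StandardScaler().fit_transform(X)
assert np.allclose(X_std.mean(axis=0), 0.0)  # each feature now has zero mean
assert np.allclose(X_std.std(axis=0), 1.0)   # and unit standard deviation
```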
9,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the ice deformation (rheology) formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
Description:
TensorFlow Basics
Step1: Constants
Step2: Operations
Step3: Placeholders
Instead of using a constant, we can define a placeholder that allows us to provide the value at the time of execution just like function parameters.
Step4: Variables
A variable is a tensor that can change during program execution.
Step5: Classification using the MNIST dataset
Step6: The MNIST dataset contain 55,000 images. The dimensions of each image is 28-by-28. Each vector has 784 elements because 28*28=784.
Step7: Before we begin, we specify three parameters
Step8: Network parameters
Step9: def multi | Python Code:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
tf.__version__
Explanation: TensorFlow Basics
End of explanation
h = tf.constant('Hello World')
h
h.graph is tf.get_default_graph()
x = tf.constant(100)
x
# Create Session object in which we can run operations.
# A session object encapsulates the environment in which
# operations are executed. Tensor objects are evaluated
# by operations.
session = tf.Session()
session.run(h)
session.run(x)
type(session.run(x))
type(session.run(h))
Explanation: Constants
End of explanation
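A rough analogy in plain Python (not TensorFlow itself) may help here: a graph node such as `tf.constant(100)` is like an unevaluated thunk, and `session.run` is like forcing its evaluation. The names below are illustrative only:

```python
# Rough analogy (plain Python, not TensorFlow): a graph node is like an
# unevaluated thunk, and Session.run is like forcing the evaluation.
make_constant = lambda value: (lambda: value)

h_node = make_constant('Hello World')
x_node = make_constant(100)

# Nothing is computed until we "run" (call) the node:
result = x_node()
```

This is only an analogy for the deferred-execution idea; the real TensorFlow graph also tracks dependencies between nodes.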
a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as session:
print('Addition: {}'.format(session.run(a + b)))
print('Subtraction: {}'.format(session.run(a - b)))
print('Multiplication: {}'.format(session.run(a * b)))
print('Division: {}'.format(session.run(a / b)))
e = np.array([[5., 5.]])
f = np.array([[2.], [2.]])
e
f
# Convert numpy arrays to TensorFlow objects
ec = tf.constant(e)
fc = tf.constant(f)
matrix_mult_op = tf.matmul(ec, fc)
with tf.Session() as session:
print('Matrix Multiplication: {}'.format(session.run(matrix_mult_op)))
Explanation: Operations
End of explanation
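The same matrix product can be checked with plain NumPy: a 1x2 row of fives times a 2x1 column of twos gives 5*2 + 5*2 = 20, matching what the TensorFlow session prints above:

```python
import numpy as np

e = np.array([[5., 5.]])    # shape (1, 2)
f = np.array([[2.], [2.]])  # shape (2, 1)

product = np.matmul(e, f)   # shape (1, 1)
```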
c = tf.placeholder(tf.int32)
d = tf.placeholder(tf.int32)
add_op = tf.add(c, d)
sub_op = tf.subtract(c, d)
mult_op = tf.multiply(c, d)
div_op = tf.divide(c, d)
with tf.Session() as session:
input_dict = {c: 11, d: 10}
print('Addition: {}'.format(session.run(add_op, feed_dict=input_dict)))
print('Subtraction: {}'.format(session.run(sub_op, feed_dict=input_dict)))
print('Multiplication: {}'.format(session.run(mult_op, feed_dict=input_dict)))
print('Division: {}'.format(session.run(div_op, feed_dict={c:11, d:11})))
Explanation: Placeholders
Instead of using a constant, we can define a placeholder that allows us to provide the value at the time of execution just like function parameters.
End of explanation
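The "function parameters" analogy can be made concrete in plain Python (this is not TensorFlow code, just a sketch of the idea): a placeholder is a named hole in a computation, and `feed_dict` supplies the concrete values at run time.

```python
# Rough analogy in plain Python: a placeholder is a named hole in a
# computation, and feed_dict supplies the concrete values at run time.
def add_op(feed_dict):
    return feed_dict['c'] + feed_dict['d']

# Analogous to session.run(add_op, feed_dict={c: 11, d: 10}):
result = add_op({'c': 11, 'd': 10})
```

The same "op" can be re-run with different inputs, just as the TensorFlow session above is run with different `feed_dict` values.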
var2 = tf.get_variable('var2', [2])
var2
Explanation: Variables
A variable is a tensor that can change during program execution.
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('data', one_hot=True)
type(mnist)
mnist.train.images
mnist.train.images.shape
Explanation: Classification using the MNIST dataset
End of explanation
# Convert the vector to a 28x28 matrix
sample_img = mnist.train.images[0].reshape(28, 28)
# Show the picture
plt.imshow(sample_img, cmap='Greys')
Explanation: The MNIST dataset contains 55,000 training images. The dimensions of each image are 28-by-28. Each vector has 784 elements because 28*28=784.
End of explanation
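The reshape step is lossless; a quick NumPy check (independent of the MNIST data itself) confirms that flattening a 28x28 matrix into 784 values and reshaping back recovers the original:

```python
import numpy as np

img = np.arange(28 * 28, dtype=np.float32).reshape(28, 28)  # dummy "image"
flat = img.reshape(784)            # the vector form stored in mnist.train.images
restored = flat.reshape(28, 28)    # the matrix form passed to plt.imshow
```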
learning_rate = 0.001
training_epochs = 15
batch_size = 100
Explanation: Before we begin, we specify three parameters:
- the learning rate $\alpha$: how quickly should the cost function be adjusted.
- training epoch: number of training cycles
- batch size: batches of training data
End of explanation
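To make the relationship between batch size and epochs concrete: with 55,000 training examples and batches of 100, each epoch consists of 550 optimizer steps, so 15 epochs is 8,250 steps in total. A sketch of the arithmetic only:

```python
n_samples = 55000        # mnist.train.num_examples
batch_size = 100
training_epochs = 15

batches_per_epoch = n_samples // batch_size           # 550 batches per pass
total_steps = batches_per_epoch * training_epochs     # 8250 optimizer steps
```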
# Number of classes is 10 because we have 10 digits
n_classes = 10
# Number of training examples
n_samples = mnist.train.num_examples
# The flatten array of the 28x28 image matrix contains 784 elements
n_input = 784
# Number of neurons in the hidden layers. For image data, 256 neurons
# is common because we have 256 intensity values (8-bit).
# In this example, we only use 2 hidden layers. The more hidden
# layers, we use the longer it takes for the model to run but
# more layers has the possibility of being more accurate.
n_hidden_1 = 256
n_hidden_2 = 256
Explanation: Network parameters
End of explanation
def multilayer_perceptron(x, weights, biases):
'''
x: Placeholder for the data input
weights: Dictionary of weights
biases: Dictionary of bias values
'''
# First hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Second hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.relu(layer_2)
# Output layer
layer_out = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
return layer_out
weights = {
'h1': tf.Variable(tf.random_normal(shape=[n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal(shape=[n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal(shape=[n_hidden_2, n_classes]))
}
tf.random_normal(shape=(n_input, n_hidden_1))
#tf.Session().run(weights['h1'])
Explanation: def multi
End of explanation |
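The forward pass defined in `multilayer_perceptron` can be mirrored in plain NumPy to see the shapes involved. This is a sketch of the same two-hidden-layer computation with random weights, not a replacement for the TensorFlow graph:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def mlp_forward(x, weights, biases):
    # Mirrors multilayer_perceptron: two ReLU hidden layers, linear output.
    layer_1 = relu(x @ weights['h1'] + biases['b1'])
    layer_2 = relu(layer_1 @ weights['h2'] + biases['b2'])
    return layer_2 @ weights['out'] + biases['out']

rng = np.random.RandomState(0)
n_input, n_hidden_1, n_hidden_2, n_classes = 784, 256, 256, 10
weights = {
    'h1': rng.randn(n_input, n_hidden_1),
    'h2': rng.randn(n_hidden_1, n_hidden_2),
    'out': rng.randn(n_hidden_2, n_classes),
}
biases = {
    'b1': np.zeros(n_hidden_1),
    'b2': np.zeros(n_hidden_2),
    'out': np.zeros(n_classes),
}

logits = mlp_forward(rng.randn(5, n_input), weights, biases)  # 5 example inputs
```

A batch of 5 flattened images produces a (5, 10) array of logits, one score per digit class.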
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step5: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step6: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step7: Problem 3
Convince yourself that the data is still good after shuffling!
Step8: Problem 4
Another check
Step9: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
Step10: Finally, let's save the data for later reuse
Step11: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions
Step12: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
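The size check that `maybe_download` performs can be sketched without any network access, using a throwaway temporary file instead of the real archives (stdlib only; `verify_size` is a helper introduced here, not part of the assignment code):

```python
import os
import tempfile

def verify_size(path, expected_bytes):
    """Return True iff the file at `path` has exactly `expected_bytes` bytes."""
    return os.stat(path).st_size == expected_bytes

# Demonstrate on a throwaway file instead of the real notMNIST archives:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'x' * 1024)
    tmp_path = tmp.name

ok = verify_size(tmp_path, 1024)    # correct expected size
bad = verify_size(tmp_path, 2048)   # wrong expected size
os.remove(tmp_path)
```

This is exactly the `os.stat(...).st_size == expected_bytes` comparison used above to detect truncated downloads.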
num_classes = 10
def extract(filename):
tar = tarfile.open(filename)
tar.extractall()
tar.close()
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
if len(data_folders) != num_classes:
    raise Exception(
      'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
print data_folders
return data_folders
train_folders = extract(train_filename)
test_folders = extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
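The folder-per-class layout that `extract` expects can be exercised with a temporary directory before touching the real archives. This stdlib-only sketch builds the ten directories 'A' through 'J' and lists them the same way `extract` does:

```python
import os
import shutil
import string
import tempfile

root = tempfile.mkdtemp()
for letter in string.ascii_uppercase[:10]:  # 'A' through 'J'
    os.mkdir(os.path.join(root, letter))

# Same listing logic as extract():
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
num_found = len(data_folders)

shutil.rmtree(root)  # clean up the throwaway directory
```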
from IPython.display import Image
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
dataset = np.ndarray(
shape=(max_num_images, image_size, image_size), dtype=np.float32)
labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
label_index = 0
image_index = 0
for folder in data_folders:
print folder
for image in os.listdir(folder):
if image_index >= max_num_images:
        raise Exception('More images than expected: %d >= %d' % (
          image_index, max_num_images))
image_file = os.path.join(folder, image)
#if image_index % 20000 == 0:
# display(Image(filename=image_file))
#plt.show()
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
labels[image_index] = label_index
image_index += 1
except IOError as e:
print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
label_index += 1
num_images = image_index
dataset = dataset[0:num_images, :, :]
labels = labels[0:num_images]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' % (
num_images, min_num_images))
print 'Full dataset tensor:', dataset.shape
print 'Mean:', np.mean(dataset)
print 'Standard deviation:', np.std(dataset)
print 'Labels:', labels.shape
return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
from matplotlib.pyplot import imshow
from time import sleep

num_png = 0
directory = "notMNIST_large/A"
for png in os.listdir(directory):
    # print png
    num_png += 1
    # display(Image(os.path.join(directory, png)))
    # sleep(2)
    if num_png > 10:
        break
# Image(filename="notMNIST_large/A/ISBKYW1pcm9xdWFpICEudHRm.png")
# np.flatnonzero(train_labels == 0)  # example: indices of all 'A' samples
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.
A few images might not be readable, we'll just skip them.
End of explanation
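The normalization formula used in load() maps raw pixel values from [0, 255] into [-0.5, 0.5], giving approximately zero mean. A standalone sketch on a hypothetical raw image (the array here is made up for illustration):

```python
import numpy as np

pixel_depth = 255.0  # number of levels per pixel

# Hypothetical raw 28x28 image with values in [0, 255]
raw = np.arange(784, dtype=np.float32).reshape(28, 28) % 256

# Same normalization as in load(): shift to roughly zero mean, scale to [-0.5, 0.5]
normalized = (raw - pixel_depth / 2) / pixel_depth
```

After this transform the values span at most [-0.5, 0.5], which keeps gradients well-scaled for training later on.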
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
for label in range(0, 10):
    idxs = np.random.choice(np.flatnonzero(train_labels == label), 10, replace=False)
    for i, idx in enumerate(idxs):
        pos = i * 10 + 1 + label
        plt.subplot(10, 10, pos)
        plt.imshow(train_dataset[idx, ])
        plt.axis("off")
plt.show()
# plt.imshow(train_dataset[0, ])
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
np.random.seed(133)
def randomize(dataset, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_dataset = dataset[permutation, :, :]
    shuffled_labels = labels[permutation]
    return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
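A quick way to convince yourself that a single shared permutation preserves the sample-to-label pairing, on a tiny made-up array (row i carries values 3i, 3i+1, 3i+2, so label i must stay with row i after shuffling):

```python
import numpy as np

np.random.seed(0)
data = np.arange(12).reshape(4, 3)   # 4 "samples", row i starts with 3*i
labels = np.array([0, 1, 2, 3])      # label i belongs to row i

# Same shuffling scheme as randomize(): one permutation for both arrays
permutation = np.random.permutation(labels.shape[0])
shuffled_data = data[permutation, :]
shuffled_labels = labels[permutation]

# Pairing is preserved: row k of shuffled_data still matches shuffled_labels[k]
paired = all(shuffled_data[k, 0] == 3 * shuffled_labels[k] for k in range(4))
```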
for label in range(0, 10):
    idxs = np.random.choice(np.flatnonzero(train_labels == label), 10, replace=False)
    for i, idx in enumerate(idxs):
        pos = i * 10 + 1 + label
        plt.subplot(10, 10, pos)
        plt.imshow(train_dataset[idx, ])
        plt.axis("off")
plt.show()
# plt.imshow(train_dataset[0, ])
Explanation: Problem 3
Convince yourself that the data is still good after shuffling!
End of explanation
def numexample(labels, label):
    return np.sum(labels == label)

for i in range(0, 10):
    print i, "\t", numexample(train_labels, i)
Explanation: Problem 4
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
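np.bincount returns all per-class counts in one call, which is a compact alternative to looping over classes as above. A small sketch with a hypothetical label array:

```python
import numpy as np

# Hypothetical label array with a mild imbalance across 3 classes
labels = np.array([0, 0, 1, 1, 1, 2])

# One count per class index; minlength guarantees a slot for every class
counts = np.bincount(labels, minlength=3)
```

For a balanced dataset, all entries of `counts` should be roughly equal.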
train_size = 200000
valid_size = 10000
valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape
Explanation: Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
End of explanation
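The same hold-out pattern in miniature, with hypothetical sizes: slice a validation block off the front of the (already shuffled) data, then take the training block right after it, so the two never overlap.

```python
import numpy as np

data = np.arange(100)    # stand-in for 100 shuffled samples
labels = data % 10       # stand-in labels

valid_size = 20
train_size = 70

valid_data, valid_labels = data[:valid_size], labels[:valid_size]
train_data = data[valid_size:valid_size + train_size]
train_labels_small = labels[valid_size:valid_size + train_size]
```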
pickle_file = 'notMNIST.pickle'
try:
    f = open(pickle_file, 'wb')
    save = {
        'train_dataset': train_dataset,
        'train_labels': train_labels,
        'valid_dataset': valid_dataset,
        'valid_labels': valid_labels,
        'test_dataset': test_dataset,
        'test_labels': test_labels,
    }
    pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
    f.close()
except Exception as e:
    print 'Unable to save data to', pickle_file, ':', e
    raise
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size
Explanation: Finally, let's save the data for later reuse:
End of explanation
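The pickle round trip can be verified without touching the filesystem by serializing into an in-memory buffer; a small sketch (the dictionary contents are made up):

```python
import io
import pickle

save = {'train_labels': [0, 1, 2], 'valid_labels': [3, 4]}

# Serialize into memory instead of a file on disk
buf = io.BytesIO()
pickle.dump(save, buf, pickle.HIGHEST_PROTOCOL)

# Deserialize and confirm we get an equal object back
buf.seek(0)
restored = pickle.load(buf)
```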
train_dataset.shape,test_dataset.shape,valid_dataset.shape
def similar(dataset1, dataset2):
    print dataset2.shape
    nexm, nrow, ncol = dataset1.shape
    dataset1 = np.reshape(dataset1[0:5000, ], (5000, nrow * ncol))
    nexm, nrow, ncol = dataset2.shape
    dataset2 = np.reshape(dataset2[0:1000, ], (1000, 1, nrow * ncol))
    dataset = dataset1 - dataset2
    return dataset.T

def overlap(S):
    # m = np.mean(S)
    return np.sum(S == 0)
STrainVal = similar(train_dataset,valid_dataset)
print "train Val overlap: ",overlap(STrainVal)
SValTest = similar(valid_dataset,test_dataset)
print "Val Test overlap: ", overlap(SValTest)
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
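The broadcasting-based subtraction above is memory-hungry; exact duplicates can also be counted by hashing each row's raw bytes. A sketch of that alternative on hypothetical small arrays (this is not the notebook's own method, just an illustration):

```python
import numpy as np

def exact_overlap(a, b):
    # Count rows of `a` that also appear verbatim in `b`, via byte-level hashing
    seen = {row.tobytes() for row in b}
    return sum(row.tobytes() in seen for row in a)

a = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([[3.0, 4.0], [9.0, 9.0]])

n_dup = exact_overlap(a, b)   # one shared row: [3.0, 4.0]
```

This scales linearly with the number of rows, so it works on the full training set; near duplicates would still require a tolerance-based comparison.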
train = np.reshape(train_dataset,(train_dataset.shape[0],28*28))
#val = valid_dataset
#test = test_dataset
train = train[0:30000,]
train_labels = train_labels[0:30000,]
clf = LogisticRegression()
clf.fit(train,train_labels)
train.shape,train_labels.shape
train_labels.shape
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation |
Description:
Step1: Planet OS API demo for GEFS
<font color=red>This notebook is not working right now as the GEFS ensemble forecast is updated only by request! Let us know if you would like to use it</font>
Note
Step2: GEFS is a model with lots of output variables, which may also change depending on which particular output file you are checking. Analyse the metadata first, filter for variables we may be interested in and limit the API request.
Warning
Step3: Filter by parameter name; in this example we want to find surface pressure.
Step4: API request for precipitation
Step5: API request for surface pressure
Step6: Read data from the JSON response and convert to a numpy array for easier plotting
Step7: Precipitation plots
Let's first plot boxplots of ensemble members, showing 6h precipitation and accumulated precipitation.
Step8: From simple distribution it is immediately visible that ensamble members may have very different values at particular time. Interpretation of this is highly dependent on physical quantity | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import dateutil.parser
import datetime
from urllib.request import urlopen, Request
import simplejson as json
import pandas as pd
def extract_reference_time(API_data_loc):
    """Find the reference time that corresponds to the most complete forecast. Should be the earliest value."""
    reftimes = set()
    for i in API_data_loc['entries']:
        reftimes.update([i['axes']['reftime']])
    reftimes = list(reftimes)
    if len(reftimes) > 1:
        reftime = reftimes[0] if dateutil.parser.parse(reftimes[0]) < dateutil.parser.parse(reftimes[1]) else reftimes[1]
    else:
        reftime = reftimes[0]
    return reftime
#latitude = 21.205
#longitude = -158.35
latitude = 58
longitude = 26
apikey = open('APIKEY').read().strip()
num_ens = 10
prec_var = "Total_precipitation_surface_6_Hour_Accumulation_ens"
pres_var = "Pressure_surface_ens"
Explanation: Planet OS API demo for GEFS
<font color=red>This notebook is not working right now as the GEFS ensemble forecast is updated only by request! Let us know if you would like to use it</font>
Note: this notebook requires python3.
This notebook is an introduction to the PlanetOS API data format using the GFS Global Forecast dataset.
API documentation is available at http://docs.planetos.com.
If you have questions or comments, join the Planet OS Slack community to chat with our development team.
For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/
GEFS global probabilistic weather forecast
GEFS is a probabilistic weather forecast system composed of 20 model ensemble members, which differ by small fluctuations in the model's initial conditions. Probabilistic forecasts try to mimic the naturally chaotic behaviour of the atmosphere and usually have higher forecast skill than a deterministic weather forecast after the third day or so. However, their interpretation is not trivial, and with this demo we encourage users to take a deeper look into this kind of data.
In this tutorial we analyse precipitation and surface pressure data.
End of explanation
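The earliest-reference-time selection performed by extract_reference_time above can be sketched with the standard library alone; the ISO timestamps below are hypothetical:

```python
from datetime import datetime

# Hypothetical reference times collected from API entries
reftimes = {"2016-06-24T06:00:00", "2016-06-24T00:00:00"}

# The earliest reference time corresponds to the most complete forecast
earliest = min(reftimes, key=datetime.fromisoformat)
```

Unlike the two-element comparison in the function above, `min` with a parse key also works when more than two reference times are present.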
API_meta_url = "http://api.planetos.com/v1/datasets/noaa-ncep_gefs?apikey={}".format(apikey)
request = Request(API_meta_url)
response = urlopen(request)
API_meta = json.loads(response.read())
print(API_meta_url)
Explanation: GEFS is a model with lots of output variables, which may also change depending on which particular output file you are checking. Analyse the metadata first, filter for variables we may be interested in and limit the API request.
Warning: if requesting too many variables, you may get a gateway timeout error. If this happens, try to specify only one context or variable.
End of explanation
[i['name'] for i in API_meta['Variables'] if 'pressure' in i['name'].lower() and 'surface' in i['name'].lower()]
Explanation: Filter by parameter name; in this example we want to find surface pressure.
End of explanation
API_url = "http://api.planetos.com/v1/datasets/noaa-ncep_gefs/point?lon={0}&lat={1}&count=2000&verbose=false&apikey={2}&var={3}".format(longitude,latitude,apikey,prec_var)
request = Request(API_url)
response = urlopen(request)
API_data_prec = json.loads(response.read())
print(API_url)
Explanation: API request for precipitation
End of explanation
API_url = "http://api.planetos.com/v1/datasets/noaa-ncep_gefs/point?lon={0}&lat={1}&count=2000&verbose=false&apikey={2}&var={3}".format(longitude,latitude,apikey,pres_var)
request = Request(API_url)
response = urlopen(request)
API_data_pres = json.loads(response.read())
print(API_url)
Explanation: API request for surface pressure
End of explanation
## First collect data into dictionaries, then convert to pandas DataFrames
pres_data_dict = {}
pres_time_dict = {}
prec_data_dict = {}
prec_time_dict = {}
for i in range(0, num_ens):
    pres_data_dict[i] = []
    pres_time_dict[i] = []
    prec_data_dict[i] = []
    prec_time_dict[i] = []

reftime = extract_reference_time(API_data_pres)  # hoisted: same value for every entry
for i in API_data_pres['entries']:
    if reftime == i['axes']['reftime']:
        ## print("reftest", int(i['axes']['ens']))
        pres_data_dict[int(i['axes']['ens'])].append(i['data'][pres_var])
        pres_time_dict[int(i['axes']['ens'])].append(dateutil.parser.parse(i['axes']['time']))

reftime = extract_reference_time(API_data_prec)
for i in API_data_prec['entries']:
    if reftime == i['axes']['reftime']:
        prec_data_dict[int(i['axes']['ens'])].append(i['data'][prec_var])
        prec_time_dict[int(i['axes']['ens'])].append(dateutil.parser.parse(i['axes']['time']))

## Check that the time axes of all ensemble members are equal
for i in range(2, num_ens):
    ## print(i, np.array(pres_time_dict[1]).shape, np.array(pres_time_dict[i]).shape)
    if np.amax(np.array(pres_time_dict[1]) - np.array(pres_time_dict[i])) != datetime.timedelta(0):
        print('timeproblem', np.amax(np.array(pres_time_dict[1]) - np.array(pres_time_dict[i])))

pres_pd = pd.DataFrame(pres_data_dict)
prec_pd = pd.DataFrame(prec_data_dict)
prec_pd
Explanation: Read data from the JSON response and convert to a numpy array for easier plotting
End of explanation
fig, (ax0, ax2) = plt.subplots(nrows=2,figsize=(20,12))
ax0.boxplot(prec_pd)
ax0.grid()
ax0.set_title("Simple ensemble distribution")
ax0.set_ylabel('Precipitation mm/6h')
ax2.boxplot(np.cumsum(prec_pd,axis=0))
ax2.grid()
ax2.set_title("Cumulative precipitation distribution")
ax2.set_ylabel('Precipitation mm/6h')
ax2.set_xlabel('Forecast steps (each is 6h)')
Explanation: Precipitation plots
Let's first plot boxplots of ensemble members, showing 6h precipitation and accumulated precipitation.
End of explanation
fig=plt.figure(figsize=(20,10))
plt.boxplot(pres_pd)
plt.grid()
plt.title('Ensemble distribution')
plt.ylabel('Pressure Pa')
plt.xlabel('Forecast steps (each is 6h)')
fig=plt.figure(figsize=(20,10))
plt.plot(pres_pd)
plt.grid()
plt.ylabel('Pressure Pa')
plt.xlabel('Forecast steps (each is 6h)')
Explanation: From the simple distribution it is immediately visible that ensemble members may have very different values at a particular time. Interpretation of this is highly dependent on the physical quantity: for precipitation this may reflect changes in the actual weather pattern or just small changes in the timing of the precipitation event. To get rid of the latter, we use the accumulated precipitation. From this plot it is more evident (depending on the particular forecast, of course) that the variability is smaller. For longer forecasts it may be more reasonable to check only 24h accumulated precipitation.
Surface pressure plots
Surface pressure variation is a better descriptor of actual uncertainty than precipitation.
End of explanation |
Description:
Magics to Access the JVM Kernels from Python
BeakerX has magics for Python so you can run cells in the other languages.
The first few cells below show how complete the implementation is with Groovy, then we have just one cell in each other language.
There are also Polyglot Magics for accessing Python from the JVM.
You can communicate between languages with Autotranslation.
Groovy
Step1: Java
Step2: Scala
Step3: Kotlin
Step4: Clojure
Step5: SQL | Python Code:
%%groovy
println("stdout works")
f = {it + " work"}
f("results")
%%groovy
new Plot(title:"plots work", initHeight: 200)
%%groovy
[a:"tables", b:"work"]
%%groovy
"errors work"/1
%%groovy
HTML("<h1>HTML works</h1>")
%%groovy
def p = new Plot(title : 'Plots Work', xLabel: 'Horizontal', yLabel: 'Vertical');
p << new Line(x: [0, 1, 2, 3, 4, 5], y: [0, 1, 6, 5, 2, 8])
Explanation: Magics to Access the JVM Kernels from Python
BeakerX has magics for Python so you can run cells in the other languages.
The first few cells below show how complete the implementation is with Groovy, then we have just one cell in each other language.
There are also Polyglot Magics magics for accessing Python from the JVM.
You can communicate between languages with Autotranslation.
Groovy
End of explanation
%%java
import java.util.List;
import com.twosigma.beakerx.chart.xychart.Plot;
import java.util.Arrays;
Plot p = new Plot();
p.setTitle("Java Works");
p.setXLabel("Horizontal");
p.setYLabel("Vertical");
Bars b = new Bars();
List<Object> x = Arrays.asList(0, 1, 2, 3, 4, 5);
List<Number> y = Arrays.asList(0, 1, 6, 5, 2, 8);
Line line = new Line();
line.setX(x);
line.setY(y);
p.add(line);
return p;
Explanation: Java
End of explanation
%%scala
val plot = new Plot { title = "Scala Works"; xLabel="Horizontal"; yLabel="Vertical" }
val line = new Line {x = Seq(0, 1, 2, 3, 4, 5); y = Seq(0, 1, 6, 5, 2, 8)}
plot.add(line)
Explanation: Scala
End of explanation
%%kotlin
val x: MutableList<Any> = mutableListOf(0, 1, 2, 3, 4, 5)
val y: MutableList<Number> = mutableListOf(0, 1, 6, 5, 2, 8)
val line = Line()
line.setX(x)
line.setY(y)
val plot = Plot()
plot.setTitle("Kotlin Works")
plot.setXLabel("Horizontal")
plot.setYLabel("Vertical")
plot.add(line)
plot
Explanation: Kotlin
End of explanation
%%clojure
(import '[com.twosigma.beakerx.chart.xychart Plot]
'[com.twosigma.beakerx.chart.xychart.plotitem Line])
(doto (Plot.)
(.setTitle "Clojure Works")
(.setXLabel "Horizontal")
(.setYLabel "Vertical")
(.add (doto (Line.)
(.setX [0, 1, 2, 3, 4, 5])
(.setY [0, 1, 6, 5, 2, 8]))))
Explanation: Clojure
End of explanation
%%sql
%defaultDatasource jdbc:h2:mem:db
DROP TABLE IF EXISTS cities;
CREATE TABLE cities(
zip_code varchar(5),
latitude float,
longitude float,
city varchar(100),
state varchar(2),
county varchar(100),
PRIMARY KEY (zip_code),
) AS SELECT
zip_code,
latitude,
longitude,
city,
state,
county
FROM CSVREAD('../resources/data/UScity.csv')
%%sql
SELECT * FROM cities WHERE state = 'NY'
Explanation: SQL
End of explanation |
Description:
K-means clustering
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
Step1: We load the Iris flower data set. From the four measured features (e.g. 'SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth'), two features were selected to perform k-means clustering
Step2: The following plots show for each iteration (i.e. iter=1, iter=10, iter=max) the cluster centroids (blue) and the target data points. Each cluster is distinguished by a different color.
Step3: We compute the confusion matrices for each iteration and calculate the purity metric.
Step4: We select all the four measured features (e.g 'SepalLength','SepalWidth','PetalLength','PetalWidth') for different values of k (e.g k=2, k=3, k=4, k=6) and without random state. We compute the confusion matrix for each k and calculate the purity. | Python Code:
import pandas as pd
import numpy as np
import pylab as plt
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import sklearn.metrics as sm
Explanation: K-means clustering
Authors
Ndèye Gagnessiry Ndiaye and Christin Seifert
License
This work is licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/
This notebook:
introduces k-means clustering using features from the Iris flower dataset
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
#iris.data
#iris.feature_names
iris.target
#iris.target_names
x = pd.DataFrame(iris.data)
x.columns = ['SepalLength','SepalWidth','PetalLength','PetalWidth']
y = pd.DataFrame(iris.target)
y.columns = ['Targets']
iris = x[['SepalLength', 'PetalLength']]
X= np.array ([[ 6,5],
[ 6.2, 5.2],
[ 5.8,4.8]])
model_1 = KMeans(n_clusters=3, random_state=42,max_iter=1,n_init=1, init = X ).fit(iris)
centroids_1 = model_1.cluster_centers_
labels_1=(model_1.labels_)
print(centroids_1)
print(labels_1)
model_10= KMeans(n_clusters=3, random_state=42,max_iter=10, n_init=1, init = X).fit(iris)
centroids_10 = model_10.cluster_centers_
labels_10=(model_10.labels_)
print(centroids_10)
print(labels_10)
model_11= KMeans(n_clusters=3, random_state=42,max_iter=11,n_init=1, init = X).fit(iris)
centroids_max = model_11.cluster_centers_
labels_max=(model_11.labels_)
print(centroids_max)
print(labels_max)
'''model_999 = KMeans(n_clusters=3, random_state=42, max_iter=999).fit(iris)
centroids_999 = model_999.cluster_centers_
labels_999 = model_999.labels_
print(centroids_999)
print(labels_999)'''
Explanation: We load the Iris flower data set. From the four measured features (e.g. 'SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth'), two features were selected to perform k-means clustering: 'SepalLength' and 'PetalLength'.
End of explanation
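A single KMeans iteration, as forced by max_iter=1 above, consists of an assignment step (each point joins its nearest centroid) and an update step (each centroid moves to the mean of its assigned points). A minimal NumPy sketch on hypothetical 2-D points:

```python
import numpy as np

points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])   # initial guesses

# Assignment step: distance from every point to every centroid
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
assignments = dists.argmin(axis=1)

# Update step: each centroid becomes the mean of its assigned points
new_centroids = np.array([points[assignments == k].mean(axis=0) for k in range(2)])
```

Lloyd's algorithm simply repeats these two steps until the assignments stop changing or max_iter is reached.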
# Set the size of the plot
plt.figure(figsize=(24,10))
# Create a colormap
colormap = np.array(['red', 'lime', 'black'])
#colormap = {0: 'r', 1: 'g', 2: 'b'}
# Plot Original
plt.subplot(1, 4, 1)
plt.scatter(x.SepalLength, x.PetalLength, c="K", s=40)
plt.scatter(X[:,0],X[:,1], c="b")
plt.title('Initial centroids')
# Plot the Models Classifications
plt.subplot(1, 4, 2)
plt.scatter(iris.SepalLength, iris.PetalLength, c=colormap[labels_1], s=40)
plt.scatter(centroids_1[:,0],centroids_1[:,1], c="b")
plt.title('K Mean Clustering(iter=1)')
plt.subplot(1, 4, 3)
plt.scatter(iris.SepalLength, iris.PetalLength, c=colormap[labels_10], s=40)
plt.scatter(centroids_10[:,0],centroids_10[:,1], c="b")
plt.title('K Mean Clustering (iter=10)')
plt.subplot(1, 4, 4)
plt.scatter(iris.SepalLength, iris.PetalLength, c=colormap[labels_max], s=40)
plt.scatter(centroids_max[:,0],centroids_max[:,1], c="b")
plt.title('K Mean Clustering (iter= MAX)')
plt.show()
Explanation: The following plots show for each iteration (ie. iter=1; iter=10 ;iter= max) the cluster centroids(blue) and the target data points. Each cluster is distinguished by a different color.
End of explanation
def confusion(y, labels):
    cm = sm.confusion_matrix(y, labels)
    return cm
# Confusion Matrix (iter=1)
set_list = ["setosa","versicolor","virginica"]
cluster_list = ["c1", "c2", "c3"]
data = confusion(y, labels_1)
pd.DataFrame(data,cluster_list, set_list)
# Confusion Matrix (iter=10)
set_list = ["setosa","versicolor","virginica"]
cluster_list = ["c1", "c2", "c3"]
data = confusion(y, labels_10)
pd.DataFrame(data,cluster_list, set_list)
# Confusion Matrix (iter=max)
set_list = ["setosa","versicolor","virginica"]
cluster_list = ["c1", "c2", "c3"]
data = confusion(y, labels_max)
pd.DataFrame(data,cluster_list, set_list)
# Calculate the purity of a confusion matrix:
# sum of the per-cluster maxima divided by the total number of samples
def Purity(cm):
    S = 0
    for row in cm:
        S += max(row)
    return S / float(np.sum(cm))  # np.sum(cm) == 150 for the full Iris set
metric_list = ["iter= 1", "iter= 10", "iter= MAX"]
set_list = ["Purity metric"]
data = np.array([Purity(confusion(y, labels_1)),Purity(confusion(y, labels_10)),Purity(confusion(y, labels_max))])
pd.DataFrame(data,metric_list, set_list)
Explanation: We compute the confusion matrices for each iteration and calculate the purity metric.
End of explanation
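The purity computation described above (each cluster votes for its majority class, and the votes are summed over all samples) can be written compactly with NumPy; the confusion matrix here is hypothetical:

```python
import numpy as np

# Hypothetical confusion matrix: rows = clusters, columns = true classes
cm = np.array([[50,  0,  0],
               [ 0, 48, 14],
               [ 0,  2, 36]])

# Purity: sum of per-cluster maxima over the total sample count
purity = cm.max(axis=1).sum() / cm.sum()   # (50 + 48 + 36) / 150
```

A purity of 1.0 means every cluster contains samples of a single class; values near 1/k indicate clusters no better than chance.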
#k=2 , random-state= 0
model = KMeans(n_clusters=2,).fit(x)
centroids = model.cluster_centers_
labels=(model.labels_)
print(centroids)
print(labels)
#Confusion matrix
set_list = ["setosa","versicolor","virginica"]
cluster_list = ["c1", "c2", "c3"]
data = confusion(y, labels)
pd.DataFrame(data,set_list, cluster_list)
print ("Purity(k=2)= %f " % Purity(confusion(y, labels)))
#k=3 , random-state= 0
model = KMeans(n_clusters=3,).fit(x)
centroids = model.cluster_centers_
labels=(model.labels_)
print(centroids)
print(labels)
#Confusion matrix
set_list = ["setosa","versicolor","virginica"]
cluster_list = ["c1", "c2", "c3"]
data = confusion(y, labels)
pd.DataFrame(data,set_list, cluster_list)
print ("Purity(k=3)= %f " % Purity(confusion(y, labels)))
#k=4 , random-state= 0
model = KMeans(n_clusters=4,).fit(x)
centroids = model.cluster_centers_
labels=(model.labels_)
print(centroids)
print(labels)
# Confusion Matrix
set_list = ["setosa","versicolor","virginica","undefined"]
cluster_list = ["c1", "c2", "c3","c4"]
data = confusion(y, labels)
pd.DataFrame(data,set_list, cluster_list)
print ("Purity(k=4)= %f " % Purity(confusion(y, labels)))
#k=6 , random-state= 0
model = KMeans(n_clusters=6,).fit(x)
centroids = model.cluster_centers_
labels=(model.labels_)
print(centroids)
print(labels)
# Confusion Matrix
set_list = ["setosa","versicolor","virginica","undefined_1","undefined_2","undefined_3"]
cluster_list = ["c1", "c2", "c3","c4","c5","c6"]
data = confusion(y, labels)
pd.DataFrame(data,set_list, cluster_list)
print ("Purity(k=6)= %f " % Purity(confusion(y, labels)))
Explanation: We select all four measured features (e.g. 'SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth') for different values of k (e.g. k=2, k=3, k=4, k=6) and without a random state. We compute the confusion matrix for each k and calculate the purity.
End of explanation |
Description:
Plots
Entropy vs langton
blue = all, red = our random
Step1: Chi-square vs langton
blue = all, red = our random
Step2: Mean vs langton
blue = all, red = our random
Step3: Monte-Carlo-Pi vs langton
blue = all, red = our random
Step4: Serial-Correlation vs langton
blue = all, red = our random
Step5: p-value vs langton
blue = all, red = our random
Step6: Python's and linux' RNG
Python's random.randint and linux' /dev/urandom | Python Code:
# Imports for this section (assumed from the full notebook)
import json
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import chi2
chisqprob = chi2.sf  # scipy.stats.chisqprob was removed in newer SciPy versions

# Plot Entropy of all rules against the langton parameter
ax1 = plt.gca()
d_five.plot("langton", "Entropy", ax=ax1, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Entropy", ax=ax1, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax1 = plt.gca()
d_five.plot("langton", "Entropy", ax=ax1, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Entropy", ax=ax1, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/entropy-langton.png', format='png', dpi=400)
ax1 = plt.gca()
d_five.plot("langton", "Entropy", ax=ax1, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Entropy", ax=ax1, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/entropy-langton.svg', format='svg', dpi=400)
Explanation: Plots
Entropy vs langton
blue = all, red = our random
End of explanation
# Plot Chi-Square of all rules against the langton parameter
ax2 = plt.gca()
d_five.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax2 = plt.gca()
d_five.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/chisquare-langton.png', format='png', dpi=400)
ax2 = plt.gca()
d_five.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Chi-square", ax=ax2, logy=True, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/chisquare-langton.svg', format='svg', dpi=400)
Explanation: Chi-square vs langton
blue = all, red = our random
End of explanation
# Plot Mean of all rules against the langton parameter
ax3 = plt.gca()
d_five.plot("langton", "Mean", ax=ax3, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Mean", ax=ax3, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax3 = plt.gca()
d_five.plot("langton", "Mean", ax=ax3, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Mean", ax=ax3, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/mean-langton.png', format='png', dpi=400)
ax3 = plt.gca()
d_five.plot("langton", "Mean", ax=ax3, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Mean", ax=ax3, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/mean-langton.svg', format='svg', dpi=400)
Explanation: Mean vs langton
blue = all, red = our random
End of explanation
# Plot Monte Carlo of all rules against the langton parameter
ax4 = plt.gca()
d_five.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax4 = plt.gca()
d_five.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/monte-carlo-langton.png', format='png', dpi=400)
ax4 = plt.gca()
d_five.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Monte-Carlo-Pi", ax=ax4, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/monte-carlo-langton.svg', format='svg', dpi=400)
Explanation: Monte-Carlo-Pi vs langton
blue = all, red = our random
End of explanation
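The Monte-Carlo-Pi statistic plotted above estimates π from how many uniformly drawn points fall inside the unit quarter-circle; a good byte stream should land close to π. A standalone sketch using NumPy's generator (an illustration of the statistic, not the tool that produced this section's numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.random(n)
y = rng.random(n)

# Fraction of points inside the unit quarter-circle, scaled to estimate pi
inside = (x * x + y * y) <= 1.0
pi_estimate = 4.0 * inside.mean()
```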
# Plot Serial Correlation of all rules against the langton parameter
ax5 = plt.gca()
d_five.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax5 = plt.gca()
d_five.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/serial-correlation-langton.png', format='png', dpi=400)
ax5 = plt.gca()
d_five.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "Serial-Correlation", ax=ax5, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/serial-correlation-langton.svg', format='svg', dpi=400)
Explanation: Serial-Correlation vs langton
blue = all, red = our random
End of explanation
# Plot p-value of all rules against the langton parameter
ax6 = plt.gca()
d_five.plot("langton", "p-value", ax=ax6, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "p-value", ax=ax6, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.show()
ax6 = plt.gca()
d_five.plot("langton", "p-value", ax=ax6, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "p-value", ax=ax6, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/p-value-langton.png', format='png', dpi=400)
ax6 = plt.gca()
d_five.plot("langton", "p-value", ax=ax6, kind="scatter", marker='o', alpha=.5, s=40)
d_five_p10_90.plot("langton", "p-value", ax=ax6, kind="scatter", color="r", marker='o', alpha=.5, s=40)
plt.savefig('plots/p-value-langton.svg', format='svg', dpi=400)
# Cutoff rules with high Chi-Square (not random)
d_rands_paper_chi = d_rands_paper[(d_rands_paper["Chi-square"] < 300)] # 300 or 1E5 is same cutoff
print("Number of random rules according to paper: %d" % len(d_rands_paper))
print("Number of paper rules with high Chi-Square: %d " % (len(d_rands_paper) - len(d_rands_paper_chi)), end="")
print(set(d_rands_paper.rule) - set(d_rands_paper_chi.rule))
selection = d_five_p10_90[['rule', 'pi_deviation', 'mean_deviation', 'p_value_deviation', 'Serial-Correlation']]
selection
p_value_top_10 = selection.sort_values(by='p_value_deviation').head(10)
mean_top_10 = selection.sort_values(by='mean_deviation').head(10)
pi_top_10 = selection.sort_values(by='pi_deviation').head(10)
print("Top 10 p-value: \t", end="")
print(p_value_top_10.rule.values)
print("Top 10 Mean: \t\t", end="")
print(mean_top_10.rule.values)
print("Top 10 Monte-Carlo-Pi: \t", end="")
print(pi_top_10.rule.values)
print()
print("Both in top 10 p-value and Mean: ", end="")
print(set(p_value_top_10.rule.values) & set(mean_top_10.rule.values))
print("In all three top 10s: ", end="")
print(set(p_value_top_10.rule.values) & set(mean_top_10.rule.values) & set(pi_top_10.rule.values))
selection[selection.rule.isin(set(p_value_top_10.rule.values) & set(mean_top_10.rule.values) & set(pi_top_10.rule.values))]
p_value_top_10
mean_top_10
pi_top_10
Explanation: p-value vs langton
blue = all, red = our random
End of explanation
def read_results(filename):
results = (File_bytes, Monte_Carlo_Pi, Rule, Serial_Correlation, Entropy, Chi_square, Mean) = [[] for _ in range(7)]
with open(filename) as f:
data = json.load(f)
variables = {"File-bytes": File_bytes, "Monte-Carlo-Pi": Monte_Carlo_Pi, "Rule": Rule, "Serial-Correlation": Serial_Correlation,
"Entropy": Entropy, "Chi-square": Chi_square, "Mean": Mean}
for k, v in variables.items():
v.append(data[k])
results = np.array([np.array(r) for r in results]).T
headers = ["File-bytes", "Monte-Carlo-Pi", "Rule", "Serial-Correlation", "Entropy", "Chi-square", "Mean"]
return pd.DataFrame(results, columns=headers)
python = read_results('python_1466717839.json')
urandom = read_results('urandom_1466717941.json')
for d in (python, urandom):
d["pi_deviation"] = np.abs(d["Monte-Carlo-Pi"] - np.pi)
d["mean_deviation"] = np.abs(d["Mean"] - 255 / 2)
d["p-value"] = chisqprob(d["Chi-square"], 255)
d['Entropy_norm'] = d['Entropy'] / 8
d['Entropy'] = d['Entropy_norm']
d['p_value_deviation'] = np.abs(d['p-value'] - 0.5)
python[['pi_deviation', 'mean_deviation', 'p_value_deviation']]
urandom[['pi_deviation', 'mean_deviation', 'p_value_deviation']]
selection
Explanation: Python's and Linux's RNGs
Python's random.randint and Linux's /dev/urandom
End of explanation |
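A portability note on the read_results cell above: chisqprob was deprecated and later removed from SciPy, and the equivalent quantity is the chi-square survival function. A minimal sketch of the replacement (255 degrees of freedom for byte-valued data, as above; the statistic value 250.0 is just an illustrative number):

```python
from scipy import stats

# chisqprob(x, df) is the upper-tail probability of the chi-square
# distribution, which is exactly the survival function chi2.sf(x, df).
p = stats.chi2.sf(250.0, 255)
print(0.0 < p < 1.0)  # -> True
```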
9,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Least squares fitting of models to data
This is a quick introduction to statsmodels for physical scientists (e.g. physicists, astronomers) or engineers.
Why is this needed?
Because most of statsmodels was written by statisticians and they use a different terminology and sometimes methods, making it hard to know which classes and functions are relevant and what their inputs and outputs mean.
Step2: Linear models
Assume you have data points with measurements y at positions x as well as measurement errors y_err.
How can you use statsmodels to fit a straight line model to this data?
For an extensive discussion see Hogg et al. (2010), "Data analysis recipes
Step3: To fit a straight line use the weighted least squares class WLS ... the parameters are called
Step4: Check against scipy.optimize.curve_fit
Step6: Check against self-written cost function
Step7: Non-linear models | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
Explanation: Least squares fitting of models to data
This is a quick introduction to statsmodels for physical scientists (e.g. physicists, astronomers) or engineers.
Why is this needed?
Because most of statsmodels was written by statisticians and they use a different terminology and sometimes methods, making it hard to know which classes and functions are relevant and what their inputs and outputs mean.
End of explanation
data = """
x y y_err
201 592 61
244 401 25
47 583 38
287 402 15
203 495 21
58 173 15
210 479 27
202 504 14
198 510 30
158 416 16
165 393 14
201 442 25
157 317 52
131 311 16
166 400 34
160 337 31
186 423 42
125 334 26
218 533 16
146 344 22
"""
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
data = pd.read_csv(StringIO(data), delim_whitespace=True).astype(float)
# Note: for the results we compare with the paper here, they drop the first four points
data.head()
Explanation: Linear models
Assume you have data points with measurements y at positions x as well as measurement errors y_err.
How can you use statsmodels to fit a straight line model to this data?
For an extensive discussion see Hogg et al. (2010), "Data analysis recipes: Fitting a model to data" ... we'll use the example data given by them in Table 1.
So the model is f(x) = a * x + b and on Figure 1 they print the result we want to reproduce ... the best-fit parameter and the parameter errors for a "standard weighted least-squares fit" for this data are:
* a = 2.24 +- 0.11
* b = 34 +- 18
End of explanation
exog = sm.add_constant(data['x'])
endog = data['y']
weights = 1. / (data['y_err'] ** 2)
wls = sm.WLS(endog, exog, weights)
results = wls.fit(cov_type='fixed scale')
print(results.summary())
Explanation: To fit a straight line use the weighted least squares class WLS ... the parameters are called:
* exog = sm.add_constant(x)
* endog = y
* weights = 1 / y_err ** 2
Note that exog must be a 2-dimensional array with x as a column and an extra column of ones. Adding this column of ones means you want to fit the model y = a * x + b, leaving it off means you want to fit the model y = a * x.
And you have to use the option cov_type='fixed scale' to tell statsmodels that you really have measurement errors with an absolute scale. If you do not, statsmodels will treat the weights as relative weights between the data points and internally re-scale them so that the best-fit model will have chi**2 / ndf = 1.
End of explanation
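As a further sanity check on the machinery, the weighted fit has a closed form, beta = (X^T W X)^{-1} X^T W y with W = diag(1 / sigma_i ** 2). A numpy-only sketch on synthetic, exactly linear data (made-up numbers, not the table above):

```python
import numpy as np

# Exact line y = 2*x + 5 with unit measurement errors; the weighted
# normal equations should recover both coefficients exactly.
x = np.arange(10, dtype=float)
y = 2.0 * x + 5.0
sigma = np.ones_like(y)

X = np.column_stack([np.ones_like(x), x])   # [const, x], like add_constant
W = np.diag(1.0 / sigma ** 2)               # weights = 1 / y_err ** 2
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # -> [5. 2.]
```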
# You can use `scipy.optimize.curve_fit` to get the best-fit parameters and parameter errors.
from scipy.optimize import curve_fit
def f(x, a, b):
return a * x + b
xdata = data['x']
ydata = data['y']
p0 = [0, 0] # initial parameter estimate
sigma = data['y_err']
popt, pcov = curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print('a = {0:10.3f} +- {1:10.3f}'.format(popt[0], perr[0]))
print('b = {0:10.3f} +- {1:10.3f}'.format(popt[1], perr[1]))
Explanation: Check against scipy.optimize.curve_fit
End of explanation
# You can also use `scipy.optimize.minimize` and write your own cost function.
# This does not give you the parameter errors though ... you'd have
# to estimate the HESSE matrix separately ...
from scipy.optimize import minimize
def chi2(pars):
    """Cost function."""
y_model = pars[0] * data['x'] + pars[1]
chi = (data['y'] - y_model) / data['y_err']
return np.sum(chi ** 2)
result = minimize(fun=chi2, x0=[0, 0])
popt = result.x
print('a = {0:10.3f}'.format(popt[0]))
print('b = {0:10.3f}'.format(popt[1]))
Explanation: Check against self-written cost function
End of explanation
# TODO: we could use the examples from here:
# http://probfit.readthedocs.org/en/latest/api.html#probfit.costfunc.Chi2Regression
Explanation: Non-linear models
End of explanation |
9,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding Electronic Health Records with BigQuery ML
This tutorial introduces
BigQuery ML (BQML) in the
context of working with the MIMIC3
dataset.
BigQuery ML adds only a few statements to
standard SQL.
These statements automate the creation and evaluation of statistical models on
BigQuery datasets. BigQuery ML has several
advantages
over older machine learning tools and workflows. Some highlights are BQML's high
performance on massive datasets, support for
HIPAA compliance, and
ease of use. BQML automatically implements state of the art best practices in
machine learning for your dataset.
MIMIC3 is a 10-year database of health records from the intensive care unit of
Beth Israel Deaconess Medical Center in Boston. It's full of insights that are
just begging to be uncovered.
Table of Contents
Setup
Covers importing libraries, and authenticating with Google Cloud in Colab.
Case complexity & mortality
Non-technical. Introduces the theme for this tutorial.
Taking a first look at the data
Covers basic SQL syntax, how BigQuery integrates with Colab and pandas, and the
basics of creating visualizations with seaborn.
Creating a classification model
Covers creating and training simple models with BigQuery ML.
Plotting the predictions
Covers inference (making predictions) with BigQuery ML models, and how to
inspect the weights of a parametric model.
Adding a confounding variable
Covers creating and training a slightly more complicated model, and introduces
how BigQuery ML's model comparison features can be used to address confounding
relationships.
Plotting ROC and precision-recall curves
Covers how to create ROC and precision-recall curves with BigQuery ML. These are
visualizations that describe the performance of binary classification models.
More complex models
Creating the models
Covers creating logistic regression models with many input variables.
Getting evaluation metrics
Covers how to get numerical measures of model performance using BigQuery ML.
Exploring our model
Demonstrates how to interpret models with many variables.
Conclusion
Non-technical. Looks back on how we have used BigQuery ML to answer a research
question.
Setup
First, you'll need to sign into your google account to access the Google Cloud
Platform (GCP).
We're also going to import some standard python data analysis packages that
we'll use later to visualize our models.
Step1: Next you'll need to enter some information on how to access the data.
analysis_project is the project used for processing the queries.
The other fields,
admissions_table,
d_icd_diagnoses_table,
diagnoses_icd_table,
and patients_table,
identify the BigQuery tables we're going to query. They're written in the form
"project_id.dataset_id.table_id". We're going to use a slightly modified
version of the %%bigquery cell magic in this tutorial, which replaces these
variables with their values whenever they're surrounded by curly-braces.
Step2: Case complexity & mortality
This tutorial is a case study. We're going to use BQML and MIMIC3 to answer a
research question.
In the intensive care unit, are complex cases more or less likely to be
fatal?
Maybe it's obvious that they would be more fatal. After all, things only get
worse as you add more comorbidities. Or maybe the exact opposite is true.
Compare the patient who comes to the ICU with ventricular fibrillation to the
patient who comes with a laundry list of chronic comorbidities. Especially
within the context of a particular admission, the single acute condition seems
more lethal.
Taking a first look at the data
Do we have the data to answer this question?
If you browse through the
list of tables in the MIMIC dataset,
you'll find that whether the patient passed away during the course of their
admission is recorded. We can also operationalize the definition of case
complexity by counting the number of diagnoses that the patient had during an
admission. More diagnoses means greater case complexity.
We need to check that we have a sufficiently diverse sample to build a viable
model. First we'll check our dependent variable, which measures whether a
patient passed away.
Step3: Clearly the ICU is a very serious place
Step4: With the exception of the dramatic mode¹, the spread of the diagnosis counts is
bell-curve shaped. The mathematical explanation of this is the central limit
theorem. While this is by no means a deal breaker, the thin tails we see in the
distribution can be a challenge for linear-regression models. This is because
the extreme points tend to affect the
likelihood the most, so
having fewer of them makes your model more sensitive to outliers. Regularization
can help with this, but if it becomes too much of a problem we can consider a
different type of model (such as support-vector machines, or robust regression)
instead of generalized linear regression.
¹ Which is sort of fascinating. Comparing the most common diagnoses for
admissions with exactly 9 diagnoses to the rest of the cohort seems to suggest
that this is due to positive correlations between cardiac diagnoses, e.g.
cardiac complications NOS, mitral valve disorders, aortic valve disorders,
subendocardial infarction etc. Your team might be interested in investigating
this more seriously, especially if there is a cardiologist among you.
Creating a classification model
Creating a model with BigQuery ML
is simple. You write a normal query in standard SQL, and each row of the result
is used as an input to train your model. BigQuery ML automatically applies the
required
transformations
depending on each variable's data type. For example, STRINGs are transformed
into one-hot vectors, and TIMESTAMPs
are
standardized.
These transformations are necessary to get a valid result, but they're easy to
forget and a pain to implement. Without BQML, you also have to remember to apply
these transformations when you make predictions and plots. It's fantastic that
BigQuery takes care of all this for you.
BigQuery ML also automatically performs
validation-based early stopping
to prevent
overfitting.
To start, we're going to create a
(regularized) logistic regression
model that uses a single variable, the number of diagnoses a patient had during
an admission, to predict the probability that a patient will pass away during an
ICU admission.
Step5: Optional aside
Step6: By default the weights are automatically translated to their unstandardized
forms, meaning that we don't have to standardize our inputs before multiplying
them with the weights to obtain predictions. You can see the standardized
weights with ML.WEIGHTS(MODEL ..., STRUCT(true AS standardize)), which can be
helpful for answering questions about the relative importance of different
variables, regardless of their scale.
We can use the unstandardized weights to make a python function that returns the
predicted probability of mortality given an ICU admission with a certain number
of diagnoses
python
def predict(number_of_diagnoses)
Step7: Qualitatively, our model fits the data quite well, and the trend is pretty
clear. We might be tempted to say we've proven that increasing case complexity
increases the probability of death during an admission to the ICU. While we've
provided some evidence of this, we haven't proven it yet.
The biggest problem is we don't know if case complexity is causing the increase
in deaths, or if it is merely correlated with some other variables that affect the
probability of death more directly.
Adding a confounding variable
Patient age is a likely candidate for a confounding variable that could be
mediating the relationship between complexity and mortality. Patients generally
accrue diagnoses as they age¹ and approach their life expectancy. By adding the
patient's age to our model, we can see how much of the relationship between case
complexity and mortality is explained by the patient's age.
¹ Using the CORR standard SQL function, you can calculate that the Pearson
correlation coefficient between age and number of diagnoses is $0.37$
Step8: When we investigate the weights for this model, we see the weight associated
with the number of diagnoses is only slightly smaller now. This tells us that
some of the effect we saw in the univariate model was due to the confounding
influence of age, but most of it wasn't.
Step9: Another way to understand this relationship is to compare the effectiveness of
the model with and without age as an input. This answers the question
Step10: We see that
Step12: which we'll use to create our models. In the CREATE MODEL SELECT statement, we
create one column for each of the $m$ diagnoses and fill it with $1$ if the
patient had that diagnosis and $0$ otherwise.
This time around we're using l1_reg instead of l2_reg because we expect that
some of our many variables will not significantly impact the
outcome, and we would prefer a sparse model if possible.
Step13: Getting evaluation metrics
To obtain numerical evaluation metrics on your models, BigQuery ML provides the
ML.EVALUATE
statement. Just like ML.ROC_CURVE, ML.EVALUATE defaults to using the
evaluation dataset that was set aside when the model was created.
Step14: And we can also plot the precision-recall curves as we did before.
Step15: The model with $m = 512$ seems to be overfitting the data, while somewhere
between $m = 128$ and $m = 256$ seems to be the sweet spot for model
flexibility. Since we've now used the evaluation dataset to determine $m$
(albeit informally), and when to stop early during training, dogmatic rigour
would demand that we measure our model on a third (validation) dataset before we
brag about its efficacy. On the other hand, there isn't a ton of flexibility in
choosing between a few different values of $m$, nor in when to stop early. You
can use your own judgment.
Actually, the predictive power of our model¹ isn't nearly as interesting as its
weights and what they tell us. In the next section, we'll dig into them.
¹Which could be described as approaching respectability, but still a long way
away from brag worthy.
Exploring our model
Let's have a look at the weights from the $m = 128$ model.
Step16: First we'll look at the weights for the numerical inputs.
Step17: We see a list of diagnoses, sorted from most fatal to least fatal according
to our model.
Going back to our original question, we can see that the weight for num_diag
(a.k.a the number of diagnoses) has essentially gone to zero. The average
diagnoses weight is also very small
Step18: so we can conclude that given that a patient has been admitted to the ICU, the
number of diagnoses they've been given does not predict their outcome beyond the
linear effect of the component diagnoses.
It might be surprising that the weight for age is also very small. One
explanation for this might be that DNR¹ status, and falls are among the highest
weighted diagnoses. These diagnoses are associated with advanced age² ³ and
there is literature³ to support that DNR status mediates the effect of age on
survival. One thing we couldn't find much data on was the relationship between
age and palliative treatment. This could be a good subject for a datathon team
to tackle.
¹Do not resuscitate
²Article | Python Code:
from __future__ import print_function
from google.colab import auth
from google.cloud import bigquery
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
auth.authenticate_user()
Explanation: Understanding Electronic Health Records with BigQuery ML
This tutorial introduces
BigQuery ML (BQML) in the
context of working with the MIMIC3
dataset.
BigQuery ML adds only a few statements to
standard SQL.
These statements automate the creation and evaluation of statistical models on
BigQuery datasets. BigQuery ML has several
advantages
over older machine learning tools and workflows. Some highlights are BQML's high
performance on massive datasets, support for
HIPAA compliance, and
ease of use. BQML automatically implements state of the art best practices in
machine learning for your dataset.
MIMIC3 is a 10-year database of health records from the intensive care unit of
Beth Israel Deaconess Medical Center in Boston. It's full of insights that are
just begging to be uncovered.
Table of Contents
Setup
Covers importing libraries, and authenticating with Google Cloud in Colab.
Case complexity & mortality
Non-technical. Introduces the theme for this tutorial.
Taking a first look at the data
Covers basic SQL syntax, how BigQuery integrates with Colab and pandas, and the
basics of creating visualizations with seaborn.
Creating a classification model
Covers creating and training simple models with BigQuery ML.
Plotting the predictions
Covers inference (making predictions) with BigQuery ML models, and how to
inspect the weights of a parametric model.
Adding a confounding variable
Covers creating and training a slightly more complicated model, and introduces
how BigQuery ML's model comparison features can be used to address confounding
relationships.
Plotting ROC and precision-recall curves
Covers how to create ROC and precision-recall curves with BigQuery ML. These are
visualizations that describe the performance of binary classification models.
More complex models
Creating the models
Covers creating logistic regression models with many input variables.
Getting evaluation metrics
Covers how to get numerical measures of model performance using BigQuery ML.
Exploring our model
Demonstrates how to interpret models with many variables.
Conclusion
Non-technical. Looks back on how we have used BigQuery ML to answer a research
question.
Setup
First, you'll need to sign into your google account to access the Google Cloud
Platform (GCP).
We're also going to import some standard python data analysis packages that
we'll use later to visualize our models.
End of explanation
#@title Fill out this form then press [shift ⇧]+[enter ⏎] {run: "auto"}
import subprocess
import re
analysis_project = 'your-analysis-project' #@param {type:"string"}
admissions_table = 'physionet-data.mimiciii_clinical.admissions' # @param {type: "string"}
d_icd_diagnoses_table = 'physionet-data.mimiciii_clinical.d_icd_diagnoses' # @param {type: "string"}
diagnoses_icd_table = 'physionet-data.mimiciii_clinical.diagnoses_icd' # @param {type: "string"}
patients_table = 'physionet-data.mimiciii_clinical.patients' # @param {type: "string"}
# Preprocess queries made with the %%bigquery magic
# by substituting these values
sub_dict = {
'admissions_table': admissions_table,
'd_icd_diagnoses_table': d_icd_diagnoses_table,
'diagnoses_icd_table': diagnoses_icd_table,
'patients_table': patients_table
}
# Get a suffix to attach to the names of the models created during this tutorial
# to avoid collisions between simultaneous users.
account = subprocess.check_output(
['gcloud', 'config', 'list', 'account', '--format',
'value(core.account)']).decode().strip()
sub_dict['suffix'] = re.sub(r'[^\w]', '_', account)[:900]
# Set the default project for running queries
bigquery.magics.context.project = analysis_project
# Set up the substitution preprocessing injection
if bigquery.magics._run_query.__name__ != 'format_and_run_query':
original_run_query = bigquery.magics._run_query
def format_and_run_query(client, query, job_config=None):
query = query.format(**sub_dict)
return original_run_query(client, query, job_config)
bigquery.magics._run_query = format_and_run_query
print('analysis_project:', analysis_project)
print()
print('custom %%bigquery magic substitutions:')
for k, v in sub_dict.items():
print(' ', '{%s}' % k, '→', v)
%config InlineBackend.figure_format = 'svg'
bq = bigquery.Client(project=analysis_project)
Explanation: Next you'll need to enter some information on how to access the data.
analysis_project is the project used for processing the queries.
The other fields,
admissions_table,
d_icd_diagnoses_table,
diagnoses_icd_table,
and patients_table,
identify the BigQuery tables we're going to query. They're written in the form
"project_id.dataset_id.table_id". We're going to use a slightly modified
version of the %%bigquery cell magic in this tutorial, which replaces these
variables with their values whenever they're surrounded by curly-braces.
End of explanation
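The curly-brace substitution that format_and_run_query performs above is plain str.format on the query text. A standalone sketch (hypothetical table name, for illustration only):

```python
# Placeholders like {admissions_table} are expanded to fully-qualified
# table names before the query is sent to BigQuery.
sub_dict = {'admissions_table': 'my-project.mimiciii_clinical.admissions'}
query = 'SELECT COUNT(*) FROM `{admissions_table}`'
print(query.format(**sub_dict))
```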
%%bigquery
SELECT
COUNT(*) as total,
SUM(HOSPITAL_EXPIRE_FLAG) as died
FROM
`{admissions_table}`
Explanation: Case complexity & mortality
This tutorial is a case study. We're going to use BQML and MIMIC3 to answer a
research question.
In the intensive care unit, are complex cases more or less likely to be
fatal?
Maybe it's obvious that they would be more fatal. After all, things only get
worse as you add more comorbidities. Or maybe the exact opposite is true.
Compare the patient who comes to the ICU with ventricular fibrillation to the
patient who comes with a laundry list of chronic comorbidities. Especially
within the context of a particular admission, the single acute condition seems
more lethal.
Taking a first look at the data
Do we have the data to answer this question?
If you browse through the
list of tables in the MIMIC dataset,
you'll find that whether the patient passed away during the course of their
admission is recorded. We can also operationalize the definition of case
complexity by counting the number of diagnoses that the patient had during an
admission. More diagnoses means greater case complexity.
We need to check that we have a sufficiently diverse sample to build a viable
model. First we'll check our dependent variable, which measures whether a
patient passed away.
End of explanation
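The GROUP-BY-twice pattern in the next cell is the SQL version of a two-stage count: first diagnoses per admission, then admissions per diagnosis count. In Python terms, with toy admission ids rather than real HADM_IDs:

```python
from collections import Counter

# Each entry stands for one diagnosis row, keyed by its admission id.
diagnosis_rows = ['adm1', 'adm1', 'adm1', 'adm2', 'adm2', 'adm3']
per_admission = Counter(diagnosis_rows)        # diagnoses per admission
histogram = Counter(per_admission.values())    # admissions per diagnosis count
print(sorted(histogram.items()))  # -> [(1, 1), (2, 1), (3, 1)]
```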
%%bigquery hist_df
SELECT
n_diagnoses, COUNT(*) AS cnt
FROM (
SELECT
COUNT(*) AS n_diagnoses
FROM
`{diagnoses_icd_table}`
GROUP BY
HADM_ID
)
GROUP BY n_diagnoses
ORDER BY n_diagnoses
g = sns.barplot(
x=hist_df.n_diagnoses, y=hist_df.cnt, color=sns.color_palette()[0])
# Remove every fifth label on the x-axis for readability
for i, label in enumerate(g.get_xticklabels()):
if i % 5 != 4 and i != 0:
label.set_visible(False)
Explanation: Clearly the ICU is a very serious place: about 10% of admissions are fatal. As
data scientists, this tells us that we have a significant, albeit imbalanced,
number of samples in both categories. The models we're training will easily
adapt to this class imbalance, but we will need to be cautious when evaluating
the performance of our models. After all, a model that simply says "no one dies"
will be right 91% of the time.
Next we'll look at the distribution of our independent variable: the number of
diagnoses assigned to a patient during their admission.
End of explanation
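To make the "no one dies" baseline concrete, here are toy numbers at roughly the positive rate described above; accuracy alone completely hides how useless the constant classifier is:

```python
# 10 deaths in 100 admissions; predicting "survives" for everyone
# scores 90% accuracy while recovering none of the deaths.
labels = [1] * 10 + [0] * 90
preds = [0] * 100
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(preds, labels)) / sum(labels)
print(accuracy, recall)  # -> 0.9 0.0
```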
%%bigquery
# BigQuery ML create model statement:
CREATE OR REPLACE MODEL `mimic_models.complexity_mortality_{suffix}`
OPTIONS(
# Use logistic_reg for discrete predictions (classification) and linear_reg
# for continuous predictions (forecasting).
model_type = 'logistic_reg',
# See the below aside (𝜎 = 0.5 ⇒ 𝜆 = 2)
l2_reg = 2,
# Identify the column to use as the label (dependent variable)
input_label_cols = ["died"]
)
AS
# standard SQL query to train the model with:
SELECT
COUNT(*) AS number_of_diagnoses,
MAX(HOSPITAL_EXPIRE_FLAG) as died
FROM
`{admissions_table}`
INNER JOIN `{diagnoses_icd_table}`
USING (HADM_ID)
GROUP BY HADM_ID
Explanation: With the exception of the dramatic mode¹, the spread of the diagnosis counts is
bell-curve shaped. The mathematical explanation of this is the central limit
theorem. While this is by no means a deal breaker, the thin tails we see in the
distribution can be a challenge for linear-regression models. This is because
the extreme points tend to affect the
likelihood the most, so
having fewer of them makes your model more sensitive to outliers. Regularization
can help with this, but if it becomes too much of a problem we can consider a
different type of model (such as support-vector machines, or robust regression)
instead of generalized linear regression.
¹ Which is sort of fascinating. Comparing the most common diagnoses for
admissions with exactly 9 diagnoses to the rest of the cohort seems to suggest
that this is due to positive correlations between cardiac diagnoses, e.g.
cardiac complications NOS, mitral valve disorders, aortic valve disorders,
subendocardial infarction etc. Your team might be interested in investigating
this more seriously, especially if there is a cardiologist among you.
Creating a classification model
Creating a model with BigQuery ML
is simple. You write a normal query in standard SQL, and each row of the result
is used as an input to train your model. BigQuery ML automatically applies the
required
transformations
depending on each variable's data type. For example, STRINGs are transformed
into one-hot vectors, and TIMESTAMPs
are
standardized.
These transformations are necessary to get a valid result, but they're easy to
forget and a pain to implement. Without BQML, you also have to remember to apply
these transformations when you make predictions and plots. It's fantastic that
BigQuery takes care of all this for you.
BigQuery ML also automatically performs
validation-based early stopping
to prevent
overfitting.
To start, we're going to create a
(regularized) logistic regression
model that uses a single variable, the number of diagnoses a patient had during
an admission, to predict the probability that a patient will pass away during an
ICU admission.
End of explanation
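For intuition about the automatic transformations mentioned above, here is a rough pandas analogue of the one-hot encoding BQML applies to STRING inputs (illustrative only; BQML does this server-side, and the category values here are made up):

```python
import pandas as pd

df = pd.DataFrame({'admission_type': ['EMERGENCY', 'ELECTIVE', 'URGENT', 'EMERGENCY']})
# One indicator column per category, mirroring BQML's STRING handling.
onehot = pd.get_dummies(df['admission_type'])
print(sorted(onehot.columns))  # -> ['ELECTIVE', 'EMERGENCY', 'URGENT']
```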
%%bigquery simple_model_weights
SELECT * FROM ML.WEIGHTS(MODEL `mimic_models.complexity_mortality_{suffix}`)
Explanation: Optional aside: picking the regularization penalty $(\lambda)$ with Bayes' Theorem
From the frequentist point of view,
$l_2$ regularized regression
minimizes the negative log-likelihood of a model with an added penalty term:
$\lambda \| w \|^2$. This penalty term reflects our desire for the model to be
as simple as possible, and it removes the degeneracies caused by
collinear input variables.
$\lambda$ is called l2_reg in BigQuery ML model options. You're given the
freedom to set it to anything you want. In general, larger values of lambda
encourage the model to give simpler explanations¹, and smaller values give the
model more freedom to match the observed data. So what should you set $\lambda$
(a.k.a l2_reg) to?
A short calculation (see e.g. chapters 4.3.2 and 4.5.1 of
Pattern Recognition and Machine Learning)
shows that $l_2$ penalized logistic regression is equivalent to Bayesian
logistic regression with the prior $ \omega \sim \mathcal{N}(0, \sigma^2 =
\frac{1}{2 \lambda})$.
Later on in this tutorial, we'll run an
$l_1$ regularized regression,
which means the penalty term is $\lambda \| \omega \|$. The same reasoning
applies except the corresponding prior is $w \sim \text{Laplace}(0, b =
\frac{1}{\lambda})$.
This Bayesian perspective gives meaning to the value of $\lambda$. It reflects
our prior uncertainty towards the strength of the relationship that we're
modeling.
Since BQML automatically standardizes and one-hot encodes its inputs, we can use
this interpretation to give some generic advice on choosing $\lambda$. If you
don't have any special information, then any value of $\lambda$ around $1$ is
reasonable, and reflects that even a perfect correlation between the input and
the output is not too surprising.
As long as you choose $\lambda$ to be much less than your sample size, its exact
value should not influence your results very much. And even very small values of
$\lambda$ can remedy problems due to collinear inputs.
¹ Although regularization helps with overfitting, it does not completely solve
it, and due care should still be taken not to select too many inputs for too
little data.
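The sigma to lambda correspondence above is simple arithmetic; for example, the l2_reg = 2 used when creating the model corresponds to a prior standard deviation of 0.5:

```python
# From the N(0, sigma^2 = 1/(2*lambda)) prior: lambda = 1 / (2 * sigma**2).
def l2_reg_from_sigma(sigma):
    return 1.0 / (2.0 * sigma ** 2)

print(l2_reg_from_sigma(0.5))  # -> 2.0
```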
Plotting the predictions
We can inspect the weights that our model learned using the
ML.WEIGHTS
statement. The positive weight that we see for number_of_diagnoses is our
first evidence that case complexity is associated with mortality.
End of explanation
params = {'max_prediction': hist_df.n_diagnoses.max()}
%%bigquery line_df --params $params
SELECT * FROM
ML.PREDICT(MODEL `mimic_models.complexity_mortality_{suffix}`, (
SELECT * FROM
UNNEST(GENERATE_ARRAY(1, @max_prediction)) AS number_of_diagnoses
))
%%bigquery scatter_df
SELECT
COUNT(*) AS num_diag,
MAX(HOSPITAL_EXPIRE_FLAG) as died
FROM
`{admissions_table}` AS adm
INNER JOIN `{diagnoses_icd_table}` AS diag
USING (HADM_ID)
GROUP BY HADM_ID
sns.regplot(
x='num_diag',
y='died',
data=scatter_df,
fit_reg=False,
x_bins=np.arange(1,
scatter_df.num_diag.max() + 1))
plt.plot(line_df.number_of_diagnoses,
line_df.predicted_died_probs.apply(lambda x: x[0]['prob']))
plt.xlabel('Case complexity (number of diagnoses)')
plt.ylabel('Probability of death during admission')
Explanation: By default the weights are automatically translated to their unstandardized
forms, meaning that we don't have to standardize our inputs before multiplying
them with the weights to obtain predictions. You can see the standardized
weights with ML.WEIGHTS(MODEL ..., STRUCT(true AS standardize)), which can be
helpful for answering questions about the relative importance of different
variables, regardless of their scale.
We can use the unstandardized weights to make a python function that returns the
predicted probability of mortality given an ICU admission with a certain number
of diagnoses:

```python
def predict(number_of_diagnoses):
    return scipy.special.expit(
        simple_model_weights.weight[0] * number_of_diagnoses
        + simple_model_weights.weight[1])
```
but it's often faster and easier to make predictions with the
ML.PREDICT
statement.
We'd like to create a plot showing our model's predictions and the underlying
data. We can use ML.PREDICT to get the data to draw the prediction line, and
copy-paste the query we fed into CREATE MODEL to get the data points.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL `mimic_models.complexity_age_mortality_{suffix}`
OPTIONS(model_type='logistic_reg', l2_reg=2, input_label_cols=["died"])
AS
SELECT
# MIMIC3 sets all ages over 89 to 300 to avoid the possibility of
# identification.
IF(DATETIME_DIFF(ADMITTIME, DOB, DAY)/365.25 < 200,
DATETIME_DIFF(ADMITTIME, DOB, DAY)/365.25,
# The life expectancy of a 90 year old is approximately 5 years according
# to actuarial tables. So we'll use 95 as the mean age of 90+'s
95) AS age,
num_diag,
died
FROM
(SELECT
COUNT(*) AS num_diag,
MAX(HOSPITAL_EXPIRE_FLAG) as died,
ANY_VALUE(ADMITTIME) as ADMITTIME,
SUBJECT_ID
FROM
`{admissions_table}` AS adm
JOIN `{diagnoses_icd_table}` AS diag
USING (HADM_ID, SUBJECT_ID)
GROUP BY HADM_ID, SUBJECT_ID
)
JOIN `{patients_table}` AS patients
USING (SUBJECT_ID)
Explanation: Qualitatively, our model fits the data quite well, and the trend is pretty
clear. We might be tempted to say we've proven that increasing case complexity
increases the probability of death during an admission to the ICU. While we've
provided some evidence of this, we haven't proven it yet.
The biggest problem is we don't know if case complexity is causing the increase
in deaths, or if it is merely correlated with some other variables that affect the
probability of death more directly.
Adding a confounding variable
Patient age is a likely candidate for a confounding variable that could be
mediating the relationship between complexity and mortality. Patients generally
accrue diagnoses as they age¹ and approach their life expectancy. By adding the
patient's age to our model, we can see how much of the relationship between case
complexity and mortality is explained by the patient's age.
¹ Using the CORR standard SQL function, you can calculate that the Pearson
correlation coefficient between age and number of diagnoses is $0.37$
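The quantity computed by SQL's CORR can be sketched in plain Python for intuition (an illustrative implementation, using toy data rather than the MIMIC tables):

```python
import math

def pearson_corr(xs, ys):
    """Pearson correlation coefficient — the same statistic SQL's CORR returns."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_corr([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly correlated -> 1.0
```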
End of explanation
%%bigquery
SELECT * FROM ML.WEIGHTS(MODEL `mimic_models.complexity_age_mortality_{suffix}`)
Explanation: When we investigate the weights for this model, we see the weight associated
with the number of diagnoses is only slightly smaller now. This tells us that
some of the effect we saw in the univariate model was due to the confounding
influence of age, but most of it wasn't.
End of explanation
%%bigquery comp_roc
SELECT * FROM ML.ROC_CURVE(MODEL `mimic_models.complexity_mortality_{suffix}`)
%%bigquery comp_age_roc
SELECT * FROM
ML.ROC_CURVE(MODEL `mimic_models.complexity_age_mortality_{suffix}`)
def set_precision(df):
df['precision'] = df.true_positives / (df.true_positives + df.false_positives)
def plot_precision_recall(df, label=None):
# manually add the threshold = -∞ point
df = df[df.true_positives != 0]
recall = [0] + list(df.recall)
precision = [1] + list(df.precision)
# x=recall, y=precision line chart
plt.plot(recall, precision, label=label)
set_precision(comp_roc)
set_precision(comp_age_roc)
plot_precision_recall(comp_age_roc, label='bivariate (age) model')
plot_precision_recall(comp_roc, label='univariate model')
plt.plot(
np.linspace(0, 1, 2), [comp_roc.precision.min()] * 2,
label='null model',
linestyle='--')
plt.legend()
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel(r'Recall $\left(\frac{T_p}{T_p + F_n} \right)$')
plt.ylabel(r'Precision $\left(\frac{T_p}{T_p + F_p} \right)$')
Explanation: Another way to understand this relationship is to compare the effectiveness of
the model with and without age as an input. This answers the question: given the
number of diagnoses that a patient has received, how much extra information does
their age give us? To be thorough, we could also include a model with just the
patient's age. You can add a couple of code cells to this notebook and do this
as an exercise if you're curious.
Plotting ROC and precision-recall curves
One way to compare the effectiveness of binary classification models is with
ROC curves or precision-recall curves.
Since ROC curves tend to appear overly optimistic when the data has a
significant class imbalance, we're going to favour precision-recall curves in
this tutorial. Precision-Recall curves plot the recall (which measures the
model's performance on the positive samples)
$$
\text{Recall} = \frac{\text{True Positives}}{\text{True Positives} +
\text{False Negatives}}
$$
against the precision (which measures the model's performance on the samples it
classified as positive examples)
$$
\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} +
\text{False Positives}}
$$
as the decision threshold
ranges from $0$ (predict no one dies) to $1$ (predict everyone dies)¹.
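The two formulas above reduce to simple ratios of the confusion-matrix counts; a minimal sketch (the counts below are made-up for illustration):

```python
def recall(tp, fn):
    """Fraction of actual positives the model found."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Fraction of predicted positives that were correct."""
    return tp / (tp + fp)

# E.g. at some threshold the model finds 30 of 40 deaths (tp=30, fn=10)
# while also flagging 20 survivors as deaths (fp=20):
print(recall(30, 10))     # 0.75
print(precision(30, 20))  # 0.6
```

Sweeping the threshold trades one ratio against the other, which is exactly what the curves below visualize.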
To make these plots, we're going to use the
ML.ROC_CURVE
BigQuery ML statement. ML.ROC_CURVE returns the data you need to draw both ROC
and precision-recall curves with your graphing library of choice.
ML.ROC_CURVE defaults to using data from the evaluation dataset. If it
operated on the training dataset, it would be difficult to distinguish
overfitting from excellent performance. If you have your own validation dataset,
you can provide it as an optional second argument.
¹ BigQuery ML uses the convention that the threshold is between $0$ and $1$,
rather than the logit of this value.
End of explanation
%%bigquery top_diagnoses
WITH top_diag AS (
SELECT COUNT(*) AS count, ICD9_CODE FROM `{diagnoses_icd_table}`
GROUP BY ICD9_CODE
)
SELECT top_diag.ICD9_CODE, icd_lookup.SHORT_TITLE, top_diag.count FROM
top_diag JOIN
`{d_icd_diagnoses_table}` AS icd_lookup
USING (ICD9_CODE)
ORDER BY count DESC LIMIT 1024
Explanation: We see that:
Both these models are significantly better than the zero variable model,
implying that case complexity has a significant impact on patient mortality.
Adding the patient's age only marginally improves the model, implying that
the impact of case complexity is not mediated through age.
Of course, neither of these models is very good when it comes to making
predictions. For our last set of models, we'll try more earnestly to predict
patient mortality
More complex models
One of the main attractions of BigQuery ML is its ability to scale to high
dimensional models with
up to millions of variables.
Our dataset isn't nearly large enough to train this many variables without
severe overfitting, but we can still abide training models with hundreds of
variables.
Our strategy will use the $m$ most frequent diagnoses, and a handful of other
likely relevant variables as the inputs to our model. Namely, we'll use:
ADMISSION_TYPE: reflects the reason for, and seriousness of the admission
urgent
emergency
newborn
elective
INSURANCE: reflects the patients socioeconomic status, a well-known
covariate with patient outcomes
Self Pay
Medicare
Private
Medicaid
Government
GENDER: accounts for both social and physiological differences across
genders
AGE: accounts for both social and physiological differences across ages
number of diagnoses: our stand-in for case complexity
in addition to the top $m$ diagnoses. We'll compare models with $m \in \left\{8,
16, 32, 64, 128, 256, 512 \right\}$ to determine the most sensible value of $m$.
This will give us valuable information regarding our original question: whether
case complexity increases the probability of ICU mortality. We wonder if the
number of diagnoses increases patient risk only because it increases the chances
of one of their many diagnoses being lethal, or if there is an interactive
effect¹. We'll be able to test this by determining whether
$\omega_{n_{\text{diagnoses}}}$ goes to $0$ as we increase $m$.
We'll also get some interesting information on the relative lethality of
different diagnoses, and how these compare with social determinants.
¹As in the often misattributed quote:
quantity has a quality all its own, or
does it?
Creating the models
We'll start by getting a list of the most frequent diagnoses
End of explanation
top_n_diagnoses = (8, 16, 32, 64, 128, 256, 512)
query_jobs = list()
for m in top_n_diagnoses:
# The expressions for creating the new columns for each input diagnosis
diagnosis_columns = list()
for _, row in top_diagnoses.iloc[:m].iterrows():
diagnosis_columns.append('MAX(IF(ICD9_CODE = "{0}", 1.0, 0.0))'
' as `icd9_{0}`'.format(row.ICD9_CODE))
    query = """
CREATE OR REPLACE MODEL `mimic_models.predict_mortality_diag_{m}_{suffix}`
OPTIONS(model_type='logistic_reg', l1_reg=2, input_label_cols=["died"])
AS
WITH diagnoses AS (
  SELECT
    HADM_ID,
    COUNT(*) AS num_diag,
    {diag_cols}
  FROM `{diagnoses_icd_table}`
  WHERE ICD9_CODE IS NOT NULL
  GROUP BY HADM_ID
)
SELECT
  IF(DATETIME_DIFF(adm.ADMITTIME, patients.DOB, DAY)/365.25 < 200,
     DATETIME_DIFF(adm.ADMITTIME, patients.DOB, DAY)/365.25, 95) AS age,
  diagnoses.* EXCEPT (HADM_ID),
  adm.HOSPITAL_EXPIRE_FLAG as died,
  adm.ADMISSION_TYPE as adm_type,
  adm.INSURANCE as insurance,
  patients.GENDER
FROM
  `{admissions_table}` AS adm
  LEFT JOIN `{patients_table}` AS patients USING (SUBJECT_ID)
  LEFT JOIN diagnoses USING (HADM_ID)
""".format(
m=m, diag_cols=',\n '.join(diagnosis_columns), **sub_dict)
# Run the query, and track its progress with query_jobs
query_jobs.append(bq.query(query))
# Wait for all of the models to finish training
for j in query_jobs:
j.exception()
Explanation: which we'll use to create our models. In the CREATE MODEL SELECT statement, we
create one column for each of the $m$ diagnoses and fill it with $1$ if the
patient had that diagnosis and $0$ otherwise.
This time around we're using l1_reg instead of l2_reg because we expect that
some of our many variables will not significantly impact the
outcome, and we would prefer a sparse model if possible.
End of explanation
eval_queries = list()
for m in top_n_diagnoses:
eval_queries.append(
'SELECT * FROM ML.EVALUATE('
'MODEL `mimic_models.predict_mortality_diag_{}_{suffix}`)'
.format(m, **sub_dict))
eval_query = '\nUNION ALL\n'.join(eval_queries)
bq.query(eval_query).result().to_dataframe()
Explanation: Getting evaluation metrics
To obtain numerical evaluation metrics on your models, BigQuery ML provides the
ML.EVALUATE
statement. Just like ML.ROC_CURVE, ML.EVALUATE defaults to using the
evaluation dataset that was set aside when the model was created.
End of explanation
for m in top_n_diagnoses:
df = bq.query('SELECT * FROM ML.ROC_CURVE('
'MODEL `mimic_models.predict_mortality_diag_{}_{suffix}`)'
.format(m, **sub_dict)).result().to_dataframe()
set_precision(df)
plot_precision_recall(df, label='{} diagnoses'.format(m))
plt.plot(
np.linspace(0, 1, 2), [df.precision.min()] * 2,
label='null model',
linestyle='--')
plt.legend()
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel(r'Recall $\left(\frac{T_p}{T_p + F_n} \right)$')
plt.ylabel(r'Precision $\left(\frac{T_p}{T_p + F_p} \right)$')
Explanation: And we can also plot the precision-recall curves as we did before.
End of explanation
%%bigquery weights_128
SELECT * FROM ML.WEIGHTS(MODEL `mimic_models.predict_mortality_diag_128_{suffix}`)
ORDER BY weight DESC
Explanation: The model with $m = 512$ seems to be overfitting the data, while somewhere
between $m = 128$ and $m = 256$ seems to be the sweet spot for model
flexibility. Since we've now used the evaluation dataset to determine $m$
(albeit informally), and when to stop-early during training, dogmatic rigour
would demand that we measure our model on a third (validation) dataset before we
brag about its efficacy. On the other hand, there isn't a ton of flexibility in
choosing between a few different values of $m$, nor in when to stop early. You
can use your own judgment.
Actually, the predictive power of our model¹ isn't nearly as interesting as its
weights and what they tell us. In the next section, we'll dig into them.
¹Which could be described as approaching respectability, but still a long way
away from brag worthy.
Exploring our model
Let's have a look at the weights from the $m = 128$ model.
End of explanation
pd.set_option('max_rows', 150)
weights_128['ICD9_CODE'] = weights_128.processed_input \
.apply(lambda x: x[len('icd9_'):] if x.startswith('icd9_') else x)
view_df = weights_128.merge(top_diagnoses,how='left', on='ICD9_CODE') \
.rename(columns={'ICD9_CODE': 'input'})
view_df = view_df[~pd.isnull(view_df.weight)]
view_df[['input', 'SHORT_TITLE', 'weight', 'count']]
Explanation: First we'll look at the weights for the numerical inputs.
End of explanation
view_df[~pd.isnull(view_df.SHORT_TITLE)].weight.mean()
Explanation: We now have a list of diagnoses, sorted from most fatal to least fatal according
to our model.
Going back to our original question, we can see that the weight for num_diag
(a.k.a the number of diagnoses) has essentially gone to zero. The average
diagnoses weight is also very small:
End of explanation
for _, row in weights_128[pd.isnull(weights_128.weight)].iterrows():
print(row.processed_input)
print(
*sorted([tuple(x.values()) for x in row.category_weights],
key=lambda x: x[1],
reverse=True),
sep='\n',
end='\n\n')
Explanation: so we can conclude that given that a patient has been admitted to the ICU, the
number of diagnoses they've been given does not predict their outcome beyond the
linear effect of the component diagnoses.
It might be surprising that the weight for age is also very small. One
explanation for this might be that DNR¹ status and falls are among the highest
weighted diagnoses. These diagnoses are associated with advanced age² ³ and
there is literature³ to support that DNR status mediates the effect of age on
survival. One thing we couldn't find much data on was the relationship between
age and palliative treatment. This could be a good subject for a datathon team
to tackle.
¹Do not resuscitate
²Article: Age-Related Changes in Physical Fall Risk Factors: Results from a 3
Year Follow-up of Community Dwelling Older Adults in Tasmania,
Australia
³Article: Do Not Resuscitate (DNR) Status, Not Age, Affects Outcomes after
Injury: An Evaluation of 15,227 Consecutive Trauma
Patients
Now let's look at the weights for the categorical variables.
End of explanation |
9,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
W7 Lab Assignment
Step1: Cumulative histogram and CDF
How can we plot a cumulative histogram?
Step2: Does it reach 1.0? Why should it become 1.0 at the right end? Also you can do the plot with pandas.
Step3: CDF
Let's make it a CDF rather than a cumulative histogram. You can sort a Series with the sort_values function. You can use np.linspace to generate a list of evenly spaced values.
Step4: The main advantage of CDF is that we can directly observe percentiles from the plot. Given the number of movies we have, can you estimate the following statistics by observing the plot? Compare your estimation to the precise results calculated from movie_df.
The number of movies with rating <= 7
The median rating of movies
The rating which 90% of movies are under or equal to
Step5: Bootstrap Resampling
Let's imagine that we only have a sample of the IMDB data, say 50 movies. How much can we infer about the original data from this small sample? This is a question that we encounter very often in statistical analysis.
In such situations, we can seek help from the bootstrapping method. This is a family of statistical methods that relies on random sampling with replacement. Unlike traditional methods, it does not assume that our data follows a particular distribution, and so is very flexible to use.
Step6: Now we have a sample with size = 50. We can compute, for example, the mean of movie ratings in this sample
Step7: But we only have one statistic. How can we know if this correctly represents the mean of the actual data? We need to compute a confidence interval. This is when we can use bootstrapping.
First, let's create a function that does the resampling with replacement. It should create a list of the same length as the sample (50 in this case), in which each element is taken randomly from the sample. In this way, some elements may appear more than once, and some none. Then we calculate the mean value of this list.
Step8: We don't usually just do this once
Step9: Now we can compute the confidence interval. Say we want the 90% confidence, then we only need to pick out the .95 and .05 critical values.
Step10: That is, we need to pick the 50th and 950th largest values from the list. We can name it x_a and x_b.
Step11: Let x be the mean value of the sample, we have
Step12: The confidence interval will then be | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
import random
sns.set_style('white')
%matplotlib inline
Explanation: W7 Lab Assignment
End of explanation
# TODO: Load IMDB data into movie_df using pandas
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
movie_df.head()
# TODO: draw a cumulative histogram of movie ratings with 20 bins. Hint: use plt.hist()
n, bins, patches = plt.hist(movie_df['Rating'], bins = 20, cumulative=True)
# TODO: same histogram, but with normalization
n, bins, patches = plt.hist(movie_df['Rating'], bins = 20,normed=1, histtype='step', cumulative=True)
# TODO: same histogram, but with normalization
n, bins, patches = plt.hist(movie_df['Rating'], bins = 20,normed=1, cumulative=True)
Explanation: Cumulative histogram and CDF
How can we plot a cumulative histogram?
End of explanation
# TODO: same plot, but call directly from dataframe movie_df
movie_df['Rating'].hist(bins=20, normed=1, cumulative=True)
Explanation: Does it reach 1.0? Why should it become 1.0 at the right end? Also you can do the plot with pandas.
End of explanation
# TODO: plot CDF (not cumulative histogram) of movie ratings.
ratings = movie_df['Rating'].sort_values()
cum_dist = np.linspace( 1/len(ratings), 1, num=len(ratings))
plt.plot(ratings,cum_dist)
Explanation: CDF
Let's make it a CDF rather than a cumulative histogram. You can sort a Series with the sort_values function. You can use np.linspace to generate a list of evenly spaced values.
End of explanation
#TODO: provide your estimations.
#1. 0.65 * len(ratings) = 203457.15.
#2. 6.5.
#3. 8.2.
#TODO: calculate the statistics from movie_df.
seven = movie_df['Rating'][movie_df['Rating'] <= 7].values
print(len(seven))
print(np.median(movie_df['Rating']))
print(np.percentile(movie_df['Rating'], [90])[0])
Explanation: The main advantage of CDF is that we can directly observe percentiles from the plot. Given the number of movies we have, can you estimate the following statistics by observing the plot? Compare your estimation to the precise results calculated from movie_df.
The number of movies with rating <= 7
The median rating of movies
The rating which 90% of movies are under or equal to
End of explanation
#create a random sample from the movie table.
movie_df_sample = movie_df.sample(50)
len(movie_df_sample)
Explanation: Bootstrap Resampling
Let's imagine that we only have a sample of the IMDB data, say 50 movies. How much can we infer about the original data from this small sample? This is a question that we encounter very often in statistical analysis.
In such situations, we can seek help from the bootstrapping method. This is a family of statistical methods that relies on random sampling with replacement. Unlike traditional methods, it does not assume that our data follows a particular distribution, and so is very flexible to use.
End of explanation
print('Mean of sample: ', movie_df_sample.Rating.mean())
Explanation: Now we have a sample with size = 50. We can compute, for example, the mean of movie ratings in this sample:
End of explanation
def bootstrap_resample(rating_list):
resampled_list = []
#todo: write the function that returns the mean of resampled list.
for i in range(50):
resampled_list.append(random.choice(rating_list))
return np.mean(resampled_list)
Explanation: But we only have one statistic. How can we know if this correctly represents the mean of the actual data? We need to compute a confidence interval. This is when we can use bootstrapping.
First, let's create a function that does the resampling with replacement. It should create a list of the same length as the sample (50 in this case), in which each element is taken randomly from the sample. In this way, some elements may appear more than once, and some none. Then we calculate the mean value of this list.
End of explanation
sampled_means = []
#todo: call the function 1000 times and populate the list with its returned values.
for i in range(1000):
mean = bootstrap_resample(movie_df_sample['Rating'].values)
sampled_means.append(mean)
Explanation: We don't usually just do this once: the typical minimal resample number is 1000. We can create a new list to keep these 1000 mean values.
End of explanation
print(1000*0.05, 1000*0.95)
Explanation: Now we can compute the confidence interval. Say we want the 90% confidence, then we only need to pick out the .95 and .05 critical values.
End of explanation
#todo: sort the list by ascending and pick out the 50th and 950th value.
sampled_means.sort()
x_a = sampled_means[49]
x_b = sampled_means[949]
print (x_a,x_b)
Explanation: That is, we need to pick the 50th and 950th largest values from the list. We can name it x_a and x_b.
End of explanation
x = movie_df_sample.Rating.mean()
Explanation: Let x be the mean value of the sample, we have:
End of explanation
#todo: calculate the confidence interval.
#Does the mean of the original data fall within this interval? Show your statistics.
print([x - (x - x_a), x + (x_b - x)])
np.mean(movie_df['Rating'])
Explanation: The confidence interval will then be: [x - (x - x_a), x + (x_b - x)].
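The same percentile-style interval can be sketched end-to-end with the standard library alone (an illustrative sketch using synthetic ratings in place of the IMDB sample):

```python
import random
import statistics

random.seed(0)
# Stand-in for a sample of 50 movie ratings.
sample = [random.uniform(4.0, 9.0) for _ in range(50)]

# 1000 bootstrap resamples (with replacement), each reduced to its mean.
resampled_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample))) for _ in range(1000)
)

# 90% interval: the 50th and 950th sorted resampled means.
x_a, x_b = resampled_means[49], resampled_means[949]
print(x_a, x_b)
```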
End of explanation |
9,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
On-Axis Field of a Finite Solenoid
This formula uses the formula for the field due to a thin shell solenoid, integrated over a range of radii to obtain the magnetic field at any point on the axis of a finite solenoid.
General Case
$B = \frac {\mu_o i n}{2 (r_2 - r_1)} \left [ x_2 \ln \left ( \frac {\sqrt{r_2^2 + x_2^2} + r_2}{\sqrt{r_1^2 + x_2^2} + r_1} \right ) - x_1 \ln \left ( \frac {\sqrt{r_2^2 + x_1^2} + r_2}{\sqrt{r_1^2 + x_1^2} + r_1} \right ) \right ]$
B is the magnetic field, in teslas, at any point on the axis of the solenoid. The direction of the field is parallel to the solenoid axis.
$\mathbf \mu_o$ is the permeability constant (1.26x10<sup>-6</sup> Hm<sup>-1</sup>)
i is the current in the wire, in amperes.
n is the number of turns of wire per unit length in the solenoid.
r<sub>1</sub> is the inner radius of the solenoid, in meters.
r<sub>2</sub> is the outer radius of the solenoid, in meters.
x<sub>1</sub> and x<sub>2</sub> are the distances, on axis, from the ends of the solenoid to the magnetic field measurement point, in meters.
The "G Factor"
The field can be expressed in a form that separates the unit system, power and winding configuration from the unitless geometry of the coil. This introduces the "G Factor"
Step1: Now let's apply the B function to a typical coil. We'll assume copper (at resistivity of 1.68x10<sup>-8</sup> ohm-m) conductors at a packing density of 0.75, inner radius of 1.25 cm, power of 100 W and with supposedly optimal $\alpha$ and $\beta$ of 3 and 2, respectively
Step2: Now try any combination of factors (assuming packing of 0.75 and standard copper conductors) to compute the field
Step3: For a given inner radius, power and winding configuration, the field strength is directly proportional to G. Therefore, we can test the assertion that G is maximum when $\alpha$ is 3 and $\beta$ is 2 by constructing a map of G as a function of $\alpha$ and $\beta$
Step4: Although it is apparent that the maximum G Factor occurs near the $\alpha=3$, $\beta=2$ point, it is not exactly so | Python Code:
%matplotlib inline
from scipy.special import ellipk, ellipe, ellipkm1
from numpy import pi, sqrt, linspace, log
from pylab import plot, xlabel, ylabel, suptitle, legend, show
uo = 4E-7*pi # Permeability constant - units of H/m
# Compute G Factor from unitless parameters
def GFactorUnitless(a, b, g=0.0): # alpha, beta - omit gamma for central
gpb2 = (g+b)*(g+b)
gmb2 = (g-b)*(g-b)
if not g == 0.0:
sq = sqrt(1/(8*pi*b*(a*a-1)))
t1 = (g+b)*log((a+sqrt(a*a+gpb2))/(1+sqrt(1+gpb2)))
t2 = (g-b)*log((a+sqrt(a*a+gmb2))/(1+sqrt(1+gmb2)))
B = sq*(t1-t2)
else:
sq = sqrt(b/2/pi/(a*a-1))
B = sq*log((a+sqrt(a*a+b*b))/(1+sqrt(1+b*b)))
return B
# Compute G Factor from all dimensions
def GFactor(r1, r2, l, x1=0.0, x2=0.0): # omit x1, x2 to compute central field
a = r2/r1
b = l/2/r1
g = (x1+x2)/2/r1
return GFactorUnitless(a, b, g)
# Compute B field on axis from unitless dimensions
def BFieldUnitless(power, packing, resistivity, r1, a, b, g=0.0):
return uo*GFactorUnitless(a, b, g)*sqrt(power*packing/r1/resistivity)
# Compute B field on axis from actual dimensions (x is measurement point - center if none)
def BField(power, packing, resistivity, r1, r2, length, x=0.0):
a = r2/r1
b = length/2/r1
g = x/r1
return BFieldUnitless(power, packing, resistivity, r1, a, b, g)
Explanation: On-Axis Field of a Finite Solenoid
This formula uses the formula for the field due to a thin shell solenoid, integrated over a range of radii to obtain the magnetic field at any point on the axis of a finite solenoid.
General Case
$B = \frac {\mu_o i n}{2 (r_2 - r_1)} \left [ x_2 \ln \left ( \frac {\sqrt{r_2^2 + x_2^2} + r_2}{\sqrt{r_1^2 + x_2^2} + r_1} \right ) - x_1 \ln \left ( \frac {\sqrt{r_2^2 + x_1^2} + r_2}{\sqrt{r_1^2 + x_1^2} + r_1} \right ) \right ]$
B is the magnetic field, in teslas, at any point on the axis of the solenoid. The direction of the field is parallel to the solenoid axis.
$\mathbf \mu_o$ is the permeability constant (1.26x10<sup>-6</sup> Hm<sup>-1</sup>)
i is the current in the wire, in amperes.
n is the number of turns of wire per unit length in the solenoid.
r<sub>1</sub> is the inner radius of the solenoid, in meters.
r<sub>2</sub> is the outer radius of the solenoid, in meters.
x<sub>1</sub> and x<sub>2</sub> are the distances, on axis, from the ends of the solenoid to the magnetic field measurement point, in meters.
The "G Factor"
The field can be expressed in a form that separates the unit system, power and winding configuration from the unitless geometry of the coil. This introduces the "G Factor":
$B = \mu_o G \sqrt \frac {P \lambda} {r_1 \rho}$
where G is the unitless geometry factor defined as:
$G = \sqrt{\frac {1}{8 \pi \beta (\alpha^2 - 1)}} \left [ (\gamma + \beta) \ln \left ( \frac {\alpha + \sqrt{\alpha^2 + (\gamma + \beta)^2}}{1 + \sqrt{1 + (\gamma + \beta)^2}} \right ) - (\gamma - \beta) \ln \left ( \frac {\alpha + \sqrt{\alpha^2 + (\gamma - \beta)^2}}{1 + \sqrt{1 + (\gamma - \beta)^2}} \right ) \right ]$
where,
$\alpha = \frac {r_2}{r_1}$, $\beta = \frac l {2 r_1}$ and $\gamma = \frac {x_1 + x_2}{2 r_1}$
P is the total power consumed by the coil, in watts.
$\lambda$ is equal to the total conductor cross section area divided by the total coil cross section area, which ranges from 0.6 to 0.8 in typical coils.
$\rho$ is the conductor resistivity, in units of ohms-length. The length units must match those of r<sub>1</sub>.
Special Case: x<sub>1</sub> = -x<sub>2</sub>
When the magnetic field measurement point is at the center of the solenoid:
$B = \frac {\mu_o i N}{2(r_2 - r_1)} \ln \left ( \frac {\sqrt{r_2^2 + (\frac l 2)^2} + r_2}{\sqrt{r_1^2 + (\frac l 2)^2} + r_1} \right )$
or...
$B = \frac {\mu_o j l}{2} \ln \left ( \frac {\sqrt{r_2^2 + (\frac l 2)^2} + r_2}{\sqrt{r_1^2 + (\frac l 2)^2} + r_1} \right )$
j is the bulk current density in the coil cross section, in amperes per square meter.
l is the length of the solenoid, in meters.
N is the total number of turns of wire in the coil.
The unitless geometry factor G is simply:
$G = \sqrt \frac {\beta} {2 \pi (\alpha^2 - 1)} \ln \left ( \frac {\alpha + \sqrt{\alpha^2 + \beta^2}} {1 + \sqrt{1 + \beta^2}} \right )$
Note that G is maximum when $\alpha=3$ and $\beta=2$. A coil built with a given inner diameter and input power will deliver the highest central field strength when these conditions are met.
Code Example
The following Python code shows how to use these formulas to calculate magnetic fields.
End of explanation
resistivity = 1.68E-8 # ohm-meter
r1 = 0.0125 # meter
packing = 0.75
power = 100.0 # watts
B = BFieldUnitless(power, packing, resistivity, r1, 3, 2)
print("B Field: {:.3} T".format(B))
Explanation: Now let's apply the B function to a typical coil. We'll assume copper (at resistivity of 1.68x10<sup>-8</sup> ohm-m) conductors at a packing density of 0.75, inner radius of 1.25 cm, power of 100 W and with supposedly optimal $\alpha$ and $\beta$ of 3 and 2, respectively:
End of explanation
from ipywidgets import interactive
from IPython.display import display
def B(power, r1, r2, length, x):
return "{:.3} T".format(BField(power, 0.75, resistivity, r1, r2, length, x))
v = interactive(B,
power=(0.0, 200.0, 1),
r1 = (0.01, 0.1, 0.001),
r2 = (0.02, 0.5, 0.001),
length = (0.01, 2, 0.01),
x = (0.0, 4, 0.01))
display(v)
Explanation: Now try any combination of factors (assuming packing of 0.75 and standard copper conductors) to compute the field:
End of explanation
from pylab import pcolor, colorbar, meshgrid, contour
from numpy import arange
a = arange(1.1, 6.0, 0.1)
b = arange(0.1, 4.0, 0.1)
A, B = meshgrid(a,b)
G = GFactorUnitless(A, B)
contour(A, B, G, 30)
colorbar()
xlabel("Unitless parameter, Alpha")
ylabel("Unitless parameter, Beta")
suptitle("Electromagnet 'G Factor'")
show()
print("G Factor at A=3, B=2: {:.3}".format(GFactorUnitless(3,2)))
print("G Factor at A=3, B=1.9: {:.3}".format(GFactorUnitless(3,1.9)))
Explanation: For a given inner radius, power and winding configuration, the field strength is directly proportional to G. Therefore, we can test the assertion that G is maximum when $\alpha$ is 3 and $\beta$ is 2 by constructing a map of G as a function of $\alpha$ and $\beta$:
End of explanation
from scipy.optimize import minimize
def GMin(AB):
return -GFactorUnitless(AB[0], AB[1])
res = minimize(GMin, [3, 2])
print("G Factor is maximum at Alpha = {:.4}, Beta = {:.4}".format(*res.x))
Explanation: Although it is apparent that the maximum G Factor occurs near the $\alpha=3$, $\beta=2$ point, it is not exactly so:
End of explanation |
9,959 | Given the following text description, write Python code to implement the functionality described.
Description:
Number of n digit numbers that do not contain 9
function to find number of n digit numbers possible ; driver function
| Python Code:
def totalNumber(n):
    return 8 * pow(9, n - 1)

n = 3
print(totalNumber(n))
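The closed form 8 * 9^(n-1) (first digit from 1-8, each remaining digit from 0-8) can be checked by brute force for small n (an illustrative sketch):

```python
def count_without_nine(n):
    """Count n-digit numbers containing no digit 9 by direct enumeration."""
    return sum('9' not in str(x) for x in range(10 ** (n - 1), 10 ** n))

for n in (1, 2, 3):
    print(n, count_without_nine(n), 8 * pow(9, n - 1))
```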
|
9,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with CSV Files and Data Visualization
Revision notes
An example using more complex csv data needs to be added
Main contents
The most basic thing you can and should do for data analysis is data visualization.
Visualizing data is not hard, but creating an appropriate visualization is very difficult,
requiring a lot of training and intuition.
Here we learn four basic ways to visualize data obtained by exploring it:
line graphs
bar graphs
histograms
scatter plots
Today's main example
The population growth rate data for Seoul and the metropolitan area from 1949 to 2010 is shown below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/Seoul_pop04.jpg" style="width
Step1: Line graphs
The file Seoul_pop1.csv in the data directory contains data on the population of Seoul measured
at 5-year intervals starting in 1949, as follows.
1949 1,437,670
1955 1,568,746
1960 2,445,402
1966 3,793,280
1970 5,525,262
1975 6,879,464
1980 8,350,616
1985 9,625,755
1990 10,603,250
1995 10,217,177
2000 9,853,972
2005 9,762,546
2010 9,631,482
Source:
Step2: Bar graphs
The same data can also be shown using a bar graph.
That way, subtle year-to-year differences can be presented in more detail.
Step3: 그런데 이렇게 하면 막대 그래프의 두께가 좀 좁아 보인다. 그리고
년도가 정확히 5년 단위로 쪼개진 것이 아니기에 막대들 사이의 간격이 불규칙해 보인다.
따라서 먼저 막대의 두께를 좀 조절해보자.
힌트
Step4: 막대들의 간격이 완전히 규칙적으로 되지는 않았지만 이전 그래프와는 좀 다른 느낌을 준다.
이와 같이 막대들의 두께 뿐만아니라, 간격, 색상 모두 조절할 수 있지만,
여기서는 그럴 수 있다는 사실만 언급하고 넘어간다.
예제
대한민국이 하계 올림픽에서 가장 많은 메일을 획득한 상위 여섯 종목과 메달 숫자는 아래와 같다.
<p>
<table cellspacing="20">
<tr>
<td>Sport</td>
<td>Medals</td>
</tr>
<tr>
<td>Archery(양궁)</td>
<td>39</td>
</tr>
<tr>
<td>Badminton(배드민턴)</td>
<td>19</td>
</tr>
<tr>
<td>Boxing(복싱)</td>
<td>20</td>
</tr>
<tr>
<td>Judo(유도)</td>
<td>43</td>
</tr>
<tr>
<td>Taekwondo(태권도)</td>
<td>19</td>
</tr>
<tr>
<td>Wrestling(레슬링)</td>
<td>36</td>
</tr>
<caption align='bottom'>Source
Step5: Numbers could be placed on the x-axis instead of event names, but that would not convey accurate information.
Step6: Therefore we need six bars on the x-axis, and each bar must be labeled with its event name.
Step7: The graph still looks a bit awkward: the bars are somewhat thick. In such cases it helps to widen the spacing of the points used on the x-axis.
Step8: This time the bars look too narrow, so it is better to widen them a bit.
Step9: As we have seen, producing a suitable visualization can require quite a lot of effort depending on the case.
For now it is enough to remember that the matplotlib.pyplot library offers a variety of configuration options.
Histograms
A histogram is similar to a bar graph, except there is no space between the bars.
It is therefore effective for showing the frequencies of numbers falling into consecutive interval ranges.
The example below is a histogram showing the distribution of 1000 randomly generated real numbers.
Note
Step10: Scatter plots
A scatter plot is very well suited to showing the relationship between two variables.
For example, suppose we are given data, as below, showing the relationship between the number of friends registered on KakaoTalk and daily smartphone usage time.
Note
Step11: Looking at the scatter plot above, you can see at a glance the tendency for smartphone usage time to increase as the number of KakaoTalk friends increases.
Of course, this information is based on the given (fabricated) data.
Solving today's main example
Population growth rate data for Seoul and the capital region from 1949 to 2010 is given below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width
Step12: The return value of the csv.reader function is a special data type that stores the contents of the csv file line by line as lists.
Here it is enough to remember how to use it as in the example above.
Step13: Note
Step14: Stage 3 | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Working with CSV Files and Data Visualization
Revision notes
A more complex example using csv data still needs to be added
Main contents
The most basic thing you can, and should, do for data analysis is data visualization.
Visualizing data itself is not hard, but producing a suitable visualization is very difficult
and requires much practice and intuition.
Here we learn four basic ways to visualize data obtained by exploring a dataset.
Line graphs
Bar graphs
Histograms
Scatter plots
Today's main example
Population growth rate data for Seoul and the capital region from 1949 to 2010 is given below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/Seoul_pop04.jpg" style="width:360">
</td>
</tr>
</table>
</p>
Now let's read the file above and display the population-growth trend of Seoul and the capital region as a line graph like the figure below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/Seoul_pop05.png" style="width:360">
</td>
</tr>
</table>
</p>
Introducing a data-visualization tool: the matplotlib library
Among data-visualization tools, this library includes many utilities that make it easy to draw
simple bar graphs, histograms, line graphs, and scatter plots.
Of the modules included in this library, here we will learn, through simple examples,
how to use a few of the most basic tools in the pyplot module.
End of explanation
data_f = open("data/Seoul_pop1.csv")
# list of years
years = []
# list of population counts
populations = []
for line in data_f:
(year, population) = line.split(',')
years.append(int(year))
populations.append(int(population))
data_f.close()
print(years)
print(populations)
# prepare the figure to draw on
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# make a line graph with years on the x-axis and population on the y-axis
plt.plot(years, populations, color='green', marker='o', linestyle='solid')
# add a title
plt.title("Seoul Population Change")
# add a label to the y-axis
plt.ylabel("10Million")
plt.show()
Explanation: Line graphs
The Seoul_pop1.csv file in the data directory contains data on Seoul's population measured at roughly five-year intervals
starting in 1949; its contents are as follows.
1949 1,437,670
1955 1,568,746
1960 2,445,402
1966 3,793,280
1970 5,525,262
1975 6,879,464
1980 8,350,616
1985 9,625,755
1990 10,603,250
1995 10,217,177
2000 9,853,972
2005 9,762,546
2010 9,631,482
Source: Korean Statistical Information Service portal (kosis.kr)
Extracting the data lists from the file
To examine the year-by-year change in Seoul's population with a simple line graph,
we first need the list of years for the x-axis and the list of population counts for the y-axis.
First we use techniques learned earlier; later we will use more convenient, higher-level techniques.
Note: a file with the csv extension is a file whose data is organized separated by commas.
csv is short for Comma-Separated Values.
Therefore, after reading a csv file, splitting each line on commas gives the same result as the earlier approach of splitting data on whitespace;
that is, here we simply pass a comma as the argument to the split method.
End of explanation
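The comma-splitting described above can be sketched on a single sample line (the sample string here is illustrative):

```python
line = "1949,1437670"
# split the comma-separated record into its two fields
(year, population) = line.split(',')
print(int(year), int(population))  # 1949 1437670
```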
# prepare the figure to draw on
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# draw the bar graph
plt.bar(years, populations)
# add a title
plt.title("Seoul Population Change")
# add a label to the y-axis
plt.ylabel("10Million")
plt.show()
Explanation: Bar graphs
The same data can also be displayed using a bar graph.
Doing so can show the small year-to-year differences in more detail.
End of explanation
# prepare the figure to draw on
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# draw the bar graph, adjusting bar thickness
plt.bar(years, populations, 2.5)
# add a title
plt.title("Seoul Population Change")
# add a label to the y-axis
plt.ylabel("10Million")
plt.show()
Explanation: With this, however, the bars of the graph look a bit thin, and
since the years are not split into exact 5-year units, the spacing between the bars looks irregular.
So let's first adjust the thickness of the bars a bit.
Hint: the third argument of the plt.bar() function specifies the thickness of the bars.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Judo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
plt.bar(sports, medals)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: The spacing between the bars did not become perfectly regular, but the graph feels somewhat different from the previous one.
In this way you can adjust not only the thickness of the bars but also their spacing and colors;
here we just mention that this is possible and move on.
Example
The top six events in which South Korea has won the most medals at the Summer Olympics, with their medal counts, are as follows.
<p>
<table cellspacing="20">
<tr>
<td>Event</td>
<td>Medals</td>
</tr>
<tr>
<td>Archery(양궁)</td>
<td>39</td>
</tr>
<tr>
<td>Badminton(배드민턴)</td>
<td>19</td>
</tr>
<tr>
<td>Boxing(복싱)</td>
<td>20</td>
</tr>
<tr>
<td>Judo(유도)</td>
<td>43</td>
</tr>
<tr>
<td>Taekwondo(태권도)</td>
<td>19</td>
</tr>
<tr>
<td>Wrestling(레슬링)</td>
<td>36</td>
</tr>
<caption align='bottom'>Source: Wikipedia</caption>
</table>
</p>
Now we can visualize this data with a bar graph.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Judo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
plt.bar(range(6), medals)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: Numbers could be placed on the x-axis instead of event names, but that would not convey accurate information.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Judo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(6)
plt.bar(xs, medals)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: Therefore we need six bars on the x-axis, and each bar must be labeled with its event name.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Judo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(0, 12, 2)
plt.bar(xs, medals)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: The graph still looks a bit awkward: the bars are somewhat thick. In such cases it helps to widen the spacing of the points used on the x-axis.
End of explanation
sports = ['Archery', 'Badminton', 'Boxing', 'Judo', 'Taekwondo', 'Wrestling']
medals = [39, 19, 20, 43, 19, 36]
xs = range(0, 12, 2)
plt.bar(xs, medals, 1.2)
plt.xticks(xs, sports)
plt.ylabel("Medals")
plt.title("Olympic Medals")
plt.show()
Explanation: This time the bars look too narrow, so it is better to widen them a bit.
End of explanation
import numpy as np
gaussian_numbers = np.random.randn(1000)
plt.hist(gaussian_numbers, bins=10)
plt.title("Gaussian Histogram")
plt.xlabel("Value")
plt.ylabel("Frequency")
plt.show()
Explanation: As we have seen, producing a suitable visualization can require quite a lot of effort depending on the case.
For now it is enough to remember that the matplotlib.pyplot library offers a variety of configuration options.
Histograms
A histogram is similar to a bar graph, except there is no space between the bars.
It is therefore effective for showing the frequencies of numbers falling into consecutive interval ranges.
The example below is a histogram showing the distribution of 1000 randomly generated real numbers.
Note:
* The randn function of the numpy module randomly generates real numbers following the standard normal distribution.
* Standard normal distribution: a normal distribution whose data have mean 0 and standard deviation 1.
* Here it is enough to know that the standard normal distribution plays a very important role in probability and statistics.
End of explanation
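The claim above that randn draws from the standard normal distribution (mean 0, standard deviation 1) can be checked numerically on a large sample; the seed and sample size here are illustrative:

```python
import numpy as np

np.random.seed(0)
sample = np.random.randn(100000)
# sample statistics should be close to the theoretical mean 0 and std 1
print(round(sample.mean(), 2), round(sample.std(), 2))
```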
num_friends = [41, 26, 90, 50, 18, 124, 152, 88, 72, 51]
phone_time = [4.1, 3.3, 5.7, 4.2, 3.2, 6.4, 6.0, 5.1, 6.2, 3.7]
plt.scatter(num_friends, phone_time)
plt.show()
Explanation: Scatter plots
A scatter plot is very well suited to showing the relationship between two variables.
For example, suppose we are given data, as below, showing the relationship between the number of friends registered on KakaoTalk and daily smartphone usage time.
Note: the data below was fabricated arbitrarily for this lecture and has no factual basis.
End of explanation
import csv
with open('data/Seoul_pop2.csv') as f:
reader = csv.reader(f)
for row in reader:
if len(row) == 0 or row[0][0] == '#':
continue
else:
print(row)
Explanation: Looking at the scatter plot above, you can see at a glance the tendency for smartphone usage time to increase as the number of KakaoTalk friends increases.
Of course, this information is based on the given (fabricated) data.
Solving today's main example
Population growth rate data for Seoul and the capital region from 1949 to 2010 is given below.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="../../images/population/Seoul_pop04.jpg" style="width:360">
</td>
</tr>
</table>
</p>
The data in the table above is stored in the file 'Seoul_pop2.csv' as follows.
```
1949년부터 2010년 사이의 서울과 수도권 인구 증가율(%)
구간,서울,수도권
1949-1955,9.12,-5.83
1955-1960,55.88,32.22
1960-1966,55.12,32.76
1966-1970,45.66,28.76
1970-1975,24.51,22.93
1975-1980,21.38,21.69
1980-1985,15.27,18.99
1985-1990,10.15,17.53
1990-1995,-3.64,8.54
1995-2000,-3.55,5.45
2000-2005,-0.93,6.41
2005-2010,-1.34,3.71
```
Now let's read the file above and display the population-growth trend of Seoul and the capital region as a line graph.
Stage 1: Reading the csv file
Files with the csv extension are mainly used to store data.
Reading a csv file is very easy using the reader() function of the csv module.
End of explanation
type(reader)
Explanation: The return value of the csv.reader function is a special data type that stores the contents of the csv file line by line as lists.
Here it is enough to remember how to use it as in the example above.
End of explanation
year_intervals = []
Seoul_pop = []
Capital_region_pop = []
with open('data/Seoul_pop2.csv') as f:
reader = csv.reader(f)
for row in reader:
if len(row) == 0 or row[0][0] == '#':
continue
else:
year_intervals.append(row[0])
Seoul_pop.append(float(row[1]))
Capital_region_pop.append(float(row[2]))
print(year_intervals)
print(Seoul_pop)
print(Capital_region_pop)
Explanation: Note: writing line 5 of the code above as follows causes an error:
if row[0][0] == '#' or len(row) == 0:
Reason: with 'A or B', the truth of A is evaluated first; if A is true, the whole expression is treated as true and evaluation ends there.
Only when A is false is the truth of B checked.
So if an error occurs while evaluating A, execution stops immediately.
In the example above, row[0][0] raises an index error on the third (empty) line, so the whole program stops.
Note:
The following form
python
with open('Seoul_pop2.csv') as f:
    code
corresponds to the code below.
python
f = open('Seoul_pop2.csv')
code
f.close()
Stage 2: Organizing the data to be used for the line graph
End of explanation
# prepare the figure to draw on
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
# make a line graph with years on the x-axis and population on the y-axis
plt.plot(range(12), Seoul_pop, color='green', marker='o', linestyle='solid', \
label='Seoul')
plt.plot(range(12), Capital_region_pop, color='red', marker='o', linestyle='solid', \
label='Capital Region')
plt.xticks(range(12), year_intervals, rotation=45)
plt.title("Population Change")
plt.ylabel("Percentage")
plt.legend()
plt.show()
Explanation: Stage 3: Drawing the graph
End of explanation |
9,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Obtaining the first trajectories for a Toy Model
Tasks covered in this notebook
Step1: Basic system setup
First we set up our system
Step2: Set up the toy system
For the toy model, we need to give a snapshot as a template, as well as a potential energy surface. The template snapshot also includes a pointer to the topology information (which is relatively simple for the toy systems.)
Step3: Set up the engine
The engine needs the template snapshot we set up above, as well as an integrator and a few other options. We name the engine; this makes it easier to reload it in the future.
Step4: Finally, we make this engine into the default engine for any PathMover that requires an one (e.g., shooting movers, minus movers).
Step5: Now let's look at the potential energy surface we've created
Step6: Defining states and interfaces
TIS methods usually require that you define states and interfaces before starting the simulation. State and interfaces are both defined in terms of Volume objects. The most common type of Volume is one based on some set of collective variables, so the first thing we have to do is to define the collective variable.
For this system, we'll define the collective variables as circles centered on the middle of the state. OPS allows us to define one function for the circle, which is parameterized by different centers. Note that each collective variable is in fact a separate function.
Step7: Now we define the states and interfaces in terms of these order parameters. The CVRangeVolumeSet gives a shortcut to create several volume objects using the same collective variable.
Step8: Build the MSTIS transition network
Once we have the collective variables, states, and interfaces defined, we can create the entire transition network. In this one small piece of code, we create all the path ensembles needed for the simulation, organized into structures to assist with later analysis.
Step9: Bootstrap to fill all interfaces
Now we actually run the bootstrapping calculation. The full_bootstrap function requires an initial snapshot in the state, and then it will generate trajectories satisfying TIS ensemble for the given interfaces. To fill all the ensembles in the MSTIS network, we need to do this once for each initial state.
Step10: Now that we've done that for all 3 states, let's look at the trajectories we generated.
Step11: Finally, we join these into one SampleSet. The function relabel_replicas_per_ensemble ensures that the trajectory associated with each ensemble has a unique replica ID.
Step12: Storing stuff
Up to this point, we haven't stored anything in files. In other notebooks, a lot of the storage is done automatically. Here we'll show you how to store a few things manually. Instead of storing the entire bootstrapping history, we'll only store the final trajectories we get out.
First we create a file. When we create it, the file also requires the template snapshot.
Step13: The storage will recursively store data, so storing total_sample_set leads to automatic storage of all the Sample objects in that sampleset, which in turn leads to storage of all the ensemble, trajectories, and snapshots.
Since the path movers used in bootstrapping and the engine are not required for the sampleset, they would not be stored. We explicitly store the engine for later use, but we won't need the path movers, so we don't try to store them.
Step14: Now we can check to make sure that we actually have stored the objects that we claimed to store. There should be 0 pathmovers, 1 engine, 12 samples (4 samples from each of 3 transitions), and 1 sampleset. There will be some larger number of snapshots. There will also be a larger number of ensembles, because each ensemble is defined in terms of subensembles, each of which gets saved.
Step15: Finally, we close the storage. Not strictly necessary, but a good habit. | Python Code:
# Basic imports
from __future__ import print_function
import openpathsampling as paths
import numpy as np
%matplotlib inline
# used for visualization of the 2D toy system
# we use the %run magic because this isn't in a package
%run ../resources/toy_plot_helpers.py
Explanation: Obtaining the first trajectories for a Toy Model
Tasks covered in this notebook:
Setting up a system using the OPS toy engine
Using a user-defined function to create a collective variable
Using collective variables to define states and interfaces
Storing things manually
Path sampling methods require that the user supply an input path for each path ensemble. This means that you must somehow generate a first input path. The first rare path can come from any number of sources. The main idea is that any trajectory that is nearly physical is good enough. This is discussed more in the OPS documentation on initial trajectories.
In this example, we use a bootstrapping/ratcheting approach, which does create paths satisfying the true dynamics of the system. This approach is nice because it is quick and convenient, although it is best for smaller systems with less complicated transitions. It works by running normal MD to generate a path that satisfies the innermost interface, and then performing shooting moves in that interface's path ensemble until we have a path that crosses the next interface. Then we switch to the path ensemble for the next interface, and shoot until the path crossing the interface after that. The process continues until we have paths for all interfaces.
In this example, we perform multiple state (MS) TIS. Therefore we do one bootstrapping calculation per initial state.
End of explanation
# convenience for the toy dynamics
import openpathsampling.engines.toy as toys
Explanation: Basic system setup
First we set up our system: for the toy dynamics, this involves defining a potential energy surface (PES), setting up an integrator, and giving the simulation an initial configuration. In real MD systems, the PES is handled by the combination of a topology (generated from, e.g., a PDB file) and a force field definition, and the initial configuration would come from a file instead of being described by hand.
End of explanation
# Toy_PES supports adding/subtracting various PESs.
# The OuterWalls PES type gives an x^6+y^6 boundary to the system.
pes = (
toys.OuterWalls([1.0, 1.0], [0.0, 0.0])
+ toys.Gaussian(-0.7, [12.0, 12.0], [0.0, 0.4])
+ toys.Gaussian(-0.7, [12.0, 12.0], [-0.5, -0.5])
+ toys.Gaussian(-0.7, [12.0, 12.0], [0.5, -0.5])
)
topology=toys.Topology(
n_spatial=2,
masses=[1.0, 1.0],
pes=pes
)
Explanation: Set up the toy system
For the toy model, we need to give a snapshot as a template, as well as a potential energy surface. The template snapshot also includes a pointer to the topology information (which is relatively simple for the toy systems.)
End of explanation
integ = toys.LangevinBAOABIntegrator(dt=0.02, temperature=0.1, gamma=2.5)
options={
'integ': integ,
'n_frames_max': 5000,
'n_steps_per_frame': 1
}
toy_eng = toys.Engine(
options=options,
topology=topology
).named('toy_engine')
template = toys.Snapshot(
coordinates=np.array([[-0.5, -0.5]]),
velocities=np.array([[0.0,0.0]]),
engine=toy_eng
)
toy_eng.current_snapshot = template
Explanation: Set up the engine
The engine needs the template snapshot we set up above, as well as an integrator and a few other options. We name the engine; this makes it easier to reload it in the future.
End of explanation
paths.PathMover.engine = toy_eng
Explanation: Finally, we make this engine into the default engine for any PathMover that requires one (e.g., shooting movers, minus movers).
End of explanation
plot = ToyPlot()
plot.contour_range = np.arange(-1.5, 1.0, 0.1)
plot.add_pes(pes)
fig = plot.plot()
Explanation: Now let's look at the potential energy surface we've created:
End of explanation
def circle(snapshot, center):
import math
return math.sqrt((snapshot.xyz[0][0]-center[0])**2
+ (snapshot.xyz[0][1]-center[1])**2)
opA = paths.CoordinateFunctionCV(name="opA", f=circle, center=[-0.5, -0.5])
opB = paths.CoordinateFunctionCV(name="opB", f=circle, center=[0.5, -0.5])
opC = paths.CoordinateFunctionCV(name="opC", f=circle, center=[0.0, 0.4])
Explanation: Defining states and interfaces
TIS methods usually require that you define states and interfaces before starting the simulation. State and interfaces are both defined in terms of Volume objects. The most common type of Volume is one based on some set of collective variables, so the first thing we have to do is to define the collective variable.
For this system, we'll define the collective variables as circles centered on the middle of the state. OPS allows us to define one function for the circle, which is parameterized by different centers. Note that each collective variable is in fact a separate function.
End of explanation
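The distance function used for the collective variables above is plain Euclidean distance in 2D and can be exercised standalone, outside of OPS (a minimal sketch; the name circle_distance is illustrative):

```python
import math

def circle_distance(point, center):
    # Euclidean distance from point to center in 2D
    return math.sqrt((point[0] - center[0]) ** 2 + (point[1] - center[1]) ** 2)

print(circle_distance((0.0, 0.0), (3.0, 4.0)))  # 5.0
```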
stateA = paths.CVDefinedVolume(opA, 0.0, 0.2)
stateB = paths.CVDefinedVolume(opB, 0.0, 0.2)
stateC = paths.CVDefinedVolume(opC, 0.0, 0.2)
interfacesA = paths.VolumeInterfaceSet(opA, 0.0, [0.2, 0.3, 0.4])
interfacesB = paths.VolumeInterfaceSet(opB, 0.0, [0.2, 0.3, 0.4])
interfacesC = paths.VolumeInterfaceSet(opC, 0.0, [0.2, 0.3, 0.4])
Explanation: Now we define the states and interfaces in terms of these order parameters. The CVRangeVolumeSet gives a shortcut to create several volume objects using the same collective variable.
End of explanation
ms_outers = paths.MSOuterTISInterface.from_lambdas(
{ifaces: 0.5
for ifaces in [interfacesA, interfacesB, interfacesC]}
)
mstis = paths.MSTISNetwork(
[(stateA, interfacesA),
(stateB, interfacesB),
(stateC, interfacesC)],
ms_outers=ms_outers
)
Explanation: Build the MSTIS transition network
Once we have the collective variables, states, and interfaces defined, we can create the entire transition network. In this one small piece of code, we create all the path ensembles needed for the simulation, organized into structures to assist with later analysis.
End of explanation
initA = toys.Snapshot(
coordinates=np.array([[-0.5, -0.5]]),
velocities=np.array([[1.0,0.0]]),
)
bootstrapA = paths.FullBootstrapping(
transition=mstis.from_state[stateA],
snapshot=initA,
engine=toy_eng,
forbidden_states=[stateB, stateC],
extra_interfaces=[ms_outers.volume_for_interface_set(interfacesA)]
)
gsA = bootstrapA.run()
initB = toys.Snapshot(
coordinates=np.array([[0.5, -0.5]]),
velocities=np.array([[-1.0,0.0]]),
)
bootstrapB = paths.FullBootstrapping(
transition=mstis.from_state[stateB],
snapshot=initB,
engine=toy_eng,
forbidden_states=[stateA, stateC]
)
gsB = bootstrapB.run()
initC = toys.Snapshot(
coordinates=np.array([[0.0, 0.4]]),
velocities=np.array([[0.0,-0.5]]),
)
bootstrapC = paths.FullBootstrapping(
transition=mstis.from_state[stateC],
snapshot=initC,
engine=toy_eng,
forbidden_states=[stateA, stateB]
)
gsC = bootstrapC.run()
Explanation: Bootstrap to fill all interfaces
Now we actually run the bootstrapping calculation. The full_bootstrap function requires an initial snapshot in the state, and then it will generate trajectories satisfying TIS ensemble for the given interfaces. To fill all the ensembles in the MSTIS network, we need to do this once for each initial state.
End of explanation
plot.plot([s.trajectory for s in gsA]+[s.trajectory for s in gsB]+[s.trajectory for s in gsC]);
Explanation: Now that we've done that for all 3 states, let's look at the trajectories we generated.
End of explanation
total_sample_set = paths.SampleSet.relabel_replicas_per_ensemble(
[gsA, gsB, gsC]
)
Explanation: Finally, we join these into one SampleSet. The function relabel_replicas_per_ensemble ensures that the trajectory associated with each ensemble has a unique replica ID.
End of explanation
storage = paths.Storage("mstis_bootstrap.nc", "w")
Explanation: Storing stuff
Up to this point, we haven't stored anything in files. In other notebooks, a lot of the storage is done automatically. Here we'll show you how to store a few things manually. Instead of storing the entire bootstrapping history, we'll only store the final trajectories we get out.
First we create a file. When we create it, the file also requires the template snapshot.
End of explanation
storage.save(total_sample_set)
storage.save(toy_eng)
Explanation: The storage will recursively store data, so storing total_sample_set leads to automatic storage of all the Sample objects in that sampleset, which in turn leads to storage of all the ensemble, trajectories, and snapshots.
Since the path movers used in bootstrapping and the engine are not required for the sampleset, they would not be stored. We explicitly store the engine for later use, but we won't need the path movers, so we don't try to store them.
End of explanation
print("PathMovers:", len(storage.pathmovers))
print("Engines:", len(storage.engines))
print("Samples:", len(storage.samples))
print("SampleSets:", len(storage.samplesets))
print("Snapshots:", len(storage.snapshots))
print("Ensembles:", len(storage.ensembles))
print("CollectiveVariables:", len(storage.cvs))
Explanation: Now we can check to make sure that we actually have stored the objects that we claimed to store. There should be 0 pathmovers, 1 engine, 12 samples (4 samples from each of 3 transitions), and 1 sampleset. There will be some larger number of snapshots. There will also be a larger number of ensembles, because each ensemble is defined in terms of subensembles, each of which gets saved.
End of explanation
storage.close()
Explanation: Finally, we close the storage. Not strictly necessary, but a good habit.
End of explanation |
9,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Maxwell filter data with movement compensation
Demonstrate movement compensation on simulated data. The simulated data
contains bilateral activation of auditory cortices, repeated over 14
different head rotations (head center held fixed). See the following for
details
Step1: Visualize the "subject" head movements. By providing the measurement
information, the distance to the nearest sensor in each direction
(e.g., left/right for the X direction, forward/backward for Y) can
be shown in blue, and the destination (if given) shown in red.
Step2: This can also be visualized using a quiver.
Step3: Process our simulated raw data (taking into account head movements).
Step4: First, take the average of stationary data (bilateral auditory patterns).
Step5: Second, take a naive average, which averages across epochs that have been
simulated to have different head positions and orientations, thereby
spatially smearing the activity.
Step6: Third, use raw movement compensation (restores pattern).
Step7: Fourth, use evoked movement compensation. For these data, which contain
very large rotations, it does not as cleanly restore the pattern. | Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
from os import path as op
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement')
head_pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos'))
raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif'))
raw_stat = mne.io.read_raw_fif(op.join(data_path,
'simulated_stationary_raw.fif'))
Explanation: Maxwell filter data with movement compensation
Demonstrate movement compensation on simulated data. The simulated data
contains bilateral activation of auditory cortices, repeated over 14
different head rotations (head center held fixed). See the following for
details:
https://github.com/mne-tools/mne-misc-data/blob/master/movement/simulate.py
End of explanation
mne.viz.plot_head_positions(
head_pos, mode='traces', destination=raw.info['dev_head_t'], info=raw.info)
Explanation: Visualize the "subject" head movements. By providing the measurement
information, the distance to the nearest sensor in each direction
(e.g., left/right for the X direction, forward/backward for Y) can
be shown in blue, and the destination (if given) shown in red.
End of explanation
mne.viz.plot_head_positions(
head_pos, mode='field', destination=raw.info['dev_head_t'], info=raw.info)
Explanation: This can also be visualized using a quiver.
End of explanation
# extract our resulting events
events = mne.find_events(raw, stim_channel='STI 014')
events[:, 2] = 1
raw.plot(events=events)
topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500,
time_unit='s')
Explanation: Process our simulated raw data (taking into account head movements).
End of explanation
evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average()
evoked_stat.plot_topomap(title='Stationary', **topo_kwargs)
Explanation: First, take the average of stationary data (bilateral auditory patterns).
End of explanation
epochs = mne.Epochs(raw, events, 1, -0.2, 0.8)
evoked = epochs.average()
evoked.plot_topomap(title='Moving: naive average', **topo_kwargs)
Explanation: Second, take a naive average, which averages across epochs that have been
simulated to have different head positions and orientations, thereby
spatially smearing the activity.
End of explanation
raw_sss = maxwell_filter(raw, head_pos=head_pos)
evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average()
evoked_raw_mc.plot_topomap(title='Moving: movement compensated (raw)',
**topo_kwargs)
Explanation: Third, use raw movement compensation (restores pattern).
End of explanation
evoked_evo_mc = mne.epochs.average_movements(epochs, head_pos=head_pos)
evoked_evo_mc.plot_topomap(title='Moving: movement compensated (evoked)',
**topo_kwargs)
Explanation: Fourth, use evoked movement compensation. For these data, which contain
very large rotations, it does not as cleanly restore the pattern.
End of explanation |
9,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PUMP IT
Using data from Taarifa and the Tanzanian Ministry of Water, can you predict which pumps are functional, which need some repairs, and which don't work at all? This is an intermediate-level practice competition. Predict one of these three classes based on a number of variables about what kind of pump is operating, when it was installed, and how it is managed. A smart understanding of which waterpoints will fail can improve maintenance operations and ensure that clean, potable water is available to communities across Tanzania.
An interactive course exploring this dataset is currently offered by DataCamp.com!
Competition End Date
Step1: Custom Functions
MarkUP Fns
Step2: DataFrame Value Counts
Step4: Confusion Matrix
Step5: Import & Explore Data
Step6: Pre Processing
Log_Lat_Help
Step7: Text Data Tranformations
Step8: Cols vs Uniq distribution
Data Distribution
Vector Transformation
Feature Selection
Step9: UniVariate Analysis
Step10: PCA
Step11: Test-Train Split
Step12: Model Training
Random Forest
Step13: Scoring
Random Forest Score
Step14: XGBOOST
Submission | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(69572)
%matplotlib inline
%load_ext writeandexecute
# plt.figure(figsize=(120,10))
small = (4,3)
mid = (10, 8)
large = (12, 8)
Explanation: PUMP IT
Using data from Taarifa and the Tanzanian Ministry of Water, can you predict which pumps are functional, which need some repairs, and which don't work at all? This is an intermediate-level practice competition. Predict one of these three classes based on a number of variables about what kind of pump is operating, when it was installed, and how it is managed. A smart understanding of which waterpoints will fail can improve maintenance operations and ensure that clean, potable water is available to communities across Tanzania.
An interactive course exploring this dataset is currently offered by DataCamp.com!
Competition End Date: Jan. 28, 2017, 11:59 p.m.
This competition is for learning and exploring, so the deadline may be extended in the future.
Git Hub Repo
GitHub Repo
GitHub Report
Import Libraries
End of explanation
from __future__ import absolute_import
from IPython.core.getipython import get_ipython
from IPython.core.magic import (Magics, magics_class, cell_magic)
import sys
from StringIO import StringIO
from markdown import markdown
from IPython.core.display import HTML
@magics_class
class MarkdownMagics(Magics):
@cell_magic
def asmarkdown(self, line, cell):
buffer = StringIO()
stdout = sys.stdout
sys.stdout = buffer
try:
exec(cell, locals(), self.shell.user_ns)
except:
sys.stdout = stdout
raise
sys.stdout = stdout
return HTML("<p>{}</p>".format(markdown(buffer.getvalue(), extensions=['markdown.extensions.extra'])))
get_ipython().register_magics(MarkdownMagics)
Explanation: Custom Functions
MarkUP Fns
End of explanation
def raw_markup_value_counts(dataframe, max_print_value_counts=30, show_plots=False):
'''
prints value counts of each feature in data frame
'''
mydf = pd.DataFrame.copy(dataframe)
i = 0
raw_markup_data = []
pp = raw_markup_data.append
pp('''|Col ID|Col Name|UniqCount|Col Values|UniqValCount|''')
pp('''|------|--------|---------|----------|------------|''')
for col in mydf.dtypes.index:
i += 1
sam = mydf[col]
tmp = len(sam.value_counts())
if tmp < max_print_value_counts:
flag = True
for key, val in dict(sam.value_counts()).iteritems():
if flag:
pp('|%i|%s|%i|%s|%s|' % (
i, col, len(sam.value_counts()), key, val))
flag = False
else:
pp('||-|-|%s|%s|' % (key, val))
if show_plots:
plt.figure(i)
ax = sam.value_counts().plot(kind='barh', figsize=(12, 5))
_ = plt.title(col.upper())
_ = plt.xlabel('counts')
else:
pp('|%i|%s|%i|||' % (i, col, len(sam.value_counts())))
return raw_markup_data
Explanation: DataFrame Value Counts
End of explanation
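The helper above formats the output of pandas value_counts; the underlying call behaves like this (a small illustrative frame, not the competition data):

```python
import pandas as pd

df = pd.DataFrame({'status': ['functional', 'functional', 'non functional']})
# value_counts returns a Series of unique values and their frequencies
counts = df['status'].value_counts()
print(counts['functional'], counts['non functional'])  # 2 1
```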
from __future__ import division
import itertools
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def confusion_matrix_stuff(y_test, y_pred, class_names):
'''
Example
>>> confusion_matrix_stuff(y_test,
y_pred,
class_names=RAW_y.status_group.value_counts().keys())
'''
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure(figsize=(8,8))
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure(figsize=(8,8))
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
Explanation: Confusion Matrix
End of explanation
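The matrix sklearn computes is just a cross-tabulation of true vs. predicted labels. A dependency-free sketch of the idea (illustrative, not sklearn's implementation):

```python
def confusion_counts(y_true, y_pred, labels):
    # rows = true label, cols = predicted label
    index = {lab: k for k, lab in enumerate(labels)}
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[index[t]][index[p]] += 1
    return cm

cm = confusion_counts(['a', 'a', 'b', 'b'], ['a', 'b', 'b', 'b'], ['a', 'b'])
print(cm)  # [[1, 1], [0, 2]]
```

Rows are true labels, columns are predictions, so the off-diagonal cells are the mistakes.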
RAW_X = pd.read_csv('traning_set_values.csv', index_col='id')
RAW_y = pd.read_csv('training_set_labels.csv', index_col='id')
RAW_TEST_X = pd.read_csv('test_set_values.csv', index_col='id')
Explanation: Import & Explore Data
End of explanation
from datetime import datetime
strptime = datetime.strptime
DATE_FORMAT = "%Y-%m-%d"
REFERENCE_DATE_POINT = strptime('2000-01-01', DATE_FORMAT)
# Reducing geo location precision to 11 meters
LONG_LAT_PRECISION = 0.001
def sam_datetime_to_number(x):
return (strptime(str(x), DATE_FORMAT) - REFERENCE_DATE_POINT).days
# Transforming Date to Int.
if RAW_X.date_recorded.dtype == 'O':
RAW_X.date_recorded = RAW_X.date_recorded.map(sam_datetime_to_number)
RAW_TEST_X.date_recorded = RAW_TEST_X.date_recorded.map(sam_datetime_to_number)
# Filling Missing/OUTLIER Values
_ = np.mean(RAW_X[u'latitude'][RAW_X.latitude < -1.0].values)
if not RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'].empty:
RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'] = _
RAW_TEST_X.loc[RAW_TEST_X.latitude >= -1.0, u'latitude'] = _
# Filling Missing/OUTLIER Values
_ = np.mean(RAW_X[u'longitude'][RAW_X[u'longitude'] > 1.0].values)
if not RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'].empty:
RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'] = _
RAW_TEST_X.loc[RAW_TEST_X[u'longitude'] <= 1.0, u'longitude'] = _
# Reducing Precision of Lat.
if RAW_X.longitude.mean() < 50:
RAW_X.longitude = RAW_X.longitude // LONG_LAT_PRECISION
RAW_X.latitude = RAW_X.latitude // LONG_LAT_PRECISION
RAW_TEST_X.longitude = RAW_TEST_X.longitude // LONG_LAT_PRECISION
RAW_TEST_X.latitude = RAW_TEST_X.latitude // LONG_LAT_PRECISION
# Filling Missing/OUTLIER Values
if RAW_X.public_meeting.dtype != 'bool':
RAW_X.public_meeting = RAW_X.public_meeting == True
RAW_TEST_X.public_meeting = RAW_TEST_X.public_meeting == True
if RAW_X.permit.dtype != 'bool':
RAW_X.permit = RAW_X.permit == True
RAW_TEST_X.permit = RAW_TEST_X.permit == True
if list(RAW_TEST_X.dtypes[RAW_TEST_X.dtypes != RAW_X.dtypes]):
raise Exception('RAW_X.dtypes and RAW_TEST_X.dtypes are not in Sync')
Explanation: Pre Processing
Log_Lat_Help: Link
Numeric Data Transformations
date_recorded --> Int
longitude --> Float(less precision)
latitude --> Float(less precision)
public_meeting --> Bool
permit --> Bool
End of explanation
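The date_recorded transform above is easy to sanity-check in isolation (same reference date, stdlib only):

```python
from datetime import datetime

DATE_FORMAT = "%Y-%m-%d"
REFERENCE_DATE_POINT = datetime.strptime('2000-01-01', DATE_FORMAT)

def to_day_number(x):
    # days elapsed since the reference date
    return (datetime.strptime(str(x), DATE_FORMAT) - REFERENCE_DATE_POINT).days

print(to_day_number('2000-01-02'))  # 1
print(to_day_number('2011-03-14'))  # 4090
```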
def text_transformation(name):
if name:
name = name.lower().strip()
name = ''.join([i if 96 < ord(i) < 123 else ' ' for i in name])  # keep a-z only (the old upper bound of 128 also let through {|}~)
if 'and' in name:
name = name.replace('and', ' ')
if '/' in name:
name = name.replace('/', ' ')
while ' ' in name:
name = name.replace(' ', ' ')
return name.strip()
return
for col in RAW_X.dtypes[RAW_X.dtypes == object].index:
aa = len(RAW_X[col].unique())
RAW_X[col] = RAW_X[col].fillna('').apply(text_transformation)
RAW_TEST_X[col] = RAW_TEST_X[col].fillna('').apply(text_transformation)
bb = len(RAW_X[col].unique())
if aa != bb:
print col, aa, bb
Explanation: Text Data Transformations
End of explanation
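The cleaning rules above are easier to see on toy inputs. A hedged re-implementation of the same idea (py3-friendly sketch) that also shows the 'and'-as-substring pitfall:

```python
def clean_name(name):
    # lower-case, keep only a-z (everything else becomes a space),
    # drop the word 'and', collapse runs of spaces
    name = (name or '').lower().strip()
    name = ''.join(c if 'a' <= c <= 'z' else ' ' for c in name)
    name = name.replace('and', ' ')
    while '  ' in name:
        name = name.replace('  ', ' ')
    return name.strip()

print(clean_name('Water And/Or Pump'))  # 'water or pump'
print(clean_name('sandy'))              # 's y'  <- 'and' matched inside a word
```

Note the second case: because replace works on substrings, 'and' inside a word gets eaten too.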
from collections import defaultdict
from sklearn import preprocessing
# http://stackoverflow.com/questions/24458645/label-encoding-across-multiple-columns-in-scikit-learn
d = defaultdict(preprocessing.LabelEncoder)
# Labels Fit
sam = pd.concat([RAW_X, RAW_TEST_X]).apply(lambda x: d[x.name].fit(x))
# Labels Transform - Training Data
X = RAW_X.apply(lambda x: d[x.name].transform(x))
TEST_X = RAW_TEST_X.apply(lambda x: d[x.name].transform(x))
le = preprocessing.LabelEncoder().fit(RAW_y[u'status_group'])
y = le.transform(RAW_y[u'status_group'])
# g = sns.PairGrid(X[:1000])
# g.map(plt.scatter);
Explanation: Cols vs Uniq distribution
Data Distribution
Vector Transformation
Feature Selection:
http://machinelearningmastery.com/feature-selection-machine-learning-python/
End of explanation
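The defaultdict(LabelEncoder) trick just assigns a stable integer to every category seen in train+test, one encoder per column. A stdlib mimic of the idea (sorted() mirrors LabelEncoder's alphabetical class ordering; this is a sketch, not sklearn):

```python
def fit_encoder(values):
    # one integer per distinct category, assigned in sorted order
    return {v: i for i, v in enumerate(sorted(set(values)))}

column = ['soft', 'salty', 'soft', 'milky', 'salty']
enc = fit_encoder(column)
encoded = [enc[v] for v in column]
print(enc)      # {'milky': 0, 'salty': 1, 'soft': 2}
print(encoded)  # [2, 1, 2, 0, 1]
```

Fitting on the concatenation of train and test (as done above) guarantees the test set never contains an unseen category at transform time.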
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
test = SelectKBest(score_func=chi2, k=30)
fit = test.fit(X, y)
cols_names = RAW_X.columns
np.set_printoptions(precision=2)
print(fit.scores_), len(fit.scores_)
col_importances = list(zip(fit.scores_, cols_names))
col_importances.sort(reverse=True)
selected_cols = [_[-1] for _ in col_importances[:30] ]
features = pd.DataFrame(fit.transform(X))
features.columns = selected_cols
print len(X.columns), features.shape, len(y)
X = pd.DataFrame(fit.transform(X))
TEST_X = pd.DataFrame(fit.transform(TEST_X))
X.columns = selected_cols
TEST_X.columns = selected_cols
Explanation: UniVariate Analysis
End of explanation
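SelectKBest with chi2 ranks each (non-negative) feature by the chi-squared statistic against the labels; the statistic itself is just sum((observed - expected)^2 / expected). A toy sketch, not sklearn's vectorised version:

```python
def chi2_stat(observed, expected):
    # classic Pearson chi-squared statistic
    return sum((o - e) ** 2 / float(e) for o, e in zip(observed, expected))

# feature totals per class vs. what we'd expect under independence
print(chi2_stat([10, 20], [15, 15]))  # about 3.333
```

Bigger values mean the feature's distribution differs more between classes, so it ranks higher.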
from sklearn.decomposition import PCA
# feature extraction
pca = PCA(n_components=18)
fit = pca.fit(X)
plt.scatter (range(len(fit.explained_variance_ratio_)), fit.explained_variance_ratio_.cumsum())
X = pca.transform(X)
TEST_X = pca.transform(TEST_X)
Explanation: PCA
End of explanation
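The cumulative explained-variance plot above is how n_components gets chosen: take the smallest k whose cumulative ratio clears a threshold. A small sketch of that rule (hypothetical helper, stdlib only):

```python
def components_needed(variances, threshold):
    # smallest k such that the top-k components explain >= threshold of the variance
    total = float(sum(variances))
    cumulative = 0.0
    for k, v in enumerate(sorted(variances, reverse=True), start=1):
        cumulative += v / total
        if cumulative >= threshold:
            return k
    return len(variances)

print(components_needed([4.0, 2.0, 1.0, 1.0], 0.70))  # 2
print(components_needed([4.0, 2.0, 1.0, 1.0], 0.95))  # 4
```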
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y)
# X_train, X_test, y_train, y_test = train_test_split(features, y, test_size=0.25, random_state=42, stratify=y)
Explanation: Test-Train Split
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced_subsample", n_jobs=-1)
# class_weight="balanced_subsample"/"balanced"
# criterion="gini"/"entropy"
clf
clf = clf.fit(X_train, y_train)
pred = clf.predict_proba(X_test)
clf.score(X_test, y_test) # 0.79303132333435367 # 0.80252525252525253 # 0.80303030303030298 # 0.80345117845117842
# 0.79814814814814816
# (n_estimators=100, class_weight="balanced_subsample", n_jobs=-1) 0.80782828282828278
# (n_estimators=100, class_weight="balanced_subsample", n_jobs=-1) 0.81186868686868685
clf?
plt.figure(figsize=(12, 3))
# making importance relative (min-max scaled to [0, 1])
a, b = min(clf.feature_importances_), max(clf.feature_importances_)
cols_imp = (clf.feature_importances_ - a) / (b - a)
_ = plt.scatter(range(30), cols_imp)
_ = plt.plot((0, 29), (0.05,0.05), '-r')
_ = plt.xlabel('Columns')
_ = plt.ylabel('Relative Col Importance')
Explanation: Model Training
Random Forest
End of explanation
from sklearn import metrics
print map(lambda x: len(x), [X_test, y_test])
clf.score(X_test, y_test) # 0.79303132333435367 # 0.80252525252525253 # 0.80303030303030298 # 0.80345117845117842
print .79303132333435367 - 0.80345117845117842
print .8285 - 0.80345117845117842, .8285 - .79303132333435367
Explanation: Scoring
Random Forest Score
End of explanation
test_ids = RAW_TEST_X.index
predictions = clf.predict(TEST_X)
print (predictions.shape)
predictions_labels = le.inverse_transform(predictions)
# sub = pd.DataFrame(predictions, columns=list(le.classes_))
sub = pd.DataFrame(predictions_labels, columns=['status_group'])
sub.head()
sub.insert(0, 'id', test_ids)
sub.to_csv('submit.csv', index = False)
sub.head()
X.shape
Explanation: XGBOOST
Submission
End of explanation
Description:
Simple RL
Welcome! Here we'll showcase some basic examples of typical RL programming tasks.
Example 1
Step1: Next, we make an MDP and a few agents
Step2: The real meat of <i>simple_rl</i> are the functions that run experiments. The first of which takes a list of agents and an mdp and simulates their interaction
Step3: We can throw R-Max, introduced by [Brafman and Tennenholtz, 2002] in the mix, too
Step4: Each experiment we run generates an Experiment object. This facilitates recording results, making relevant files, and plotting. When the <code>run_agents...</code> function is called, a <i>results</i> dir is created containing relevant experiment data. There should be a subdirectory in <i>results</i> named after the mdp you ran experiments on -- this is where the plot, agent results, and <i>parameters.txt</i> file are stored.
All of the above code is contained in the <i>simple_example.py</i> file.
Example 2
Step5: <img src="val.png" alt="Val" style="width
Step6: Which Produces
Step7: Above, we specify the objects of the OOMDP and their attributes. Now, just as before, we can let some agents interact with the MDP
Step8: More on OOMDPs in <i>examples/oomdp_example.py</i>
Example 4
Step9: Example 5 | Python Code:
# Add simple_rl to system path.
import os
import sys
parent_dir = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
sys.path.insert(0, parent_dir)
from simple_rl.agents import QLearningAgent, RandomAgent
from simple_rl.tasks import GridWorldMDP
from simple_rl.run_experiments import run_agents_on_mdp
Explanation: Simple RL
Welcome! Here we'll showcase some basic examples of typical RL programming tasks.
Example 1: Grid World
First, we'll grab our relevant imports: some agents, an MDP, and a function to facilitate running experiments and plotting:
End of explanation
# Setup MDP.
mdp = GridWorldMDP(width=6, height=6, init_loc=(1,1), goal_locs=[(6,6)])
# Setup Agents.
ql_agent = QLearningAgent(actions=mdp.get_actions())
rand_agent = RandomAgent(actions=mdp.get_actions())
Explanation: Next, we make an MDP and a few agents:
End of explanation
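Under the hood a tabular Q-learner keeps a table of Q-values and applies the update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). A minimal sketch of that update (illustrative, not simple_rl's actual code):

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    # standard one-step Q-learning backup
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
q_update(Q, s=(1, 1), a='right', r=1.0, s_next=(2, 1), actions=['up', 'down', 'left', 'right'])
print(Q[((1, 1), 'right')])  # 0.5
```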
# Run experiment and make plot.
run_agents_on_mdp([ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=40, reset_at_terminal=True, verbose=False)
Explanation: The real meat of <i>simple_rl</i> are the functions that run experiments. The first of which takes a list of agents and an mdp and simulates their interaction:
End of explanation
from simple_rl.agents import RMaxAgent
rmax_agent = RMaxAgent(actions=mdp.get_actions(), horizon=3, s_a_threshold=1)
# Run experiment and make plot.
run_agents_on_mdp([rmax_agent, ql_agent, rand_agent], mdp, instances=5, episodes=100, steps=20, reset_at_terminal=True, verbose=False)
Explanation: We can throw R-Max, introduced by [Brafman and Tennenholtz, 2002] in the mix, too:
End of explanation
from simple_rl.tasks import FourRoomMDP
four_room_mdp = FourRoomMDP(9, 9, goal_locs=[(9, 9)], gamma=0.95)
# Visualize the value function.
four_room_mdp.visualize_value()
Explanation: Each experiment we run generates an Experiment object. This facilitates recording results, making relevant files, and plotting. When the <code>run_agents...</code> function is called, a <i>results</i> dir is created containing relevant experiment data. There should be a subdirectory in <i>results</i> named after the mdp you ran experiments on -- this is where the plot, agent results, and <i>parameters.txt</i> file are stored.
All of the above code is contained in the <i>simple_example.py</i> file.
Example 2: Visuals (requires pygame)
First let's make a FourRoomMDP from [Sutton, Precup, Singh 1999], which is more visually interesting than a grid world.
End of explanation
from simple_rl.tasks.grid_world import GridWorldMDPClass
pblocks_mdp = GridWorldMDPClass.make_grid_world_from_file("pblocks_grid.txt", randomize=False)
pblocks_mdp.visualize_value()
Explanation: <img src="val.png" alt="Val" style="width: 400px;"/>
Or we can visualize a policy:
<img src="pol.png" alt="Val Visual" style="width: 400px;"/>
Both of these are in examples/viz_example.py. If you need pygame in anaconda, give this a shot:
> conda install -c cogsci pygame
If you get an sdl font related error on Mac/Linux, try:
> brew update sdl && sdl_tf
We can also make grid worlds with a text file. For instance, we can construct the grid problem from [Barto and Pickett 2002] by making a text file:
--w-----w---w----g
--------w---------
--w-----w---w-----
--w-----w---w-----
wwwww-wwwwwwwww-ww
---w----w----w----
---w---------w----
--------w---------
wwwwwwwww---------
w-------wwwwwww-ww
--w-----w---w-----
--------w---------
--w---------w-----
--w-----w---w-----
wwwww-wwwwwwwww-ww
---w-----w---w----
---w-----w---w----
a--------w--------
Then, we make a grid world out of it:
End of explanation
from simple_rl.tasks import TaxiOOMDP
from simple_rl.run_experiments import run_agents_on_mdp
from simple_rl.agents import QLearningAgent, RandomAgent
# Taxi initial state attributes..
agent = {"x":1, "y":1, "has_passenger":0}
passengers = [{"x":3, "y":2, "dest_x":2, "dest_y":3, "in_taxi":0}]
taxi_mdp = TaxiOOMDP(width=4, height=4, agent=agent, walls=[], passengers=passengers)
# Make agents.
ql_agent = QLearningAgent(actions=taxi_mdp.get_actions())
rand_agent = RandomAgent(actions=taxi_mdp.get_actions())
Explanation: Which Produces:
<img src="pblocks.png" alt="Policy Blocks Grid World" style="width: 400px;"/>
Example 3: OOMDPs, Taxi
There's also a Taxi MDP, which is actually built on top of an Object Oriented MDP Abstract class from [Diuk, Cohen, Littman 2008].
End of explanation
# Run experiment and make plot.
run_agents_on_mdp([ql_agent, rand_agent], taxi_mdp, instances=5, episodes=100, steps=150, reset_at_terminal=True)
Explanation: Above, we specify the objects of the OOMDP and their attributes. Now, just as before, we can let some agents interact with the MDP:
End of explanation
from simple_rl.run_experiments import play_markov_game
from simple_rl.agents import QLearningAgent, FixedPolicyAgent
from simple_rl.tasks import RockPaperScissorsMDP
import random
# Setup MDP, Agents.
markov_game = RockPaperScissorsMDP()
ql_agent = QLearningAgent(actions=markov_game.get_actions(), epsilon=0.2)
fixed_action = random.choice(markov_game.get_actions())
fixed_agent = FixedPolicyAgent(policy=lambda s:fixed_action)
# Run experiment and make plot.
play_markov_game([ql_agent, fixed_agent], markov_game, instances=10, episodes=1, steps=10)
Explanation: More on OOMDPs in <i>examples/oomdp_example.py</i>
Example 4: Markov Games
--------
I've added a few markov games, including rock paper scissors, grid games, and prisoners dilemma. Just as before, we get a run agents method that simulates learning and makes a plot:
End of explanation
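In a Markov game every agent gets its own reward. For rock-paper-scissors that is just a zero-sum payoff table; a sketch of the joint-reward idea (hypothetical helper, not the RockPaperScissorsMDP API):

```python
BEATS = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}

def rps_payoff(a1, a2):
    # returns (reward_player1, reward_player2); zero-sum
    if a1 == a2:
        return (0, 0)
    return (1, -1) if BEATS[a1] == a2 else (-1, 1)

print(rps_payoff('rock', 'scissors'))  # (1, -1)
print(rps_payoff('rock', 'paper'))     # (-1, 1)
```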
from simple_rl.tasks import GymMDP
from simple_rl.agents import LinearQLearningAgent, RandomAgent
from simple_rl.run_experiments import run_agents_on_mdp
# Gym MDP.
gym_mdp = GymMDP(env_name='CartPole-v0', render=False) # If render is true, visualizes interactions.
num_feats = gym_mdp.get_num_state_feats()
# Setup agents and run.
lin_agent = LinearQLearningAgent(gym_mdp.get_actions(), num_features=num_feats, alpha=0.2, epsilon=0.4, rbf=True)
run_agents_on_mdp([lin_agent], gym_mdp, instances=3, episodes=1, steps=50)
Explanation: Example 5: Gym MDP
--------
Recently I added support for making OpenAI gym MDPs. It's again only a few lines of code:
End of explanation
Description:
Python and Friends
This is a very quick run-through of some python syntax
Step1: The Python Language
Lets talk about using Python as a calculator...
Step2: Notice integer division and floating-point error below!
Step3: Here is how we can print things. Something on the last line by itself is returned as the output value.
Step4: We can obtain the type of a variable, and use boolean comparisons to test these types.
Step5: Python and Iteration (and files)
In working with python I always remember
Step6: Python has some nifty functions like enumerate and zip. The former gives a list of tuples with each tuple of the form (index, value), while the latter takes elements from each list and outs them together into a tuple, thus creating a list of tuples. The first is a duck, but the second isnt.
Step7: Someone realized that design flaw and created izip.
Step8: Open files behave like lists too! Here we get each line in the file and find its length, using the comprehension syntax to put these lengths into a big list.
Step9: But perhaps we want to access Hamlet word by word and not line by line
Step10: One can use the with syntax which creates a context. The file closing is then done automatically for us.
Step11: There are roughly 32,000 words in Hamlet.
The indexing of lists
Step12: Lets split the word tokens. The first one below reads, give me the second, third, and fourth words (remember that python is 0 indexed). Try and figure what the others mean.
Step13: range and xrange get the list of integers upto N. But xrange behaves like an iterator. The reason for this is that there is no point generaing all os a million integers. We can just add 1 to the previous one and save memory. So we trade off storage for computation.
Step14: Dictionaries
These are the bread and butter. You will use them a lot. They even duck like lists. But be careful how.
Step15: The keys do not have to be strings. From python 2.7 you can use dictionary comprehensions as well
Step16: You can construct them nicely using the function dict.
Step17: and conversion to json
Step18: Strings
Basically they behave like immutable lists
Step19: Functions
Functions are even more the bread and butter. You'll see them as methods on objects, or standing alone by themselves.
Step20: In Python, functions are "first-class". This is just a fancy way of saying, you can pass functions to other functions
Step21: Python functions can have positional arguments and keyword arguments. Positional arguments are stored in a tuple, and keyword arguments in a dictionary. Note the "starred" syntax
Step22: YOUR TURN create a dictionary with keys the integers up to and including 10, and values the cubes of these integers
Step23: Booleans and Control-flow
Lets test for belonging...
Step24: Python supports if/elif/else clauses for multi-way conditionals
Step25: You can break out of a loop based on a condition. The loop below is a for loop.
Step26: While loops are also supported. continue continues to the next iteration of the loop skipping all the code below, while break breaks out of it.
Step27: Exceptions
This is the way to catch errors.
Step28: All together now
Lets see what hamlet gives us. We convert all words to lower-case
Step29: We then find a unique set of words using python's set data structure. We count how often those words occurred using the count method on lists.
Step30: We find the 100 most used words...
Step31: Lets get the top 20 of this and plot a bar chart! | Python Code:
# The %... is an iPython thing, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
%matplotlib inline
#this line above prepares IPython notebook for working with matplotlib
# See all the "as ..." contructs? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().
import numpy as np # imports a fast numerical programming library
import scipy as sp #imports stats functions, amongst other things
import matplotlib as mpl # this actually imports matplotlib
import matplotlib.cm as cm #allows us easy access to colormaps
import matplotlib.pyplot as plt #sets up plotting under plt
import pandas as pd #lets us handle data as dataframes
#sets up pandas table display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns #sets up styles and gives us more plotting options
Explanation: Python and Friends
This is a very quick run-through of some python syntax
End of explanation
1+2
Explanation: The Python Language
Lets talk about using Python as a calculator...
End of explanation
1/2,1.0/2.0,3*3.2
Explanation: Notice integer division and floating-point error below!
End of explanation
print 1+3.0,"\n",5/3.0
5/3
Explanation: Here is how we can print things. Something on the last line by itself is returned as the output value.
End of explanation
a=5.0/6.0
print(a)
print type(a)
import types
type(a)==types.FloatType
type(a)==types.IntType
Explanation: We can obtain the type of a variable, and use boolean comparisons to test these types.
End of explanation
alist=[1,2,3,4,5]
asquaredlist=[i*i for i in alist]
asquaredlist
Explanation: Python and Iteration (and files)
In working with python I always remember: a python is a duck.
What I mean is, python has a certain way of doing things. For example lets call one of these ways listiness. Listiness works on lists, dictionaries, files, and a general notion of something called an iterator.
But first, lets introduce the notion of a comprehension. Its a way of constructing a list
End of explanation
enumerate(asquaredlist),zip(alist, asquaredlist)
Explanation: Python has some nifty functions like enumerate and zip. The former gives a list of tuples with each tuple of the form (index, value), while the latter takes elements from each list and puts them together into a tuple, thus creating a list of tuples. The first is a duck, but the second isn't.
End of explanation
from itertools import izip
izip(alist, asquaredlist)
print enumerate(asquaredlist)
[k for k in enumerate(asquaredlist)]
Explanation: Someone realized that design flaw and created izip.
End of explanation
linelengths=[len(line) for line in open("hamlet.txt")]#poor code as we dont close the file
print linelengths
sum(linelengths), np.mean(linelengths), np.median(linelengths), np.std(linelengths)
Explanation: Open files behave like lists too! Here we get each line in the file and find its length, using the comprehension syntax to put these lengths into a big list.
End of explanation
hamletfile=open("hamlet.txt")
hamlettext=hamletfile.read()
hamletfile.close()
hamlettokens=hamlettext.split()#split with no arguments splits on whitespace
len(hamlettokens)
Explanation: But perhaps we want to access Hamlet word by word and not line by line
End of explanation
with open("hamlet.txt") as hamletfile:
hamlettext=hamletfile.read()
hamlettokens=hamlettext.split()
print len(hamlettokens)
Explanation: One can use the with syntax which creates a context. The file closing is then done automatically for us.
End of explanation
print hamlettext[:1000]#first 1000 characters from Hamlet.
print hamlettext[-1000:]#and last 1000 characters from Hamlet.
Explanation: There are roughly 32,000 words in Hamlet.
The indexing of lists
End of explanation
print hamlettokens[1:4], hamlettokens[:4], hamlettokens[0], hamlettokens[-1]
hamlettokens[1:8:2]#get every 2nd word between the 2nd and the 9th: ie 2nd, 4th, 6th, and 8th
Explanation: Lets split the word tokens. The first one below reads, give me the second, third, and fourth words (remember that python is 0 indexed). Try and figure what the others mean.
End of explanation
mylist=[]
for i in xrange(10):
mylist.append(i)
mylist
Explanation: range and xrange get the list of integers up to N. But xrange behaves like an iterator. The reason for this is that there is no point generating all of a million integers. We can just add 1 to the previous one and save memory. So we trade off storage for computation.
End of explanation
adict={'one':1, 'two': 2, 'three': 3}
print [i for i in adict], [(k,v) for k,v in adict.items()], adict.values()
Explanation: Dictionaries
These are the bread and butter. You will use them a lot. They even duck like lists. But be careful how.
End of explanation
mydict ={k:v for (k,v) in zip(alist, asquaredlist)}
mydict
Explanation: The keys do not have to be strings. From python 2.7 you can use dictionary comprehensions as well
End of explanation
dict(a=1, b=2)
Explanation: You can construct them nicely using the function dict.
End of explanation
import json
s=json.dumps(mydict)
print s
json.loads(s)
Explanation: and conversion to json
End of explanation
lastword=hamlettokens[-1]
print(lastword)
lastword[-2]="k"#cant change a part of a string
lastword[-2]
# You can join a list with a separator to make a string.
weirdstring=",".join(hamlettokens)
weirdstring[:1000]
Explanation: Strings
Basically they behave like immutable lists
End of explanation
def square(x):
return(x*x)
def cube(x):
return x*x*x
square(5),cube(5)
print square, type(cube)
Explanation: Functions
Functions are even more the bread and butter. You'll see them as methods on objects, or standing alone by themselves.
End of explanation
def sum_of_anything(x,y,f):
print x,y,f
return(f(x) + f(y))
sum_of_anything(3,4,square)
Explanation: In Python, functions are "first-class". This is just a fancy way of saying, you can pass functions to other functions
End of explanation
def f(a,b,*posargs,**dictargs):
print "got",a,b,posargs, dictargs
return a
print f(1,3)
print f(1,3,4,d=1,c=2)
Explanation: Python functions can have positional arguments and keyword arguments. Positional arguments are stored in a tuple, and keyword arguments in a dictionary. Note the "starred" syntax
End of explanation
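The starred syntax also works in the other direction at the call site: a tuple unpacks into positional arguments and a dict into keyword arguments:

```python
def g(a, b, c=0, d=0):
    return a + b + c + d

args = (1, 2)
kwargs = {'c': 3, 'd': 4}
print(g(*args, **kwargs))  # 10
```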
#your code here
Explanation: YOUR TURN create a dictionary with keys the integers up to and including 10, and values the cubes of these integers
End of explanation
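One possible solution, using a dict comprehension (spoiler; this assumes the integers start at 0):

```python
cubes = {i: i ** 3 for i in range(11)}  # keys 0..10 inclusive
print(cubes[10])  # 1000
```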
a=[1,2,3,4,5]
1 in a
6 in a
Explanation: Booleans and Control-flow
Lets test for belonging...
End of explanation
def do_it(x):
if x==1:
print "One"
elif x==2:
print "Two"
else:
print x
do_it(1)
do_it(2), do_it(3)
Explanation: Python supports if/elif/else clauses for multi-way conditionals
End of explanation
for i in range(10):
print i
if (i > 5):
break
Explanation: You can break out of a loop based on a condition. The loop below is a for loop.
End of explanation
i=0
while i < 10:
print i
i=i+1
if i < 5:
continue
else:
break
Explanation: While loops are also supported. continue continues to the next iteration of the loop skipping all the code below, while break breaks out of it.
End of explanation
try:
f(1)#takes at least 2 arguments
except:
import sys
print sys.exc_info()
Explanation: Exceptions
This is the way to catch errors.
End of explanation
hamletlctokens=[word.lower() for word in hamlettokens]
hamletlctokens.count("thou")
Explanation: All together now
Lets see what hamlet gives us. We convert all words to lower-case
End of explanation
uniquelctokens=set(hamletlctokens)
tokendict={}
for ut in uniquelctokens:
tokendict[ut]=hamletlctokens.count(ut)
Explanation: We then find a unique set of words using python's set data structure. We count how often those words occurred using the count method on lists.
End of explanation
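The loop above calls count once per unique word, which is O(unique x tokens). The standard library's Counter builds the same mapping in a single pass:

```python
from collections import Counter

tokens = ['to', 'be', 'or', 'not', 'to', 'be', 'to']
counts = Counter(tokens)
print(counts['to'])           # 3
print(counts.most_common(2))  # [('to', 3), ('be', 2)]
```

most_common also replaces the sorted(...)[:100] step used below.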
L=sorted(tokendict.iteritems(), key= lambda (k,v):v, reverse=True)[:100]
L
Explanation: We find the 100 most used words...
End of explanation
topfreq=L[:20]
print topfreq
pos = np.arange(len(topfreq))
plt.bar(pos, [e[1] for e in topfreq]);
plt.xticks(pos+0.4, [e[0] for e in topfreq]);
Explanation: Lets get the top 20 of this and plot a bar chart!
End of explanation
Description:
XID+ Example Run Script
(This is based on a Jupyter notebook, available in the XID+ package and can be interactively run and edited)
XID+ is a probabilistic deblender for confusion dominated maps. It is designed to
Step1: Work out what small tiles are in the test large tile file for PACS
Step2: You can fit with the numpyro backend. | Python Code:
import numpy as np
from astropy.io import fits
from astropy import wcs
import pickle
import dill
import sys
import os
import xidplus
import copy
from xidplus import moc_routines, catalogue
from xidplus import posterior_maps as postmaps
from builtins import input
Explanation: XID+ Example Run Script
(This is based on a Jupyter notebook, available in the XID+ package and can be interactively run and edited)
XID+ is a probabilistic deblender for confusion dominated maps. It is designed to:
Use a MCMC based approach to get FULL posterior probability distribution on flux
Provide a natural framework to introduce additional prior information
Allows more representative estimation of source flux density uncertainties
Provides a platform for doing science with the maps (e.g XID+ Hierarchical stacking, Luminosity function from the map etc)
Cross-identification tends to be done with catalogues, then science with the matched catalogues.
XID+ takes a different philosophy. Catalogues are a form of data compression. OK in some cases, not so much in others, i.e. confused images: catalogue compression loses correlation information. Ideally, science should be done without compression.
XID+ provides a framework to cross identify galaxies we know about in different maps, with the idea that it can be extended to do science with the maps!!
Philosophy:
build a probabilistic generative model for the SPIRE maps
Infer model on SPIRE maps
Bayes Theorem
$p(\mathbf{f}|\mathbf{d}) \propto p(\mathbf{d}|\mathbf{f}) \times p(\mathbf{f})$
In order to carry out Bayesian inference, we need a model to carry out inference on.
For the SPIRE maps, our model is quite simple, with likelihood defined as:
$L = p(\mathbf{d}|\mathbf{f}) \propto |\mathbf{N_d}|^{-1/2} \exp\big\{ -\frac{1}{2}(\mathbf{d}-\mathbf{Af})^T\mathbf{N_d}^{-1}(\mathbf{d}-\mathbf{Af})\big\}$
where:
$\mathbf{N_{d,ii}} =\sigma_{inst.,ii}^2+\sigma_{conf.}^2$
Simplest model for XID+ assumes following:
All sources are known and have positive flux (fi)
A global background (B) contributes to all pixels
PRF is fixed and known
Confusion noise is constant and not correlated across pixels
Because we are getting the joint probability distribution, our model is generative i.e. given parameters, we generate data and vica-versa
Compared to a discriminative model (e.g. a neural network), which only obtains a conditional probability distribution: give it inputs, get an output. You can't go the other way.
Generative model is full probabilistic model. Allows more complex relationships between observed and target variables
End of explanation
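For the diagonal noise covariance used here, the likelihood above reduces to a weighted sum of squared residuals between the map d and the model prediction Af. A toy, dependency-free sketch of that log-likelihood (illustrative only, not XID+'s Stan/numpyro model):

```python
import math

def gaussian_loglike(d, model, sigma_inst, sigma_conf):
    # N_d is diagonal: sigma_inst_i^2 + sigma_conf^2 per pixel
    total = 0.0
    for d_i, m_i, s_i in zip(d, model, sigma_inst):
        var = s_i ** 2 + sigma_conf ** 2
        total += -0.5 * math.log(2 * math.pi * var) - 0.5 * (d_i - m_i) ** 2 / var
    return total

# a perfect prediction only pays the normalisation cost
print(gaussian_loglike([1.0, 2.0], [1.0, 2.0], [1.0, 1.0], 0.0))  # ~ -1.838
```

The real model additionally puts a positivity prior on the fluxes f and a global background term; this sketch only shows the residual weighting.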
from healpy import pixelfunc
order_large=6
order_small=10
tile_large=21875
output_folder='../../../test_files/'
outfile=output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'.pkl'
with open(outfile, 'rb') as f:
obj=pickle.load(f)
priors=obj['priors']
theta, phi =pixelfunc.pix2ang(2**order_large, tile_large, nest=True)
tile_small = pixelfunc.ang2pix(2**order_small, theta, phi, nest=True)
priors[0].moc.write(output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'_moc_.fits')
from astropy.table import Table, join, vstack,hstack
Table([priors[0].sra,priors[0].sdec]).write(output_folder+'Tile_'+str(tile_large)+'_'+str(order_large)+'_table.fits')
moc=moc_routines.get_fitting_region(order_small,tile_small)
for p in priors:
p.moc=moc
p.cut_down_prior()
p.prior_bkg(0.0,1)
p.get_pointing_matrix()
moc.write(output_folder+'Tile_'+str(tile_small)+'_'+str(order_small)+'_moc_.fits')
print('fitting '+ str(priors[0].nsrc)+' sources \n')
print('there are '+ str(priors[0].snpix)+' pixels')
%%time
from xidplus.stan_fit import PACS
#priors[0].upper_lim_map()
#priors[0].prior_flux_upper=(priors[0].prior_flux_upper-10.0+0.02)/np.max(priors[0].prf)
fit=PACS.all_bands(priors[0],priors[1],iter=1000)
# Took 13205.7 seconds (3.6 hours)
outfile=output_folder+'Tile_'+str(tile_small)+'_'+str(order_small)
posterior=xidplus.posterior_stan(fit,priors)
xidplus.save(priors,posterior,outfile)
post_rep_map = postmaps.replicated_maps(priors, posterior, nrep=2000)
band = ['PACS_100', 'PACS_160']
for i, p in enumerate(priors):
Bayesian_Pval = postmaps.make_Bayesian_pval_maps(priors[i], post_rep_map[i])
wcs_temp = wcs.WCS(priors[i].imhdu)
ra, dec = wcs_temp.wcs_pix2world(priors[i].sx_pix, priors[i].sy_pix, 0)
kept_pixels = np.array(moc_routines.sources_in_tile([tile_small], order_small, ra, dec))
Bayesian_Pval[np.invert(kept_pixels)] = np.nan
Bayes_map = postmaps.make_fits_image(priors[i], Bayesian_Pval)
Bayes_map.writeto(outfile + '_' + band[i] + '_Bayes_Pval.fits', overwrite=True)
cat = catalogue.create_PACS_cat(posterior, priors[0], priors[1])
kept_sources = moc_routines.sources_in_tile([tile_small], order_small, priors[0].sra, priors[0].sdec)
kept_sources = np.array(kept_sources)
cat[1].data = cat[1].data[kept_sources]
cat.writeto(outfile + '_PACS_cat.fits', overwrite=True)
Explanation: Work out what small tiles are in the test large tile file for PACS
End of explanation
%%time
from xidplus.numpyro_fit import PACS
fit_numpyro=PACS.all_bands(priors)
outfile=output_folder+'Tile_'+str(tile_small)+'_'+str(order_small)+'_numpyro'
posterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,priors)
xidplus.save(priors,posterior_numpyro,outfile)
post_rep_map = postmaps.replicated_maps(priors, posterior_numpyro, nrep=2000)
band = ['PACS_100', 'PACS_160']
for i, p in enumerate(priors):
Bayesian_Pval = postmaps.make_Bayesian_pval_maps(priors[i], post_rep_map[i])
wcs_temp = wcs.WCS(priors[i].imhdu)
ra, dec = wcs_temp.wcs_pix2world(priors[i].sx_pix, priors[i].sy_pix, 0)
kept_pixels = np.array(moc_routines.sources_in_tile([tile_small], order_small, ra, dec))
Bayesian_Pval[np.invert(kept_pixels)] = np.nan
Bayes_map = postmaps.make_fits_image(priors[i], Bayesian_Pval)
Bayes_map.writeto(outfile + '_' + band[i] + '_Bayes_Pval_numpyro.fits', overwrite=True)
cat = catalogue.create_PACS_cat(posterior_numpyro, priors[0], priors[1])
kept_sources = moc_routines.sources_in_tile([tile_small], order_small, priors[0].sra, priors[0].sdec)
kept_sources = np.array(kept_sources)
cat[1].data = cat[1].data[kept_sources]
cat.writeto(outfile + '_PACS_cat_numpyro.fits', overwrite=True)
moc.area_sq_deg
100.0*(20.0*np.pi*(1.0/3600.0)**2)/moc.area_sq_deg
Explanation: You can fit with the numpyro backend.
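The cells above also build Bayesian (posterior-predictive) P-value maps from replicated maps; the underlying idea can be sketched with stand-in arrays (pure numpy, illustrative only, not the xidplus implementation):

```python
import numpy as np

# Per pixel, the Bayesian P-value is the fraction of replicated maps
# that exceed the observed value.
rng = np.random.RandomState(1)
observed = rng.normal(size=50)            # stand-in observed pixels
replicated = rng.normal(size=(2000, 50))  # stand-in posterior replicates

pval = (replicated > observed).mean(axis=0)
assert pval.shape == (50,)
assert ((pval >= 0) & (pval <= 1)).all()
```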
End of explanation |
9,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook demonstrates how to use the Python MagicDataFrame object. A MagicDataFrame contains the data from one MagIC-format table and provides functionality for accessing and editing that data.
Getting started
Step3: Creating a MagicDataFrame
Step4: Indexing and selecting data
Step5: Changing values
Step6: Starting from scratch -- making a blank table
Step7: Interactions between two MagicDataFrames
Step8: Gotchas
Can't do self.df = self.df.append(blah). Must instead do self.df.loc(blah.name) = blah
Beware chained indexing | Python Code:
from pmagpy import new_builder as nb
from pmagpy import ipmag
import os
import json
import numpy as np
import sys
import pandas as pd
from pandas import DataFrame
from pmagpy import pmag
working_dir = os.path.join("..", "3_0", "Osler")
Explanation: This notebook demonstrates how to use the Python MagicDataFrame object. A MagicDataFrame contains the data from one MagIC-format table and provides functionality for accessing and editing that data.
Getting started
End of explanation
reload(nb)
#class MagicDataFrame(object):
#
# Each MagicDataFrame corresponds to one MagIC table.
# The MagicDataFrame object consists of a pandas DataFrame,
# and assorted methods for manipulating that DataFrame.
#
# def __init__(self, magic_file=None, columns=None, dtype=None):
#
# Provide either a magic_file name or a dtype.
# List of columns is optional,
# and will only be used if magic_file == None
#
fname = os.path.join("..", '3_0', 'Osler', 'sites.txt')
# the MagicDataFrame object:
site_container = nb.MagicDataFrame(magic_file=fname)
# the actual pandas DataFrame:
site_df = site_container.df
# show the first 5 site records
site_df[:5]
# FAILS
#print site_df.fillna.__doc__
#site_df.fillna(value=None)
#FAILS
#print site_df.replace.__doc__
#site_df.replace(np.nan, None)
#FAILS
#site_df[site_df.astype(str) == ""] = None
#site_df[site_df.where(site_df.astype(str) == "").notnull()] = None
# WORKS!
#site_df.where(site_df.notnull(), None)
site_df.head()
# make an empty MagicDataFrame with 'Age' and 'Metadata' headers
reload(nb)
fname = os.path.join("..", '3_0', 'Osler', 'sites.txt')
# the MagicDataFrame object:
site_container = nb.MagicDataFrame(dtype='sites', groups=['Age', 'Metadata'])
# the actual pandas DataFrame:
site_df = site_container.df
# show the (empty) dataframe
site_df
Explanation: Creating a MagicDataFrame
End of explanation
fname = os.path.join('..', '3_0', 'Osler', 'sites.txt')
# the MagicDataFrame object:
site_container = nb.MagicDataFrame(fname)
# the actual pandas DataFrame:
site_df = site_container.df
# all sites with site_name (index) of '1'
# will return a smaller DataFrame (or a Series if there is only 1 row with that index)
site_container.df.ix['1']
# index by position (using an integer), will always return a single record as Series
# in this case, get the second record
site_container.df.iloc[1]
# return all sites with the description column filled in
cond = site_container.df['description'].notnull()
site_container.df[cond].head()
# get list of all sites with the same location_name
name = site_df.iloc[0].location
site_df[site_df['location'] == name][['location']]
# grab out declinations & inclinations
# get di block, providing the index (slicing the dataframe will be done in the function)
print site_container.get_di_block(do_index=True, item_names=['1', '2'], tilt_corr='100')
# get di block, providing a slice of the DataFrame
print site_container.get_di_block(site_container.df.loc[['1', '2']])
# Get names of all sites with a particular method code
# (returns a pandas Series with the site name and method code)
site_container.get_records_for_code('DE-K', incl=True)['method_codes'].head()
# Get names of all sites WITHOUT a particular method code
site_container.get_records_for_code('DE-K', incl=False)['method_codes'].head()
Explanation: Indexing and selecting data
End of explanation
# update all sites named '1' to have a 'bed_dip' of 22 (.loc works in place)
site_df.loc['1', 'bed_dip'] = '22'
site_df.loc['1']
# update any site's value for 'conglomerate_test' to 25 if that value was previously null
site_container.df['conglomerate_test'] = np.where(site_container.df['conglomerate_test'].isnull(), 25, \
site_container.df['conglomerate_test'])
site_container.df[:5]
# new_builder function to update a row (by row number)
ind = 1
row_data = {"bed_dip": "new_value", "new_col": "new_value"}
site_container.update_row(ind, row_data)
site_df.head()[["bed_dip", "new_col", "site"]]
site_df.head()[['site', 'new_col', 'citations']]
# new builder function to update a record
# finds self.df row based on a condition
# then updates that row with new_data
# then deletes any other rows that also meet that condition
site_name = "1"
col_val = "new_value"
# data to add:
new_data = {"citations": "new citation"}
# condition to find row
cond1 = site_df.index.str.contains(site_name) == True
cond2 = site_df['new_col'] == col_val
condition = (cond1 & cond2)
# update record
site_container.update_record(site_name, new_data, condition)
site_df.head()[["citations", "new_col"]]
# initialize a new site with a name but no values, add it to site table
site_container.add_blank_row('blank_site')
site_container.df = site_container.df
site_container.df.tail()
# copy a site from the site DataFrame,
#change a few values,
#then add the new site to the site DataFrame
new_site = site_container.df.ix[2]
new_site['bed_dip'] = "other"
new_site.name = 'new_site'
site_container.df = site_container.df.append(new_site)
site_container.df.tail()
# remove a row
site_container.delete_row(3)
# this deletes the 4th row
site_df.head()
# get rid of all rows with index "1" or "2"
site_df.drop(["1", "2"])
Explanation: Changing values
End of explanation
reload(nb)
# create an empty MagicDataFrame with column names
cols = ['analyst_names', 'aniso_ftest', 'aniso_ftest12', 'aniso_ftest23', 'aniso_s', 'aniso_s_mean', 'aniso_s_n_measurements', 'aniso_s_sigma', 'aniso_s_unit', 'aniso_tilt_correction', 'aniso_type', 'aniso_v1', 'aniso_v2', 'aniso_v3', 'citations', 'description', 'dir_alpha95', 'dir_comp_name', 'dir_dec', 'dir_inc', 'dir_mad_free', 'dir_n_measurements', 'dir_tilt_correction', 'experiment_names', 'geologic_classes', 'geologic_types', 'hyst_bc', 'hyst_bcr', 'hyst_mr_moment', 'hyst_ms_moment', 'int_abs', 'int_b', 'int_b_beta', 'int_b_sigma', 'int_corr', 'int_dang', 'int_drats', 'int_f', 'int_fvds', 'int_gamma', 'int_mad_free', 'int_md', 'int_n_measurements', 'int_n_ptrm', 'int_q', 'int_rsc', 'int_treat_dc_field', 'lithologies', 'meas_step_max', 'meas_step_min', 'meas_step_unit', 'method_codes', 'sample_name', 'software_packages', 'specimen_name']
dtype = 'specimens'
data_container = nb.MagicDataFrame(dtype=dtype, columns=None)
df = data_container.df
# create fake specimen data
fake_data = {col: 1 for col in cols}
# include a new column name in the data
fake_data['new_one'] = '999'
# add one row of specimen data (any addition column headers in will be added automatically)
data_container.add_row('name', fake_data)
# add another row
fake_data['other'] = 'cheese'
fake_data.pop('aniso_ftest')
data_container.add_row('name2', fake_data)
# now the dataframe has two new columns, 'new_one' and 'other'
df
Explanation: Starting from scratch -- making a blank table
End of explanation
# get location DataFrame
fname = os.path.join('..', '3_0', 'Osler', 'locations.txt')
loc_container = nb.MagicDataFrame(fname)
loc_df = loc_container.df
loc_df.head()
# get all sites belonging to a particular location RECORD (i.e., what used to be a result)
# (diferent from getting all sites with the same location name)
name = loc_df.ix[1].name
loc_record = loc_df.ix[name].ix[1]
site_names = loc_record['site_names']
print "All sites belonging to {}:".format(name), loc_record['site_names']
site_names = site_names.split(":")
# fancy indexing
site_container.df.ix[site_names].head()
Explanation: Interactions between two MagicDataFrames
End of explanation
# first site
print site_container.df.ix[0][:5]
print '-'
# find site by index value
print site_container.df.ix['new_site'][:5]
print '-'
# return all sites' values for a col
site_container.df['bed_dip'][:5]
Explanation: Gotchas
Can't do self.df = self.df.append(blah). Must instead do self.df.loc(blah.name) = blah
Beware chained indexing: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
To make a real, independent copy of a DataFrame, use DataFrame.copy()
To update inplace: df.loc[:, 'col_name'] = 'blah'
http://stackoverflow.com/questions/37175007/pandas-dataframe-logic-operations-with-nan
Pandas indexing
End of explanation |
9,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Frame Plots
documentation
Step1: The plot method on Series and DataFrame is just a simple wrapper around plt.plot()
If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as shown in the plot window.
Step2: On DataFrame, plot() is a convenience to plot all of the columns, and include a legend within the plot.
Step3: You can plot one column versus another using the x and y keywords in plot()
Step4: Plots other than line plots
Plotting methods allow for a handful of plot styles other than the default Line plot. These methods can be provided as the kind keyword argument to plot(). These include
Step5: stack bar chart
Step6: horizontal bar chart
Step7: box plot
Step8: area plot
Step9: Plotting with Missing Data
Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type.
| Plot Type      | NaN Handling            |
|----------------|-------------------------|
| Line           | Leave gaps at NaNs      |
| Line (stacked) | Fill 0’s                |
| Bar            | Fill 0’s                |
| Scatter        | Drop NaNs               |
| Histogram      | Drop NaNs (column-wise) |
| Box            | Drop NaNs (column-wise) |
| Area           | Fill 0’s                |
| KDE            | Drop NaNs (column-wise) |
| Hexbin         | Drop NaNs               |
| Pie            | Fill 0’s                |
If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled, consider using fillna() or dropna() before plotting.
density plot
Step10: lag plot
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. Non-random structure implies that the underlying data are not random. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Data Frame Plots
documentation: http://pandas.pydata.org/pandas-docs/stable/visualization.html
End of explanation
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot()
plt.show()
Explanation: The plot method on Series and DataFrame is just a simple wrapper around plt.plot()
If the index consists of dates, it calls gcf().autofmt_xdate() to try to format the x-axis nicely as shown in the plot window.
End of explanation
df = pd.DataFrame(np.random.randn(1000, 4), index=pd.date_range('1/1/2016', periods=1000), columns=list('ABCD'))
df = df.cumsum()
plt.figure()
df.plot()
plt.show()
Explanation: On DataFrame, plot() is a convenience to plot all of the columns, and include a legend within the plot.
End of explanation
df3 = pd.DataFrame(np.random.randn(1000, 2), columns=['B', 'C']).cumsum()
df3['A'] = pd.Series(list(range(len(df3))))
df3.plot(x='A', y='B')
plt.show()
df3.tail()
Explanation: You can plot one column versus another using the x and y keywords in plot():
End of explanation
plt.figure()
df.ix[5].plot(kind='bar')
plt.axhline(0, color='k')
plt.show()
df.ix[5]
Explanation: Plots other than line plots
Plotting methods allow for a handful of plot styles other than the default Line plot. These methods can be provided as the kind keyword argument to plot(). These include:
‘bar’ or ‘barh’ for bar plots
‘hist’ for histogram
‘box’ for boxplot
‘kde’ or 'density' for density plots
‘area’ for area plots
‘scatter’ for scatter plots
‘hexbin’ for hexagonal bin plots
‘pie’ for pie plots
For example, a bar plot can be created the following way:
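Before the bar example, a compact sketch cycling one frame through several kind values (assumes matplotlib is available; the non-interactive Agg backend is used so nothing is displayed):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # assumption: no display attached; render off-screen
import matplotlib.pyplot as plt
import pandas as pd

df2 = pd.DataFrame(np.random.rand(10, 4), columns=["a", "b", "c", "d"])

# The same data through several kinds; each call returns a matplotlib Axes.
for kind in ["line", "bar", "barh", "area"]:
    ax = df2.plot(kind=kind)
    assert ax is not None
    plt.close("all")
```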
End of explanation
df2 = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df2.plot.bar(stacked=True)
plt.show()
Explanation: stack bar chart
End of explanation
df2.plot.barh(stacked=True)
plt.show()
Explanation: horizontal bar chart
End of explanation
df = pd.DataFrame(np.random.rand(10, 5), columns=['A', 'B', 'C', 'D', 'E'])
df.plot.box()
plt.show()
Explanation: box plot
End of explanation
df = pd.DataFrame(np.random.rand(10, 4), columns=['a', 'b', 'c', 'd'])
df.plot.area()
plt.show()
Explanation: area plot
End of explanation
ser = pd.Series(np.random.randn(1000))
ser.plot.kde()
plt.show()
Explanation: Plotting with Missing Data
Pandas tries to be pragmatic about plotting DataFrames or Series that contain missing data. Missing values are dropped, left out, or filled depending on the plot type.
| Plot Type      | NaN Handling            |
|----------------|-------------------------|
| Line           | Leave gaps at NaNs      |
| Line (stacked) | Fill 0’s                |
| Bar            | Fill 0’s                |
| Scatter        | Drop NaNs               |
| Histogram      | Drop NaNs (column-wise) |
| Box            | Drop NaNs (column-wise) |
| Area           | Fill 0’s                |
| KDE            | Drop NaNs (column-wise) |
| Hexbin         | Drop NaNs               |
| Pie            | Fill 0’s                |
If any of these defaults are not what you want, or if you want to be explicit about how missing values are handled, consider using fillna() or dropna() before plotting.
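A short sketch of making the NaN handling explicit before plotting (toy frame, illustrative values):

```python
import numpy as np
import pandas as pd

# How NaNs look before plotting, and the two explicit choices.
df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})

# fillna makes the "Fill 0's" behaviour an explicit decision.
filled = df.fillna(0)
assert filled.isnull().sum().sum() == 0

# dropna removes incomplete rows instead (only the first row is complete here).
dropped = df.dropna()
assert len(dropped) == 1
```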
density plot
End of explanation
from pandas.tools.plotting import lag_plot
plt.figure()
data = pd.Series(0.1 * np.random.rand(1000) + 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000)))
lag_plot(data)
plt.show()
Explanation: lag plot
Lag plots are used to check if a data set or time series is random. Random data should not exhibit any structure in the lag plot. Non-random structure implies that the underlying data are not random.
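The structure a lag plot reveals can also be checked numerically: pair each value with its successor and measure the correlation (a sketch using the same sine-plus-noise series as above):

```python
import numpy as np
import pandas as pd

# For this non-random series, successive values are strongly correlated,
# which is exactly the diagonal structure the lag plot shows.
data = pd.Series(0.1 * np.random.rand(1000) +
                 0.9 * np.sin(np.linspace(-99 * np.pi, 99 * np.pi, num=1000)))
lagged = pd.DataFrame({"y(t)": data[:-1].values, "y(t+1)": data[1:].values})
corr = lagged["y(t)"].corr(lagged["y(t+1)"])
assert corr > 0.5
```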
End of explanation |
9,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Soring, searching, and counting
Step1: Sorting
Q1. Sort x along the second axis.
Step2: Q2. Sort pairs of surnames and first names and return their indices. (first by surname, then by name).
Step3: Q3. Get the indices that would sort x along the second axis.
Step4: Q4. Create an array such that its fifth element would be the same as the element of sorted x, and it divide other elements by their value.
Step5: Q5. Create the indices of an array such that its third element would be the same as the element of sorted x, and it divide other elements by their value.
Step6: Searching
Q6. Get the maximum and minimum values and their indices of x along the second axis.
Step7: Q7. Get the maximum and minimum values and their indices of x along the second axis, ignoring NaNs.
Step8: Q8. Get the values and indices of the elements that are bigger than 2 in x.
Step9: Q9. Get the indices of the elements that are bigger than 2 in the flattend x.
Step10: Q10. Check the elements of x and return 0 if it is less than 0, otherwise the element itself.
Step11: Q11. Get the indices where elements of y should be inserted to x to maintain order.
Step12: Counting
Q12. Get the number of nonzero elements in x. | Python Code:
import numpy as np
np.__version__
author = 'kyubyong. longinglove@nate.com'
Explanation: Sorting, searching, and counting
End of explanation
x = np.array([[1,4],[3,1]])
out = np.sort(x, axis=1)
x.sort(axis=1)
assert np.array_equal(out, x)
print out
Explanation: Sorting
Q1. Sort x along the second axis.
End of explanation
surnames = ('Hertz', 'Galilei', 'Hertz')
first_names = ('Heinrich', 'Galileo', 'Gustav')
print np.lexsort((first_names, surnames))
Explanation: Q2. Sort pairs of surnames and first names and return their indices. (first by surname, then by name).
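A sketch confirming the key ordering rule: np.lexsort sorts by the last key in the tuple first, which is why the surnames go last:

```python
import numpy as np

surnames = ("Hertz", "Galilei", "Hertz")
first_names = ("Heinrich", "Galileo", "Gustav")
order = np.lexsort((first_names, surnames))  # primary key: surnames

pairs = [(surnames[i], first_names[i]) for i in order]
# Galilei sorts before Hertz; the Hertz tie is broken by first name.
assert pairs == [("Galilei", "Galileo"), ("Hertz", "Gustav"), ("Hertz", "Heinrich")]
```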
End of explanation
x = np.array([[1,4],[3,1]])
out = np.argsort(x, axis=1)
print out
Explanation: Q3. Get the indices that would sort x along the second axis.
End of explanation
x = np.random.permutation(10)
print "x =", x
print "\nCheck that the element at index 5 is 5, the first five elements are all smaller than 5, and those from index 6 through the end are bigger than 5\n",
out = np.partition(x, 5)
x.partition(5) # in-place equivalent
assert np.array_equal(x, out)
print out
Explanation: Q4. Create an array such that the element at index 5 is the same as that of sorted x, with all smaller elements placed before it and all bigger elements after it.
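A quick check of the partition property (a sketch; distinct values are assumed so the strict inequalities hold):

```python
import numpy as np

# np.partition(x, k) puts sorted(x)[k] at index k; everything smaller lands
# before it and everything bigger after it (order within each side is arbitrary).
x = np.random.permutation(10)
k = 5
out = np.partition(x, k)

assert out[k] == np.sort(x)[k]
assert (out[:k] < out[k]).all()
assert (out[k + 1:] > out[k]).all()
```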
End of explanation
x = np.random.permutation(10)
print "x =", x
partitioned = np.partition(x, 3)
indices = np.argpartition(x, 3)
print "partitioned =", partitioned
print "indices =", indices
assert np.array_equiv(x[indices], partitioned)
Explanation: Q5. Get the indices that would place the element at index 3 the same as in sorted x, with all smaller elements before it and all bigger elements after it.
End of explanation
x = np.random.permutation(10).reshape(2, 5)
print "x =", x
print "maximum values =", np.max(x, 1)
print "max indices =", np.argmax(x, 1)
print "minimum values =", np.min(x, 1)
print "min indices =", np.argmin(x, 1)
Explanation: Searching
Q6. Get the maximum and minimum values and their indices of x along the second axis.
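A small worked example of the axis=1 behaviour (values chosen so the expected results are obvious):

```python
import numpy as np

x = np.arange(10).reshape(2, 5)  # [[0..4], [5..9]]

# axis=1 walks along the second axis, producing one result per row.
assert np.max(x, axis=1).tolist() == [4, 9]
assert np.argmax(x, axis=1).tolist() == [4, 4]
assert np.min(x, axis=1).tolist() == [0, 5]
assert np.argmin(x, axis=1).tolist() == [0, 0]
```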
End of explanation
x = np.array([[np.nan, 4], [3, 2]])
print "maximum values ignoring NaNs =", np.nanmax(x, 1)
print "max indices =", np.nanargmax(x, 1)
print "minimum values ignoring NaNs =", np.nanmin(x, 1)
print "min indices =", np.nanargmin(x, 1)
Explanation: Q7. Get the maximum and minimum values and their indices of x along the second axis, ignoring NaNs.
End of explanation
x = np.array([[1, 2, 3], [1, 3, 5]])
print "Values bigger than 2 =", x[x>2]
print "Their indices are ", np.nonzero(x > 2)
assert np.array_equiv(x[x>2], x[np.nonzero(x > 2)])
assert np.array_equiv(x[x>2], np.extract(x > 2, x))
Explanation: Q8. Get the values and indices of the elements that are bigger than 2 in x.
End of explanation
x = np.array([[1, 2, 3], [1, 3, 5]])
print np.flatnonzero(x > 2)
assert np.array_equiv(np.flatnonzero(x > 2), (x > 2).ravel().nonzero())
Explanation: Q9. Get the indices of the elements that are bigger than 2 in the flattened x.
End of explanation
x = np.arange(-5, 4).reshape(3, 3)
print np.where(x <0, 0, x)
Explanation: Q10. Check the elements of x and return 0 if it is less than 0, otherwise the element itself.
End of explanation
x = [1, 3, 5, 7, 9]
y = [0, 4, 2, 6]
np.searchsorted(x, y)
Explanation: Q11. Get the indices where elements of y should be inserted to x to maintain order.
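A sketch of searchsorted, including the side keyword that controls placement among equal values (illustrative inputs):

```python
import numpy as np

x = [1, 3, 5, 7, 9]
y = [0, 4, 2, 6]
idx = np.searchsorted(x, y)
# 0 goes before 1; 4 between 3 and 5; 2 between 1 and 3; 6 between 5 and 7.
assert idx.tolist() == [0, 2, 1, 3]

# side= decides where ties are inserted.
assert np.searchsorted([1, 1, 2], 1, side="left") == 0
assert np.searchsorted([1, 1, 2], 1, side="right") == 2
```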
End of explanation
x = np.array([[0,1,7,0,0],[3,0,0,2,19]])
print np.count_nonzero(x)
assert np.count_nonzero(x) == len(x[x!=0])
Explanation: Counting
Q12. Get the number of nonzero elements in x.
End of explanation |
9,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.soft - Calcul numérique et Cython
Python est très lent. Il est possible d'écrire certains parties en C mais le dialogue entre les deux langages est fastidieux. Cython propose un mélange de C et Python qui accélère la conception.
Step1: Calcul numérique
On peut mesurer le temps que met en programme comme ceci (qui ne marche qu'avec IPython...timeit)
Step2: La seconde fonction est plus rapide. Seconde vérification
Step3: On remarque également que l'appel à une fonction pour ensuite effectuer le calcul a coûté environ 100 $\mu s$ pour 1000 appels. L'instruction timeit effectue 10 boucles qui calcule 1000 fois une racine carrée.
Cython
Le module Cython est une façon d'accélérer les calculs en insérant dans un programme python du code écrit dans une syntaxe proche de celle du C. Il existe différentes approches pour accélérer un programme python
Step4: Puis on décrit la fonction avec la syntaxe Cython
Step5: On termine en estimant son temps d'exécution. Il faut noter aussi que ce code ne peut pas être déplacé dans la section précédente qui doit être entièrement écrite en cython.
Step6: Exercice
Step9: Auparavant, il est probablement nécessaire de suivre ces indications
Step10: Puis on compile le fichier .pyx créé en exécutant le fichier setup.py avec des paramètres précis
Step11: Puis on importe le module
Step12: Si votre dernière modification n'apparaît pas, il faut redémarrer le kernel. Lorsque Python importe le module example_cython la première fois, il charge le fichier example_cython.pyd. Lors d'une modification du module, ce fichier est bloqué en lecture et ne peut être modifié. Or cela est nécessaire car le module doit être recompilé. Pour cette raison, il est plus pratique d'implémenter sa fonction dans un éditeur de texte qui n'utilise pas IPython.
On teste le temps mis par la fonction primes
Step13: Then we compare with the version written in Python | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.soft - Numerical computation and Cython
Python is very slow. It is possible to write some parts in C, but the dialogue between the two languages is tedious. Cython offers a blend of C and Python that speeds up development.
End of explanation
def racine_carree1(x) :
return x**0.5
%timeit -r 10 [ racine_carree1(x) for x in range(0,1000) ]
import math
def racine_carree2(x) :
return math.sqrt(x)
%timeit -r 10 [ racine_carree2(x) for x in range(0,1000) ]
Explanation: Numerical computation
You can measure how long a program takes like this (it only works with IPython...timeit):
End of explanation
%timeit -r 10 [ x**0.5 for x in range(0,1000) ]
%timeit -r 10 [ math.sqrt(x) for x in range(0,1000) ]
Explanation: The second function is faster. A second check:
End of explanation
%load_ext cython
Explanation: Note also that calling a function to then perform the computation cost about 100 $\mu s$ per 1000 calls. The timeit instruction runs 10 loops, each computing a square root 1000 times.
Cython
The Cython module is a way to speed up computations by inserting, into a Python program, code written in a syntax close to that of C. There are several approaches to speeding up a Python program:
Cython: [C](http://fr.wikipedia.org/wiki/C_(langage)) code is inserted into the Python program; a factor of 10 can be gained on functions that make intensive use of loops.
other alternatives:
cffi, you need to know C (it does not handle C++)
pythran
numba
...
PyPy: the Python program is compiled statically instead of being interpreted as it runs; this solution is only practicable if you have already programmed in a compiled language, or more precisely a strongly typed language. Python, because it allows a variable to change type, can create type-inference problems.
module implemented in C: this is the most frequent case and one of the reasons Python was adopted so quickly. Many libraries thus became available in Python. Nevertheless, Python's C API requires a significant investment to avoid errors. It is preferable to go through tools such as
boost python: easy to pick up; the module will be available in compiled form,
SWIG: a bit more difficult; the module will either be compiled by the library or packaged so that it is compiled at installation time.
Among the three solutions, the first is the most accessible and under constant development (Cython changes).
The example below cannot run directly in a notebook, because Cython compiles a module (a *.pyd file) before using it. If the compilation fails with a message containing unable to find vcvarsall.bat, read the article Build a Python 64 bit extension on Windows 8 after noting the Visual Studio version you use. It is preferable to have programmed in C/C++, although it is not essential.
Cython in a notebook
The IPython module offers a simplified way to use Cython, illustrated here: Some Linear Algebra with Cython. Further down you will find how to do it without IPython, which we will not use in this session. We start with the preliminaries, to be run only once:
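As a preliminary illustration of why loops are the usual Cython target, a pure-Python loop can be timed with the standard timeit module alone (no compilation involved; the function below is a toy):

```python
import timeit

def py_sum(n):
    # The kind of tight loop that Cython's typed variables speed up.
    total = 0
    for i in range(n):
        total += i
    return total

t = timeit.timeit(lambda: py_sum(10000), number=50)
assert py_sum(10) == 45
assert t > 0.0
```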
End of explanation
%%cython --annotate
cimport cython
def cprimes(int kmax):
cdef int n, k, i
cdef int p[1000]
result = []
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
Explanation: Then we describe the function with the Cython syntax:
End of explanation
%timeit [ cprimes (567) for i in range(10) ]
Explanation: We finish by estimating its execution time. Note also that this code cannot be moved into the previous section, which must be written entirely in Cython.
End of explanation
def distance_edition(mot1, mot2):
dist = { (-1,-1): 0 }
for i,c in enumerate(mot1) :
dist[i,-1] = dist[i-1,-1] + 1
dist[-1,i] = dist[-1,i-1] + 1
for j,d in enumerate(mot2) :
opt = [ ]
if (i-1,j) in dist :
x = dist[i-1,j] + 1
opt.append(x)
if (i,j-1) in dist :
x = dist[i,j-1] + 1
opt.append(x)
if (i-1,j-1) in dist :
x = dist[i-1,j-1] + (1 if c != d else 0)
opt.append(x)
dist[i,j] = min(opt)
return dist[len(mot1)-1,len(mot2)-1]
%timeit distance_edition("idstzance","distances")
Explanation: Exercise: Python/C applied to an edit distance
The Levenshtein distance, also called edit distance, computes a distance between two sequences of elements. In particular it applies to two words, as illustrated in Distance d'édition et programmation dynamique. The goal is to modify the function below to use Cython, then compare execution times.
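For reference while porting to Cython, here is a minimal self-contained Levenshtein implementation (a sketch using two rolling rows, not the notebook's dictionary version):

```python
def levenshtein(a, b):
    # prev[j] = edit distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

assert levenshtein("distance", "distances") == 1
assert levenshtein("idstzance", "distances") == 4
```

The rolling-row layout maps naturally onto a fixed C array (e.g. cdef int rows) when translating to Cython.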
End of explanation
code =
def primes(int kmax):
cdef int n, k, i
cdef int p[1000]
result = []
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
name = "example_cython"
with open(name + ".pyx","w") as f : f.write(code)
setup_code =
from distutils.core import setup
from Cython.Build import cythonize
setup(
ext_modules = cythonize("__NAME__.pyx",
compiler_directives={'language_level' : "3"})
)
.replace("__NAME__",name)
with open("setup.py","w") as f:
f.write(setup_code)
Explanation: Beforehand, it is probably necessary to follow these instructions:
If you want to replace the dictionary with a two-dimensional array: since the C language does not allow creating variable-length arrays, you have to allocate a pointer (that is C, not C++). However, I advise against this solution:
Cython does not accept double pointers: How to declare 2D list in Cython; single pointers are fine: Python list to Cython.
Cython is not necessarily compiled with the same version as your Visual Studio C++ compiler. As a result, you may get the error warning C4273: 'round' : inconsistent dll linkage. After reading this article, BUILDING PYTHON 3.3.4 WITH VISUAL STUDIO 2013, you will understand that it is not so simple to solve.
I therefore suggest replacing dist with an array cdef int dist [500][500]. The signature of the function is the following: def cdistance_edition(str mot1, str mot2). Finally, Cython has been optimized for joint use with numpy; whenever you have the choice, it is better to use numpy containers rather than allocating large arrays on the function stack or allocating your own pointers.
Cython without notebooks
This part is only useful if you intend to use Cython without IPython. The following lines again implement with Cython the primes function, which returns the prime numbers between 1 and $N$. We now follow the method recommended in the Cython tutorial. Two files must first be created:
example_cython.pyx, which contains the code of the function
setup.py, which compiles the module with the Visual Studio C++ compiler
End of explanation
import os
import sys
cmd = "{0} setup.py build_ext --inplace".format(sys.executable)
from pyquickhelper.loghelper import run_cmd
out,err = run_cmd(cmd)
if err != '' and err is not None:
raise Exception(err)
[ _ for _ in os.listdir(".") if "cython" in _ or "setup.py" in _ ]
Explanation: Then we compile the created .pyx file by running the setup.py file with specific parameters:
End of explanation
import pyximport
pyximport.install()
import example_cython
Explanation: Then we import the module:
End of explanation
%timeit [ example_cython.primes (567) for i in range(10) ]
Explanation: If your latest modification does not appear, you must restart the kernel. When Python imports the example_cython module for the first time, it loads the file example_cython.pyd. While the module is in use, this file is locked for reading and cannot be modified. Yet that is necessary, because the module must be recompiled. For this reason, it is more practical to implement your function in a text editor that does not use IPython.
We test the time taken by the primes function:
End of explanation
def py_primes(kmax):
p = [ 0 for _ in range(1000) ]
result = []
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
%timeit [ py_primes (567) for i in range(10) ]
Explanation: Then we compare with the version written in Python:
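A pure-Python cross-check of the primes logic (a sketch; by construction it is the slow version the Cython build is compared against):

```python
def primes_list(kmax):
    # Same idea as py_primes above: trial division by the primes found so far.
    p, n = [], 2
    while len(p) < kmax:
        if all(n % q for q in p):
            p.append(n)
        n += 1
    return p

assert primes_list(5) == [2, 3, 5, 7, 11]
assert len(primes_list(567)) == 567
```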
End of explanation |
9,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
9,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding river levels
Developed by R.A. Collenteur & D. Brakenhoff
In this example it is shown how to create a Pastas model that not only includes precipitation and evaporation, but also observed river levels. We will consider observed heads that are strongly influenced by river level, based on a visual interpretation of the raw data.
Step1: 1. import and plot data
Before a model is created, it is generally a good idea to try and visually interpret the raw data and think about possible relationships between the time series and hydrological variables. Below the different time series are plotted.
The top plot shows the observed heads, with different observation frequencies and some gaps in the data. Below that the observed river levels, precipitation and evaporation are shown. Especially the river levels show a clear relationship with the observed heads. Note, however, how the range in the river levels is about twice the range in the heads. Based on these observations, we would expect the final step response of the head to the river level to be around 0.5 [m/m].
Step2: 2. Create a timeseries model
First we create a model with precipitation and evaporation as explanatory time series. The results show that precipitation and evaporation can explain part of the fluctuations in the observed heads, but not all of them.
Step3: 3. Adding river water levels
Based on the analysis of the raw data, we expect that the river levels can help to explain the fluctuations in the observed heads. Here, we add a stress model (ps.StressModel) to add the rivers level as an explanatory time series to the model. The model fit is greatly improved, showing that the rivers help in explaining the observed fluctuations in the observed heads. It can also be observed how the response of the head to the river levels is a lot faster than the response to precipitation and evaporation. | Python Code:
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
ps.set_log_level("INFO")
Explanation: Adding river levels
Developed by R.A. Collenteur & D. Brakenhoff
In this example it is shown how to create a Pastas model that not only includes precipitation and evaporation, but also observed river levels. We will consider observed heads that are strongly influenced by river level, based on a visual interpretation of the raw data.
End of explanation
oseries = pd.read_csv("../data/nb5_head.csv", parse_dates=True,
squeeze=True, index_col=0)
rain = pd.read_csv("../data/nb5_prec.csv", parse_dates=True, squeeze=True,
index_col=0)
evap = pd.read_csv("../data/nb5_evap.csv", parse_dates=True, squeeze=True,
index_col=0)
waterlevel = pd.read_csv("../data/nb5_riv.csv", parse_dates=True,
squeeze=True, index_col=0)
ps.plots.series(oseries, [rain, evap, waterlevel], figsize=(10, 5), hist=False);
Explanation: 1. import and plot data
Before a model is created, it is generally a good idea to try and visually interpret the raw data and think about possible relationships between the time series and hydrological variables. Below the different time series are plotted.
The top plot shows the observed heads, with different observation frequencies and some gaps in the data. Below that the observed river levels, precipitation and evaporation are shown. Especially the river levels show a clear relationship with the observed heads. Note, however, how the range in the river levels is about twice the range in the heads. Based on these observations, we would expect the final step response of the head to the river level to be around 0.5 [m/m].
End of explanation
ml = ps.Model(oseries.resample("D").mean().dropna(), name="River")
sm = ps.RechargeModel(rain, evap, rfunc=ps.Exponential, name="recharge")
ml.add_stressmodel(sm)
ml.solve(tmin="2000", tmax="2019-10-29")
ml.plots.results(figsize=(12, 8));
Explanation: 2. Create a timeseries model
First we create a model with precipitation and evaporation as explanatory time series. The results show that precipitation and evaporation can explain part of the fluctuations in the observed heads, but not all of them.
End of explanation
w = ps.StressModel(waterlevel, rfunc=ps.One, name="waterlevel",
settings="waterlevel")
ml.add_stressmodel(w)
ml.solve(tmin="2000", tmax="2019-10-29")
axes = ml.plots.results(figsize=(12, 8));
axes[-1].set_xlim(0,10); # By default, the axes between responses are shared.
Explanation: 3. Adding river water levels
Based on the analysis of the raw data, we expect that the river levels can help to explain the fluctuations in the observed heads. Here, we add a stress model (ps.StressModel) to add the river level as an explanatory time series to the model. The model fit is greatly improved, showing that the river levels help in explaining the observed fluctuations in the observed heads. It can also be observed how the response of the head to the river levels is a lot faster than the response to precipitation and evaporation.
End of explanation |
9,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grand Canonical Monte Carlo of atomic species using tabulated pair potentials
Step1: Download and build faunus
Step2: Generate an artificial, tabulated potential
This is just a damped sine curve to which we will later add a Coulomb potential. The idea is to replace this with a cation-anion "fingerprint" from all-atom MD. That is a PMF, where the long-ranged Coulomb part has been subtracted.
Step3: Generate input and run simulation | Python Code:
from __future__ import division, unicode_literals, print_function
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np, pandas as pd
import os.path, os, sys, json, filecmp, copy
plt.rcParams.update({'font.size': 16, 'figure.figsize': [8.0, 6.0]})
try:
workdir
except NameError:
workdir=%pwd
else:
%cd $workdir
print(workdir)
Explanation: Grand Canonical Monte Carlo of atomic species using tabulated pair potentials
End of explanation
%%bash -s "$workdir"
cd $1
if [ ! -d "faunus/" ]; then
git clone https://github.com/mlund/faunus.git
cd faunus
#git checkout a0f0b46
else
cd faunus
fi
# if different, copy custom grand.cpp into faunus
if ! cmp ../grand.cpp src/examples/grand.cpp >/dev/null 2>&1
then
cp ../grand.cpp src/examples/
fi
pwd
CXX=clang++ CC=clang cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_APPROXMATH=on &>/dev/null
make example_grand -j4
Explanation: Download and build faunus
End of explanation
r = np.arange(0, 200, 0.1)
u = np.sin(r)*np.exp(-r/6)
plt.plot(r, u)
d = np.dstack((r,u))
np.savetxt('cation-anion.dat', np.c_[r,u])
Explanation: Generate an artificial, tabulated potential
This is just a damped sine curve to which we will later add a Coulomb potential. The idea is to replace this with a cation-anion "fingerprint" from all-atom MD. That is a PMF, where the long-ranged Coulomb part has been subtracted.
End of explanation
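As a quick sanity check of the file just written (assuming, as the "fromdisk" entry in the config suggests, that faunus reads a plain two-column r/u text table), the table can be round-tripped with numpy:

```python
import numpy as np

# Recreate the damped-sine table and verify the plain two-column (r, u)
# layout that the "fromdisk" entry is assumed to read.
r = np.arange(0, 200, 0.1)
u = np.sin(r) * np.exp(-r / 6)
np.savetxt('cation-anion.dat', np.c_[r, u])

table = np.loadtxt('cation-anion.dat')
assert table.shape == (r.size, 2)
assert np.allclose(table[:, 0], r) and np.allclose(table[:, 1], u)
```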
%cd $workdir
def mkinput():
js = {
"atomlist" : {
"cat" : { "eps": 0.15, "sigma":4.0, "dp":40, "activity":Cs, "q":1.0 },
"an" : { "eps": 0.20, "sigma":4.0, "dp":10, "activity":Cs, "q":-1.0 }
},
"moleculelist" : {
"salt" : { "atoms":"cat an", "atomic":True, "Ninit":50 }
},
"moves" : {
"atomtranslate" : {
"salt" : { "peratom":True }
},
"atomgc": { "molecule": "salt" }
},
"energy" : {
"nonbonded" : {
"pairpotentialmap" : {
"spline" : { "rmin":4.0, "rmax":100, "utol":0.01 },
"cat an" : {
"fromdisk" : "../cation-anion.dat",
"_coulomb" : { "epsr": 80.0 }
},
"default" : {
"lennardjones" : {}
}
}
}
},
"system" : {
"temperature" : 298.15,
"cuboid" : { "len":50 },
"mcloop": { "macro": 10, "micro": micro }
}
}
with open('gc.json', 'w+') as f:
f.write(json.dumps(js, indent=4))
Cs_range = [0.1]
for Cs in Cs_range:
pfx='Cs'+str(Cs)
if True: #not os.path.isdir(pfx):
%mkdir -p $pfx
%cd $pfx
# equilibration run (no translation)
!rm -fR state
micro=1000
mkinput()
!../faunus/src/examples/grand > eq
# production run
micro=1000
mkinput()
%time !../faunus/src/examples/grand
%cd ..
print('done.')
Explanation: Generate input and run simulation
End of explanation |
9,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OP-WORKFLOW-CAGEscan-short-reads-v2.0
This document is an example of how to process a C1 CAGE library with a Jupyter notebook from raw reads to single molecule count. All the steps are described in the tutorial section of this repository. In the following section we assume that
Step1: Custom functions
Step2: Declare the function that deals with inputs and outputs
Step3: Parameters
If the required softwares are not in the PATH you can manually set their location here
Step4: Path to the reference genome you want to align your reads against
Step5: The name of the output folders for each command
Step6: Create the folders
Step7: The actual command to run. See the tutorial section for more details about each command
Step8: Get the reads. Here we assume that the reads are in the current directory, in a folder named following the MiSeq run id
Step9: Run the commands for all the pairs
Step10: Generate the level1 file
Step11: Generate logs (triplet)
Here we generate four summary files that will be used for QC and place them in the 'output' directory.
mapped.log | Python Code:
import subprocess, os, csv, signal, pysam
Explanation: OP-WORKFLOW-CAGEscan-short-reads-v2.0
This document is an example of how to process a C1 CAGE library with a Jupyter notebook from raw reads to single molecule count. All the steps are described in the tutorial section of this repository. In the following section we assume that:
- you are familiar with python programming and Jupyter (IPython notebook)
- the softwares mentioned in the prerequisite section are all installed and referenced in your PATH
- you've already downloaded the available C1 CAGE library and extracted it in your current directory
- The reference genome is already indexed with BWA
In our hands this notebook worked without trouble on a machine running Debian GNU/Linux 8. We noticed that the behavior of tagdust2 in single-end mode was different on Mac OSX. In short, the order of the reads1 is changed after extraction on Mac OSX which is a problem because syncpairs expect the order of reads1 and reads2 to be the same. One way to overcome this issue is sort reads1 and reads2 separately after the extraction then syncpairs will work properly.
Imports
End of explanation
remove_extension = lambda x: x.split('.')[0]
Explanation: Custom functions
End of explanation
def get_args(read1, read2, ref_genome, output_folders):
'''Set the input and output path for a given pair of reads'''
r1_shortname = remove_extension(os.path.basename(read1))
args = {
'r1_input': read1,
'r2_input': read2,
'ref_genome': ref_genome,
}
output_paths = {folder: os.path.join('output', folder, r1_shortname) for folder in output_folders}
return dict(args, **output_paths)
Explanation: Declare the function that deals with inputs and outputs
End of explanation
tagdust2_path = 'tagdust'
bwa_path = 'bwa'
samtools_path = 'samtools'
paired_bam_to_bed12_path = 'pairedBamToBed12'
umicountFP_path = 'umicountFP'
syncpairs_path = 'syncpairs'
Explanation: Parameters
If the required softwares are not in the PATH you can manually set their location here
End of explanation
ref_genome = './GRCh38.fa'
softwares = {
'bwa': bwa_path,
'tagdust': tagdust2_path,
'syncpairs': syncpairs_path,
'samtools': samtools_path,
'pairedBamToBed12': paired_bam_to_bed12_path,
'umicountFP': umicountFP_path}
Explanation: Path to the reference genome you want to align your reads against
End of explanation
output_folders = [ 'tagdust_r1', 'unzip_r2' # Demultiplexed R1, unziped R2
, 'extracted_r1', 'extracted_r2' # Synced R1 and R2
, 'cleaned_reads', 'cleaned_r1', 'cleaned_r2' # rRNA reads removed
, 'r1_sai', 'r2_sai', 'sampe' # Intermediate files from BWA
, 'genome_mapped', 'properly_paired' # Final output in BAM format
, 'cagescan_pairs', 'cagescan_fragments' # Final output in BED12 format
]
Explanation: The name of the output folders for each command
End of explanation
for folder in output_folders:
os.makedirs(os.path.join('output', folder))
Explanation: Create the folders
End of explanation
cmds = [
'{tagdust} -t8 -o {tagdust_r1} -1 F:NNNNNNNN -2 S:TATAGGG -3 R:N {r1_input}',
'gunzip -c {r2_input} > {unzip_r2}.fq',
'{syncpairs} {tagdust_r1}.fq {unzip_r2}.fq {extracted_r1}.fq {extracted_r2}.fq',
'{tagdust} -arch SimpleArchitecture.txt -ref ercc_and_human_rRNA_and_tagdust.fa -o {cleaned_reads} {extracted_r1}.fq {extracted_r2}.fq',
'cp {cleaned_reads}_READ1.fq {cleaned_r1}.fq',
'cp {cleaned_reads}_READ2.fq {cleaned_r2}.fq',
'{bwa} aln -t8 {ref_genome} {cleaned_r1}.fq > {r1_sai}.sai',
'{bwa} aln -t8 {ref_genome} {cleaned_r2}.fq > {r2_sai}.sai',
'{bwa} sampe -a 2000000 {ref_genome} {r1_sai}.sai {r2_sai}.sai {cleaned_r1}.fq {cleaned_r2}.fq > {sampe}.sam',
'{samtools} view -uSo - {sampe}.sam | {samtools} sort - -o {genome_mapped}.bam',
'{samtools} view -f 0x0002 -F 0x0100 -uo - {genome_mapped}.bam | {samtools} sort -n - -o {properly_paired}.bam',
'{pairedBamToBed12} -i {properly_paired}.bam > {cagescan_pairs}.bed',
'{umicountFP} -f {cagescan_pairs}.bed > {cagescan_fragments}.bed'
]
Explanation: The actual command to run. See the tutorial section for more details about each command
End of explanation
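To make the templating explicit, this is how one of the commands above expands once the placeholder dictionary is filled in (the paths here are purely illustrative, not real files):

```python
# Purely illustrative args dict; in the workflow it comes from get_args().
demo_args = {
    'tagdust': 'tagdust',
    'tagdust_r1': 'output/tagdust_r1/sample01_R1',
    'r1_input': './run/sample01_R1.fastq.gz',
}
template = '{tagdust} -t8 -o {tagdust_r1} -1 F:NNNNNNNN -2 S:TATAGGG -3 R:N {r1_input}'
print(template.format(**demo_args))
# tagdust -t8 -o output/tagdust_r1/sample01_R1 -1 F:NNNNNNNN -2 S:TATAGGG -3 R:N ./run/sample01_R1.fastq.gz
```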
fastq_path = './150519_M00528_0125_000000000-ACUAB-links/'
root, folders, files = os.walk(fastq_path).__next__()
files = [f for f in files if not f.startswith('.')] #remove hidden files if there exist
reads1 = sorted([os.path.join(root, f) for f in files if 'R1' in f])
reads2 = sorted([os.path.join(root, f) for f in files if 'R2' in f])
Explanation: Get the reads. Here we assume that the reads are in the current directory, in a folder named following the MiSeq run id
End of explanation
for read1, read2 in zip(reads1, reads2):
args = get_args(read1, read2, ref_genome, output_folders)
args = dict(args, **softwares)
for cmd in cmds:
# print(cmd.format(**args))
subprocess.call(cmd.format(**args), preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL), shell=True)
Explanation: Run the commands for all the pairs
End of explanation
root, folders, files = os.walk('./output/genome_mapped/').__next__()
files = [os.path.join(root, f) for f in files if f.endswith('bam')]
level1 = 'level1.py -o output/mylevel1file.l1.osc.gz -f 0x0042 -F 0x0104 --fingerprint {files}'.format(files=' '.join(files))
subprocess.call(level1, shell=True)
Explanation: Generate the level1 file
End of explanation
total_cmd = "grep 'total input reads' {tagdust_r1}_logfile.txt | cut -f 2"
extracted_cmd = "grep 'successfully extracted' {tagdust_r1}_logfile.txt | cut -f 2"
cleaned_cmd = "grep 'successfully extracted' {cleaned_reads}_logfile.txt | cut -f 2"
rdna_cmd = "grep 'ribosomal' {cleaned_reads}_logfile.txt | cut -f 2"
spikes_cmd = "grep 'ERCC' {cleaned_reads}_logfile.txt | cut -f 2 | paste -s -d+ - | bc"
mapped_cmd = "{samtools} view -u -f 0x40 {genome_mapped}.bam | {samtools} flagstat - | grep 'mapped (' | cut -f 1 -d ' '"
properpairs_cmd = "{samtools} view -u -f 0x42 {genome_mapped}.bam | {samtools} flagstat - | grep 'mapped (' | cut -f 1 -d ' '"
counts_cmd = "wc -l {cagescan_fragments}.bed | cut -f 1 -d ' '"
#remove _R1 from the file's name
custom_rename = lambda x: x.replace('_R1', '')
total, cleaned, extracted, rdna, spikes, mapped, properpairs, counts = ([], [], [], [], [], [], [], [])
def run_qc(command, dest, keyword):
output = subprocess.check_output(command.format(**args), shell=True).strip().decode()
dest.append([keyword, custom_rename(r1_shortname), output])
for read1 in reads1:
r1_shortname = remove_extension(os.path.basename(read1))
args = {'tagdust_r1': os.path.join('output', 'tagdust_r1', r1_shortname),
'genome_mapped': os.path.join('output', 'genome_mapped', r1_shortname),
'cagescan_fragments': os.path.join('output', 'cagescan_fragments', r1_shortname),
'cleaned_reads': os.path.join('output', 'cleaned_reads', r1_shortname),
'samtools': samtools_path}
run_qc(total_cmd, total, 'total')
run_qc(extracted_cmd, extracted, 'extracted')
run_qc(cleaned_cmd, cleaned, 'cleaned')
run_qc(rdna_cmd, rdna, 'rdna')
run_qc(spikes_cmd, spikes, 'spikes')
run_qc(mapped_cmd, mapped, 'mapped')
run_qc(properpairs_cmd, properpairs, 'properpairs')
run_qc(counts_cmd, counts, 'counts')
def write_logs(file_dest, results):
with open(file_dest, 'w') as handler:
writer = csv.writer(handler, delimiter='\t')
writer.writerows(results)
write_logs('output/total.log', total)
write_logs('output/cleaned.log', cleaned)
write_logs('output/extracted.log', extracted)
write_logs('output/rdna.log', rdna)
write_logs('output/spikes.log', spikes)
write_logs('output/mapped.log', mapped)
write_logs('output/properpairs.log', properpairs)
write_logs('output/counts.log', counts)
Explanation: Generate logs (triplet)
Here we generate summary files that will be used for QC and place them in the 'output' directory.
total.log: The total number of input reads per cell
extracted.log: The number of reads with successfully extracted UMIs and linkers
cleaned.log: The number of reads remaining after removal of rRNA and artefact sequences
rdna.log: The number of ribosomal RNA reads per cell
spikes.log: The number of ERCC spike-in reads per cell
mapped.log: The number of mapped reads per cell
properpairs.log: The number of properly paired reads per cell
counts.log: The number of unique molecules (CAGEscan fragments) per cell
End of explanation |
9,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotly and Cufflinks for plotting with Pandas
Get cufflinks
Cufflinks integrates plotly with pandas to allow plotting right from pandas dataframes. Install using pip
pip install cufflinks
Step1: interactive plotting
line plots
With Plotly, you can turn on and off data values by clicking on the legend
Step2: bar plot
Step3: box plot
Step4: surface plot
Step5: histograms
Step6: spread plots
Used to show the spread in data value between two columns / variables.
Step7: bubble scatter plots
same as scatter, but you can easily size the dots by another column
Step8: scatter matrix
This is similar to seaborn's pairplot | Python Code:
import numpy as np
import pandas as pd
import cufflinks as cf
from plotly.offline import download_plotlyjs, init_notebook_mode
from plotly.offline import plot, iplot
#set notebook mode
init_notebook_mode(connected=True)
cf.go_offline()
df = pd.DataFrame(np.random.randn(100,4),
columns='A B C D'.split(' '))
df.head()
df2 = pd.DataFrame({'category':['A','B','C'], 'values':[33,56,67]})
df2
Explanation: Plotly and Cufflinks for plotting with Pandas
Get cufflinks
Cufflinks integrates plotly with pandas to allow plotting right from pandas dataframes. Install using pip
pip install cufflinks
End of explanation
df.iplot()
Explanation: interactive plotting
line plots
With Plotly, you can turn on and off data values by clicking on the legend
End of explanation
df2.iplot(kind='bar')
Explanation: bar plot
End of explanation
df.iplot(kind='box')
Explanation: box plot
End of explanation
df3 = pd.DataFrame({'x':[1,2,3,4,5],
'y':[11,22,33,44,55],
'z':[5,4,3,2,1]})
df3
df3.iplot(kind='surface')
Explanation: surface plot
End of explanation
df.iplot(kind='hist',bins=50)
Explanation: histograms
End of explanation
df[['A','B']].iplot(kind='spread')
Explanation: spread plots
Used to show the spread in data value between two columns / variables.
End of explanation
df.iplot(kind='bubble',x='A', y='B', size='C')
Explanation: bubble scatter plots
same as scatter, but you can easily size the dots by another column
End of explanation
df.scatter_matrix()
Explanation: scatter matrix
This is similar to seaborn's pairplot
End of explanation |
9,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
门面模式(Facade Pattern)
1 代码
假设有一组火警报警系统,由三个子元件构成:一个警报器,一个喷水器,一个自动拨打电话的装置。其抽象如下:
Step1: 在业务中如果需要将三个部件启动,例如,如果有一个烟雾传感器,检测到了烟雾。在业务环境中需要做如下操作:
Step2: 但如果在多个业务场景中需要启动三个部件,怎么办?Ctrl+C加上Ctrl+V么?当然可以这样,但作为码农的基本修养之一,减少重复代码是应该会被很轻易想到的方法。这样,需要将其进行封装,在设计模式中,被封装成的新对象,叫做门面。门面构建如下 | Python Code:
class AlarmSensor:
def run(self):
print ("Alarm Ring...")
class WaterSprinker:
def run(self):
print ("Spray Water...")
class EmergencyDialer:
def run(self):
print ("Dial 119...")
Explanation: Facade Pattern
1 Code
Suppose we have a fire alarm system composed of three sub-components: an alarm, a water sprinkler, and an automatic emergency dialer. They are abstracted as follows:
End of explanation
alarm_sensor = AlarmSensor()
water_sprinker = WaterSprinker()
emergency_dialer = EmergencyDialer()
alarm_sensor.run()
water_sprinker.run()
emergency_dialer.run()
Explanation: If the three components need to be started in a business scenario, for example when a smoke sensor detects smoke, the business code has to perform the following operations:
End of explanation
class EmergencyFacade(object):
def __init__(self):
self.alarm_sensor=AlarmSensor()
self.water_sprinker=WaterSprinker()
self.emergency_dialer=EmergencyDialer()
def runAll(self):
self.alarm_sensor.run()
self.water_sprinker.run()
self.emergency_dialer.run()
emergency_facade=EmergencyFacade()
emergency_facade.runAll()
Explanation: But what if the three components need to be started in several business scenarios? Ctrl+C plus Ctrl+V? That certainly works, but as one of a programmer's basic virtues, reducing duplicated code should come to mind easily. So these operations need to be encapsulated; in design patterns, the new object they are wrapped into is called a facade. The facade is built as follows
End of explanation |
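The payoff of the facade is that every call site needing the full emergency response collapses to a single call. A self-contained sketch (re-declaring the classes so it runs on its own) with two hypothetical business scenarios sharing one facade:

```python
class AlarmSensor:
    def run(self):
        print("Alarm Ring...")

class WaterSprinker:
    def run(self):
        print("Spray Water...")

class EmergencyDialer:
    def run(self):
        print("Dial 119...")

class EmergencyFacade:
    def __init__(self):
        # The facade owns the subsystem objects so clients never touch them.
        self.parts = [AlarmSensor(), WaterSprinker(), EmergencyDialer()]

    def runAll(self):
        for part in self.parts:
            part.run()

facade = EmergencyFacade()

def on_smoke_detected():   # hypothetical scenario 1
    facade.runAll()

def on_heat_detected():    # hypothetical scenario 2
    facade.runAll()

on_smoke_detected()
on_heat_detected()
```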
9,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A decomposition of spatial forecast errors for wildfires using a modification of the contiguous rain area (CRA) method.
Copyright Bureau Of Meteorology.
This software is provided under license "as is", without warranty of any
kind including, but not limited to, fitness for a particular purpose. The
user assumes the entire risk as to the use and performance of the software.
In no event shall the copyright holder be held liable for any claim, damages
or other liability arising from the use of the software.
Step3: Using a simple parametric form we construct synthetic fires with different rates of spread.
Step7: For analysis we polygonize the fire shapes and form grids.
Step8: Data
Step9: Data is transformed to a regular grid of 100m spacing - this could really improve.
Step10: CRA
Extending the CRA to fires requires some modifications, translation is removed and rotation and stretch transforms are used instead. The translations are pinned to a known ignition point (in grid space).
Estimate a pinned rotation that minimizes the squared error between the forecast and the observation field.
Estimate the stretch (laterally and longitudinally) which further reduces the error.
Subtract those components from the total error to figure out their contributions to total error.
Step11: Fit a rotation transform that minimizes the difference between the simulation and the observation.
Step12: We can visualize the resulting error field.
Step15: By applying the rotation to the forecast you can see that all the error not accounted for by simply rotating the field, there is an additional component that could be corrected for by stretching the forecast.
Fit a "stretch" tranform that minimizes the difference between the simulation and the observation.
Step16: Breakdown of error components | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from shapely import geometry, ops
from scipy import optimize
from skimage import transform
from IPython import display
%matplotlib inline
Explanation: A decomposition of spatial forecast errors for wildfires using a modification of the contiguous rain area (CRA) method.
Copyright Bureau Of Meteorology.
This software is provided under license "as is", without warranty of any
kind including, but not limited to, fitness for a particular purpose. The
user assumes the entire risk as to the use and performance of the software.
In no event shall the copyright holder be held liable for any claim, damages
or other liability arising from the use of the software.
End of explanation
def anderson(origin=(0, 0), beta=0.0, a=1.0, f=1.0, g=0.0, h=1.0, t=1.0):
    """Anderson et al, "Modelling the spread of grass fires", 1981.

    Inputs:
        a: burning rate
        t: time (minutes after the start of a fire)
    Outputs:
        (geometry) linestring object
    """
x0, y0 = origin
theta = np.linspace(0, 2*np.pi, 100)
# Equation (4)
X = a*t*(f*np.cos(theta) + g)
Y = a*t*h*np.sin(theta)
rbeta = np.deg2rad(beta)
# Equation (6)
x = x0 + X * np.cos(rbeta) - Y * np.sin(rbeta)
y = y0 + X * np.sin(rbeta) + Y * np.cos(rbeta)
return geometry.linestring.LineString([(_x, _y) for _x, _y in zip(x, y)])
def fire_boundary(origin=(0,0), beta=0.0, n=3, max_t=100., a=1.0, duration=12):
    """Geometric fire boundary"""
t = np.linspace(1., max_t, n)
delta = (duration * 3600) / n
valid_time = [pd.Timestamp("2009-02-07 00:50:00+00:00") + pd.Timedelta((_n + 1) * delta, 's') for _n in range(n)]
geom = [anderson(origin=origin, beta=beta, a=a, f=1.0, g=1.0, h=0.7, t=_t) for _t in t]
return pd.DataFrame({'valid_time':valid_time, 'geometry':geom})
def fcst_fire(beta, n, duration, a=1):
return fire_boundary(beta=beta, n=n, duration=duration, a=a, max_t=1000., origin=(100, 100))
def obs_fire(n, duration):
return fire_boundary(beta=0, n=n, duration=duration, max_t=1000., origin=(100, 100))
Explanation: Using a simple parametric form we construct synthetic fires with different rates of spread.
End of explanation
def polygonize(dataset):
Transform the dataset into a polygon set
geom = dataset['geometry'].apply(lambda x: geometry.Polygon(x).buffer(0))
return ops.cascaded_union(geom)
def coord(bounds, spacing=80.):
    """Form a grid of specific bounds and spacing."""
(minx, miny, maxx, maxy) = bounds
x = np.arange(minx, maxx, spacing)
y = np.arange(miny, maxy, spacing)
X, Y = np.meshgrid(x, y)
return np.r_[[X.flatten(), Y.flatten()]].T, X.shape
def intersection(coords, shape, polygon):
    """Return the intersection as a mask."""
mask = np.array([polygon.contains(geometry.Point(x, y)) for x, y in coords])
return mask.reshape(shape)
Explanation: For analysis we polygonize the fire shapes and form grids.
End of explanation
fcst, obs = fcst_fire(50, 10, 100, a=0.8), obs_fire(10, 100)
obs_poly, fcst_poly = polygonize(obs), polygonize(fcst)
Explanation: Data:
End of explanation
coords, shape = coord(obs_poly.buffer(5000).bounds, spacing=100.)
obs_grid = intersection(coords, shape, obs_poly).astype('float')
fcst_grid = intersection(coords, shape, fcst_poly).astype('float')
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 10))
ax1.matshow(obs_grid)
ax2.matshow(fcst_grid)
Explanation: Data is transformed to a regular grid of 100m spacing - this could really improve.
End of explanation
ignition_point = (50, 55)
Explanation: CRA
Extending the CRA to fires requires some modifications, translation is removed and rotation and stretch transforms are used instead. The translations are pinned to a known ignition point (in grid space).
Estimate a pinned rotation that minimizes the squared error between the forecast and the observation field.
Estimate the stretch (laterally and longitudinally) which further reduces the error.
Subtract those components from the total error to figure out their contributions to total error.
End of explanation
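The "pinned" rotation used below is implemented with skimage's transform.rotate(..., center=ignition_point); underneath it is an ordinary rotation about a pivot, p' = R(p - c) + c, which can be sketched in plain numpy:

```python
import numpy as np

def rotate_about(point, pivot, theta_deg):
    # Rotate `point` about `pivot` by theta degrees (counter-clockwise).
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ (np.asarray(point, dtype=float) - pivot) + pivot

# Rotating (2, 1) by 90 degrees about the pivot (1, 1) lands on (1, 2).
pivot = np.array([1.0, 1.0])
print(np.round(rotate_about((2, 1), pivot, 90.0), 6))
```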
def rot_warp(img, theta, ignition_point):
return transform.rotate(img, theta, center=ignition_point)
def rot_callback(params):
ax = plt.gca().matshow(rot_warp(fcst_grid, params[0], ignition_point) - obs_grid, cmap='jet', vmin=0, vmax=1)
plt.gca().plot(ignition_point[0], ignition_point[1], 'or')
plt.grid('on')
display.clear_output(wait=True)
display.display(plt.gcf())
def cost(params, fcst, obs, ignition_point):
return ((rot_warp(fcst, params[0], ignition_point).flatten() - obs.flatten())**2).sum()
theta = 0
res = optimize.minimize(cost,
theta,
args=(fcst_grid.astype('float'), obs_grid.astype('float'), ignition_point),
callback=rot_callback,
options={'disp': True})
theta = res.x
theta
Explanation: Fit a rotation transform that minimizes the difference between the simulation and the observation.
End of explanation
fig, ax = plt.subplots(figsize=(10, 10))
im = ax.matshow(rot_warp(fcst_grid, theta, ignition_point) - obs_grid, cmap='jet', vmin=-1, vmax=1)
plt.gca().plot(ignition_point[0], ignition_point[1], 'or')
plt.grid('on')
plt.colorbar(im, shrink=0.75)
Explanation: We can visualize the resulting error field.
End of explanation
from skimage import measure
def estimate_angle(img):
Estimate angle (from the x-axis) using a segmentation of the fire output.
** If there are multiple objects we return the median of estimated angle of each object.
label_img = measure.label(img)
thetas = [x.orientation for x in measure.regionprops(label_img)]
return np.rad2deg(np.median(thetas))
def stretch_warp(img, alpha, beta, ignition_point):
    """Stretch along the lateral (alpha) and longitudinal (beta) directions."""
    center = np.asarray(ignition_point)
angle = estimate_angle(img)
tform1 = transform.SimilarityTransform(translation=center)
tform2 = transform.SimilarityTransform(rotation=np.deg2rad(angle))
tform3 = transform.AffineTransform(scale=(1. - alpha, 1. - beta))
tform4 = transform.SimilarityTransform(rotation=np.deg2rad(-angle))
tform5 = transform.SimilarityTransform(translation=-center)
tform = tform5 + tform4 + tform3 + tform2 + tform1
return transform.warp(img, tform)
# from ipywidgets import interact
# fig = plt.figure(figsize=(6, 6))
# axe = fig.add_subplot(111)
# img = axe.matshow(fcst_grid,cmap='jet')
# def cbfn(theta, alpha, beta):
# data = rot_warp(stretch_warp(fcst_grid, alpha, beta, ignition_point), theta, ignition_point)
# img.set_data(data)
# img.autoscale()
# display.display(fig)
# interact(cbfn, theta=(0, 360, 5), alpha=(0., 1., 0.01), beta=(0., 1., 0.01));
def rot_warp(img, theta, ignition_point):
return transform.rotate(img, theta, center=ignition_point)
def warp(img, alpha, beta, theta, ignition_point):
return rot_warp(stretch_warp(img, alpha, beta, ignition_point), theta, ignition_point)
def alpha_callback(param):
ax = plt.gca().matshow(warp(fcst_grid, param[0], 0., theta, ignition_point) - obs_grid, cmap='jet', vmin=0, vmax=1)
plt.gca().plot(ignition_point[0], ignition_point[1], 'or')
plt.grid('on')
display.clear_output(wait=True)
display.display(plt.gcf())
def beta_callback(param):
ax = plt.gca().matshow(warp(fcst_grid, alpha, param[0], theta, ignition_point) - obs_grid, cmap='jet', vmin=0, vmax=1)
plt.gca().plot(ignition_point[0], ignition_point[1], 'or')
plt.grid('on')
display.clear_output(wait=True)
display.display(plt.gcf())
def cost_lateral(param, fcst, obs, ignition_point, theta):
return ((warp(fcst, param, 0., theta, ignition_point).flatten() - obs.flatten())**2).sum()
def cost_longitudial(param, fcst, obs, ignition_point, alpha, theta):
return ((warp(fcst, alpha, param, theta, ignition_point).flatten() - obs.flatten())**2).sum()
alpha = 0.
res = optimize.minimize(cost_lateral,
alpha,
args=(fcst_grid.astype('float'), obs_grid.astype('float'), ignition_point, theta),
callback=alpha_callback,
options={'disp': True})
alpha = res.x
beta = 0.
res = optimize.minimize(cost_longitudial,
beta,
args=(fcst_grid.astype('float'), obs_grid.astype('float'), ignition_point, alpha, theta),
callback=beta_callback,
options={'disp': True})
beta = res.x
fig, ax = plt.subplots(figsize=(10, 10))
im = ax.matshow(warp(fcst_grid, alpha, beta, theta, ignition_point) - obs_grid, cmap='jet', vmin=-1, vmax=1)
plt.gca().plot(ignition_point[0], ignition_point[1], 'or')
plt.grid('on')
plt.colorbar(im, shrink=0.75)
Explanation: By applying the rotation to the forecast you can see that not all of the error is accounted for by simply rotating the field; there is an additional component that could be corrected for by stretching the forecast.
Fit a "stretch" transform that minimizes the difference between the simulation and the observation.
End of explanation
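The stretch_warp function used here is defined earlier in the notebook and is not shown in this excerpt. A minimal stand-in with the same call signature, assuming alpha and beta scale the two axes about the ignition point (a sketch, not the notebook's actual implementation):

```python
import numpy as np

def stretch_warp_sketch(img, alpha, beta, center):
    """Hypothetical stand-in for the notebook's stretch_warp.

    Stretches img about `center` by factors (1 + alpha) along x and
    (1 + beta) along y, using nearest-neighbour sampling.
    """
    h, w = img.shape
    ys, xs = np.indices((h, w)).astype(float)
    cx, cy = center  # (x, y) ordering, matching the 'or' marker plotted above
    src_x = np.clip(np.round(cx + (xs - cx) * (1.0 + alpha)), 0, w - 1).astype(int)
    src_y = np.clip(np.round(cy + (ys - cy) * (1.0 + beta)), 0, h - 1).astype(int)
    return img[src_y, src_x]
```

With alpha = beta = 0 this is the identity, and the ignition point itself is always left unchanged, which is the behaviour the cost functions above rely on.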
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(14, 8))
im = ax1.matshow(fcst_grid - obs_grid, cmap='jet', vmin=-1, vmax=1)
im = ax2.matshow(rot_warp(fcst_grid, theta, ignition_point) - obs_grid, cmap='jet', vmin=-1, vmax=1)
im = ax3.matshow(warp(fcst_grid, alpha, 0., theta, ignition_point) - obs_grid, cmap='jet', vmin=-1, vmax=1)
im = ax4.matshow(warp(fcst_grid, alpha, beta, theta, ignition_point) - obs_grid, cmap='jet', vmin=-1, vmax=1)
total_error = ((fcst_grid.flatten() - obs_grid.flatten())**2).sum()
total_error
rotation_error = ((rot_warp(fcst_grid, theta, ignition_point).flatten() - obs_grid.flatten())**2).sum()
rotation_error
stretch_error_lat = ((warp(fcst_grid, alpha, 0., theta, ignition_point).flatten() - obs_grid.flatten())**2).sum()
stretch_error_lat
stretch_error_latlong = ((warp(fcst_grid, alpha, beta, theta, ignition_point).flatten() - obs_grid.flatten())**2).sum()
stretch_error_latlong
error_components = np.abs(np.diff((total_error, rotation_error, stretch_error_lat, stretch_error_latlong, 0))) / total_error
error_components
dframe = pd.DataFrame(error_components).T
dframe.columns = ['rotation_error', 'stretch_error_lateral', 'stretch_error_longitudinal', 'remaining_error']
dframe
dframe.plot(kind='bar')
Explanation: Breakdown of error components
End of explanation |
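The successive-differences decomposition used for error_components above can be checked on made-up error values: each entry is the error removed by one additional correction, normalized by the total, and the entries sum to one.

```python
import numpy as np

# Hypothetical sum-of-squared-error values after each successive correction
total_error = 100.0
rotation_error = 60.0          # after rotating
stretch_error_lat = 40.0       # after also stretching laterally
stretch_error_latlong = 25.0   # after also stretching longitudinally

errors = (total_error, rotation_error, stretch_error_lat, stretch_error_latlong, 0)
components = np.abs(np.diff(errors)) / total_error
# -> [0.4, 0.2, 0.15, 0.25]: rotation, lateral, longitudinal, remaining
```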
9,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.data - Classification, regression, anomalies - exercises
The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can we come close to the expert with a machine learning model?
Step1: The data
They can be retrieved from github...data_2a.
Step2: Exercise 1
Step3: Exercise 5
Step4: Exercise 6 | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.data - Classification, regression, anomalies - exercises
The Wine Quality Data Set contains 5000 wines described by their chemical characteristics and rated by an expert. Can we come close to the expert with a machine learning model?
End of explanation
from ensae_teaching_cs.data import wines_quality
from pandas import read_csv
df = read_csv(wines_quality(local=True, filename=True))
df.head()
Explanation: The data
They can be retrieved from github...data_2a.
End of explanation
from sklearn.metrics import roc_curve, auc
# labels = pipe.steps[1][1].classes_
# y_score = pipe.predict_proba(X_test)
fpr = dict()
tpr = dict()
roc_auc = dict()
# for i, cl in enumerate(labels):
# fpr[cl], tpr[cl], _ = roc_curve(y_test == cl, y_score[:, i])
# roc_auc[cl] = auc(fpr[cl], tpr[cl])
# fig, ax = plt.subplots(1, 1, figsize=(8,4))
# for k in roc_auc:
# ax.plot(fpr[k], tpr[k], label="c%d = %1.2f" % (k, roc_auc[k]))
# ax.legend();
Explanation: Exercise 1: plot the distribution of the ratings
The hist function is simple and effective.
Exercise 2: train / test split
The function is so widely used that you will find it quickly.
Exercise 3: the color variable is not numeric
M... OneHotEncoder.
Exercise 3: first classifier
You will find it on your own as well. A few functions can help you evaluate the model: confusion_matrix, classification_report.
Much better.
Exercise 4: ROC curve
A few hints...
End of explanation
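As a hedged sketch of the color-encoding exercise, pd.get_dummies is the quickest way to see what one-hot encoding does to the color column (scikit-learn's OneHotEncoder is the pipeline-friendly equivalent the exercise points to). The tiny frame below is made up for illustration:

```python
import pandas as pd

# Hypothetical mini-frame standing in for the wine data
df = pd.DataFrame({"color": ["red", "white", "red", "white"],
                   "alcohol": [9.4, 10.2, 11.1, 12.0]})

# Replace the categorical column with one indicator column per category
encoded = pd.get_dummies(df, columns=["color"])
# -> columns: alcohol, color_red, color_white
```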
from sklearn.covariance import EllipticEnvelope
Explanation: Exercice 5 : anomalies
Une anomalie est un point aberrant. Cela revient à dire que sa probabilité qu'un tel événement se reproduise est faible. Un modèle assez connu est EllipticEnvelope. On suppose que si le modèle détecte une anomalie, un modèle de prédiction aura plus de mal à prédire. On réutilise le pipeline précédent en changeant juste la dernière étape.
End of explanation
from sklearn.ensemble import RandomForestRegressor
Explanation: Exercice 6 : régression
La note est numérique, pourquoi ne pas essayer une régression.
End of explanation |
9,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='https
Step1: Customizing matplotlib
http
Step2: Quick, easy, simple plots.
Object-oriented pyplot interface
No global variables
Separates style from graph
Can easily have multiple subplots
Step3: The figures and axes objects
First, we create a blank figure. Then we add a subplot.
Step4: Multiple subplots
Step5: The plt.subplots() command
Step6: plt.plot?
========== ========
character color
========== ========
'b' blue
'g' green
'r' red
'c' cyan
'm' magenta
'y' yellow
'k' black
'w' white
========== ========
The subplot2grid command
Step7: Sharing axis values
Step8: How about a little d3.js with mpld3?
https | Python Code:
#inline to use with notebook (from pylab import *)
%pylab inline
Explanation: <img src='https://www.rc.colorado.edu/sites/all/themes/research/logo.png'>
Introduction to Data Visualization with matplotlib
Thomas Hauser
<img src='https://s3.amazonaws.com/research_computing_tutorials/mpl-overview.png'>
Objectives
Understand the difference between pylab and pyplot.
Understand the basic components of a plot.
Understand style
Give you enough information to use the gallery.
Reference for several standard plots.
histogram, density, boxplot (when appropriate)
scatter, line, hexbin
contour, false-color
References
This tutorial is based on some of the following excellent content.
J.R. Johansson's tutorial
Matplotlib tutorial by Jake Vanderplas
Nicolas P. Rougier's tutorial
Painlessly create beautiful matplotlib plots
Making matplotlib look like ggplot
https://github.com/jakevdp/mpld3
Harvard CS109 Data Science Class.
Alternatives
ggplot for python.
vincent: Python to Vega (and ultimately d3).
bokeh
mpld3: Render matplotlib as d3.js in the notebook.
Object and Functional Models
Functional
Emulate Matlab
Convention: implicit state (from pylab import *)
Object-oriented
Not a flat model.
Figure, Axesimport matplotlib.pyplot as plt
Caution: redundant interface, namespace issues
Enabling plotting
IPython terminal
ipython --pylab
ipython --matplotlib
IPython notebook
%pylab inline
%matplotlib inline
ipython notebook --pylab=inline
ipython notebook --matplotlib=inline
The funtional pylab interface
Loads all of numpy and matplotlib into the global namesapce.
Great for interactive use.
End of explanation
# make the plots smaller or larger
rcParams['figure.figsize'] = 10, 6
x = linspace(0, 2*pi, 100)
y = sin(x)
plot(x, y)
show()
hist(randn(1000), alpha=0.5, histtype='stepfilled')
hist(0.75*randn(1000)+1, alpha=0.5, histtype='stepfilled')
show()
#hist?
Explanation: Customizing matplotlib
http://matplotlib.org/users/customizing.html
rcParams for configuration
End of explanation
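Beyond setting rcParams globally, matplotlib also supports temporary overrides with rc_context, which restores the previous settings on exit. A small sketch (the Agg backend is forced only so it runs headlessly):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt

plt.rcParams["figure.figsize"] = (10, 6)        # global default, as above
with plt.rc_context({"lines.linewidth": 5}):
    inside = plt.rcParams["lines.linewidth"]    # 5 inside the block
outside = plt.rcParams["lines.linewidth"]       # previous value restored afterwards
```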
#restart plotting
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = 8, 4
plt.plot(range(20))
Explanation: Quick, easy, simple plots.
Object-oriented pyplot interface
No global variables
Separates style from graph
Can easily have multiple subplots
End of explanation
x = np.linspace(0, 2*np.pi, 100) #same as before
y = np.sin(x)
fig = plt.figure()
ax = fig.add_subplot(1,1,1) # 1 row, 1 col, graphic 1
ax.plot(x, y)
ax.set_title("sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
Explanation: The figures and axes objects
First, we create a blank figure. Then we add a subplot.
End of explanation
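The object-oriented bookkeeping can be seen directly: the axes returned by add_subplot is registered on the figure, and plotted lines are registered on the axes. A quick sketch (Agg backend so it runs headlessly):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)      # 1 row, 1 col, first (and only) axes
line, = ax.plot([0, 1], [0, 1])    # the axes keeps track of its artists
```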
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1) # 1 row, 2 cols, graphic 1
ax2 = fig.add_subplot(1,2,2) # graphic 2
ax1.plot(x, y)
ax2.hist(np.random.randn(1000), alpha=0.5, histtype='stepfilled')
ax2.hist(0.75*np.random.randn(1000)+1, alpha=0.5, histtype='stepfilled')
fig.show()
Explanation: Multiple subplots
End of explanation
fig, ax = plt.subplots(2,3, figsize=(10,10))
ax[0,0].plot(x, y)
ax[0,1].plot(x, np.cos(x), color="r")
ax[0,2].hist(np.random.randn(100), alpha=0.5, color="g")
ax[1,0].plot(x, y, 'o-')
ax[1,0].set_xlim([0,np.pi])
ax[1,0].set_ylim([0,1])
ax[1,1].plot(x, np.cos(x), 'x', color="r")
ax[1,2].scatter(np.random.randn(10), np.random.randn(10), color="g")
fig.show()
Explanation: The plt.subplots() command
End of explanation
fig = plt.figure(figsize=(8,6))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
ax1.plot(x, y)
fig.tight_layout()
fig.show()
?plt.subplot2grid
Explanation: plt.plot?
========== ========
character color
========== ========
'b' blue
'g' green
'r' red
'c' cyan
'm' magenta
'y' yellow
'k' black
'w' white
========== ========
The subplot2grid command
End of explanation
fig, axes = plt.subplots( 3, 1, sharex = True)
for ax in axes:
ax.set_axis_bgcolor('0.95')
axes[0].plot(x,y)
fig.show()
print(axes.shape)
fig, axes = plt.subplots( 2, 2, sharex = True, sharey = True)
plt.subplots_adjust( wspace = 0.3, hspace = 0.1)
fig.show()
print(axes.shape)
Explanation: Sharing axis values
End of explanation
from mpld3 import enable_notebook
enable_notebook()
fig, ax = plt.subplots(1,2, sharey=True, sharex=True)
print(ax.shape)
ax[0].plot(x, y, color='green')
ax[1].scatter(np.random.randn(10), np.random.randn(10), color='red')
fig.show()
Explanation: How about a little d3.js with mpld3?
https://github.com/jakevdp/mpld3
End of explanation |
9,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Path function
The goal of the MADlib path function is to perform regular pattern matching over a sequence of rows, and to extract useful information about the pattern matches. The useful information could be a simple count of matches or something more involved like aggregations or window functions.
Step1: The data set describes shopper behavior on a notional web site that sells beer and wine. A beacon fires an event to a log file when the shopper visits different pages on the site
Step2: Calculate the revenue by checkout
Step3: Note that there are 2 checkouts within session 102, which is apparent from the 'match_id' column. This serves to illustrate that the 'aggregate_func' operates on a per pattern match basis, not on a per partition basis. If in fact we wanted revenue by partition ('session_id' in this example), then we could do
Step4: Since we set TRUE for 'persist_rows', we can view the associated pattern matches
Step5: Notice that the 'symbol' and 'match_id' columns are added to the right of the matched rows.
We are interested in sessions with an order placed within 4 pages of entering the shopping site via the landing page. We represent this by the regular expression
Step6: Now view the associated pattern matches
Step7: For instances where a purchase is made within 4 pages of entering a site, compute the elapsed time to checkout
Step8: We may want to use a window function instead of an aggregate. You can write window functions on the output tuples to achieve the desired result. Continuing the previous example, let’s say we want to compute average revenue for checkouts within 4 pages of entering the shopping site via the landing page
Step9: Now we want to do a golden path analysis to find the most successful shopper paths through the site. Since our data set is small, we decide this means the most frequently viewed page just before a checkout is made
Step10: There are only 2 different paths. The wine page is viewed more frequently than the beer page just before checkout.
To demonstrate the use of 'overlapping_patterns', consider a pattern with at least one page followed by and ending with a checkout
Step11: With overlap turned off, the result is | Python Code:
%load_ext sql
# %sql postgresql://gpdbchina@10.194.10.68:55000/madlib
%sql postgresql://fmcquillan@localhost:5432/madlib
%sql select madlib.version();
Explanation: Path function
The goal of the MADlib path function is to perform regular pattern matching over a sequence of rows, and to extract useful information about the pattern matches. The useful information could be a simple count of matches or something more involved like aggregations or window functions.
End of explanation
%%sql
DROP TABLE IF EXISTS eventlog, path_output, path_output_tuples CASCADE;
CREATE TABLE eventlog (event_timestamp TIMESTAMP,
user_id INT,
session_id INT,
page TEXT,
revenue FLOAT);
INSERT INTO eventlog VALUES
('04/15/2015 01:03:00', 100821, 100, 'LANDING', 0),
('04/15/2015 01:04:00', 100821, 100, 'WINE', 0),
('04/15/2015 01:05:00', 100821, 100, 'CHECKOUT', 39),
('04/15/2015 02:06:00', 100821, 101, 'WINE', 0),
('04/15/2015 02:09:00', 100821, 101, 'WINE', 0),
('04/15/2015 01:15:00', 101121, 102, 'LANDING', 0),
('04/15/2015 01:16:00', 101121, 102, 'WINE', 0),
('04/15/2015 01:17:00', 101121, 102, 'CHECKOUT', 15),
('04/15/2015 01:18:00', 101121, 102, 'LANDING', 0),
('04/15/2015 01:19:00', 101121, 102, 'HELP', 0),
('04/15/2015 01:21:00', 101121, 102, 'WINE', 0),
('04/15/2015 01:22:00', 101121, 102, 'CHECKOUT', 23),
('04/15/2015 02:15:00', 101331, 103, 'LANDING', 0),
('04/15/2015 02:16:00', 101331, 103, 'WINE', 0),
('04/15/2015 02:17:00', 101331, 103, 'HELP', 0),
('04/15/2015 02:18:00', 101331, 103, 'WINE', 0),
('04/15/2015 02:19:00', 101331, 103, 'CHECKOUT', 16),
('04/15/2015 02:22:00', 101443, 104, 'BEER', 0),
('04/15/2015 02:25:00', 101443, 104, 'CHECKOUT', 12),
('04/15/2015 02:29:00', 101881, 105, 'LANDING', 0),
('04/15/2015 02:30:00', 101881, 105, 'BEER', 0),
('04/15/2015 01:05:00', 102201, 106, 'LANDING', 0),
('04/15/2015 01:06:00', 102201, 106, 'HELP', 0),
('04/15/2015 01:09:00', 102201, 106, 'LANDING', 0),
('04/15/2015 02:15:00', 102201, 107, 'WINE', 0),
('04/15/2015 02:16:00', 102201, 107, 'BEER', 0),
('04/15/2015 02:17:00', 102201, 107, 'WINE', 0),
('04/15/2015 02:18:00', 102871, 108, 'BEER', 0),
('04/15/2015 02:19:00', 102871, 108, 'WINE', 0),
('04/15/2015 02:22:00', 102871, 108, 'CHECKOUT', 21),
('04/15/2015 02:25:00', 102871, 108, 'LANDING', 0),
('04/15/2015 02:17:00', 103711, 109, 'BEER', 0),
('04/15/2015 02:18:00', 103711, 109, 'LANDING', 0),
('04/15/2015 02:19:00', 103711, 109, 'WINE', 0);
SELECT * FROM eventlog ORDER BY event_timestamp ASC;
Explanation: The data set describes shopper behavior on a notional web site that sells beer and wine. A beacon fires an event to a log file when the shopper visits different pages on the site: landing page, beer selection page, wine selection page, and checkout. Other pages on the site like help pages show up in the logs as well. Let’s assume that the log has been sessionized.
End of explanation
%%sql
SELECT madlib.path(
'eventlog', -- Name of input table
'path_output', -- Table name to store path results
'session_id', -- Partition input table by session
'event_timestamp ASC', -- Order partitions in input table by time
'buy:=page=''CHECKOUT''', -- Define a symbol for checkout events
'(buy)', -- Pattern search: purchase
'sum(revenue) as checkout_rev', -- Aggregate: sum revenue by checkout
TRUE -- Persist matches
);
SELECT * FROM path_output ORDER BY session_id, match_id;
Explanation: Calculate the revenue by checkout:
End of explanation
%%sql
SELECT session_id, sum(checkout_rev) FROM path_output GROUP BY session_id ORDER BY session_id;
Explanation: Note that there are 2 checkouts within session 102, which is apparent from the 'match_id' column. This serves to illustrate that the 'aggregate_func' operates on a per pattern match basis, not on a per partition basis. If in fact we wanted revenue by partition ('session_id' in this example), then we could do:
End of explanation
%%sql
SELECT * FROM path_output_tuples ORDER BY session_id ASC, event_timestamp ASC;
Explanation: Since we set TRUE for 'persist_rows', we can view the associated pattern matches:
End of explanation
%%sql
DROP TABLE IF EXISTS path_output, path_output_tuples;
SELECT madlib.path(
'eventlog', -- Name of input table
'path_output', -- Table name to store path results
'session_id', -- Partition input table by session
'event_timestamp ASC', -- Order partitions in input table by time
$$ land:=page='LANDING',
wine:=page='WINE',
beer:=page='BEER',
buy:=page='CHECKOUT',
other:=page<>'LANDING' AND page<>'WINE' AND page<>'BEER' AND page<>'CHECKOUT'
$$, -- Symbols for page types
'(land)[^(land)(buy)]{0,2}(buy)', -- Purchase within 4 pages entering site
'sum(revenue) as checkout_rev', -- Aggregate: sum revenue by checkout
TRUE -- Persist matches
);
SELECT * FROM path_output ORDER BY session_id, match_id;
Explanation: Notice that the 'symbol' and 'match_id' columns are added to the right of the matched rows.
We are interested in sessions with an order placed within 4 pages of entering the shopping site via the landing page. We represent this by the regular expression: '(land)[^(land)(buy)]{0,2}(buy)'. In other words, visit to the landing page followed by from 0 to 2 non-entry, non-sale pages, followed by a purchase. The SQL is as follows:
End of explanation
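The same symbol-pattern idea can be sketched with Python's re module by encoding each session as a string of one-letter symbols (L=LANDING, W=WINE, B=BEER, H=HELP, C=CHECKOUT). This is only an analogy for intuition, not MADlib code:

```python
import re

# '(land)[^(land)(buy)]{0,2}(buy)' ~ a landing page, then 0-2 pages that are
# neither landing nor checkout, then a checkout
pattern = re.compile(r"L[^LC]{0,2}C")

# session 102 above: LANDING WINE CHECKOUT LANDING HELP WINE CHECKOUT
matches = pattern.findall("LWCLHWC")
# -> ['LWC', 'LHWC']: two qualifying purchases in the one session
```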
%%sql
SELECT * FROM path_output_tuples ORDER BY session_id ASC, event_timestamp ASC;
Explanation: Now view the associated pattern matches:
End of explanation
%%sql
DROP TABLE IF EXISTS path_output, path_output_tuples;
SELECT madlib.path(
'eventlog', -- Name of input table
'path_output', -- Table name to store path results
'session_id', -- Partition input table by session
'event_timestamp ASC', -- Order partitions in input table by time
$$ land:=page='LANDING',
wine:=page='WINE',
beer:=page='BEER',
buy:=page='CHECKOUT',
other:=page<>'LANDING' AND page<>'WINE' AND page<>'BEER' AND page<>'CHECKOUT'
$$, -- Symbols for page types
'(land)[^(land)(buy)]{0,2}(buy)', -- Purchase within 4 pages entering site
'(max(event_timestamp)-min(event_timestamp)) as elapsed_time', -- Aggregate: elapsed time
TRUE -- Persist matches
);
SELECT * FROM path_output ORDER BY session_id, match_id;
Explanation: For instances where a purchase is made within 4 pages of entering a site, compute the elapsed time to checkout:
End of explanation
%%sql
SELECT DATE(event_timestamp), user_id, session_id, revenue,
avg(revenue) OVER (PARTITION BY DATE(event_timestamp)) as avg_checkout_rev
FROM path_output_tuples
WHERE page='CHECKOUT'
ORDER BY user_id, session_id;
Explanation: We may want to use a window function instead of an aggregate. You can write window functions on the output tuples to achieve the desired result. Continuing the previous example, let’s say we want to compute average revenue for checkouts within 4 pages of entering the shopping site via the landing page:
End of explanation
%%sql
DROP TABLE IF EXISTS path_output, path_output_tuples;
SELECT madlib.path(
'eventlog', -- Name of input table
'path_output', -- Table name to store path results
'session_id', -- Partition input table by session
'event_timestamp ASC', -- Order partitions in input table by time
$$ land:=page='LANDING',
wine:=page='WINE',
beer:=page='BEER',
buy:=page='CHECKOUT',
other:=page<>'LANDING' AND page<>'WINE' AND page<>'BEER' AND page<>'CHECKOUT'
$$, -- Symbols for page types
'[^(buy)](buy)', -- Pattern to match
'array_agg(page ORDER BY session_id ASC, event_timestamp ASC) as page_path');
SELECT count(*), page_path from
(SELECT * FROM path_output) q
GROUP BY page_path
ORDER BY count(*) DESC
LIMIT 10;
Explanation: Now we want to do a golden path analysis to find the most successful shopper paths through the site. Since our data set is small, we decide this means the most frequently viewed page just before a checkout is made:
End of explanation
%%sql
DROP TABLE IF EXISTS path_output, path_output_tuples;
SELECT madlib.path(
'eventlog', -- Name of the table
'path_output', -- Table name to store the path results
'session_id', -- Partition by session
'event_timestamp ASC', -- Order partitions in input table by time
$$ nobuy:=page<>'CHECKOUT',
buy:=page='CHECKOUT'
$$, -- Definition of symbols used in the pattern definition
'(nobuy)+(buy)', -- At least one page followed by and ending with a CHECKOUT.
'array_agg(page ORDER BY session_id ASC, event_timestamp ASC) as page_path',
FALSE, -- Don't persist matches
TRUE -- Turn on overlapping patterns
);
SELECT * FROM path_output ORDER BY session_id, match_id;
Explanation: There are only 2 different paths. The wine page is viewed more frequently than the beer page just before checkout.
To demonstrate the use of 'overlapping_patterns', consider a pattern with at least one page followed by and ending with a checkout:
End of explanation
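The effect of the flag can be sketched with plain Python string matching (again an analogy, not MADlib code): non-overlapping search resumes past each match, while overlapping matching tries every start position.

```python
import re

s = "BWC"  # session 108 up to its checkout: BEER, WINE, CHECKOUT
pat = re.compile(r"[^C]+C")  # '(nobuy)+(buy)' analogue

non_overlapping = pat.findall(s)
# -> ['BWC']: the search resumes past each match

overlapping = []
for i in range(len(s)):      # try every start position instead
    m = pat.match(s, i)
    if m:
        overlapping.append(m.group())
# -> ['BWC', 'WC']
```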
%%sql
DROP TABLE IF EXISTS path_output, path_output_tuples;
SELECT madlib.path(
'eventlog', -- Name of the table
'path_output', -- Table name to store the path results
'session_id', -- Partition by session
'event_timestamp ASC', -- Order partitions in input table by time
$$ nobuy:=page<>'CHECKOUT',
buy:=page='CHECKOUT'
$$, -- Definition of symbols used in the pattern definition
'(nobuy)+(buy)', -- At least one page followed by and ending with a CHECKOUT.
'array_agg(page ORDER BY session_id ASC, event_timestamp ASC) as page_path',
FALSE, -- Don't persist matches
    FALSE -- Turn off overlapping patterns
);
SELECT * FROM path_output ORDER BY session_id, match_id;
Explanation: With overlap turned off, the result is:
End of explanation |
9,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Networks
author
Step1: The Monty Hall Gameshow
The Monty Hall problem arose from the gameshow <i>Let's Make a Deal</i>, where a guest had to choose which one of three doors had a prize behind it. The twist was that after the guest chose, the host, originally Monty Hall, would then open one of the doors the guest did not pick and ask if the guest wanted to switch which door they had picked. Initial inspection may lead you to believe that if there are only two doors left, there is a 50-50 chance of you picking the right one, and so there is no advantage one way or the other. However, it has been proven both through simulations and analytically that there is in fact a 66% chance of getting the prize if the guest switches their door, regardless of the door they initially went with.
We can reproduce this result using Bayesian networks with three nodes, one for the guest, one for the prize, and one for the door Monty chooses to open. The door the guest initially chooses and the door the prize is behind are completely random processes across the three doors, but the door which Monty opens is dependent on both the door the guest chooses (it cannot be the door the guest chooses), and the door the prize is behind (it cannot be the door with the prize behind it).
To create the Bayesian network in pomegranate, we first create the distributions which live in each node in the graph. For a discrete (aka categorical) bayesian network we use DiscreteDistribution objects for the root nodes and ConditionalProbabilityTable objects for the inner and leaf nodes. The columns in a ConditionalProbabilityTable correspond to the order in which the parents (the second argument) are specified, and the last column is the value the ConditionalProbabilityTable itself takes. In the case below, the first column corresponds to the value 'guest' takes, then the value 'prize' takes, and then the value that 'monty' takes. 'B', 'C', 'A' refers then to the probability that Monty reveals door 'A' given that the guest has chosen door 'B' and that the prize is actually behind door 'C', or P(Monty='A'|Guest='B', Prize='C').
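The 2/3 switching result quoted above can be verified in a few lines of plain Python, independent of pomegranate, by applying Bayes' rule to the same three distributions:

```python
doors = ["A", "B", "C"]

def p_monty(monty, guest, prize):
    # Monty opens neither the guest's door nor the prize door, uniformly otherwise
    options = [d for d in doors if d not in (guest, prize)]
    return 1.0 / len(options) if monty in options else 0.0

# P(prize | guest='A', monty='B'), with uniform priors on guest and prize
unnorm = {prize: (1 / 3) * p_monty("B", "A", prize) for prize in doors}
z = sum(unnorm.values())
posterior = {prize: p / z for prize, p in unnorm.items()}
# posterior is A: 1/3, B: 0, C: 2/3, so switching (to C here) wins 2/3 of the time
```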
Step2: Next, we pass these distributions into state objects along with the name for the node.
Step3: Then we add the states to the network, exactly like we did when making a HMM. In the future, all matrices of data should have their columns organized in the same order that the states are added to the network. The way the states are added to the network makes no difference to it, and so you should add the states according to how the columns are organized in your data.
Step4: Then we need to add edges to the model. The edges represent which states are parents of which other states. This is currently a bit redundant with passing in the distribution objects that are parents for each ConditionalProbabilityTable. For now edges are added from parent -> child by calling model.add_edge(parent, child).
Step5: Lastly, the model must be baked to finalize the internals. Since Bayesian networks use factor graphs for inference, an explicit factor graph is produced from the Bayesian network during the bake step.
Step6: Predicting Probabilities
We can calculate probabilities of a sample under the Bayesian network in the same way that we can calculate probabilities under other models. In this case, let's calculate the probability that you initially said door A, that Monty then opened door B, but that the actual car was behind door C.
Step7: That seems in line with what we know, that there is a 1/9th probability of that happening.
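That 1/9 also falls straight out of the chain rule for this network, since the guest and prize variables are uniform and Monty's choice is forced here:

```python
p_guest_a = 1 / 3   # uniform initial choice
p_prize_c = 1 / 3   # uniform prize placement
p_monty_b = 1.0     # with guest=A and prize=C, Monty can only open B
p_joint = p_guest_a * p_prize_c * p_monty_b
# -> 1/9
```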
Next, let's look at an impossible situation. What is the probability of initially saying door A, that Monty opened door B, and that the car was actually behind door B?
Step8: The reason that situation is impossible is because Monty can't open a door that has the car behind it.
Performing Inference
Perhaps the most powerful aspect of Bayesian networks is their ability to perform inference. Given any set of observed variables, including no observations, Bayesian networks can make predictions for all other variables. Obviously, the more variables that are observed, the more accurate the predictions for the remaining variables will be.
pomegranate uses the loopy belief propagation algorithm to do inference. This is an approximate algorithm which can yield exact results in tree-like graphs, and in most other cases still yields good results. Inference on a Bayesian network consists of taking in observations for a subset of the variables and using that to infer the values that the other variables take. The more variables that are observed, the closer the inferred values will be to the truth. One of the powers of Bayesian networks is that the set of observed and 'hidden' (or unobserved) variables does not need to be specified beforehand, and can change from sample to sample.
We can run inference using the predict_proba method and passing in a dictionary of values, where the key is the name of the state and the value is the observed value for that state. If we don't supply any values, we get the marginal of the graph, which is just the frequency of each value for each variable over an infinite number of randomly drawn samples from the graph.
Let's see what happens when we look at the marginal of the Monty Hall network.
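What the marginal should look like can be computed by brute-force enumeration of the joint distribution, a plain-Python sanity check for intuition:

```python
from itertools import product

doors = ["A", "B", "C"]

def p_monty(monty, guest, prize):
    # Monty opens neither the guest's door nor the prize door
    options = [d for d in doors if d not in (guest, prize)]
    return 1.0 / len(options) if monty in options else 0.0

joint = {(g, p, m): (1 / 3) * (1 / 3) * p_monty(m, g, p)
         for g, p, m in product(doors, repeat=3)}
monty_marginal = {d: sum(v for (g, p, m), v in joint.items() if m == d)
                  for d in doors}
# every door ends up with marginal probability 1/3
```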
Step9: We are returned three DiscreteDistribution objects, each representing the marginal distribution for each variable, in the same order they were put into the model. In this case, they represent the guest, prize, and monty variables respectively. We see that everything is equally likely.
We can also pass in an array where None (or np.nan) correspond to the unobserved values.
Step10: All of the variables that were observed will be the observed value, and all of the variables that were unobserved will be a DiscreteDistribution object. This means that parameters[0] will return the underlying dictionary used by the distribution.
Now let's do something different, and say that the guest has chosen door 'A'. We do this by passing a dictionary to predict_proba with key-value pairs consisting of the name of the state (in the state object), and the value which that variable has taken, or by passing in a list where the first index is our observation.
Step11: We can see that now Monty will not open door 'A', because the guest has chosen it. At the same time, the distribution over the prize has not changed, it is still equally likely that the prize is behind each door.
Now, let's say that Monty opens door 'C' and see what happens. Here we use a dictionary rather than a list simply to show how one can use both input forms depending on what is more convenient.
Step12: Suddenly, we see that the distribution over prizes has changed. It is now twice as likely that the car is behind the door labeled 'B'. This demonstrates that when on the game show, it is always better to change your initial guess after being shown an open door. Now you could go and win tons of cars, except that the game show got cancelled.
Imputation Given Structured Constraints
The task of filling in an incomplete matrix can be called imputation, and there are many approaches for doing so. One of the most well known is that of matrix factorization, where a latent representation is learned for each of the columns and each of the rows such that the dot product between the two can reconstruct values in the matrix. Due to the manner that these latent representations are learned, the matrix does not need to be complete, and the dot product can then be used to fill in all of the missing values.
One weakness of the matrix factorization approach is that constraints and known relationships can't be enforced between the features. A simple example is that the rule "when column 1 is 'A' and column 2 is 'B', column 3 must be 'C'" can potentially be learned in the representation, but can't be simply hard-coded like a conditional probability table would. A more complex example would say that a pixel in an image can be predicted from its neighbors, whereas the notion of neighbors is more difficult to specify for a factorization approach.
The process for imputing data given a Bayesian network is to either first learn the structure of the network from the given data, or have a known structure, and then use loopy-belief propogation to predict the best values for the missing features.
Let's see an example of this on the digits data set, binarizing the data based on the median value. We'll only use the first two rows because learning large, unconstrained Bayesian networks is difficult.
Step13: Now let's remove a large portion of the pixels randomly from each of the images. We can do that with numpy arrays by setting missing values to np.nan.
Step14: We can set up a baseline for how good an imputation is by using the average pixel value as a replacement. Because this is binary data, we can use the mean absolute error to measure how good these approaches are on imputing the pixels that are not observed.
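The mean-substitution baseline can be sketched with numpy alone; the binary matrix and the missingness pattern below are synthetic stand-ins for the digits data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = (rng.rand(20, 8) > 0.5).astype(float)   # synthetic binary "pixels"
mask = rng.rand(*X.shape) < 0.3             # hide ~30% of the entries
X_missing = np.where(mask, np.nan, X)

col_means = np.nanmean(X_missing, axis=0)   # per-pixel average of observed values
X_filled = np.where(np.isnan(X_missing), col_means, X_missing)

# mean absolute error measured only on the hidden entries
baseline_mae = np.abs(X_filled[mask] - X[mask]).mean()
```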
Step15: Next, we can see how good an IterativeSVD approach is, which is similar to a matrix factorization.
Step16: Now, we can try building a Bayesian network using the Chow-Liu algorithm and then use the resulting network to fill in the matrix.
Step17: We can compare this to a better imputation approach, that of K-nearest neighbors, and see how good using a Bayesian network is.
Step18: Looks like in this case the Bayesian network is better than KNN for imputing the pixels. It is, however, slower than the fancyimpute methods.
The API
Initialization
While the methods are similar across all models in pomegranate, Bayesian networks are more closely related to hidden Markov models than the other models. This makes sense, because both of them rely on graphical structures.
The first way to initialize Bayesian networks is to pass in one distribution and state at a time, and then pass in edges. This is similar to hidden Markov models.
Step19: The other way is to use the from_samples method if given a data set.
Step20: The structure learning algorithms are covered more in depth in the accompanying notebook.
We can look at the structure of the network by using the structure attribute. Each tuple is a node, and the integers in the tuple correspond to the parents of the node.
Step21: Prediction
The prediction method is similar to the other models. Inference is done using loopy belief propagation, which is an approximate version of the forward-backward algorithm that can be significantly faster while still being accurate.
Step22: The predict method will simply return the argmax of all the distributions after running the predict_proba method. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import numpy
from pomegranate import *
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
Explanation: Bayesian Networks
author: Jacob Schreiber <br>
contact: jmschreiber91@gmail.com
Bayesian networks are a powerful inference tool, in which a set of variables are represented as nodes, and the lack of an edge represents a conditional independence statement between the two variables, and an edge represents a dependence between the two variables. One of the powerful components of a Bayesian network is the ability to infer the values of certain variables, given observed values for another set of variables. These are referred to as the 'hidden' and 'observed' variables respectively, and need not be set at the time the network is created. The same network can have a different set of variables be hidden or observed between two data points. The more values which are observed, the closer the inferred values will be to the truth.
While Bayesian networks can in general have complex emission probabilities, such as Gaussian or conditional Gaussian distributions, pomegranate currently supports only discrete Bayesian networks. Bayesian networks are explicitly turned into Factor Graphs when inference is done, wherein the Bayesian network is turned into a bipartite graph with all variables having marginal nodes on one side, and joint tables on the other.
End of explanation
# The guests initial door selection is completely random
guest = DiscreteDistribution({'A': 1./3, 'B': 1./3, 'C': 1./3})
# The door the prize is behind is also completely random
prize = DiscreteDistribution({'A': 1./3, 'B': 1./3, 'C': 1./3})
# Monty is dependent on both the guest and the prize.
monty = ConditionalProbabilityTable(
[[ 'A', 'A', 'A', 0.0 ],
[ 'A', 'A', 'B', 0.5 ],
[ 'A', 'A', 'C', 0.5 ],
[ 'A', 'B', 'A', 0.0 ],
[ 'A', 'B', 'B', 0.0 ],
[ 'A', 'B', 'C', 1.0 ],
[ 'A', 'C', 'A', 0.0 ],
[ 'A', 'C', 'B', 1.0 ],
[ 'A', 'C', 'C', 0.0 ],
[ 'B', 'A', 'A', 0.0 ],
[ 'B', 'A', 'B', 0.0 ],
[ 'B', 'A', 'C', 1.0 ],
[ 'B', 'B', 'A', 0.5 ],
[ 'B', 'B', 'B', 0.0 ],
[ 'B', 'B', 'C', 0.5 ],
[ 'B', 'C', 'A', 1.0 ],
[ 'B', 'C', 'B', 0.0 ],
[ 'B', 'C', 'C', 0.0 ],
[ 'C', 'A', 'A', 0.0 ],
[ 'C', 'A', 'B', 1.0 ],
[ 'C', 'A', 'C', 0.0 ],
[ 'C', 'B', 'A', 1.0 ],
[ 'C', 'B', 'B', 0.0 ],
[ 'C', 'B', 'C', 0.0 ],
[ 'C', 'C', 'A', 0.5 ],
[ 'C', 'C', 'B', 0.5 ],
[ 'C', 'C', 'C', 0.0 ]], [guest, prize])
Explanation: The Monty Hall Gameshow
The Monty Hall problem arose from the gameshow <i>Let's Make a Deal</i>, where a guest had to choose which one of three doors had a prize behind it. The twist was that after the guest chose, the host, originally Monty Hall, would then open one of the doors the guest did not pick and ask if the guest wanted to switch which door they had picked. Initial inspection may lead you to believe that if there are only two doors left, there is a 50-50 chance of you picking the right one, and so there is no advantage one way or the other. However, it has been proven both through simulations and analytically that there is in fact a 2/3 chance of getting the prize if the guest switches their door, regardless of the door they initially went with.
We can reproduce this result using Bayesian networks with three nodes, one for the guest, one for the prize, and one for the door Monty chooses to open. The door the guest initially chooses and the door the prize is behind are completely random processes across the three doors, but the door which Monty opens is dependent on both the door the guest chooses (it cannot be the door the guest chooses), and the door the prize is behind (it cannot be the door with the prize behind it).
To create the Bayesian network in pomegranate, we first create the distributions which live in each node in the graph. For a discrete (aka categorical) Bayesian network we use DiscreteDistribution objects for the root nodes and ConditionalProbabilityTable objects for the inner and leaf nodes. The columns in a ConditionalProbabilityTable correspond to the order in which the parents (the second argument) are specified, and the last column is the value the ConditionalProbabilityTable itself takes. In the case below, the first column corresponds to the value 'guest' takes, then the value 'prize' takes, and then the value that 'monty' takes. 'B', 'C', 'A' refers then to the probability that Monty reveals door 'A' given that the guest has chosen door 'B' and that the prize is actually behind door 'C', or P(Monty='A'|Guest='B', Prize='C').
End of explanation
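To make the column-ordering convention concrete, a few rows of the same conditional table can be sketched as a plain Python dictionary keyed by the parent values (this is only an illustration of how to read the table, not pomegranate's internal representation, and `monty_cpt` is a made-up name):

```python
# Illustrative sketch only: two rows of the Monty CPT as a nested dict,
# keyed by the parent values (guest, prize), mapping to P(monty = door)
monty_cpt = {
    ('A', 'C'): {'A': 0.0, 'B': 1.0, 'C': 0.0},
    ('B', 'C'): {'A': 1.0, 'B': 0.0, 'C': 0.0},
}

# P(Monty='A' | Guest='B', Prize='C'), matching the row ['B', 'C', 'A', 1.0]
print(monty_cpt[('B', 'C')]['A'])  # prints 1.0
```

Note that each inner dictionary sums to one, since it is a proper conditional distribution over Monty's choice.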
# State objects hold both the distribution, and a high level name.
s1 = State(guest, name="guest")
s2 = State(prize, name="prize")
s3 = State(monty, name="monty")
Explanation: Next, we pass these distributions into state objects along with the name for the node.
End of explanation
# Create the Bayesian network object with a useful name
model = BayesianNetwork("Monty Hall Problem")
# Add the three states to the network
model.add_states(s1, s2, s3)
Explanation: Then we add the states to the network, exactly like we did when making a HMM. In the future, all matrices of data should have their columns organized in the same order that the states are added to the network. The way the states are added to the network makes no difference to it, and so you should add the states according to how the columns are organized in your data.
End of explanation
# Add edges which represent conditional dependencies, where the second node is
# conditionally dependent on the first node (Monty is dependent on both guest and prize)
model.add_edge(s1, s3)
model.add_edge(s2, s3)
Explanation: Then we need to add edges to the model. The edges represent which states are parents of which other states. This is currently a bit redundant with passing in the distribution objects that are parents for each ConditionalProbabilityTable. For now edges are added from parent -> child by calling model.add_edge(parent, child).
End of explanation
model.bake()
Explanation: Lastly, the model must be baked to finalize the internals. Since Bayesian networks use factor graphs for inference, an explicit factor graph is produced from the Bayesian network during the bake step.
End of explanation
model.probability([['A', 'B', 'C']])
Explanation: Predicting Probabilities
We can calculate probabilities of a sample under the Bayesian network in the same way that we can calculate probabilities under other models. In this case, let's calculate the probability that you initially said door A, that Monty then opened door B, but that the actual car was behind door C.
End of explanation
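As a sanity check, the same number can be computed by hand with the chain rule, since the sample fully specifies all three variables (a quick sketch using the table values above):

```python
# P(guest='A', prize='C', monty='B') = P(guest='A') * P(prize='C') * P(monty='B' | 'A', 'C')
p = (1 / 3) * (1 / 3) * 1.0  # the last term is the CPT row ['A', 'C', 'B', 1.0]
print(round(p, 4))  # 0.1111, i.e. 1/9
```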
model.probability([['A', 'B', 'B']])
Explanation: That seems in line with what we know, that there is a 1/9th probability of that happening.
Next, let's look at an impossible situation. What is the probability of initially saying door A, that Monty opened door B, and that the car was actually behind door B?
End of explanation
model.predict_proba({})
Explanation: The reason that situation is impossible is because Monty can't open a door that has the car behind it.
Performing Inference
Perhaps the most powerful aspect of Bayesian networks is their ability to perform inference. Given any set of observed variables, including no observations, Bayesian networks can make predictions for all other variables. Obviously, the more variables that are observed, the more accurate the predictions will get of the remaining variables.
pomegranate uses the loopy belief propagation algorithm to do inference. This is an approximate algorithm which can yield exact results in tree-like graphs, and in most other cases still yields good results. Inference on a Bayesian network consists of taking in observations for a subset of the variables and using that to infer the values that the other variables take. The more variables that are observed, the closer the inferred values will be to the truth. One of the powers of Bayesian networks is that the set of observed and 'hidden' (or unobserved) variables does not need to be specified beforehand, and can change from sample to sample.
We can run inference using the predict_proba method and passing in a dictionary of values, where the key is the name of the state and the value is the observed value for that state. If we don't supply any values, we get the marginal of the graph, which is just the frequency of each value for each variable over an infinite number of randomly drawn samples from the graph.
Lets see what happens when we look at the marginal of the Monty hall network.
End of explanation
model.predict_proba([[None, None, None]])
Explanation: We are returned three DiscreteDistribution objects, each representing the marginal distribution for each variable, in the same order they were put into the model. In this case, they represent the guest, prize, and monty variables respectively. We see that everything is equally likely.
We can also pass in an array where None (or np.nan) correspond to the unobserved values.
End of explanation
model.predict_proba([['A', None, None]])
Explanation: All of the variables that were observed will be the observed value, and all of the variables that were unobserved will be a DiscreteDistribution object. This means that parameters[0] will return the underlying dictionary used by the distribution.
Now let's do something different, and say that the guest has chosen door 'A'. We do this by passing a dictionary to predict_proba with key pairs consisting of the name of the state (in the state object), and the value which that variable has taken, or by passing in a list where the first index is our observation.
End of explanation
model.predict_proba([{'guest': 'A', 'monty': 'C'}])
Explanation: We can see that now Monty will not open door 'A', because the guest has chosen it. At the same time, the distribution over the prize has not changed, it is still equally likely that the prize is behind each door.
Now, let's say that Monty opens door 'C' and see what happens. Here we use a dictionary rather than a list simply to show how one can use both input forms depending on what is more convenient.
End of explanation
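The 50-50 split over doors 'B' and 'C' can be verified by hand: with only the guest observed, Monty's marginal comes from summing the CPT over the still-uniform prize (a sketch using the table values above):

```python
# P(Monty='B' | Guest='A') = sum over prize of P(prize) * P(Monty='B' | 'A', prize)
# CPT rows used: ['A', 'A', 'B', 0.5], ['A', 'B', 'B', 0.0], ['A', 'C', 'B', 1.0]
p_monty_b = (1 / 3) * 0.5 + (1 / 3) * 0.0 + (1 / 3) * 1.0
print(round(p_monty_b, 3))  # 0.5
```

The symmetric calculation for door 'C' also gives 0.5, and for door 'A' it gives 0, matching the inferred marginal.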
from sklearn.datasets import load_digits
data = load_digits()
X, _ = data.data, data.target
plt.imshow(X[0].reshape(8, 8))
plt.grid(False)
plt.show()
X = X[:,:16]
X = (X > numpy.median(X)).astype('float64')
Explanation: Suddenly, we see that the distribution over prizes has changed. It is now twice as likely that the car is behind the door labeled 'B'. This demonstrates that when on the game show, it is always better to change your initial guess after being shown an open door. Now you could go and win tons of cars, except that the game show got cancelled.
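The "twice as likely" result follows directly from Bayes' rule, which we can check by hand: the posterior over the prize is proportional to the prior times the CPT entry for Monty opening 'C' given the guest chose 'A' (a sketch using the table values above):

```python
# Posterior P(prize | guest='A', monty='C') is proportional to
# P(prize) * P(monty='C' | 'A', prize)
# CPT rows used: ['A', 'A', 'C', 0.5], ['A', 'B', 'C', 1.0], ['A', 'C', 'C', 0.0]
unnorm = {'A': (1 / 3) * 0.5, 'B': (1 / 3) * 1.0, 'C': (1 / 3) * 0.0}
total = sum(unnorm.values())
posterior = {door: p / total for door, p in unnorm.items()}
print({door: round(p, 3) for door, p in posterior.items()})  # {'A': 0.333, 'B': 0.667, 'C': 0.0}
```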
Imputation Given Structured Constraints
The task of filling in an incomplete matrix can be called imputation, and there are many approaches for doing so. One of the most well known is that of matrix factorization, where a latent representation is learned for each of the columns and each of the rows such that the dot product between the two can reconstruct values in the matrix. Due to the manner that these latent representations are learned, the matrix does not need to be complete, and the dot product can then be used to fill in all of the missing values.
One weakness of the matrix factorization approach is that constraints and known relationships can't be enforced between the features. A simple example is that the rule "when column 1 is 'A' and column 2 is 'B', column 3 must be 'C'" can potentially be learned in the representation, but can't be simply hard-coded like a conditional probability table would. A more complex example would say that a pixel in an image can be predicted from its neighbors, whereas the notion of neighbors is more difficult to specify for a factorization approach.
The process for imputing data given a Bayesian network is to either first learn the structure of the network from the given data, or have a known structure, and then use loopy belief propagation to predict the best values for the missing features.
Let's see an example of this on the digits data set, binarizing the data based on the median value. We'll only use the first two rows because learning large, unconstrained Bayesian networks is difficult.
End of explanation
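The matrix-factorization idea mentioned above can be sketched in a few lines: once latent row and column representations are known, any entry, observed or missing, is reconstructed by a dot product (a toy illustration with made-up factors `u` and `v`, not fancyimpute's actual algorithm):

```python
import numpy

# Toy rank-2 factorization: latent row vectors u and latent column vectors v
rng = numpy.random.default_rng(0)
u = rng.normal(size=(5, 2))   # latent representation of each row
v = rng.normal(size=(2, 4))   # latent representation of each column
M = u @ v                     # the matrix these factors reconstruct exactly

# a "missing" entry could be imputed from the factors alone
print(numpy.allclose(M[2, 3], u[2] @ v[:, 3]))  # True
```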
numpy.random.seed(111)
i = numpy.random.randint(X.shape[0], size=10000)
j = numpy.random.randint(X.shape[1], size=10000)
X_missing = X.copy()
X_missing[i, j] = numpy.nan
X_missing
Explanation: Now let's remove a large portion of the pixels randomly from each of the images. We can do that with numpy arrays by setting missing values to np.nan.
End of explanation
from fancyimpute import SimpleFill
y_pred = SimpleFill().fit_transform(X_missing)[i, j]
numpy.abs(y_pred - X[i, j]).mean()
Explanation: We can set up a baseline for how good an imputation is by using the average pixel value as a replacement. Because this is binary data, we can use the mean absolute error to measure how good these approaches are on imputing the pixels that are not observed.
End of explanation
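The mean-fill baseline is easy to reproduce without fancyimpute; a minimal sketch on a tiny made-up matrix replaces each NaN with its column mean:

```python
import numpy

X_toy = numpy.array([[1.0, numpy.nan],
                     [0.0, 1.0],
                     [1.0, 0.0]])
col_means = numpy.nanmean(X_toy, axis=0)                    # column means over observed entries
filled = numpy.where(numpy.isnan(X_toy), col_means, X_toy)  # substitute the mean for each NaN
print(filled[0, 1])  # 0.5, the mean of the observed values in that column
```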
from fancyimpute import IterativeSVD
y_pred = IterativeSVD(verbose=False).fit_transform(X_missing)[i, j]
numpy.abs(y_pred - X[i, j]).mean()
Explanation: Next, we can see how good an IterativeSVD approach is, which is similar to a matrix factorization.
End of explanation
y_hat = BayesianNetwork.from_samples(X_missing, max_parents=1).predict(X_missing)
numpy.abs(numpy.array(y_hat)[i, j] - X[i, j]).mean()
Explanation: Now, we can try building a Bayesian network using the Chow-Liu algorithm and then use the resulting network to fill in the matrix.
End of explanation
from fancyimpute import KNN
y_pred = KNN(verbose=False).fit_transform(X_missing)[i, j]
numpy.abs(y_pred - X[i, j]).mean()
Explanation: We can compare this to a better imputation approach, that of K-nearest neighbors, and see how good using a Bayesian network is.
End of explanation
d1 = DiscreteDistribution({True: 0.2, False: 0.8})
d2 = DiscreteDistribution({True: 0.6, False: 0.4})
d3 = ConditionalProbabilityTable(
[[True, True, True, 0.2],
[True, True, False, 0.8],
[True, False, True, 0.3],
[True, False, False, 0.7],
[False, True, True, 0.9],
[False, True, False, 0.1],
[False, False, True, 0.4],
[False, False, False, 0.6]], [d1, d2])
s1 = State(d1, name="s1")
s2 = State(d2, name="s2")
s3 = State(d3, name="s3")
model = BayesianNetwork()
model.add_states(s1, s2, s3)
model.add_edge(s1, s3)
model.add_edge(s2, s3)
model.bake()
Explanation: Looks like in this case the Bayesian network is better than KNN for imputing the pixels. It is, however, slower than the fancyimpute methods.
The API
Initialization
While the methods are similar across all models in pomegranate, Bayesian networks are more closely related to hidden Markov models than the other models. This makes sense, because both of them rely on graphical structures.
The first way to initialize Bayesian networks is to pass in one distribution and state at a time, and then pass in edges. This is similar to hidden Markov models.
End of explanation
numpy.random.seed(111)
X = numpy.random.randint(2, size=(15, 15))
X[:,5] = X[:,4] = X[:,3]
X[:,11] = X[:,12] = X[:,13]
model = BayesianNetwork.from_samples(X)
model.plot()
Explanation: The other way is to use the from_samples method if given a data set.
End of explanation
model.structure
Explanation: The structure learning algorithms are covered more in depth in the accompanying notebook.
We can look at the structure of the network by using the structure attribute. Each tuple is a node, and the integers in the tuple correspond to the parents of the node.
End of explanation
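For instance, a structure tuple can be unpacked into an explicit parent-to-child edge list (a small sketch on a hypothetical structure, not the one learned above):

```python
# ((), (0,), (0, 1)) means: node 0 has no parents, node 1's parent is node 0,
# and node 2's parents are nodes 0 and 1
structure = ((), (0,), (0, 1))
edges = [(parent, child) for child, parents in enumerate(structure) for parent in parents]
print(edges)  # [(0, 1), (0, 2), (1, 2)]
```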
model.predict([[False, False, False, False, None, None, False, None, False, None, True, None, None, True, False]])
Explanation: Prediction
The prediction method is similar to the other models. Inference is done using loopy belief propagation, which is an approximate version of the forward-backward algorithm that can be significantly faster while still being accurate.
End of explanation
model.predict_proba([[False, False, False, False, None, None, False, None, False, None,
True, None, None, True, False]])
Explanation: The predict method will simply return the argmax of all the distributions after running the predict_proba method.
End of explanation |
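That argmax step is easy to see on a single marginal; a sketch of reducing one predicted distribution (a made-up example, not actual model output) to a point prediction:

```python
# A toy marginal of the kind predict_proba might return for one unobserved variable
marginal = {True: 0.7, False: 0.3}
prediction = max(marginal, key=marginal.get)  # the argmax that predict() would report
print(prediction)  # True
```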
9,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses)
Step1: Note
Step2: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
Step3: Dynamic factors
A general dynamic factor model is written as
Step4: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference
Step5: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons
Step6: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given, regressing them (and a constant) each (one at a time) on each of the observed variables, and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in industrial production index and a large portion of the variation in sales and employment, it is less helpful in explaining income.
Step7: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
Step8: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
Step9: Appendix 1
Step10: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
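The squaring idea can be sketched with hypothetical standalone helpers (illustrations of the concept only, not statsmodels' actual transform_params/untransform_params implementations): the optimizer works with an unconstrained value whose square becomes the variance, and the square root maps back.

```python
import numpy as np

def transform_params(unconstrained):
    # hypothetical sketch: the last element is a variance, forced positive by squaring
    constrained = np.array(unconstrained, dtype=float)
    constrained[-1] = constrained[-1] ** 2
    return constrained

def untransform_params(constrained):
    # inverse map: take the square root so the optimizer sees an unconstrained value
    unconstrained = np.array(constrained, dtype=float)
    unconstrained[-1] = unconstrained[-1] ** 0.5
    return unconstrained

params = np.array([0.5, -0.2, 2.0])  # e.g. two loadings and a variance
roundtrip = transform_params(untransform_params(params))
print(np.allclose(roundtrip, params))  # True: the two maps are inverses
```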
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons
Step11: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
np.set_printoptions(precision=4, suppress=True, linewidth=120)
from pandas_datareader.data import DataReader
# Get the datasets from FRED
start = '1979-01-01'
end = '2014-12-01'
indprod = DataReader('IPMAN', 'fred', start=start, end=end)
income = DataReader('W875RX1', 'fred', start=start, end=end)
sales = DataReader('CMRMTSPL', 'fred', start=start, end=end)
emp = DataReader('PAYEMS', 'fred', start=start, end=end)
# dta = pd.concat((indprod, income, sales, emp), axis=1)
# dta.columns = ['indprod', 'income', 'sales', 'emp']
Explanation: Dynamic factors and coincident indices
Factor models generally try to find a small number of unobserved "factors" that influence a substantial portion of the variation in a larger number of observed variables, and they are related to dimension-reduction techniques such as principal components analysis. Dynamic factor models explicitly model the transition dynamics of the unobserved factors, and so are often applied to time-series data.
Macroeconomic coincident indices are designed to capture the common component of the "business cycle"; such a component is assumed to simultaneously affect many macroeconomic variables. Although the estimation and use of coincident indices (for example the Index of Coincident Economic Indicators) pre-dates dynamic factor models, in several influential papers Stock and Watson (1989, 1991) used a dynamic factor model to provide a theoretical foundation for them.
Below, we follow the treatment found in Kim and Nelson (1999), of the Stock and Watson (1991) model, to formulate a dynamic factor model, estimate its parameters via maximum likelihood, and create a coincident index.
Macroeconomic data
The coincident index is created by considering the comovements in four macroeconomic variables (versions of these variables are available on FRED; the ID of the series used below is given in parentheses):
Industrial production (IPMAN)
Real aggregate income (excluding transfer payments) (W875RX1)
Manufacturing and trade sales (CMRMTSPL)
Employees on non-farm payrolls (PAYEMS)
In all cases, the data is at the monthly frequency and has been seasonally adjusted; the time-frame considered is 1972 - 2005.
End of explanation
# HMRMT = DataReader('HMRMT', 'fred', start='1967-01-01', end=end)
# CMRMT = DataReader('CMRMT', 'fred', start='1997-01-01', end=end)
# HMRMT_growth = HMRMT.diff() / HMRMT.shift()
# sales = pd.Series(np.zeros(emp.shape[0]), index=emp.index)
# # Fill in the recent entries (1997 onwards)
# sales[CMRMT.index] = CMRMT
# # Backfill the previous entries (pre 1997)
# idx = sales.loc[:'1997-01-01'].index
# for t in range(len(idx)-1, 0, -1):
# month = idx[t]
# prev_month = idx[t-1]
# sales.loc[prev_month] = sales.loc[month] / (1 + HMRMT_growth.loc[prev_month].values)
dta = pd.concat((indprod, income, sales, emp), axis=1)
dta.columns = ['indprod', 'income', 'sales', 'emp']
dta.loc[:, 'indprod':'emp'].plot(subplots=True, layout=(2, 2), figsize=(15, 6));
Explanation: Note: in a recent update on FRED (8/12/15) the time series CMRMTSPL was truncated to begin in 1997; this is probably a mistake due to the fact that CMRMTSPL is a spliced series, so the earlier period is from the series HMRMT and the latter period is defined by CMRMT.
This has since (02/11/16) been corrected; however, the series could also be constructed by hand from HMRMT and CMRMT, as shown below (process taken from the notes in the Alfred xls file).
End of explanation
# Create log-differenced series
dta['dln_indprod'] = (np.log(dta.indprod)).diff() * 100
dta['dln_income'] = (np.log(dta.income)).diff() * 100
dta['dln_sales'] = (np.log(dta.sales)).diff() * 100
dta['dln_emp'] = (np.log(dta.emp)).diff() * 100
# De-mean and standardize
dta['std_indprod'] = (dta['dln_indprod'] - dta['dln_indprod'].mean()) / dta['dln_indprod'].std()
dta['std_income'] = (dta['dln_income'] - dta['dln_income'].mean()) / dta['dln_income'].std()
dta['std_sales'] = (dta['dln_sales'] - dta['dln_sales'].mean()) / dta['dln_sales'].std()
dta['std_emp'] = (dta['dln_emp'] - dta['dln_emp'].mean()) / dta['dln_emp'].std()
Explanation: Stock and Watson (1991) report that for their datasets, they could not reject the null hypothesis of a unit root in each series (so the series are integrated), but they did not find strong evidence that the series were co-integrated.
As a result, they suggest estimating the model using the first differences (of the logs) of the variables, demeaned and standardized.
End of explanation
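The same de-meaning and standardization can be written as one vectorized pandas expression; a quick sketch on toy data (made-up numbers, not the FRED series above):

```python
import pandas as pd

toy = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [10.0, 30.0, 20.0]})
std = (toy - toy.mean()) / toy.std()   # de-mean and standardize every column at once
print(std['a'].tolist())  # [-1.0, 0.0, 1.0]
```

After this step each column has mean zero and unit (sample) standard deviation.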
# Get the endogenous data
endog = dta.loc['1979-02-01':, 'std_indprod':'std_emp']
# Create the model
mod = sm.tsa.DynamicFactor(endog, k_factors=1, factor_order=2, error_order=2)
initial_res = mod.fit(method='powell', disp=False)
res = mod.fit(initial_res.params, disp=False)
Explanation: Dynamic factors
A general dynamic factor model is written as:
$$
\begin{align}
y_t & = \Lambda f_t + B x_t + u_t \\
f_t & = A_1 f_{t-1} + \dots + A_p f_{t-p} + \eta_t \qquad \eta_t \sim N(0, I) \\
u_t & = C_1 u_{t-1} + \dots + C_q u_{t-q} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \Sigma)
\end{align}
$$
where $y_t$ are observed data, $f_t$ are the unobserved factors (evolving as a vector autoregression), $x_t$ are (optional) exogenous variables, and $u_t$ is the error, or "idiosyncratic", process ($u_t$ is also optionally allowed to be autocorrelated). The $\Lambda$ matrix is often referred to as the matrix of "factor loadings". The variance of the factor error term is set to the identity matrix to ensure identification of the unobserved factors.
This model can be cast into state space form, and the unobserved factor estimated via the Kalman filter. The likelihood can be evaluated as a byproduct of the filtering recursions, and maximum likelihood estimation used to estimate the parameters.
Model specification
The specific dynamic factor model in this application has 1 unobserved factor which is assumed to follow an AR(2) process. The innovations $\varepsilon_t$ are assumed to be independent (so that $\Sigma$ is a diagonal matrix) and the error term associated with each equation, $u_{i,t}$, is assumed to follow an independent AR(2) process.
Thus the specification considered here is:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \\
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \\
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I) \\
\end{align}
$$
where $i$ is one of: [indprod, income, sales, emp ].
This model can be formulated using the DynamicFactor model built-in to Statsmodels. In particular, we have the following specification:
k_factors = 1 - (there is 1 unobserved factor)
factor_order = 2 - (it follows an AR(2) process)
error_var = False - (the errors evolve as independent AR processes rather than jointly as a VAR - note that this is the default option, so it is not specified below)
error_order = 2 - (the errors are autocorrelated of order 2: i.e. AR(2) processes)
error_cov_type = 'diagonal' - (the innovations are uncorrelated; this is again the default)
Once the model is created, the parameters can be estimated via maximum likelihood; this is done using the fit() method.
Note: recall that we have de-meaned and standardized the data; this will be important in interpreting the results that follow.
Aside: in their empirical example, Kim and Nelson (1999) actually consider a slightly different model in which the employment variable is allowed to also depend on lagged values of the factor - this model does not fit into the built-in DynamicFactor class, but can be accommodated by using a subclass to implement the required new parameters and restrictions - see Appendix A, below.
Parameter estimation
Multivariate models can have a relatively large number of parameters, and it may be difficult to escape from local minima to find the maximized likelihood. In an attempt to mitigate this problem, I perform an initial maximization step (from the model-defined starting parameters) using the modified Powell method available in Scipy (see the minimize documentation for more information). The resulting parameters are then used as starting parameters in the standard LBFGS optimization method.
End of explanation
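To build intuition for the factor equation $f_t = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t$, here is a toy simulation with illustrative coefficients `a1` and `a2` chosen inside the stationary region (these are not the estimated values):

```python
import numpy as np

rng = np.random.default_rng(42)
a1, a2 = 0.5, 0.2          # illustrative AR(2) coefficients, stationary region
n = 300
f = np.zeros(n)
for t in range(2, n):
    f[t] = a1 * f[t - 1] + a2 * f[t - 2] + rng.normal()  # eta_t ~ N(0, 1)
print(f.shape)  # (300,)
```

Plotting such a path shows the smooth, persistent swings that the estimated factor is meant to capture.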
print(res.summary(separate_params=False))
Explanation: Estimates
Once the model has been estimated, there are two components that we can use for analysis or inference:
The estimated parameters
The estimated factor
Parameters
The estimated parameters can be helpful in understanding the implications of the model, although in models with a larger number of observed variables and / or unobserved factors they can be difficult to interpret.
One reason for this difficulty is due to identification issues between the factor loadings and the unobserved factors. One easy-to-see identification issue is the sign of the loadings and the factors: an equivalent model to the one displayed below would result from reversing the signs of all factor loadings and the unobserved factor.
Here, one of the easy-to-interpret implications in this model is the persistence of the unobserved factor: we find that it exhibits substantial persistence.
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, res.factors.filtered[0], label='Factor')
ax.legend()
# Retrieve and also plot the NBER recession indicators
rec = DataReader('USREC', 'fred', start=start, end=end)
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Estimated factors
While it can be useful to plot the unobserved factors, it is less useful here than one might think for two reasons:
The sign-related identification issue described above.
Since the data was differenced, the estimated factor explains the variation in the differenced data, not the original data.
It is for these reasons that the coincident index is created (see below).
With these reservations, the unobserved factor is plotted below, along with the NBER indicators for US recessions. It appears that the factor is successful at picking up some degree of business cycle activity.
End of explanation
res.plot_coefficients_of_determination(figsize=(8,2));
Explanation: Post-estimation
Although here we will be able to interpret the results of the model by constructing the coincident index, there is a useful and generic approach for getting a sense for what is being captured by the estimated factor. By taking the estimated factors as given and regressing each of the observed variables (one at a time) on them (and a constant), and recording the coefficients of determination ($R^2$ values), we can get a sense of the variables for which each factor explains a substantial portion of the variance and the variables for which it does not.
In models with more variables and more factors, this can sometimes lend interpretation to the factors (for example sometimes one factor will load primarily on real variables and another on nominal variables).
In this model, with only four endogenous variables and one factor, it is easy to digest a simple table of the $R^2$ values, but in larger models it is not. For this reason, a bar plot is often employed; from the plot we can easily see that the factor explains most of the variation in the industrial production index and a large portion of the variation in sales and employment, but is less helpful in explaining income.
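As a concrete illustration of this diagnostic, the $R^2$ for a single observed series can be computed by regressing it (with a constant) on the factor. The sketch below uses hypothetical stand-in arrays rather than the fitted `res.factors.filtered[0]`:

```python
import numpy as np

# Hypothetical stand-ins for the estimated factor and one observed series
factor = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
series = 2.0 * factor + 1.0            # perfectly explained by the factor

# Regress the series on a constant and the factor
X = np.column_stack([np.ones_like(factor), factor])
beta, *_ = np.linalg.lstsq(X, series, rcond=None)

# Coefficient of determination
resid = series - X @ beta
r2 = 1.0 - resid.var() / series.var()  # 1.0 here, since the fit is exact
```

Repeating this for each observed variable produces the bars shown by `plot_coefficients_of_determination`.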
End of explanation
usphci = DataReader('USPHCI', 'fred', start='1979-01-01', end='2014-12-01')['USPHCI']
usphci.plot(figsize=(13,3));
dusphci = usphci.diff()[1:].values
def compute_coincident_index(mod, res):
# Estimate W(1)
spec = res.specification
design = mod.ssm['design']
transition = mod.ssm['transition']
ss_kalman_gain = res.filter_results.kalman_gain[:,:,-1]
k_states = ss_kalman_gain.shape[0]
W1 = np.linalg.inv(np.eye(k_states) - np.dot(
np.eye(k_states) - np.dot(ss_kalman_gain, design),
transition
)).dot(ss_kalman_gain)[0]
# Compute the factor mean vector
factor_mean = np.dot(W1, dta.loc['1972-02-01':, 'dln_indprod':'dln_emp'].mean())
# Normalize the factors
factor = res.factors.filtered[0]
factor *= np.std(usphci.diff()[1:]) / np.std(factor)
# Compute the coincident index
coincident_index = np.zeros(mod.nobs+1)
# The initial value is arbitrary; here it is set to
# facilitate comparison
coincident_index[0] = usphci.iloc[0] * factor_mean / dusphci.mean()
for t in range(0, mod.nobs):
coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean
# Attach dates
coincident_index = pd.Series(coincident_index, index=dta.index).iloc[1:]
# Normalize to use the same base year as USPHCI
coincident_index *= (usphci.loc['1992-07-01'] / coincident_index.loc['1992-07-01'])
return coincident_index
Explanation: Coincident Index
As described above, the goal of this model was to create an interpretable series which could be used to understand the current status of the macroeconomy. This is what the coincident index is designed to do. It is constructed below. For readers interested in an explanation of the construction, see Kim and Nelson (1999) or Stock and Watson (1991).
In essence, what is done is to reconstruct the mean of the (differenced) factor. We will compare it to the coincident index published by the Federal Reserve Bank of Philadelphia (USPHCI on FRED).
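The recursion at the heart of the construction is just a cumulative sum of the factor plus its reconstructed mean. A minimal sketch with made-up numbers (not the actual estimates):

```python
# Hypothetical (differenced) factor values and reconstructed factor mean
factor = [0.5, -0.2, 0.3]
factor_mean = 0.1

# Accumulate, starting from an arbitrary initial level
index = [100.0]
for f in factor:
    index.append(index[-1] + f + factor_mean)
```

This mirrors the `coincident_index[t+1] = coincident_index[t] + factor[t] + factor_mean` loop in `compute_coincident_index`.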
End of explanation
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
coincident_index = compute_coincident_index(mod, res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, label='Coincident index')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Below we plot the calculated coincident index along with the US recessions and the comparison coincident index USPHCI.
End of explanation
from statsmodels.tsa.statespace import tools
class ExtendedDFM(sm.tsa.DynamicFactor):
def __init__(self, endog, **kwargs):
# Setup the model as if we had a factor order of 4
super(ExtendedDFM, self).__init__(
endog, k_factors=1, factor_order=4, error_order=2,
**kwargs)
# Note: `self.parameters` is an ordered dict with the
# keys corresponding to parameter types, and the values
# the number of parameters of that type.
# Add the new parameters
self.parameters['new_loadings'] = 3
# Cache a slice for the location of the 4 factor AR
# parameters (a_1, ..., a_4) in the full parameter vector
offset = (self.parameters['factor_loadings'] +
self.parameters['exog'] +
self.parameters['error_cov'])
self._params_factor_ar = np.s_[offset:offset+2]
self._params_factor_zero = np.s_[offset+2:offset+4]
@property
def start_params(self):
# Add three new loading parameters to the end of the parameter
# vector, initialized to zeros (for simplicity; they could
# be initialized any way you like)
return np.r_[super(ExtendedDFM, self).start_params, 0, 0, 0]
@property
def param_names(self):
# Add the corresponding names for the new loading parameters
# (the name can be anything you like)
return super(ExtendedDFM, self).param_names + [
'loading.L%d.f1.%s' % (i, self.endog_names[3]) for i in range(1,4)]
def transform_params(self, unconstrained):
# Perform the typical DFM transformation (w/o the new parameters)
constrained = super(ExtendedDFM, self).transform_params(
unconstrained[:-3])
# Redo the factor AR constraint, since we only want an AR(2),
# and the previous constraint was for an AR(4)
ar_params = unconstrained[self._params_factor_ar]
constrained[self._params_factor_ar] = (
tools.constrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[constrained, unconstrained[-3:]]
def untransform_params(self, constrained):
# Perform the typical DFM untransformation (w/o the new parameters)
unconstrained = super(ExtendedDFM, self).untransform_params(
constrained[:-3])
# Redo the factor AR unconstraint, since we only want an AR(2),
# and the previous unconstraint was for an AR(4)
ar_params = constrained[self._params_factor_ar]
unconstrained[self._params_factor_ar] = (
tools.unconstrain_stationary_univariate(ar_params))
# Return all the parameters
return np.r_[unconstrained, constrained[-3:]]
def update(self, params, transformed=True, complex_step=False):
# Peform the transformation, if required
if not transformed:
params = self.transform_params(params)
params[self._params_factor_zero] = 0
# Now perform the usual DFM update, but exclude our new parameters
super(ExtendedDFM, self).update(params[:-3], transformed=True, complex_step=complex_step)
# Finally, set our new parameters in the design matrix
self.ssm['design', 3, 1:4] = params[-3:]
Explanation: Appendix 1: Extending the dynamic factor model
Recall that the previous specification was described by:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Written in state space form, the previous specification of the model had the following observation equation:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
and transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
the DynamicFactor model handles setting up the state space representation and, in the DynamicFactor.update method, it fills in the fitted parameter values into the appropriate locations.
The extended specification is the same as in the previous example, except that we also want to allow employment to depend on lagged values of the factor. This creates a change to the $y_{\text{emp},t}$ equation. Now we have:
$$
\begin{align}
y_{i,t} & = \lambda_i f_t + u_{i,t} \qquad & i \in {\text{indprod}, \text{income}, \text{sales} }\
y_{i,t} & = \lambda_{i,0} f_t + \lambda_{i,1} f_{t-1} + \lambda_{i,2} f_{t-2} + \lambda_{i,3} f_{t-3} + u_{i,t} \qquad & i = \text{emp} \
u_{i,t} & = c_{i,1} u_{i,t-1} + c_{i,2} u_{i,t-2} + \varepsilon_{i,t} \qquad & \varepsilon_{i,t} \sim N(0, \sigma_i^2) \
f_t & = a_1 f_{t-1} + a_2 f_{t-2} + \eta_t \qquad & \eta_t \sim N(0, I)\
\end{align}
$$
Now, the corresponding observation equation should look like the following:
$$
\begin{bmatrix}
y_{\text{indprod}, t} \
y_{\text{income}, t} \
y_{\text{sales}, t} \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\lambda_\text{indprod} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{income} & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{sales} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
\lambda_\text{emp,1} & \lambda_\text{emp,2} & \lambda_\text{emp,3} & \lambda_\text{emp,4} & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix}
$$
Notice that we have introduced two new state variables, $f_{t-2}$ and $f_{t-3}$, which means we need to update the transition equation:
$$
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
u_{\text{indprod}, t} \
u_{\text{income}, t} \
u_{\text{sales}, t} \
u_{\text{emp}, t} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & c_{\text{indprod}, 1} & 0 & 0 & 0 & c_{\text{indprod}, 2} & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & c_{\text{income}, 1} & 0 & 0 & 0 & c_{\text{income}, 2} & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & c_{\text{sales}, 1} & 0 & 0 & 0 & c_{\text{sales}, 2} & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & c_{\text{emp}, 1} & 0 & 0 & 0 & c_{\text{emp}, 2} \
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
u_{\text{indprod}, t-1} \
u_{\text{income}, t-1} \
u_{\text{sales}, t-1} \
u_{\text{emp}, t-1} \
u_{\text{indprod}, t-2} \
u_{\text{income}, t-2} \
u_{\text{sales}, t-2} \
u_{\text{emp}, t-2} \
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
This model cannot be handled out-of-the-box by the DynamicFactor class, but it can be handled by creating a subclass that alters the state space representation in the appropriate way.
First, notice that if we had set factor_order = 4, we would almost have what we wanted. In that case, the last line of the observation equation would be:
$$
\begin{bmatrix}
\vdots \
y_{\text{emp}, t} \
\end{bmatrix} = \begin{bmatrix}
\vdots & & & & & & & & & & & \vdots \
\lambda_\text{emp,1} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \
\end{bmatrix}
\begin{bmatrix}
f_t \
f_{t-1} \
f_{t-2} \
f_{t-3} \
\vdots
\end{bmatrix}
$$
and the first line of the transition equation would be:
$$
\begin{bmatrix}
f_t \
\vdots
\end{bmatrix} = \begin{bmatrix}
a_1 & a_2 & a_3 & a_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \
\vdots & & & & & & & & & & & \vdots \
\end{bmatrix}
\begin{bmatrix}
f_{t-1} \
f_{t-2} \
f_{t-3} \
f_{t-4} \
\vdots
\end{bmatrix}
+ R \begin{bmatrix}
\eta_t \
\varepsilon_{t}
\end{bmatrix}
$$
Relative to what we want, we have the following differences:
In the above situation, the $\lambda_{\text{emp}, j}$ are forced to be zero for $j > 0$, and we want them to be estimated as parameters.
We only want the factor to transition according to an AR(2), but under the above situation it is an AR(4).
Our strategy will be to subclass DynamicFactor, and let it do most of the work (setting up the state space representation, etc.) where it assumes that factor_order = 4. The only things we will actually do in the subclass will be to fix those two issues.
First, here is the full code of the subclass; it is discussed below. It is important to note at the outset that none of the methods defined below could have been omitted. In fact, the methods __init__, start_params, param_names, transform_params, untransform_params, and update form the core of all state space models in Statsmodels, not just the DynamicFactor class.
End of explanation
# Create the model
extended_mod = ExtendedDFM(endog)
initial_extended_res = extended_mod.fit(maxiter=1000, disp=False)
extended_res = extended_mod.fit(initial_extended_res.params, method='nm', maxiter=1000)
print(extended_res.summary(separate_params=False))
Explanation: So what did we just do?
__init__
The important step here was specifying the base dynamic factor model which we were operating with. In particular, as described above, we initialize with factor_order=4, even though we will only end up with an AR(2) model for the factor. We also performed some general setup-related tasks.
start_params
start_params are used as initial values in the optimizer. Since we are adding three new parameters, we need to pass those in. If we hadn't done this, the optimizer would use the default starting values, which would be three elements short.
param_names
param_names are used in a variety of places, but especially in the results class. Below we get a full result summary, which is only possible when all the parameters have associated names.
transform_params and untransform_params
The optimizer selects possible parameter values in an unconstrained way. That's not usually desired (since variances can't be negative, for example), and transform_params is used to transform the unconstrained values used by the optimizer to constrained values appropriate to the model. Variance terms are typically squared (to force them to be positive), and AR lag coefficients are often constrained to lead to a stationary model. untransform_params is used for the reverse operation (and is important because starting parameters are usually specified in terms of values appropriate to the model, and we need to convert them to parameters appropriate to the optimizer before we can begin the optimization routine).
Even though we don't need to transform or untransform our new parameters (the loadings can in theory take on any values), we still need to modify this function for two reasons:
The version in the DynamicFactor class is expecting 3 fewer parameters than we have now. At a minimum, we need to handle the three new parameters.
The version in the DynamicFactor class constrains the factor lag coefficients to be stationary as though it was an AR(4) model. Since we actually have an AR(2) model, we need to re-do the constraint. We also set the last two autoregressive coefficients to be zero here.
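For intuition, the AR(1) case of such a stationarity constraint can be written as a smooth bijection between the real line and $(-1, 1)$; `tools.constrain_stationary_univariate` generalizes this to AR(p). Below is an illustrative sketch of the idea, not the statsmodels implementation:

```python
import math

def constrain(u):
    # Map an unconstrained real number into (-1, 1), so an AR(1)
    # coefficient produced from it is always stationary
    return u / math.sqrt(1.0 + u * u)

def unconstrain(c):
    # Inverse map, converting a model-space coefficient back to the
    # unconstrained space the optimizer works in
    return c / math.sqrt(1.0 - c * c)
```

Round-tripping a value through both functions recovers it, which is exactly the property that `transform_params`/`untransform_params` rely on.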
update
The most important reason we need to specify a new update method is because we have three new parameters that we need to place into the state space formulation. In particular we let the parent DynamicFactor.update class handle placing all the parameters except the three new ones in to the state space representation, and then we put the last three in manually.
End of explanation
extended_res.plot_coefficients_of_determination(figsize=(8,2));
fig, ax = plt.subplots(figsize=(13,3))
# Compute the index
extended_coincident_index = compute_coincident_index(extended_mod, extended_res)
# Plot the factor
dates = endog.index._mpl_repr()
ax.plot(dates, coincident_index, '-', linewidth=1, label='Basic model')
ax.plot(dates, extended_coincident_index, '--', linewidth=3, label='Extended model')
ax.plot(usphci.index._mpl_repr(), usphci, label='USPHCI')
ax.legend(loc='lower right')
ax.set(title='Coincident indices, comparison')
# Retrieve and also plot the NBER recession indicators
ylim = ax.get_ylim()
ax.fill_between(dates[:-3], ylim[0], ylim[1], rec.values[:-4,0], facecolor='k', alpha=0.1);
Explanation: Although this model increases the likelihood, it is not preferred by the AIC and BIC measures which penalize the additional three parameters.
Furthermore, the qualitative results are unchanged, as we can see from the updated $R^2$ chart and the new coincident index, both of which are practically identical to the previous results.
End of explanation |
9,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computational Communication and Machine Learning
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http
Step1: Logistic regression with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http
Step2: Naive Bayes with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http
Step3: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, ...]) Naive Bayes classifier for multinomial models
naive_bayes.BernoulliNB([alpha, binarize, ...]) Naive Bayes classifier for multivariate Bernoulli models.
Step4: cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”
Step5: Decision trees with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http
Step6: Support vector machines (SVM) with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http
%matplotlib inline
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
import sklearn
print sklearn.__version__
# boston data
boston = datasets.load_boston()
y = boston.target
' '.join(dir(boston))
boston['feature_names']
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
lm.intercept_, lm.coef_, lm.score(boston.data, y)
import pandas as pd
df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
def randomSplit(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append([dataX[k]])
dataY_test.append(dataY[k])
else:
dataX_train.append([dataX[k]])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
import numpy as np
# Use only one feature
data_X = df.reply
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1), np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % regr.score(data_X_test, data_y_test)
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
print 'Coefficients: \n', regr.coef_
# The mean square error
print "Residual sum of squares: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, [[c] for c in df.click], df.reply, cv = 3)
scores.mean()
from sklearn.cross_validation import cross_val_score
x = [[c] for c in np.log(df.click +0.1)]
y = np.log(df.reply+0.1)
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, x, y , cv = 3)
scores.mean()
Explanation: Computational Communication and Machine Learning
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
1. Supervised learning
How it works:
- The algorithm involves a target or outcome variable (the dependent variable),
- which is predicted from a known set of predictor (independent) variables.
- Using these variables, we learn a function that maps input values to the desired output values.
- Training continues until the model reaches the desired accuracy on the training data.
- Examples of supervised learning: regression, decision trees, random forests, k-nearest neighbors, logistic regression, etc.
2. Unsupervised learning
How it works:
- There is no target or outcome variable to predict or estimate.
- The algorithm is used to cluster observations into different groups.
- This kind of analysis is widely used to segment customers into different groups for targeted interventions.
- Examples of unsupervised learning: association rule algorithms and the k-means algorithm.
3. Reinforcement learning
How it works:
- The algorithm trains the machine to make decisions.
- It works like this: the machine is placed in an environment where it can train itself through repeated trial and error.
- The machine learns from past experience and tries to use the best available knowledge to make accurate decisions.
- An example of reinforcement learning is the Markov decision process (e.g., AlphaGo).
<img src = './img/mlprocess.png' width = 800>
Linear regression
Logistic regression
Decision trees
SVM
Naive Bayes
k-nearest neighbors
k-means
Random forests
Dimensionality reduction
Gradient Boosting and AdaBoost
Linear regression with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Linear regression
Typically used to estimate real values of continuous variables (house prices, number of calls, total sales, etc.).
It establishes the relationship between the independent variable X and the dependent variable Y by fitting a best line.
This best line is called the regression line, and is expressed by the linear equation $Y= \beta *X + C$.
The coefficients $\beta$ and $C$ can be obtained by least squares.
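The least-squares slope and intercept can be written out directly; a minimal pure-Python sketch with made-up data:

```python
# Toy data lying exactly on the line y = 2x + 1
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0 * v + 1.0 for v in x]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Closed-form least-squares estimates of slope and intercept
beta = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
       / sum((xi - mean_x) ** 2 for xi in x)
c = mean_y - beta * mean_x
```

`sklearn.linear_model.LinearRegression` solves the same problem (in matrix form) for any number of predictors.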
End of explanation
repost = []
for i in df.title:
if u'转载' in i.decode('utf8'):
repost.append(1)
else:
repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X, df.repost)
model.score(data_X,df.repost)
def randomSplitLogistic(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append(dataX[k])
dataY_test.append(dataY[k])
else:
dataX_train.append(dataX[k])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create linear regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print 'Variance score: %.2f' % log_regr.score(data_X_test, data_y_test)
logre = LogisticRegression()
scores = cross_val_score(logre, data_X,df.repost, cv = 3)
scores.mean()
Explanation: Logistic regression with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Logistic regression is a classification algorithm, not a regression algorithm.
It estimates discrete values (binary values such as 0 or 1, yes or no, true or false) from a given set of independent variables.
Simply put, it estimates the probability of an event occurring by fitting the data to a logistic function.
Hence it is also called logistic regression. Since it estimates a probability, its output always lies between 0 and 1 (as expected).
$$odds= \frac{p}{1-p} = \frac{probability\: of\: event\: occurrence} {probability\: of\: the\: event\: not\: occurring}$$
$$ln(odds)= ln(\frac{p}{1-p})$$
$$logit(p) = ln(\frac{p}{1-p}) = b_0+b_1X_1+b_2X_2+b_3X_3....+b_kX_k$$
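The logit and its inverse (the logistic, or sigmoid, function) can be sketched directly:

```python
import math

def sigmoid(z):
    # Inverse of the logit: maps any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    # Log-odds of a probability p in (0, 1)
    return math.log(p / (1.0 - p))
```

The linear predictor $b_0 + b_1 X_1 + \dots$ lives on the logit scale; applying `sigmoid` converts it back to a probability.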
End of explanation
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
Explanation: Naive Bayes with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Naive Bayes algorithm
It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Why is it known as 'naive'? Because it assumes the features contribute independently. For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this fruit is an apple.
Bayes' theorem provides a way of calculating the posterior probability $P(c|x)$ from $p(c)$, $p(x)$, and $p(x|c)$:
$$
p(c|x) = \frac{p(x|c) p(c)}{p(x)}
$$
P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
P(c) is the prior probability of class.
P(x|c) is the likelihood which is the probability of predictor given class.
P(x) is the prior probability of predictor.
Step 1: Convert the data set into a frequency table
Step 2: Create Likelihood table by finding the probabilities like:
- p(Overcast) = 0.29, p(rainy) = 0.36, p(sunny) = 0.36
- p(playing) = 0.64, p(rest) = 0.36
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if weather is sunny. Is this statement correct?
We can solve it using above discussed method of posterior probability.
$P(Yes | Sunny) = \frac{P( Sunny | Yes) * P(Yes) } {P (Sunny)}$
Here we have P (Sunny |Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P( Yes)= 9/14 = 0.64
Now, $P (Yes | Sunny) = \frac{0.33 * 0.64}{0.36} = 0.60$, which has higher probability.
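The same arithmetic in code, using the frequencies from the example table:

```python
# Frequencies read off the likelihood table
p_sunny_given_yes = 3.0 / 9.0
p_yes = 9.0 / 14.0
p_sunny = 5.0 / 14.0

# Bayes' theorem: P(Yes | Sunny) = P(Sunny | Yes) * P(Yes) / P(Sunny)
p_yes_given_sunny = p_sunny_given_yes * p_yes / p_sunny   # 0.6
```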
End of explanation
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
print predicted
model.score(x[8:], Y[8:])
Explanation: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, ...]) Naive Bayes classifier for multinomial models
naive_bayes.BernoulliNB([alpha, binarize, ...]) Naive Bayes classifier for multivariate Bernoulli models.
End of explanation
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
print predicted
model.score(data_X_test, data_y_test)
from sklearn.cross_validation import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click], df.reply, cv = 5)
scores.mean()
Explanation: cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
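The index bookkeeping behind k-fold splitting can be sketched in a few lines (a simplified version of what `cross_val_score` does internally, without shuffling):

```python
def kfold_indices(n, k):
    # Split range(n) into k contiguous folds whose sizes differ by at most 1
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```

Each fold in turn serves as the test set, while the remaining k-1 folds are used for training.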
End of explanation
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
Explanation: Decision trees with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Decision trees
This supervised learning algorithm is mostly used for classification problems.
It works for both categorical and continuous dependent variables.
In this algorithm, we split the population into two or more homogeneous sets.
The split is made on the most significant attributes or independent variables, so as to make the groups as distinct as possible.
In the figure above, the population is split into four different groups based on multiple attributes, to determine "whether they will play or not".
To split the population into different groups, a number of techniques are used, such as Gini, Information Gain, Chi-square, and entropy.
End of explanation
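Of these splitting criteria, Gini impurity is the one selected in the code above (`criterion='gini'`); a minimal sketch:

```python
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions
    n = float(len(labels))
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())
```

A pure node has impurity 0; a 50/50 binary node has impurity 0.5, and splits are chosen to reduce the weighted impurity of the children.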
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
score = cross_val_score(model, data_X, df.repost, cv = i)
scores.append(score.mean() ) # Try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
Explanation: Support vector machines (SVM) with sklearn
Wang Chengjun
wangchengjun@nju.edu.cn
Computational Communication http://computational-communication.com
Plot each data item as a point in N-dimensional space (where N is the total number of features), with the value of each feature being the value of a coordinate.
For example, if we only had two features, height and hair length, we would plot these two variables in two-dimensional space, where each point has two coordinates (these coordinates are known as support vectors).
Now, we find a line that separates the two different groups of data.
The line is chosen so that the distances from the closest point in each of the two groups to the line are jointly optimized.
The black line in the example above splits the data into two optimally classified groups:
the closest points from the two groups (points A and B in the figure) satisfy the optimality condition for their distance to the black line.
This line is our separating line. Then, to classify test data, we simply check which side of the line it falls on.
End of explanation |
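The quantity being optimized above is the perpendicular distance from a point to the separating line $ax + by + c = 0$, which can be sketched directly:

```python
import math

def distance_to_line(a, b, c, x, y):
    # Perpendicular distance from point (x, y) to the line a*x + b*y + c = 0
    return abs(a * x + b * y + c) / math.sqrt(a * a + b * b)
```

An SVM chooses the line that maximizes this distance for the closest points (the support vectors) on each side.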
9,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Load and prepare the dataset
You will use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data.
Step2: Create the models
Both the generator and discriminator are defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). Start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice the tf.keras.layers.LeakyReLU activation for each layer, except the output layer which uses tanh.
Step3: Use the (as yet untrained) generator to create an image.
Step4: The Discriminator
The discriminator is a CNN-based image classifier.
Step5: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
Step6: Define the loss and optimizers
Define loss functions and optimizers for both models.
Step7: Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
Step8: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
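Both losses are binary cross-entropy evaluated against different target arrays. A numpy sketch of the idea (the notebook itself uses `tf.keras.losses.BinaryCrossentropy(from_logits=True)`, and the logit values below are made up):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(targets, logits):
    # Binary cross-entropy computed from raw logits
    p = sigmoid(logits)
    return -np.mean(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))

fake_logits = np.array([2.0, -1.0])                      # hypothetical discriminator outputs on fakes
real_logits = np.array([3.0, 1.5])                       # hypothetical outputs on real images

gen_loss = bce(np.ones_like(fake_logits), fake_logits)   # generator: fakes compared to 1s
disc_loss = (bce(np.ones_like(real_logits), real_logits) # discriminator: reals vs. 1s...
             + bce(np.zeros_like(fake_logits), fake_logits))  # ...plus fakes vs. 0s
```

The generator's loss shrinks as the discriminator's outputs on fakes become more positive, i.e., as the fakes become more convincing.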
Step9: The discriminator and the generator optimizers are different since we will train two networks separately.
Step10: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
Step11: Define the training loop
Step12: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
Step13: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note: training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated images will look increasingly real. After about 50 epochs, they resemble Fashion-MNIST images. This may take about one minute / epoch with the default settings on Colab.
Step14: Restore the latest checkpoint.
Step15: Create a GIF | Python Code:
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
tf.__version__
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
from tensorflow.keras import layers
import time
from IPython import display
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/lvm/dcgan_fashion_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Deep convolutional generative adversarial networks (DCGAN)
This tutorial fits a DC-GAN to Fashion-MNIST. The code is based on
https://www.tensorflow.org/beta/tutorials/generative/dcgan
End of explanation
(train_images, train_labels), (_, _) = tf.keras.datasets.fashion_mnist.load_data()
# (train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype("float32")
# train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
train_images = train_images / 255 # Normalize the images to [0,1]
train_images = (train_images * 2) - 1 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
Explanation: Load and prepare the dataset
You will use the Fashion-MNIST dataset to train the generator and the discriminator. The generator will generate images of clothing items resembling the Fashion-MNIST data.
End of explanation
def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7 * 7 * 256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding="same", use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding="same", use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(
layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding="same", use_bias=False, activation="tanh")
) # assumes output is [-1,1]
# model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='sigmoid')) # assumes output is [0,1]
assert model.output_shape == (None, 28, 28, 1)
return model
Explanation: Create the models
Both the generator and discriminator are defined using the Keras Sequential API.
The Generator
The generator uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from a seed (random noise). Start with a Dense layer that takes this seed as input, then upsample several times until you reach the desired image size of 28x28x1. Notice the tf.keras.layers.LeakyReLU activation for each layer, except the output layer which uses tanh.
End of explanation
generator = make_generator_model()
noise = tf.random.normal([1, 100])
generated_image = generator(noise, training=False)
plt.imshow(generated_image[0, :, :, 0], cmap="binary")
Explanation: Use the (as yet untrained) generator to create an image.
End of explanation
def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding="same", input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding="same"))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(1))
# model.add(layers.Dense(1, activation="sigmoid")) # cross-entropy loss assumes logits as input
return model
Explanation: The Discriminator
The discriminator is a CNN-based image classifier.
End of explanation
discriminator = make_discriminator_model()
decision = discriminator(generated_image)
print(decision)
Explanation: Use the (as yet untrained) discriminator to classify the generated images as real or fake. The model will be trained to output positive values for real images, and negative values for fake images.
End of explanation
# Loss object for computing cross entropy between labels and logits
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True) # don't need sigmoid on output of discriminator
Explanation: Define the loss and optimizers
Define loss functions and optimizers for both models.
End of explanation
def discriminator_loss(real_output, fake_output):
real_loss = cross_entropy(tf.ones_like(real_output), real_output)
fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
total_loss = real_loss + fake_loss
return total_loss
Explanation: Discriminator loss
This method quantifies how well the discriminator is able to distinguish real images from fakes. It compares the discriminator's predictions on real images to an array of 1s, and the discriminator's predictions on fake (generated) images to an array of 0s.
End of explanation
def generator_loss(fake_output):
return cross_entropy(tf.ones_like(fake_output), fake_output)
Explanation: Generator loss
The generator's loss quantifies how well it was able to trick the discriminator. Intuitively, if the generator is performing well, the discriminator will classify the fake images as real (or 1). Here, we will compare the discriminator's decisions on the generated images to an array of 1s.
End of explanation
# generator_optimizer = tf.keras.optimizers.Adam(1e-4)
# discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)
generator_optimizer = tf.keras.optimizers.RMSprop()
discriminator_optimizer = tf.keras.optimizers.RMSprop()
Explanation: The discriminator and the generator optimizers are different since we will train two networks separately.
End of explanation
checkpoint_dir = "./training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(
generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator,
)
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
End of explanation
noise_dim = 100
num_examples_to_generate = 25 # 16
# We will reuse this seed over time (so it's easier
# to visualize progress in the animated GIF)
seed = tf.random.normal([num_examples_to_generate, noise_dim])
# http://www.datawrangling.org/python-montage-code-for-displaying-arrays/
from numpy import ceil, flipud, rot90, shape, sqrt, zeros  # only the names montage() uses
import pylab
def montage(X, colormap=pylab.cm.gist_gray):
m, n, count = shape(X)
mm = int(ceil(sqrt(count)))
nn = mm
M = zeros((mm * m, nn * n))
image_id = 0
for j in range(mm):
for k in range(nn):
if image_id >= count:
break
sliceM, sliceN = j * m, k * n
M[sliceN : sliceN + n, sliceM : sliceM + m] = X[:, :, image_id]
image_id += 1
pylab.imshow(flipud(rot90(M)), cmap=colormap)
pylab.axis("off")
# We assume tensor is [N, H, W, 1].
def plot_montage(tensor):
tensor = tensor[:, :, :, 0]
X = np.transpose(tensor, [2, 1, 0])
montage(X)
tensor = train_images[:25, :, :]
plot_montage(tensor)
Explanation: Define the training loop
End of explanation
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
predictions = (predictions + 1) / 2 # map back to [0,1]
plot_montage(predictions)
plt.tight_layout()
plt.savefig("image_at_epoch_{:04d}.png".format(epoch))
plt.show()
# Notice the use of `tf.function`
# This annotation causes the function to be "compiled".
@tf.function
def train_step(images):
noise = tf.random.normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
fake_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(fake_output)
disc_loss = discriminator_loss(real_output, fake_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(generator, epoch + 1, seed)
# Save the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print("Time for epoch {} is {} sec".format(epoch + 1, time.time() - start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator, epochs, seed)
Explanation: The training loop begins with the generator receiving a random seed as input. That seed is used to produce an image. The discriminator is then used to classify real images (drawn from the training set) and fake images (produced by the generator). The loss is calculated for each of these models, and the gradients are used to update the generator and discriminator.
End of explanation
%%time
EPOCHS = 10
train(train_dataset, EPOCHS)
Explanation: Train the model
Call the train() method defined above to train the generator and discriminator simultaneously. Note: training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, the generated images will look increasingly real. After about 50 epochs, they resemble Fashion-MNIST images. This may take about one minute / epoch with the default settings on Colab.
End of explanation
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
!ls
Explanation: Restore the latest checkpoint.
End of explanation
# Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open("image_at_epoch_{:04d}.png".format(epoch_no))
# Remove border from image
# https://gist.github.com/kylemcdonald/bedcc053db0e7843ef95c531957cb90f
def full_frame(width=None, height=None):
import matplotlib as mpl
mpl.rcParams["savefig.pad_inches"] = 0
figsize = None if width is None else (width, height)
fig = plt.figure(figsize=figsize)
ax = plt.axes([0, 0, 1, 1], frameon=False)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.autoscale(tight=True)
step = 5
ndx = list(range(1, EPOCHS, step))
ndx.append(EPOCHS)
for i in ndx:
img = display_image(i)
full_frame()
plt.imshow(img)
plt.axis("off")
ttl = "epoch {}".format(i)
plt.title(ttl)
plt.show()
Explanation: Create a GIF
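The notebook imports imageio and glob at the top, but the GIF-assembly step itself is not shown in this chunk. A minimal sketch of stitching frames into an animated GIF with imageio — here two synthetic frames stand in for the image_at_epoch_*.png files that training would have saved:

```python
import numpy as np
import imageio

# Minimal sketch: write a sequence of frames out as an animated GIF.
# In the notebook, the frames would come from the image_at_epoch_*.png
# files saved during training (e.g., via sorted(glob.glob(...))).
frames = [
    np.zeros((28, 28), dtype=np.uint8),      # all-black frame
    np.full((28, 28), 255, dtype=np.uint8),  # all-white frame
]
anim_file = "dcgan_demo.gif"
with imageio.get_writer(anim_file, mode="I") as writer:
    for frame in frames:
        writer.append_data(frame)
```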
End of explanation |
9,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
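Because this property's cardinality is 1.N, more than one provision code can be recorded by calling DOC.set_value repeatedly after the matching DOC.set_id. A runnable sketch of that accumulate-on-repeat pattern — note that `Doc` below is a small stand-in written for illustration, not the real pyesdoc `DOC` helper:

```python
class Doc:
    """Tiny stand-in mimicking the notebook's DOC.set_id / DOC.set_value pattern."""

    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select which property the following set_value calls apply to
        self._current = prop_id
        self.values.setdefault(prop_id, [])

    def set_value(self, value):
        # Cardinality 1.N: repeated calls accumulate values for the property
        self.values[self._current].append(value)


doc = Doc()
doc.set_id("cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision")
doc.set_value("Y")
doc.set_value("E")
print(doc.values)
```

The real DOC object also validates each value against the Valid Choices list above; this stub only illustrates how repeated set_value calls build up a multi-valued property.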
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
Description:
Downloaded a few comma-delimited data files from the <b>United States Department of Transportation, Bureau of Transportation Statistics website.</b> <br>
This data covers the months of <b>January - May 2016</b>.<br>
In this specific example, I collected data that had either <b>DEST_CITY_NAME</b> or <b>ORIGIN_CITY_NAME</b> related to California. What does this mean? It means that a specific flight either departed from or landed at a California-based airport.
You can use filters to identify a specific dataset you want to look at.<br>
You can download your own source here
Step1: Let's look at some basic info with the dataset.
Step2: Let's see what different airlines we have data for.
Step3: I retrieved this information from a lookup table from the transtats website, let's assign it to a key/value dict for later use.
Step4: Amount of flights recorded
Step5: So, as you can see here, 'WN' (Southwest) services the most flights IN/OUT of California. 'HA' (Hawaiian Airlines) has the fewest.<br>
Let's visualize that.
Step6: Delay Statistics
Step7: Frequency of delays
Step8: According to this, <b>Spirit Airlines</b> is the worst performing in terms of delay frequency at <b>48%</b> of the time, and <b>United Airlines</b> has the lowest chance, <b>27.61%</b><br>
Let's visualize that | Python Code:
# Assign a list of the available files in my data directory (data/2016/) to a variable
files = os.listdir("data/2016")
# Display
files
# Read through all files and concat all df into a single dataframe, df.
framelist = []
for file in files:
tempdf = pd.read_csv("data/2016/" + file)
framelist.append(tempdf)
df = pd.concat(framelist);
Explanation: Downloaded a few comma-delimited data files from the <b>United States Department of Transportation, Bureau of Transportation Statistics website.</b> <br>
This data covers the months of <b>January - May 2016.</b><br>
In this specific example, I collected data where either <b>DEST_CITY_NAME</b> or <b>ORIGIN_CITY_NAME</b> is related to California. What does this mean? It means that a specific flight either departed from or landed at a California-based airport.
You can use filters to identify the specific dataset you want to look at.<br>
You can download your own source here
End of explanation
df.info()
df.head()
# Drop the last column in place, 'Unnamed: 21', which is the -1 index
df.drop(df.columns[[-1]], axis=1, inplace=True)
Explanation: Let's look at some basic info with the dataset.
End of explanation
df.UNIQUE_CARRIER.unique()
Explanation: Let's see what different airlines we have data for.
End of explanation
airlinekeys = {'AA': 'American Airlines Inc',
'AS': 'Alaska Airlines Inc.',
'B6': 'JetBlue Airways',
'DL': 'Delta Airlines Inc.',
'F9': 'Frontier Airlines Inc',
'HA': 'Hawaiian Airlines Inc',
'NK': 'Spirit Airlines',
'OO': 'SkyWest Airlines Inc',
'UA': 'United Airlines Inc.',
'VX': 'Virgin America',
'WN': 'Southwest Airlines Co' }
pd.DataFrame.from_dict(airlinekeys, orient="index")
Explanation: I retrieved this information from a lookup table on the transtats website, let's assign it to a key/value dict for later use.
End of explanation
# Display frequency of each airline recorded.
df['UNIQUE_CARRIER'].value_counts()
Explanation: Amount of flights recorded
End of explanation
fig = plt.figure(figsize=(20,10))
df.UNIQUE_CARRIER.value_counts().plot(kind='barh')
# In a horizontal bar chart the airlines sit on the y-axis and the counts on the x-axis
plt.xlabel('Frequency', fontsize=15); plt.ylabel('Airline', fontsize=15)
plt.tick_params(axis='both', labelsize=15)
plt.title('Number of flights recorded per airline (2016)', fontsize=15)
Explanation: So, as you can see here, 'WN' (Southwest) services the most flights IN/OUT of California. 'HA' (Hawaiian Airlines) has the fewest.<br>
Let's visualize that.
End of explanation
# Function to return a dictionary with the total flight count and the count of delays (> 0) per carrier
def delayratio(carrier):
    carrier = carrier.upper()
    # .ix is deprecated/removed in recent pandas; use label-based .loc with boolean masks
    total = df.loc[df['UNIQUE_CARRIER'] == carrier].shape[0]
    delays = df.loc[(df['UNIQUE_CARRIER'] == carrier) & (df['ARR_DELAY_NEW'] > 0)].shape[0]
    return {'Airline': carrier, 'Total': total, 'Delays': delays}
carrierlist = list(df.UNIQUE_CARRIER.unique())
print(carrierlist)
dflist = []
for carrier in carrierlist:
    dflist.append(delayratio(carrier))
dflist
# Let's put the list of dicts into a dataframe
delayratiodf = pd.DataFrame(dflist)
# Let's set the index of the dataframe to be 'Airline'
delayratiodf.set_index('Airline', inplace=True)
# Let's use the airlinekey dictionary we made earlier to replace the airline codenames
delayratiodf.rename(index=airlinekeys, inplace=True)
delayratiodf
# Lets sort by total flights
delayratiodf.sort_values('Total', inplace=True)
# Create a stacked barchart
plot = delayratiodf[['Delays', 'Total']].plot(kind='barh', figsize=(20,15), legend=True, fontsize=20, color=['r', 'b'])
# Increase the legend size
plot.legend(loc=4, prop={'size':20})
Explanation: Delay Statistics
End of explanation
# Simple function to compute the percentage of delayed flights for a row
def ratio(row):
    # Use column labels rather than positions, and force float division
    return float("%.2f" % (row['Delays'] / float(row['Total']) * 100))
# Create a new column 'Percentage', and apply the function 'ratio' to the data.
delayratiodf['Percentage'] = delayratiodf.apply(ratio, axis=1)
# Sort values again by percentage
delayratiodf.sort_values('Percentage', inplace=True)
delayratiodf
Explanation: Frequency of delays
End of explanation
ax = delayratiodf.plot(y='Percentage', kind='barh', figsize=(20,10), title='Percentage of delayed flights', color='red')
plt.xlabel('Percentage', fontsize=15); plt.ylabel('Airline', fontsize=20)
plt.tick_params(axis='both', labelsize=20)
for p in ax.patches:
ax.annotate("%.2f" % p.get_width(), (p.get_x() + p.get_width(), p.get_y()), xytext=(5, 10), textcoords='offset points')
Explanation: According to this, <b>Spirit Airlines</b> is the worst performer in terms of delay frequency, with <b>48%</b> of its flights delayed, while <b>United Airlines</b> has the lowest rate at <b>27.61%</b>.<br>
Let's visualize that
End of explanation |
9,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: ServerSim Overview and Tutorial
Introduction
This is an overview and tutorial about ServerSim, a framework for the creation of discrete event simulation models to analyze the performance, throughput, and scalability of services deployed on computer servers.
Following the overview of ServerSim, we will provide an example and tutorial of its use for the comparison of two major service deployment patterns.
This document is a Jupyter notebook. See http://jupyter.org/ for more information on Jupyter notebooks.
Step3: Printing the simulation results
The following function prints the outputs from the above core simulation function.
Step4: Mini-batching, plotting, and comparison of results
The following three functions handle mini-batching, plotting, and comparison of results.
minibatch_resp_times -- This function takes the user group from the results of the deployment_example function, scans the service request log of the user group, and produces mini-batch statistics for every time_resolution time units. For example, with a simulation of 200 time units and a time_resolution of 5 time units, we end up with 40 mini-batches. The statistics produced are the x values corresponding to each mini-batch, and the counts, means, medians, 95th percentile, and 99th percentile in each mini-batch.
plot_counts_means_q95 -- Plots superimposed counts, means, and 95th percentiles for two mini-batch sets coming from two simulations.
compare_scenarios -- Combines the above two functions to produce comparison plots from two simulations.
Step5: Random number generator seed
We set the random number generator seed to a known value to produce repeatable simulations. Comment-out this line to have a different system-generated seed every time the simulations are executed.
Step6: Simulations
Several simulation scenarios are executed below. See the descriptions of the parameters and hard-coded given values of the core simulation function above.
With 10 servers and weight_1 = 2 and weight_2 = 1, this configuration supports 720 users with average response times close to the minimum possible. How did we arrive at that number? For svc_1, the heavier of the two services, the minimum possible average response time is 1 time unit (= 20 server compute units / 10 hardware threads / 2 average service compute units). One server can handle 10 concurrent svc_1 users without think time, or 60 concurrent svc_1 users with average think time of 6 time units. Thus, 10 servers can handle 600 concurrent svc_1 users. Doing the math for both services and taking their respective probabilities into account, the number of users is 720. For full details, see the spreadsheet CapacityPlanning.xlsx. Of course, due to randomness, there will be queuing and the average response times will be greater than the minimum possible. With these numbers, the servers will be running hot as there is no planned slack capacity.
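The capacity arithmetic above can be reproduced in a few lines of Python. This is a back-of-the-envelope check of the numbers from CapacityPlanning.xlsx; the variable names below are ours, chosen for illustration, not part of ServerSim:

```python
hw_threads = 10
speed = 20.0                        # compute units per time unit, per server
thread_speed = speed / hw_threads   # 2.0 compute units per time unit

# Average compute units per request for the 2:1 mix of svc_1 and svc_2.
w1, w2 = 2.0, 1.0
avg_units = (w1 * 2.0 + w2 * 1.0) / (w1 + w2)   # 5/3
avg_service_time = avg_units / thread_speed     # 5/6 time unit

avg_think_time = (2.0 + 10.0) / 2.0             # 6.0 time units
n_servers = 10
concurrent_slots = n_servers * hw_threads       # 100 requests in service at once

# Each slot serves one request every think + service cycle; with the
# approximation used in the text this gives the supported user count.
supported_users = concurrent_slots * avg_think_time / avg_service_time
print(round(supported_users))  # 720
```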
Simulation 0
This is a simulation of one scenario (not a comparison) and printing out of its results. It illustrates the use of the print_results function. The scenario here is the same as the first scenario for Simulation 1 below.
Step7: Simulation 1
In the first scenario, there are 10 servers which are shared by both services. In the second scenario, there are 10 servers, of which 8 are allocated to the first service and 2 are allocated to the second service. This allocation is proportional to their respective loads.
Step8: Repeating above comparison to illustrate variability of results.
Step9: Conclusions
Step10: Conclusions
Step11: Conclusions
Step12: Compare the results of scenarios 1 and 2a
Step13: Compare the results of scenarios 1 and 2b
Step14: Conclusions
Step15: Simulation 5
This simulation is similar to Simulation 1, the difference being the users curve instead of a constant 720 users.
Step16: Conclusions
Step17: Compare the results of scenarios 1 and 2a
Step18: Compare the results of scenarios 1 and 2b
Step19: Conclusions
Step20: Compare the results of scenarios 1a and 2a1
Step21: Compare the results of scenarios 1a and 2a2
Step22: Compare the results of scenarios 1b and 2b | Python Code:
# %load simulate_deployment_scenario.py
from __future__ import print_function
from typing import List, Tuple, Sequence
from collections import namedtuple
import random
import simpy
from serversim import *
def simulate_deployment_scenario(num_users, weight1, weight2, server_range1,
server_range2):
# type: (int, float, float, Sequence[int], Sequence[int]) -> Result
Result = namedtuple("Result", ["num_users", "weight1", "weight2", "server_range1",
"server_range2", "servers", "grp"])
def cug(mid, delta):
        """Computation units generator"""
def f():
return random.uniform(mid - delta, mid + delta)
return f
def ld_bal(svc_name):
        """Application server load-balancer."""
if svc_name == "svc_1":
svr = random.choice(servers1)
elif svc_name == "svc_2":
svr = random.choice(servers2)
else:
assert False, "Invalid service type."
return svr
simtime = 200
hw_threads = 10
sw_threads = 20
speed = 20
svc_1_comp_units = 2.0
svc_2_comp_units = 1.0
quantiles = (0.5, 0.95, 0.99)
env = simpy.Environment()
n_servers = max(server_range1[-1] + 1, server_range2[-1] + 1)
servers = [Server(env, hw_threads, sw_threads, speed, "AppServer_%s" % i)
for i in range(n_servers)]
servers1 = [servers[i] for i in server_range1]
servers2 = [servers[i] for i in server_range2]
svc_1 = CoreSvcRequester(env, "svc_1", cug(svc_1_comp_units,
svc_1_comp_units*.9), ld_bal)
svc_2 = CoreSvcRequester(env, "svc_2", cug(svc_2_comp_units,
svc_2_comp_units*.9), ld_bal)
weighted_txns = [(svc_1, weight1),
(svc_2, weight2)
]
min_think_time = 2.0 # .5 # 4
max_think_time = 10.0 # 1.5 # 20
svc_req_log = [] # type: List[Tuple[str, SvcRequest]]
grp = UserGroup(env, num_users, "UserTypeX", weighted_txns, min_think_time,
max_think_time, quantiles, svc_req_log)
grp.activate_users()
env.run(until=simtime)
return Result(num_users=num_users, weight1=weight1, weight2=weight2,
server_range1=server_range1, server_range2=server_range2,
servers=servers, grp=grp)
Explanation: ServerSim Overview and Tutorial
Introduction
This is an overview and tutorial about ServerSim, a framework for the creation of discrete event simulation models to analyze the performance, throughput, and scalability of services deployed on computer servers.
Following the overview of ServerSim, we will provide an example and tutorial of its use for the comparison of two major service deployment patterns.
This document is a Jupyter notebook. See http://jupyter.org/ for more information on Jupyter notebooks.
ServerSim Core Concepts
ServerSim is a small framework based on SimPy, a well-known discrete-event simulation framework written in the Python language. The reader should have at least a cursory familiarity with Python and SimPy (https://simpy.readthedocs.io/en/latest/contents.html) in order to make the most of this document.
Python is well-suited to this kind of application due to its rapid development dynamic language characteristics and the availability of powerful libraries relevant for this kind of work. In addition to SimPy, we will use portions of SciPy, a powerful set of libraries for efficient data analysis and visualization that includes Matplotlib, which will be used for plotting graphs in our tutorial.
ServerSim consists of a several classes and utilities. The main classes are described below.
class Server
Represents a server -- physical, VM, or container, with a predetermined computation capacity. A server can execute arbitrary service request types. The computation capacity of a server is represented in terms of a number of hardware threads and a total speed number (computation units processed per unit of time). The total speed is equally apportioned among the hardware threads, to give the speed per hardware thread. A server also has a number of associated software threads (which must be no smaller than the number of hardware threads). Software threads are relevant for blocking computations only.
The simulations in this document assume non-blocking services, so the software threads will not be of consequence in the tutorial example.
Attributes:
- env: SimPy Environment, used to start and end simulations, and used internally by SimPy to control simulation events.
- max_concurrency: The maximum number of hardware threads for the server.
- num_threads: The maximum number of software threads for the server.
- speed: Aggregate server speed across all hardware threads.
- name: The server's name.
- hw_svc_req_log: If not None, a list where hardware
service requests will be logged. Each log entry is a
triple ("hw", name, svc_req), where name is this server's
name and svc_req is the current service request asking for
hardware resources.
- sw_svc_req_log: If not None, a list where software thread
service requests will be logged. Each log entry is a
triple ("sw", name, svc_req), where name is this server's
name and svc_req is the current service request asking for a
software thread.
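To make the speed apportioning concrete, here is a tiny worked calculation using the same numbers as the servers later in this tutorial (the variable names are ours, not attributes of the Server class):

```python
# A server's aggregate speed is split evenly across its hardware threads.
hw_threads = 10
speed = 20.0  # compute units processed per time unit, across all threads

speed_per_thread = speed / hw_threads  # 2.0 compute units per time unit

# A request needing 2.0 compute units therefore takes 1.0 time unit of
# pure processing on one hardware thread, before any queuing is considered.
compute_units = 2.0
min_process_time = compute_units / speed_per_thread
print(min_process_time)  # 1.0
```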
class SvcRequest
A request for execution of computation units on one or more servers.
A service request submission is implemented as a SimPy Process.
A service request can be a composite of sub-requests.
A service request that is part of a composite has an attribute that
is a reference to its parent service request. In addition,
such a composition is implemented through the gen attribute of the
service request, which is a generator. That generator can yield service
request submissions for other service requests.
By default, a service request is non-blocking, i.e., a thread is
held on the target server only while the service request itself
is executing; the thread is relinquished when the request
finishes executing and it passes control to its sub-requests.
However, blocking service requests can be modeled as well (see the
Blkg class).
Attributes:
- env: The SimPy Environment.
- parent: The immediately containing service request, in case this
is part of a composite service request. None othersise.
- svc_name: Name of the service this request is associated with.
- gen: Generator which defines the behavior of this request. The
generator produces an iterator which yields simpy.Event
instances. The submit() method wraps the iterator in a
simpy.Process object to schedule the request for execution
by SimPy.
- server: The target server. May be None for composite service
requests, i.e., those not produced by CoreSvcRequester.
- in_val: Optional input value of the request.
- in_blocking_call: Indicates whether this request is
in the scope of a blocking call. When this parameter
is true, the service request will hold a software
thread on the target server while the service
request itself and any of its sub-requests (calls
to other servers) are executing. Otherwise, the
call is non-blocking, so a thread is held on the
target server only while the service request itself
is executing; the thread is relinquished when
this request finishes executing and it passes control
to its sub-requests.
- out_val: Output value produced from in_val by the service
execution. None by default.
- id: The unique numerical ID of this request.
- time_log: List of tag-time pairs
representing significant occurrences for this request.
- time_dict: Dictionary with contents of time_log,
for easier access to information.
class SvcRequester
Base class of service requesters.
A service requester represents a service. In this framework,
a service requester is a factory for service requests. "Deploying"
a service on a server is modeled by having service requests
produced by the service requester sent to the target server.
A service requester can be a composite of sub-requesters, thus
representing a composite service.
Attributes:
- env: The SimPy Environment.
- svc_name: Name of the service.
- log: Optional list to collect all service request objects
produced by this service requester.
class UserGroup
Represents a set of identical users or clients that submit
service requests.
Each user repeatedly submits service requests produced
by service requesters randomly selected from the set
of service requesters specified for the group.
Attributes:
- env: The Simpy Environment.
- num_users: Number of users in group. This can be either a
positive integer or a sequence of (float, int), where
the floats are monotonically increasing. In this case,
the sequence represents a step function of time, where each pair
represents a step whose range of x values extend from the
first component of the pair (inclusive) to the first
component of the next pair (exclusive), and whose y value
is the second component of the pair. The first pair in
the sequence must have 0 as its first component.
If the num_users argument is an int, it is transformed
into the list [(0, num_users)].
- name: This user group's name.
- weighted_svcs: List of pairs of
SvcRequester instances and positive numbers
representing the different service request types issued by
the users in the group and their weights. The weights are
the relative frequencies with which the service requesters
will be executed (the weights do not need to add up to 1,
as they are normalized by this class).
- min_think_time: The minimum think time between service
requests from a user. Think time will be uniformly
distributed between min_think_time and max_think_time.
- max_think_time: The maximum think time between service
requests from a user. Think time will be uniformly
distributed between min_think_time and max_think_time.
- quantiles: List of quantiles to be tallied. It
defaults to [0.5, 0.95, 0.99] if not provided.
- svc_req_log: If not None, a sequence where service requests will
be logged. Each log entry is a pair (name, svc_req), where
name is this group's name and svc_req is the current
service request generated by this group.
- svcs: The first components of weighted_svcs.
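The step-function form of num_users deserves a small illustration. The sketch below shows one way the semantics described above could be evaluated at a point in time; it is our own illustrative code, not ServerSim's internal implementation:

```python
def users_at(spec, t):
    """Evaluate a [(start_time, num_users), ...] step specification at time t.

    Each step applies from its start time (inclusive) up to the next
    step's start time (exclusive); the first start time must be 0.
    """
    assert spec[0][0] == 0, "first step must start at time 0"
    current = spec[0][1]
    for start, n in spec:
        if start <= t:
            current = n
        else:
            break
    return current

# A ramp: 100 users until t=50, then 400 until t=120, then 700 after that.
spec = [(0, 100), (50, 400), (120, 700)]
print(users_at(spec, 10), users_at(spec, 50), users_at(spec, 199))  # 100 400 700
```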
class CoreSvcRequester(SvcRequester)
This is the core service requester implementation that
interacts with servers to utilize server resources.
All service requesters are either instances of this class or
composites of such instances created using the various
service requester combinators in this module.
Attributes:
- env: See base class.
- svc_name: See base class.
- fcompunits: A (possibly random) function that
generates the number of compute units required to execute a
service request instance produced by this object.
- fserver: Function that produces a server (possibly round-robin,
random, or based on server load information) when given a
service request name. Models a load-balancer.
- log: See base class.
- f: An optional function that is applied to a service request's
in_val to produce its out_val. If f is None, the constant
function that always returns None is used.
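The fserver argument is simply a function from a service name to a server, so it can model any load-balancing policy. The simulations later in this document use a random choice over per-service server pools; a standalone sketch of that idea (with strings standing in for Server instances) looks like this:

```python
import random

# Per-service server pools, mirroring the tutorial's ld_bal function.
servers_1 = ["AppServer_%d" % i for i in range(0, 8)]
servers_2 = ["AppServer_%d" % i for i in range(8, 10)]

def ld_bal(svc_name):
    """Random load-balancer: picks a server from the pool for svc_name."""
    if svc_name == "svc_1":
        return random.choice(servers_1)
    elif svc_name == "svc_2":
        return random.choice(servers_2)
    raise ValueError("Invalid service name: %s" % svc_name)

print(ld_bal("svc_1"))  # e.g. 'AppServer_3'
```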
Other service requester classes
Following are other service requester classes (subclasses of SvcRequester) in addition to CoreSvcRequester, that can be used to define more complex services, including blocking services, asynchronous fire-and-forget services, sequentially dependednt services, parallel service calls, and service continuations. These additional classes are not used in the simulations in this document.
class Async(SvcRequester)
Wraps a service requester to produce asynchronous fire-and-forget
service requests.
An asynchronous service request completes and returns immediately
to the parent request, while the underlying (child) service request is
scheduled for execution on its target server.
Attributes:
- env: See base class.
- svc_requester: The underlying service requester that is wrapped
by this one.
- log: See base class.
class Blkg(SvcRequester)
Wraps a service requester to produce blocking service requests.
A blocking service request will hold a software thread on the
target server until the service request itself and all of its
non-asynchronous sub-requests complete.
Attributes:
- env: See base class.
- svc_requester: The underlying service requester that is wrapped
by this one.
- log: See base class.
class Seq(SvcRequester)
Combines a non-empty list of service requesters to yield a
sequential composite service requester.
This service requester produces composite service requests.
A composite service request produced by this service
requester consists of a service request from each of the
provided service requesters. Each of the service requests is
submitted in sequence, i.e., each service request is
submitted when the previous one completes.
Attributes:
- env: See base class.
- svc_name: See base class.
- svc_requesters: A composite service request produced by this
service requester consists of a service request from each of
the provided service requesters
- cont: If true, the sequence executes as continuations
of the first request, all on the same server. Otherwise,
each request can execute on a different server.
- log: See base class.
class Par(SvcRequester)
Combines a non-empty list of service requesters to yield a
parallel composite service requester.
This service requester produces composite service requests.
A composite service request produced by this service
requester consists of a service request from each of the
provided service requesters. All of the service requests are
submitted concurrently.
When the attribute cont is True, this represents multi-threaded
execution of requests on the same server. Otherwise, each
service request can execute on a different server.
Attributes:
- env: See base class.
- svc_name: See base class.
- svc_requesters: See class docstring.
- f: Optional function that takes the outputs of all the component
service requests and produces the overall output
for the composite. If None then the constant function
that always produces None is used.
- cont: If true, all the requests execute on the same server.
Otherwise, each request can execute on a different server.
When cont is True, the server is the container service
request's server if not None, otherwise the server is
picked from the first service request in the list of
generated service requests.
- log: See base class.
Tutorial Example: Comparison of Two Service Deployment Patterns
Below we compare two major service deployment patterns by using discrete-event simulations. Ideally the reader will have had some prior exposure to the Python language in order to follow along all the details. However, the concepts and conclusions should be understandable to readers with software architecture or engineering background even if not familiar with Python.
We assume an application made up of multiple multi-threaded services and consider two deployment patterns:
Cookie-cutter deployment, where all services making up an application are deployed together on each VM or container. This is typical for "monolithic" applications but can also be used for micro-services. See Fowler and Hammant.
Individualized deployment, where each of the services is deployed on its own VM or (more likely) it own container.
In the simulations below, the application is made up of just two services, to simplify the model and the analysis, but without loss of generality in terms of the main conclusions.
Environment set-up
The code used in these simulations should be compatible with both Python 2.7 and Python 3.x.
Python and the following Python packages need to be installed in your computer:
- jupyter-notebook
- simpy
- matplotlib
- LiveStats
The model in this document should be run from the parent directory of the serversim package directory, which contains the source files for the ServerSim framework.
The core simulation function
Following is the the core function used in the simulations This function will be called with different arguments to simulate different scenarios.
This function sets up a simulation with the following givens:
Simulation duration of 200 time units (e.g., seconds).
A set of servers. Each server has 10 hardware threads, 20 software threads, and a speed of 20 compute units per unit of time. The number of servers is fixed by the server_range1 and server_range2 parameters described below.
Two services:
svc_1, which consumes a random number of compute units per request, with a range from 0.2 to 3.8, averaging 2.0 compute units per request
svc_2, which consumes a random number of compute units per request, with a range from 0.1 to 1.9, averaging 1.0 compute units per request
A user group, with a number of users determined by the num_users parameter described below. The user group generates service requests from the two services, with probabilities proportional to the parameters weight_1 and weight_2 described below. The think time for users in the user group ranges from 2.0 to 10.0 time units.
Parameters:
num_users: the number of users being simulated. This parameter can be either a positive integer or a list of pairs. In the second case, the list of pairs represents a number of users that varies over time as a step function. The first elements of the pairs in the list must be strictly monotonically increasing and each pair in the list represents a step in the step function. Each step starts (inclusive) at the time represented by the first component of the corresponding pair and ends (exclusive) at the time represented by the first component of the next pair in the list.
weight1: the relative frequency of service requests for the first service.
weight2: the relative frequency of service requests for the second service.
server_range1: a Python range representing the numeric server IDs of the servers on which the first service can be deployed.
server_range2: a Python range representing the numeric server IDs of the servers on which the second service can be deployed. This and the above range can be overlapping. In case they are overlapping, the servers in the intersection of the ranges will host both the first and the second service.
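Since the two ranges may overlap, it can help to see which deployment pattern a given pair of ranges produces. The helper below is purely illustrative (it is not part of the simulation code); it splits the servers into those hosting only the first service, only the second, and both:

```python
def hosting(server_range1, server_range2):
    """Which servers host each service, given the two range parameters."""
    both = sorted(set(server_range1) & set(server_range2))
    only_1 = sorted(set(server_range1) - set(server_range2))
    only_2 = sorted(set(server_range2) - set(server_range1))
    return only_1, only_2, both

# Cookie-cutter: both services deployed on all 10 servers.
print(hosting(range(0, 10), range(0, 10)))  # ([], [], [0, 1, ..., 9])

# Individualized: svc_1 on servers 0-7, svc_2 on servers 8-9.
print(hosting(range(0, 8), range(8, 10)))   # ([0, ..., 7], [8, 9], [])
```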
Imports
We import the required libraries, as well as the __future__ import for compatibility between Python 2.7 and Python 3.x.
End of explanation
# %load print_results.py
from __future__ import print_function
from typing import Sequence, Any, IO
from serversim import *
def print_results(num_users=None, weight1=None, weight2=None, server_range1=None,
server_range2=None, servers=None, grp=None, fi=None):
# type: (int, float, float, Sequence[int], Sequence[int], Sequence[Server], UserGroup, IO[str]) -> None
if fi is None:
import sys
fi = sys.stdout
print("\n\n***** Start Simulation --", num_users, ",", weight1, ",", weight2, ", [", server_range1[0], ",", server_range1[-1] + 1,
") , [", server_range2[0], ",", server_range2[-1] + 1, ") *****", file=fi)
print("Simulation: num_users =", num_users, file=fi)
print("<< ServerExample >>\n", file=fi)
indent = " " * 4
print("\n" + "Servers:", file=fi)
for svr in servers:
print(indent*1 + "Server:", svr.name, file=fi)
print(indent * 2 + "max_concurrency =", svr.max_concurrency, file=fi)
print(indent * 2 + "num_threads =", svr.num_threads, file=fi)
print(indent*2 + "speed =", svr.speed, file=fi)
print(indent * 2 + "avg_process_time =", svr.avg_process_time, file=fi)
print(indent * 2 + "avg_hw_queue_time =", svr.avg_hw_queue_time, file=fi)
print(indent * 2 + "avg_thread_queue_time =", svr.avg_thread_queue_time, file=fi)
print(indent * 2 + "avg_service_time =", svr.avg_service_time, file=fi)
print(indent * 2 + "avg_hw_queue_length =", svr.avg_hw_queue_length, file=fi)
print(indent * 2 + "avg_thread_queue_length =", svr.avg_thread_queue_length, file=fi)
print(indent * 2 + "hw_queue_length =", svr.hw_queue_length, file=fi)
print(indent * 2 + "hw_in_process_count =", svr.hw_in_process_count, file=fi)
print(indent * 2 + "thread_queue_length =", svr.thread_queue_length, file=fi)
print(indent * 2 + "thread_in_use_count =", svr.thread_in_use_count, file=fi)
print(indent*2 + "utilization =", svr.utilization, file=fi)
print(indent*2 + "throughput =", svr.throughput, file=fi)
print(indent*1 + "Group:", grp.name, file=fi)
print(indent*2 + "num_users =", grp.num_users, file=fi)
print(indent*2 + "min_think_time =", grp.min_think_time, file=fi)
print(indent*2 + "max_think_time =", grp.max_think_time, file=fi)
print(indent * 2 + "responded_request_count =", grp.responded_request_count(None), file=fi)
print(indent * 2 + "unresponded_request_count =", grp.unresponded_request_count(None), file=fi)
print(indent * 2 + "avg_response_time =", grp.avg_response_time(), file=fi)
print(indent * 2 + "std_dev_response_time =", grp.std_dev_response_time(None), file=fi)
print(indent*2 + "throughput =", grp.throughput(None), file=fi)
for svc in grp.svcs:
print(indent*2 + svc.svc_name + ":", file=fi)
print(indent * 3 + "responded_request_count =", grp.responded_request_count(svc), file=fi)
print(indent * 3 + "unresponded_request_count =", grp.unresponded_request_count(svc), file=fi)
print(indent * 3 + "avg_response_time =", grp.avg_response_time(svc), file=fi)
print(indent * 3 + "std_dev_response_time =", grp.std_dev_response_time(svc), file=fi)
print(indent*3 + "throughput =", grp.throughput(svc), file=fi)
Explanation: Printing the simulation results
The following function prints the outputs from the above core simulation function.
End of explanation
# %load report_resp_times.py
from typing import TYPE_CHECKING, Sequence, Tuple
import functools as ft
from collections import OrderedDict
import matplotlib.pyplot as plt
from livestats import livestats
if TYPE_CHECKING:
from serversim import UserGroup
def minibatch_resp_times(time_resolution, grp):
# type: (float, UserGroup) -> Tuple[Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float], Sequence[float]]
quantiles = [0.5, 0.95, 0.99]
xys = [(int(svc_req.time_dict["submitted"]/time_resolution),
svc_req.time_dict["completed"] - svc_req.time_dict["submitted"])
for (_, svc_req) in grp.svc_req_log
if svc_req.is_completed]
def ffold(map_, p):
x, y = p
if x not in map_:
map_[x] = livestats.LiveStats(quantiles)
map_[x].add(y)
return map_
xlvs = ft.reduce(ffold, xys, dict())
    xs = sorted(xlvs.keys())  # sorted() works on both Python 2 and 3 (keys() is a view in 3)
counts = [xlvs[x].count for x in xs]
means = [xlvs[x].average for x in xs]
q_50 = [xlvs[x].quantiles()[0] for x in xs]
q_95 = [xlvs[x].quantiles()[1] for x in xs]
q_99 = [xlvs[x].quantiles()[2] for x in xs]
return xs, counts, means, q_50, q_95, q_99
def plot_counts_means_q95(quantiles1, quantiles2):
x = quantiles1[0] # should be same as quantiles2[0]
counts1 = quantiles1[1]
counts2 = quantiles2[1]
means1 = quantiles1[2]
means2 = quantiles2[2]
q1_95 = quantiles1[4]
q2_95 = quantiles2[4]
# Plot counts
plt.plot(x, counts1, color='b', label="Counts 1")
plt.plot(x, counts2, color='r', label="Counts 2")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel("Time buckets")
plt.ylabel("Throughput")
plt.show()
# Plot averages and 95th percentiles
plt.plot(x, means1, color='b', label="Means 1")
plt.plot(x, q1_95, color='c', label="95th Percentile 1")
plt.plot(x, means2, color='r', label="Means 2")
plt.plot(x, q2_95, color='m', label="95th Percentile 2")
# Hack to avoid duplicated labels (https://stackoverflow.com/questions/13588920/stop-matplotlib-repeating-labels-in-legend)
handles, labels = plt.gca().get_legend_handles_labels()
by_label = OrderedDict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys(), bbox_to_anchor=(1.05, 1),
loc=2, borderaxespad=0.)
plt.xlabel("Time buckets")
plt.ylabel("Response times")
plt.show()
def compare_scenarios(sc1, sc2):
grp1 = sc1.grp
grp2 = sc2.grp
quantiles1 = minibatch_resp_times(5, grp1)
quantiles2 = minibatch_resp_times(5, grp2)
plot_counts_means_q95(quantiles1, quantiles2)
Explanation: Mini-batching, plotting, and comparison of results
The following three functions handle mini-batching, plotting, and comparison of results.
minibatch_resp_times -- This function takes the user group from the results of the deployment_example function, scans the service request log of the user group, and produces mini-batch statistics for every time_resolution time units. For example, with a simulation of 200 time units and a time_resolution of 5 time units, we end up with 40 mini-batches. The statistics produced are the x values corresponding to each mini-batch, and the counts, means, medians, 95th percentile, and 99th percentile in each mini-batch.
plot_counts_means_q95 -- Plots superimposed counts, means, and 95th percentiles for two mini-batch sets coming from two simulations.
compare_scenarios -- Combines the above two functions to produce comparison plots from two simulations.
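The bucketing idea in minibatch_resp_times can be illustrated without the livestats dependency. The sketch below mirrors the same fold — group (submit time, response time) pairs into buckets of time_resolution units — but uses plain lists and computes only counts and means for clarity:

```python
from collections import defaultdict

def bucket_means(pairs, time_resolution):
    """Group (submit_time, response_time) pairs into time buckets and
    return per-bucket x values, counts, and mean response times."""
    buckets = defaultdict(list)
    for t, resp in pairs:
        buckets[int(t // time_resolution)].append(resp)
    xs = sorted(buckets)
    counts = [len(buckets[x]) for x in xs]
    means = [sum(buckets[x]) / len(buckets[x]) for x in xs]
    return xs, counts, means

print(bucket_means([(0, 1.0), (1, 3.0), (5, 2.0)], 5))  # → ([0, 1], [2, 1], [2.0, 2.0])
```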
End of explanation
random.seed(123456)
Explanation: Random number generator seed
We set the random number generator seed to a known value to produce repeatable simulations. Comment-out this line to have a different system-generated seed every time the simulations are executed.
End of explanation
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
print_results(**sc1.__dict__)
Explanation: Simulations
Several simulation scenarios are executed below. See the descriptions of the parameters and hard-coded given values of the core simulation function above.
With 10 servers and weight_1 = 2 and weight_2 = 1, this configuration supports 720 users with average response times close to the minimum possible. How did we arrive at that number? For svc_1, the heavier of the two services, the minimum possible average response time is 1 time unit (= 20 server compute units / 10 hardware threads / 2 average service compute units). One server can handle 10 concurrent svc_1 users without think time, or 60 concurrent svc_1 users with average think time of 6 time units. Thus, 10 servers can handle 600 concurrent svc_1 users. Doing the math for both services and taking their respective probabilities into account, the number of users is 720. For full details, see the spreadsheet CapacityPlanning.xlsx. Of course, due to randomness, there will be queuing and the average response times will be greater than the minimum possible. With these numbers, the servers will be running hot as there is no planned slack capacity.
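The svc_1 part of that arithmetic can be reproduced in a few lines. This is only a sketch that mirrors the numbers stated above; the combined 720-user figure additionally needs the svc_2 parameters and mix probabilities from CapacityPlanning.xlsx:

```python
# Numbers quoted in the text above for svc_1.
svc1_compute_units = 20.0   # compute units needed by one svc_1 request
hw_threads = 10             # hardware threads per server
thread_speed = 2.0          # average compute units processed per time unit

min_resp_time = svc1_compute_units / hw_threads / thread_speed   # 1.0 time unit
think_time = 6.0
users_per_server = hw_threads * think_time / min_resp_time       # 60 concurrent svc_1 users
svc1_capacity = 10 * users_per_server                            # 600 users across 10 servers
print(min_resp_time, users_per_server, svc1_capacity)
```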
Simulation 0
This is a simulation of one scenario (not a comparison) and printing out of its results. It illustrates the use of the print_results function. The scenario here is the same as the first scenario for Simulation 1 below.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
Explanation: Simulation 1
In the first scenario, there are 10 servers which are shared by both services. In the second scenario, there are 10 servers, of which 8 are allocated to the first service and 2 are allocated to the second service. This allocation is proportional to their respective loads.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
Explanation: Repeating above comparison to illustrate variability of results.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=5, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
Explanation: Conclusions: The results of the two deployment strategies are similar in terms of throughput, mean response times, and 95th percentile response times. This is as would be expected, since the capacities allocated under the individualized deployment strategy are proportional to the respective service loads.
Simulation 2
Now, we change the weights of the different services, significantly increasing the weight of svc_1 from 2 to 5.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
Explanation: Conclusions: The cookie-cutter deployment strategy was able to absorb the change in load mix, while the individualized strategy was not, with visibly lower throughput and higher mean and 95th percentile response times.
Simulation 3
For this simulation, we also change the weights of the two services, but now in the opposite direction -- we change the weight of svc_1 from 2 to 1.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 9), server_range2=range(0, 9))
random.setstate(rand_state)
sc2a = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 7), server_range2=range(7, 9))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=720, weight1=1, weight2=1,
server_range1=range(0, 6), server_range2=range(6, 9))
Explanation: Conclusions: Again the cookie-cutter deployment strategy was able to absorb the change in load mix, while the individualized strategy was not, with visibly lower throughput and higher mean and 95th percentile response times. Notice that due to the changed load mix, the total load was lower than before and, with the same number of servers, the cookie-cutter configuration had excess capacity while the individualized configuration had excess capacity for svc_1 and insufficient capacity for svc_2.
Simulation 4
We now continue with the weights used in Simulation 3, but adjust server capacity to account for the lower aggregate load and different load mix.
Below we have three scenarios:
- Scenario 1 (cookie-cutter) removes one server
- Scenario 2a (individualized) removes one server from the pool allocated to svc_1
- Scenario 2b (individualized) removes one server and reassigns one server from the svc_1 pool to the svc_2 pool.
Run the three scenarios:
End of explanation
compare_scenarios(sc1, sc2a)
Explanation: Compare the results of scenarios 1 and 2a:
End of explanation
compare_scenarios(sc1, sc2b)
Explanation: Compare the results of scenarios 1 and 2b:
End of explanation
users_curve = [(0, 900), (50, 540), (100, 900), (150, 540)]
Explanation: Conclusions: Scenario 1 performs significantly than better Scenario 2a and comparably to Scenario 2b. This simulation shows again that the cookie-cutter strategy is comparable in performance and throughput to a tuned individualized configuration, and beats hands-down an individualized configuration that is not perfectly tuned for the load mix.
Vary the number of users over time
The simulations below will vary the load over time by varying the number of users over time. The list below defines a step function that represents the number of users varying over time. In this case, the number of users changes every 50 time units.
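If you need to read the step curve back (e.g. to plot the offered load), a small helper — not part of the simulator, which consumes the list directly — can evaluate it at any time. Each (start, n) pair means "from time start onward there are n users", until the next pair takes over:

```python
def users_at(t, curve):
    """Evaluate a step curve of (start_time, num_users) pairs at time t."""
    n = curve[0][1]
    for start, users in curve:
        if t >= start:
            n = users
        else:
            break
    return n

curve = [(0, 900), (50, 540), (100, 900), (150, 540)]
print([users_at(t, curve) for t in (0, 49, 50, 120, 180)])  # → [900, 900, 540, 900, 540]
```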
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(0, 10))
random.setstate(rand_state)
sc2 = simulate_deployment_scenario(num_users=users_curve, weight1=2, weight2=1,
server_range1=range(0, 8), server_range2=range(8, 10))
compare_scenarios(sc1, sc2)
Explanation: Simulation 5
This simulation is similar to Simulation 1, the difference being the users curve instead of a constant 720 users.
End of explanation
rand_state = random.getstate()
sc1 = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 9), server_range2=range(0, 9))
random.setstate(rand_state)
sc2a = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 7), server_range2=range(7, 9))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=users_curve, weight1=1, weight2=1,
server_range1=range(0, 6), server_range2=range(6, 9))
Explanation: Conclusions: The cookie-cutter and individualized strategies produced similar results.
Simulation 6
We now run a simulation similar to Simulation 4, with the difference that the number of users varies over time. This combines load variability over time as well as a change in load mix. As in Simulation 4, we adjust server capacity to account for the lower aggregate load and different load mix.
Below we have three scenarios:
- Scenario 1 (cookie-cutter) removes one server
- Scenario 2a (individualized) removes one server from the pool allocated to svc_1
- Scenario 2b (individualized) removes one server and reassigns one server from the svc_1 pool to the svc_2 pool.
Run the three scenarios:
End of explanation
compare_scenarios(sc1, sc2a)
Explanation: Compare the results of scenarios 1 and 2a:
End of explanation
compare_scenarios(sc1, sc2b)
Explanation: Compare the results of scenarios 1 and 2b:
End of explanation
rand_state = random.getstate()
sc1a = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 12), server_range2=range(0, 12))
random.setstate(rand_state)
sc2a1 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 9), server_range2=range(9, 12))
random.setstate(rand_state)
sc2a2 = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(10, 12))
random.setstate(rand_state)
sc1b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 13), server_range2=range(0, 13))
random.setstate(rand_state)
sc2b = simulate_deployment_scenario(num_users=864, weight1=2, weight2=1,
server_range1=range(0, 10), server_range2=range(10, 13))
Explanation: Conclusions: Scenario 1 performs significantly better than Scenario 2a and comparably to Scenario 2b. This simulation shows again that the cookie-cutter strategy is comparable in performance and throughput to a tuned individualized configuration, and beats an individualized configuration that is not perfectly tuned for the load mix.
Simulation 7
This final simulation is similar to Simulation 1, with the difference that the number of users is 864 instead of 720. In this scenario, the total number of servers required for best capacity utilization can be calculated to be 12 (see CapacityPlanning.xlsx). Under the individualized deployment strategy, the ideal number of servers allocated to svc_1 and svc_2 would be 9.6 and 2.4, respectively. Since the number of servers needs to be an integer, we will run simulations with server allocations to svc_1 and svc_2, respectively, of 10 and 2, 9 and 3, and 10 and 3.
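The 9.6 / 2.4 split quoted above follows from the services' load shares. In the sketch below, svc_2's per-request cost of 10 compute units is an assumption, chosen because it reproduces the split stated in the text (the authoritative numbers are in CapacityPlanning.xlsx):

```python
weight1, weight2 = 2, 1            # request-mix weights used in this simulation
compute1, compute2 = 20.0, 10.0    # compute units per request (compute2 is assumed)

share1 = (weight1 * compute1) / (weight1 * compute1 + weight2 * compute2)  # 0.8
servers = 12
ideal_svc1 = servers * share1      # 9.6 servers for svc_1
ideal_svc2 = servers - ideal_svc1  # 2.4 servers for svc_2
print(ideal_svc1, ideal_svc2)
```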
Thus, we have five scenarios:
- Scenario 1a (cookie-cutter) with 12 servers
- Scenario 2a1 (individualized) with 9 servers for svc_1 and 3 servers for svc_2
- Scenario 2a2 (individualized) with 10 servers for svc_1 and 2 servers for svc_2
- Scenario 1b (cookie-cutter) with 13 servers
- Scenario 2b (individualized) with 10 servers for svc_1 and 3 servers for svc_2
Run the scenarios:
End of explanation
compare_scenarios(sc1a, sc2a1)
Explanation: Compare the results of scenarios 1a and 2a1:
End of explanation
compare_scenarios(sc1a, sc2a2)
Explanation: Compare the results of scenarios 1a and 2a2:
End of explanation
compare_scenarios(sc1b, sc2b)
Explanation: Compare the results of scenarios 1b and 2b:
End of explanation |
9,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Programming in Python
In this notebook we will learn
Step1: The order in which operations are executed follows an order of precedence
Step2: The order can be changed with the use of parentheses.
Step3: Besides the basic operations, some more complex operations are available.
We need to indicate that we want to use such functions with the command
Step4: The numeric types in Python are called
Step5: When an operation has an undefined result, Python indicates it with the value nan.
Variables
In the previous exercise, computing each entropy value required typing the whole formula for each value of p.
Moreover, the value of p was used twice in each equation, making the work tedious and error-prone.
It would be interesting to create a customizable notebook cell to compute the entropy for any value of p.
Just as in mathematics, we can use variables to generalize an expression and change only the values of those variables.
In the computer, such variables store values in memory for later use.
To use variables in Python, just give one a name and then assign it a value using the "=" operator.
Once a variable has a value, we can perform any operation with it
Step6: This way, the entropy computation could be done as
Step7: When the value of p is changed, the program returns the result of the new entropy.
We can ask the user to enter the values using the input() command.
This command waits for the user to type some value and assigns it to a variable.
Note
Step8: Variable names must
Step9: Other data types
Booleans
Besides the numeric types, we also have logical types, which can hold the value true (True) or false (False).
These values are generated through relational operators
Step10: Strings
We also have the option of working with text, or strings.
This type is delimited by single or double quotes.
Step11: We can use the + and * operators, representing concatenation and repetition
Step12: We can also get a character from a string using the concept of indexing.
In indexing, we place a value between square brackets after the variable name to indicate the position of the element we are interested in at that moment.
Position counting starts at 0.
Step13: We can also use indexing to take ranges of values
Step14: Lists
Lists of values are created by placing elements of any types between square brackets, separated by commas.
Lists can be useful for storing a collection of values that we will use in our program.
Step15: Access to list elements works similarly to strings
Step16: There are other, more advanced types that we will see throughout the course.
Complete the code below to do what is asked
Step17: Data Input and Output
Data Input
During this first class we saw the commands input(), to capture values typed by the user, and print(), to print result values.
The input() command captures a string typed by the user; this string must be converted according to what is expected of it
Step18: Some of the available conversion commands are
Step19: Data Output
The print() function writes the value of a variable or expression on the screen, moving to the next line at the end.
This command allows printing several values on the same line by separating them with ",".
Additionally, if the last argument of the function is end=" ", it does not move to the next line after printing.
Step20: The print() function allows the use of some special characters such as
Step21: Extra Libraries
Python has several libraries that automate many tasks and let us run interesting experiments.
Usually libraries, or modules, are imported with the command
Step22: If we know exactly which functions we will use from a module, we can import them selectively with the syntax
Step23: Finally, we can alias the module name to make function calls easier
Step24: The dir command lists all functions available in a library
Step25: And the help command prints help about a given function
Step26: The matplotlib library lets us plot various charts.
To use it in the Jupyter Notebook we must first import the plotting commands
# any text written after "#" is a comment and is not interpreted by Python
1+2 # performs the operation 1+2 and prints it on the screen
2-1
3*5
7/2
7//2
2**3
5%2
Explanation: Introduction to Programming in Python
In this notebook we will learn:
basic Python operations,
basic variable types,
data input and output and,
extra helper libraries.
Arithmetic Operations
The computer is essentially a calculating machine. For that we have several arithmetic operations available in Python:
addition (+): 1+2
subtraction (-): 2-1
multiplication (*): 3*5
division (/): 7/2
integer division (//): 7//2
exponentiation (**): 2**3
remainder of division (%): 5%2
End of explanation
1 + 2 * 3
Explanation: The order in which operations are executed follows an order of precedence:
- first the division and multiplication operations,
- then addition and subtraction.
End of explanation
(1+2)*3
1 + (2*3)
Explanation: The order can be changed with the use of parentheses.
End of explanation
import math
math.sqrt(2.0) # square root of 2
math.log(10) # ln(10)
math.exp(1.0) # e^1
math.log(math.exp(1.0)) # ln(e)
math.cos(math.pi) # cos(pi)
Explanation: Besides the basic operations, some more complex operations are available.
We need to indicate that we want to use such functions with the command:
python
import math
Note: The available math functions can be consulted in the documentation.
End of explanation
# Exercise 01
# Let's do some basic operations: write the code right below the instructions; when you complete a task press "shift+enter"
# and check whether the answer matches what is expected
# 1) The golden ratio is given by the eq. (1 + sqrt(5))/2; print the result
# the result should be 1.61803398875
# 2) The entropy is given by: -p*log(p) - (1-p)*log(1-p),
# Note that the logarithm is base 2; look up in the documentation how to compute log2.
# Compute the entropy for:
# p = 0.5, result = 1.0
# p = 1.0, result = nan
# p = 0.0, result = nan
# p = 0.4, result = 0.970950594455
Explanation: The numeric types in Python are called:
- int, for integers and,
- float, for real numbers.
Complete the code below and press SHIFT+ENTER to check whether the output is the expected one.
End of explanation
x = 10 # the computer stores the value 10 in the variable named x
x + 2
x * 3
Explanation: When an operation has an undefined result, Python indicates it with the value nan.
Variables
In the previous exercise, computing each entropy value required typing the whole formula for each value of p.
Moreover, the value of p was used twice in each equation, making the work tedious and error-prone.
It would be interesting to create a customizable notebook cell to compute the entropy for any value of p.
Just as in mathematics, we can use variables to generalize an expression and change only the values of those variables.
In the computer, such variables store values in memory for later use.
To use variables in Python, just give one a name and then assign it a value using the "=" operator.
Once a variable has a value, we can perform any operation with it:
End of explanation
p = 0.4
-p*math.log(p,2) - (1.0-p)*math.log(1.0-p,2)
Explanation: This way, the entropy computation could be done as:
End of explanation
p = float(input("Digite o valor de p: "))
-p*math.log(p,2) - (1.0-p)*math.log(1.0-p,2)
Explanation: When the value of p is changed, the program returns the result of the new entropy.
We can ask the user to enter the values using the input() command.
This command waits for the user to type some value and assigns it to a variable.
Note: we need to convert the value captured by input to the desired type, in our case float.
End of explanation
x = 1
type(x)
x = 1.0
type(x)
Explanation: Variable names must:
- start with a letter and,
- contain only letters, digits and the "_" symbol.
In addition, some names are already used by Python and must not be used:
python
and, as, assert, break, class, continue, def, del, elif,
else, except, exec, finally, for, from, global, if,
import, in, is, lambda, not, or, pass, print, raise,
return, try, while, with, yield
Python automatically determines the type a variable will hold, according to what is assigned to it.
We can use the type() command to determine the current type of a variable:
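Python itself can tell you which names are reserved; the standard-library keyword module is a convenient way to check whether a candidate variable name is allowed:

```python
import keyword

print(keyword.iskeyword("lambda"))   # True: cannot be used as a variable name
print(keyword.iskeyword("entropy"))  # False: fine as a variable name
print(len(keyword.kwlist))           # number of reserved words in this Python version
```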
End of explanation
True and False
True and True
True or False
True or True
not True
not False
x = 1
y = 2
x == y
x != y
x > y
x <= y
x <= y and x != y # x < y?
x <= y and not (x != y) # x <= y and x == y ==> x == y?
Explanation: Other data types
Booleans
Besides the numeric types, we also have logical types, which can hold the value true (True) or false (False).
These values are generated through relational operators:
- Equality: 1 == 2 => False
- Inequality: 1 != 2 => True
- Greater and greater or equal: 1 > 2, 1 >= 2 => False
- Less and less or equal: 1 < 2, 1 <= 2 => True
And combined with boolean operators:
- And: True and False => False
- Or: True or False => True
- Not: not True => False
End of explanation
texto = "Olá Mundo"
type(texto)
len(texto)
Explanation: Strings
We also have the option of working with text, or strings.
This type is delimited by single or double quotes.
End of explanation
ola = "Olá"
espaco = " "
mundo = "Mundo"
ola + espaco + mundo
(ola + espaco) * 6 + mundo
Explanation: We can use the + and * operators, representing concatenation and repetition:
End of explanation
texto = "Olá Mundo"
texto[0]
texto[2] # third letter
Explanation: We can also get a character from a string using the concept of indexing.
In indexing, we place a value between square brackets after the variable name to indicate the position of the element we are interested in at that moment.
Position counting starts at 0.
End of explanation
texto[0:3]
texto[4:] # omitting the last value means I want to go to the end
texto[:3] # in this case I omitted the initial value
texto[-1] # a negative index represents counting from the end
texto[-1:-6:-1] # the last value, -1, indicates that I want to walk backwards.
Explanation: We can also use indexing to take ranges of values:
End of explanation
coordenadas = [1.0, 3.0]
coordenadas
lista = [1,2,3.0,'batata',True]
lista
len(lista)
Explanation: Lists
Lists of values are created by placing elements of any types between square brackets, separated by commas.
Lists can be useful for storing a collection of values that we will use in our program.
End of explanation
lista[0] # prints the first element of the list
lista[0:3] # prints the first 3 elements
lista[-1] # last element
Explanation: Access to list elements works similarly to strings:
End of explanation
#Exercise 02
import math
# 1) enter your birth month number into the variable 'mes' and follow the instructions below, putting the result
# in the variable 'resultado'.
mes = int(input('Digite o mes do seu nascimento: '))
resultado = 0
# 1) multiply the number by 2 and store it in the variable named resultado
# 2) add 5 to the result and store it in the same variable
# 3) multiply by 50, storing it in resultado
# 4) add your age to the result
idade = int(input('Digite sua idade: '))
# 5) subtract 250 from resultado
# the first digit should be the month and the last two your age
print(resultado)
#Exercise 03
# 1) Ask the user to type their name and print "Ola <nome>, como vai?"
# don't forget the name must be typed between quotes.
nome =
print()
# 2) Create a list with 3 zeros, ask the user to type 2 values, storing them in the first 2
# positions. Compute the mean of the first two values and store it in the third position.
# use input() to capture the values
lista =
print(lista)
Explanation: Existem outros tipos mais avançados que veremos ao longo do curso.
Complete o código abaixo para realizar o que é pedido:
End of explanation
x = float(input("Entre com um valor numérico: ")) # the typed value will be a floating point number
x/2
Explanation: Data Input and Output
Data Input
During this first class we saw the commands input(), to capture values typed by the user, and print(), to print result values.
The input() command captures a string typed by the user; this string must be converted according to what is expected of it:
End of explanation
x = 10
int(x) + 2
float(x) * 2
str(x) * 2
Explanation: Some of the available conversion commands are: int() for integers, float() for floating point and str() for strings.
End of explanation
print(2) # prints 2 and moves to the next line
print(3, "ola", 4+5) # prints multiple values of different types and moves to the next line
print(1, 2, end=" ") # prints the two values but does not move to the next line
print(3,4) # continues printing on the same line
Explanation: Data Output
The print() function writes the value of a variable or expression on the screen, moving to the next line at the end.
This command allows printing several values on the same line by separating them with ",".
Additionally, if the last argument of the function is end=" ", it does not move to the next line after printing.
End of explanation
m = 1
n = 6
d = 1/float(n)
print('{}: {}/{} = {:.4}'.format("O valor da divisão", m, n, d))
# {} marks the blanks to be filled in, and .4 is the number of decimal places for the number.
print("Elemento 1 \t Elemento 2 \nElemento 3 \t Elemento 4")
Explanation: The print() function allows the use of some special characters such as:
- '\t': adds a tab
- '\n': skips a line
We can complement string formatting with the format() command:
End of explanation
import math
math.sqrt(2)
Explanation: Extra Libraries
Python has several libraries that automate many tasks and let us run interesting experiments.
Usually libraries, or modules, are imported with the command:
python
import NAME
and the library functions are called with the module NAME as a prefix.
We saw this earlier with the math library:
End of explanation
from math import sqrt, exp
print(sqrt(2), exp(3))
Explanation: If we know exactly which functions we will use from a module, we can import them selectively with the syntax:
python
from NAME import FUNCTION
In this case we do not need to precede the function with NAME to execute it:
End of explanation
import math as mt
print(mt.sqrt(2), mt.exp(3))
Explanation: Finally, we can alias the module name to make function calls easier:
End of explanation
print(dir(math))
Explanation: The dir command lists all functions available in a library:
End of explanation
help(math.sin)
Explanation: And the help command prints help about a given function:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
x = [-5,-4,-3,-2,-1,0,1,2,3,4,5]
y = [25,16,9,4,1,0,1,4,9,16,25]
plt.figure(figsize=(8,8)) # Creates a new figure with width and height set by figsize
plt.style.use('fivethirtyeight') # sets a plotting style: # http://tonysyu.github.io/raw_content/matplotlib-style-gallery/gallery.html
plt.title('Eq. do Segundo Grau') # Writes a title for the plot
plt.xlabel('x') # x-axis label
plt.ylabel(r'$x^2$') # y-axis label; strings between $ $ are formatted as in latex
plt.plot(x,y, color='green') # creates a line plot of the x and y values with the color set by "color"
plt.show() # shows the plot
# Read more at: http://matplotlib.org/users/pyplot_tutorial.html
Explanation: The matplotlib library lets us plot various charts.
To use it in the Jupyter Notebook we must first import the plotting commands:
python
%matplotlib inline
import matplotlib.pyplot as plt
The %matplotlib inline command tells Jupyter that all plots should be shown inside the notebook itself.
Let's plot some points of a quadratic function using the command:
python
plt.plot(x,y)
where x and y are lists with values corresponding to the abscissa and ordinate, respectively.
Basically we have that:
- x will be a list of points we want to plot and,
- y the application of the function $f(x) = x^2$ to each point of x.
End of explanation |
9,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle Dogs Vs. Cats Using LeNet on Google Colab TPU
Required setup
Update api_token with kaggle api key for downloading dataset
- Login to kaggle
- My Profile > Edit Profile > Create New API Token
- Update **api_token** dict below with the values
Change Notebook runtime to TPU
- In colab notebook menu, Runtime > Change runtime type
- Select TPU in the list
Install kaggle package, download and extract zip file
Step1: Re-arrange classes to 2 separate directories
Step2: Training configs
Step3: Setup generators to provide with train and validation batches
Step4: Define LeNet model architecture
Step5: Check for TPU availability
Step6: Convert keras model to TPU model
Step7: Run training
Step8: Save the model weights
Step9: Download model weights locally | Python Code:
!pip install kaggle
api_token = {"username":"xxxxx","key":"xxxxxxxxxxxxxxxxxxxxxxxx"}
import json
import zipfile
import os
os.mkdir('/root/.kaggle')
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(api_token, file)
!chmod 600 /root/.kaggle/kaggle.json
# !kaggle config path -p /root
!kaggle competitions download -c dogs-vs-cats
zip_ref = zipfile.ZipFile('/content/train.zip', 'r')
zip_ref.extractall()
zip_ref.close()
Explanation: Kaggle Dogs Vs. Cats Using LeNet on Google Colab TPU
Required setup
Update api_token with kaggle api key for downloading dataset
- Login to kaggle
- My Profile > Edit Profile > Create New API Token
- Update **api_token** dict below with the values
Change Notebook runtime to TPU
- In colab notebook menu, Runtime > Change runtime type
- Select TPU in the list
Install kaggle package, download and extract zip file
End of explanation
!mkdir train/cat train/dog
!mv train/*cat*.jpg train/cat
!mv train/*dog*.jpg train/dog
Explanation: Re-arrange classes to 2 separate directories
End of explanation
BATCH_SIZE = 64
IMG_DIM = (256, 256, 3)
NUM_EPOCHS = 1
Explanation: Training configs
End of explanation
import tensorflow as tf
from tensorflow import keras
print(keras.__version__)
print(tf.__version__)
datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
traingen = datagen.flow_from_directory(
'train',
batch_size = BATCH_SIZE,
target_size = IMG_DIM[:-1],
class_mode = 'categorical',
subset='training')
valgen = datagen.flow_from_directory(
'train',
batch_size = BATCH_SIZE,
target_size = IMG_DIM[:-1],
class_mode = 'categorical',
subset='validation')
Explanation: Set up generators to provide train and validation batches
End of explanation
input = keras.layers.Input(IMG_DIM, name="input")
conv1 = keras.layers.Conv2D(20, kernel_size=(5, 5), padding='same')(input)
pool1 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv1)
conv2 = keras.layers.Conv2D(50, kernel_size=(5,5), padding='same')(pool1)
pool2 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv2)
flatten1 = keras.layers.Flatten()(pool2)
fc1 = keras.layers.Dense(500, activation='relu')(flatten1)
fc2 = keras.layers.Dense(2, activation='softmax')(fc1)
model = keras.models.Model(inputs=input, outputs=fc2)
model.compile(
loss='categorical_crossentropy',
optimizer=keras.optimizers.SGD(lr=0.01),
metrics=['accuracy'])
print(model.summary())
Explanation: Define LeNet model architecture
End of explanation
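The layer sizes above imply the trainable-parameter counts of the two convolutional layers; a quick hand check of that arithmetic (no Keras needed):

```python
# Parameters of a Conv2D layer = (kernel_h * kernel_w * in_channels + 1) * filters
# (the +1 is the per-filter bias term).
def conv2d_params(kernel_h, kernel_w, in_channels, filters):
    return (kernel_h * kernel_w * in_channels + 1) * filters

# conv1: 20 filters of 5x5 over the 3-channel RGB input
conv1_params = conv2d_params(5, 5, 3, 20)
# conv2: 50 filters of 5x5 over conv1's 20 output channels
conv2_params = conv2d_params(5, 5, 20, 50)
print(conv1_params, conv2_params)
```

These totals should match the corresponding rows of `model.summary()` above.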
import os
try:
device_name = os.environ['COLAB_TPU_ADDR']
TPU_ADDRESS = 'grpc://' + device_name
print('Found TPU at: {}'.format(TPU_ADDRESS))
except KeyError:
print('TPU not found')
Explanation: Check for TPU availability
End of explanation
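The same environment lookup can be written with `os.environ.get`, which makes the no-TPU fallback explicit; a minimal sketch (the sample address below is invented):

```python
import os

def get_tpu_address(env=os.environ):
    """Return the gRPC address of the Colab TPU, or None if no TPU is attached."""
    device_name = env.get('COLAB_TPU_ADDR')
    return 'grpc://' + device_name if device_name else None

# Simulate both outcomes with plain dicts instead of the real environment.
addr = get_tpu_address({'COLAB_TPU_ADDR': '10.0.0.2:8470'})
missing = get_tpu_address({})
print(addr, missing)
```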
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)))
Explanation: Convert keras model to TPU model
End of explanation
tpu_model.fit_generator(
traingen,
steps_per_epoch=traingen.n//traingen.batch_size,
epochs=1,
validation_data=valgen,
validation_steps=valgen.n//valgen.batch_size)
Explanation: Run training
End of explanation
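The `steps_per_epoch` and `validation_steps` values come from integer-dividing each generator's sample count by the batch size; a sketch of that arithmetic, assuming the standard 25,000-image Dogs vs. Cats training set and the 0.2 validation split configured earlier:

```python
TOTAL_IMAGES = 25000      # assumed size of the Kaggle Dogs vs. Cats train set
VALIDATION_SPLIT = 0.2    # mirrors the ImageDataGenerator validation_split above
BATCH_SIZE = 64           # mirrors the training config above

train_n = int(TOTAL_IMAGES * (1 - VALIDATION_SPLIT))  # samples seen by traingen
val_n = TOTAL_IMAGES - train_n                        # samples seen by valgen
steps_per_epoch = train_n // BATCH_SIZE
validation_steps = val_n // BATCH_SIZE
print(train_n, val_n, steps_per_epoch, validation_steps)
```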
tpu_model.save_weights('./lenet-catdog.h5', overwrite=True)
Explanation: Save the model weights
End of explanation
from google.colab import files
files.download("lenet-catdog.h5")
Explanation: Download model weights locally
End of explanation |
9,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Four Axes
This notebook demonstrates the use of Cube Browser to produce multiple plots using only the code (i.e. no selection widgets).
Step1: Load and prepare your cubes
Step2: Set up your map projection and your axes, and then make your plots using your preferred plot-types and layout.
Finally, display your plots with their associated sliders. | Python Code:
import iris
import iris.plot as iplt
import matplotlib.pyplot as plt
from cube_browser import Contour, Browser, Contourf, Pcolormesh
Explanation: Four Axes
This notebook demonstrates the use of Cube Browser to produce multiple plots using only the code (i.e. no selection widgets).
End of explanation
air_potential_temperature = iris.load_cube(iris.sample_data_path('colpex.pp'), 'air_potential_temperature')
print(air_potential_temperature)
Explanation: Load and prepare your cubes
End of explanation
projection = iplt.default_projection(air_potential_temperature)
ax1 = plt.subplot(221, projection=projection)
ax2 = plt.subplot(222, projection=projection)
ax3 = plt.subplot(223, projection=projection)
ax4 = plt.subplot(224, projection=projection)
cf1 = Pcolormesh(air_potential_temperature[0, 0], ax1)
cf2 = Contour(air_potential_temperature[:, 0], ax2)
cf3 = Contour(air_potential_temperature[0], ax3)
cf4 = Pcolormesh(air_potential_temperature, ax4)
Browser([cf1, cf2, cf3, cf4]).display()
Explanation: Set up your map projection and your axes, and then make your plots using your preferred plot-types and layout.
Finally, display your plots with their associated sliders.
End of explanation |
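The three-digit codes like 221 passed to plt.subplot encode (rows, columns, index) in row-major, 1-based order; an equivalent explicit form, sketched without any iris data:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig = plt.figure()
# plt.subplot(221) is shorthand for a 2x2 grid, position 1.
axes = [fig.add_subplot(2, 2, i) for i in range(1, 5)]
print(len(fig.axes))
```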
9,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whiskey Data
This data set contains data on a small number of whiskies
Step1: Summaries
Shown below are the following charts
Step2: Some Analysis
Here we use the scikit-learn decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value.
We transform the output for plotting purposes, but note that the tooltips give the original data
Step3: Simple Linked Charts
Click on a bar to see the proportions of Whiskey categories per country | Python Code:
import pandas as pd
from numpy import log, abs, sign, sqrt
import brunel
whiskey = pd.read_csv("data/whiskey.csv")
print('Data on whiskies:', ', '.join(whiskey.columns))
Explanation: Whiskey Data
This data set contains data on a small number of whiskies
End of explanation
%%brunel data('whiskey') x(country, category) color(rating) treemap label(name:3) tooltip(#all)
style('.label {font-size:7pt}') legends(none)
:: width=900, height=600
%%brunel data('whiskey') bubble color(rating:red) sort(rating) size(abv) label(name:6) tooltip(#all) filter(price, category)
:: height=500
%%brunel data('whiskey')
line x(age) y(rating) mean(rating) using(interpolate) label(country) split(country)
bin(age:8) color(#selection) legends(none) |
treemap x(category) interaction(select) size(#count) color(#selection) legends(none) sort(#count:ascending) bin(category:9)
tooltip(country) list(country) label(#count) style('.labels .label {font-size:14px}')
:: width=900
%%brunel data('whiskey')
bubble label(country:3) bin(country) size(#count) color(#selection) sort(#count) interaction(select) tooltip(name) list(name) legends(none) at(0,10,60,100)
| x(abv) y(rating) color(#count:blue) legends(none) bin(abv:8) bin(rating:5) style('symbol:rect; stroke:none; size:100%')
interaction(select) label(#selection) list(#selection) at(60,15,100,100) tooltip(rating, abv,#count) legends(none)
| bar label(brand:70) list(brand) at(0,0, 100, 10) axes(none) color(#selection) legends(none) interaction(filter)
:: width=900, height=600
Explanation: Summaries
Shown below are the following charts:
A treemap display for each whiskey, broken down by country and category. The cells are colored by the rating, with lower-rated whiskies in blue, and higher-rated in reds. Missing data for ratings show as black.
A filtered chart allowing you to select whiskeys based on price and category
A line chart showing the relationship between age and rating. A simple treemap of categories is linked to this chart
A bubble chart of countries linked to a heatmap of alcohol level (ABV) by rating
End of explanation
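The per-country summaries these charts visualize can also be checked numerically with pandas; a sketch on a toy frame (the rows are invented, only the column names mirror the dataset):

```python
import pandas as pd

# Toy stand-in for the whiskey data; values are invented for illustration.
toy = pd.DataFrame({
    'Country': ['Scotland', 'Scotland', 'USA', 'Japan'],
    'Rating': [93, 88, 85, 90],
    'ABV': [43.0, 46.0, 45.0, 43.0],
})

# Mean rating and whiskey count per country, mirroring the per-country charts.
summary = toy.groupby('Country').agg(
    mean_rating=('Rating', 'mean'),
    count=('Rating', 'size'),
)
print(summary)
```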
from sklearn import tree
D = whiskey[['Name', 'ABV', 'Age', 'Rating', 'Price']].dropna()
X = D[ ['ABV', 'Age', 'Rating'] ]
y = D['Price']
clf = tree.DecisionTreeRegressor(min_samples_leaf=4)
clf.fit(X, y)
D['Predicted'] = clf.predict(X)
f = D['Predicted'] - D['Price']
D['Diff'] = sqrt(abs(f)) * sign(f)
D['LPrice'] = log(y)
%brunel data('D') y(diff) x(LPrice) tooltip(name, price, predicted, rating) color(rating) :: width=700
Explanation: Some Analysis
Here we use the scikit-learn decision tree regression tool to predict the price of a whiskey given its age, rating and ABV value.
We transform the output for plotting purposes, but note that the tooltips give the original data
End of explanation
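One way to see what a fitted tree relies on is its `feature_importances_` attribute; a sketch on synthetic data (generated here, not the whiskey set), where the target depends almost entirely on the third feature:

```python
import numpy as np
from sklearn import tree

rng = np.random.RandomState(0)
X = rng.rand(200, 3)                       # stand-ins for ABV, Age, Rating
y = 10 * X[:, 2] + 0.1 * rng.rand(200)     # target driven by the third feature

clf = tree.DecisionTreeRegressor(min_samples_leaf=4, random_state=0)
clf.fit(X, y)

# Importances are normalized to sum to 1; nearly all of it should land on
# the third feature, since it alone drives the target.
importances = clf.feature_importances_
print(importances)
```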
%%brunel data('whiskey')
bar x(country) y(#count) interaction(select) color(#selection) |
bar color(category) y(#count) percent(#count) polar stack label(category) legends(none) interaction(filter) tooltip(#count,category)
:: width=900, height=300
Explanation: Simple Linked Charts
Click on a bar to see the proportions of Whiskey categories per country
End of explanation |
9,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 2
Step1: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
Explanation: Interpolation Exercise 2
End of explanation
x = np.empty((1,),dtype=int)
x[0] = 0
for i in range(-4,5):
x = np.hstack((x,np.array((i,i))))
x = np.hstack((x,np.array([-5]*11)))
x = np.hstack((x,np.array([5]*11)))
y = np.empty((1,),dtype=int)
y[0]=0
y = np.hstack((y,np.array((5,-5)*9)))
for i in range(-5,6):
y = np.hstack((y,np.array((i))))
for i in range(-5,6):
y = np.hstack((y,np.array((i))))
f=np.zeros_like(y)
f[0]=1
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
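The boundary points can also be built directly from a meshgrid with a boolean mask, which may be easier to verify than repeated hstack calls; a sketch that yields the same 41 points (possibly in a different order):

```python
import numpy as np

side = np.arange(-5, 6)                 # integer coordinates -5..5
gx, gy = np.meshgrid(side, side)
gx, gy = gx.ravel(), gy.ravel()

# Keep points on the square's boundary, then append the interior point (0, 0).
on_boundary = (np.abs(gx) == 5) | (np.abs(gy) == 5)
bx = np.append(gx[on_boundary], 0)
by = np.append(gy[on_boundary], 0)
bf = np.zeros_like(bx, dtype=float)
bf[-1] = 1.0                            # f(0, 0) = 1, zero elsewhere
print(bx.shape, by.shape)
```

An 11x11 grid has 11*11 - 9*9 = 40 boundary points, so adding the interior point gives the expected 41.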
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
xnew = np.linspace(-5, 5, 100)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
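The `method` argument to griddata changes the result noticeably; a small sketch comparing 'linear' and 'nearest' on a toy point set (values invented):

```python
import numpy as np
from scipy.interpolate import griddata

# Four zero-valued corners of the unit square plus a peak at the centre.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
vals = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

query = np.array([[0.5, 0.5], [0.2, 0.2]])
linear = griddata(pts, vals, query, method='linear')    # interpolates between points
nearest = griddata(pts, vals, query, method='nearest')  # snaps to closest point
print(linear, nearest)
```

At (0.2, 0.2), 'linear' blends the corner and centre values while 'nearest' returns the closest data value unchanged.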
plt.contourf(Xnew,Ynew,Fnew,cmap='jet');
plt.colorbar(shrink=.8);
plt.xlabel('x value');
plt.ylabel('y value');
plt.title('Contour Plot of Interpolated Function');
assert True # leave this to grade the plot
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation |
9,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hands-on Tutorial
Step1: Install library and data dependencies
Load and pre-process data sets
Step2: Let's examine some rows in these datasets.
Step3: Understanding the data
There are many columns in the data set; however, some columns you may want to pay closer attention to are
Step4: We will need to convert toxicity and identity columns to booleans, in order to work with our neural net and metrics calculations. For this tutorial, we will consider any value >= 0.5 as True (i.e. a comment should be considered toxic if 50% or more crowd raters labeled it as toxic). Note that this code also converts missing identity fields to False.
Step5: Exercise #1
Count the number of comments in the training set which are labeled as referring to the "female" group.
What percentage of comments which are labeled as referring to the "female" group are toxic?
How does this percentage compare to other identity groups in the training set?
How does this compare to the percentage of toxic comments in the entire training set?
Step6: Solution (click to expand)
Step7: Define a text classification model
This code creates and trains a convolutional neural net using the Keras framework. This neural net accepts a text comment, encoded using GloVe embeddings, and outputs a probability that the comment is toxic. Don't worry if you do not understand all of this code, as we will be treating this neural net as a black box later in the tutorial.
Note that for this colab, we will be loading pretrained models from disk, rather than using this code to train a new model which would take over 30 minutes.
Step9: Optional
Step10: Score test set with our text classification model
Using our new model, we can score the set of test comments for toxicity.
Step11: Let's see how our model performed against the test set. We can compare the model's predictions against the actual labels, and calculate the overall ROC-AUC for the model.
Step12: Evaluate the overall ROC-AUC
This calculates the model's performance on the entire test set using the ROC-AUC metric.
Step16: Compute Bias Metrics
Using metrics based on ROC-AUC, we can measure our model for biases against different identity groups. We only calculate bias metrics on identities that are referred to in 100 or more comments, to minimize noise.
The 3 bias metrics compare different subsets of the data as illustrated in the following image
Step17: Plot a heatmap of bias metrics
Plot a heatmap of the bias metrics. Higher scores indicate better results.
* Subgroup AUC measures the ability to separate toxic and non-toxic comments for this identity.
* Negative cross AUC measures the ability to separate non-toxic comments for this identity from toxic comments from the background distribution.
* Positive cross AUC measures the ability to separate toxic comments for this identity from non-toxic comments from the background distribution.
Step18: Exercise #2
Examine the bias heatmap above - what biases can you spot? Do the biases appear to be false positives (non-toxic comments incorrectly classified as toxic) or false negatives (toxic comments incorrectly classified as non-toxic)?
Solution (click to expand)
Some groups have lower subgroup AUC scores, for example the groups "heterosexual", "transgender", and "homosexual_gay_or_lesbian". Because the "Negative Cross AUC" is lower than the "Positive Cross AUC" for these groups, it appears that these groups have more false positives, i.e. many non-toxic comments about homosexuals are scoring higher for toxicity than actually toxic comments about other topics.
Plot histograms showing comment scores
We can graph a histogram of comment scores in each identity. In the following graphs, the X axis represents the toxicity score given by our new model, and the Y axis represents the comment count. Blue values are comments whose true label is non-toxic, while red values are those whose true label is toxic.
Step19: Exercise #3
By comparing the toxicity histograms for comments that refer to different groups with each other, and with the background distribution, what additional information can we learn about bias in our model?
Step20: Solution (click to expand)
This is one possible interpretation of the data. We encourage you to explore other identity categories and come up with your own conclusions.
We can see that for some identities such as Asian, the model scores most non-toxic comments as less than 0.2 and most toxic comments as greater than 0.2. This indicates that for the Asian identity, our model is able to distinguish between toxic and non-toxic comments. However, for the black identity, there are many non-toxic comments with scores over 0.5, along with many toxic comments with scores of less than 0.5. This shows that for the black identity, our model will be less accurate at separating toxic comments from non-toxic comments. We can see that the model also has difficulty separating toxic from non-toxic data for comments labeled as applying to the "white" identity. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Embedding
from keras.layers import Input
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dropout
from keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop
from keras.models import Model
from keras.models import load_model
%matplotlib inline
# autoreload makes it easier to interactively work on code in imported libraries
%load_ext autoreload
%autoreload 2
# Set pandas display options so we can read more of the comment text.
pd.set_option('max_colwidth', 300)
# Download and unzip files used in this colab
!curl -O -J -L https://storage.googleapis.com/civil_comments/fat_star_tutorial/fat-star.zip
!unzip -o fat-star.zip
# Seed for Pandas sampling, to get consistent sampling results
RANDOM_STATE = 123456789
Explanation: Hands-on Tutorial: Measuring Unintended Bias in Text Classification Models with Real Data
Copyright 2019 Google LLC.
SPDX-License-Identifier: Apache-2.0
Unintended bias is a major challenge for machine learning systems. In this tutorial, we will demonstrate a way to measure unintended bias in a text classification model using a large set of online comments which have been labeled for toxicity and identity references. We will provide participants with starter code that builds and evaluates a machine learning model, written using open source Python libraries. Using this code they can explore different ways to measure and visualize model bias. At the end of this tutorial, participants should walk away with new techniques for bias measurement.
WARNING: Some text examples in this notebook include profanity, offensive statements, and offensive statements involving identity terms. Please feel free to avoid using this notebook.
To get started, please click "CONNECT" in the top right of the screen. You can use SHIFT + ↲ to run cells in this notebook. Please be sure to run each cell before moving on to the next cell in the notebook.
End of explanation
# Read the initial train, test, and validate data into Pandas dataframes.
train_df_float = pd.read_csv('public_train.csv')
test_df_float = pd.read_csv('public_test.csv')
validate_df_float = pd.read_csv('public_validate.csv')
print('training data has %d rows' % len(train_df_float))
print('validation data has %d rows' % len(validate_df_float))
print('test data has %d rows' % len(test_df_float))
print('training data columns are: %s' % train_df_float.columns)
Explanation: Install library and data dependencies
Load and pre-process data sets
End of explanation
train_df_float.head()
Explanation: Let's examine some rows in these datasets.
End of explanation
pd.concat([
# Select 3 rows where 100% of raters said it applied to the male identity.
train_df_float[['toxicity', 'male', 'comment_text']].query('male == 1').head(3),
# Select 3 rows where 50% of raters said it applied to the male identity.
train_df_float[['toxicity', 'male', 'comment_text']].query('male == 0.5').head(3),
# Select 3 rows where 0% of raters said it applied to the male identity.
train_df_float[['toxicity', 'male', 'comment_text']].query('male == 0.0').head(3),
# Select 3 rows that were not labeled for the male identity (have NaN values).
# See https://stackoverflow.com/questions/26535563 if you would like to
# understand this Pandas behavior.
train_df_float[['toxicity', 'male', 'comment_text']].query('male != male').head(3)])
Explanation: Understanding the data
There are many columns in the data set; however, some columns you may want to pay closer attention to are:
* comment_text: this is the text which we will pass into our model.
* toxicity: this is the percentage of raters who labeled this comment as being toxic.
* identity columns, such as "male", "female", "white", "black", and others: these are the percentage of raters who labeled this comment as referring to a given identity. Unlike comment_text and toxicity, these columns may be missing for many rows and will display as NaN initially.
Let's now look at some unprocessed rows. We will filter the output to only show the "toxicity", "male", and "comment_text" columns, however keep in mind that there are 24 total identity columns.
End of explanation
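The `male != male` query above works because NaN compares unequal to itself; a minimal sketch of that behaviour on a toy frame:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'male': [1.0, 0.5, 0.0, np.nan]})

# NaN != NaN, so this query selects exactly the rows with missing labels.
unlabeled = toy.query('male != male')
labeled = toy.query('male == male')
print(len(unlabeled), len(labeled))
```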
# List all identities
identity_columns = [
'male', 'female', 'transgender', 'other_gender', 'heterosexual',
'homosexual_gay_or_lesbian', 'bisexual', 'other_sexual_orientation', 'christian',
'jewish', 'muslim', 'hindu', 'buddhist', 'atheist', 'other_religion', 'black',
'white', 'asian', 'latino', 'other_race_or_ethnicity',
'physical_disability', 'intellectual_or_learning_disability',
'psychiatric_or_mental_illness', 'other_disability']
def convert_to_bool(df, col_name):
df[col_name] = np.where(df[col_name] >= 0.5, True, False)
def convert_dataframe_to_bool(df):
bool_df = df.copy()
for col in ['toxicity'] + identity_columns:
convert_to_bool(bool_df, col)
return bool_df
train_df = convert_dataframe_to_bool(train_df_float)
validate_df = convert_dataframe_to_bool(validate_df_float)
test_df = convert_dataframe_to_bool(test_df_float)
train_df[['toxicity', 'male', 'comment_text']].sample(5, random_state=RANDOM_STATE)
Explanation: We will need to convert toxicity and identity columns to booleans, in order to work with our neural net and metrics calculations. For this tutorial, we will consider any value >= 0.5 as True (i.e. a comment should be considered toxic if 50% or more crowd raters labeled it as toxic). Note that this code also converts missing identity fields to False.
End of explanation
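The thresholding rule can be checked in isolation: `np.where(x >= 0.5, True, False)` maps 0.5 itself to True and NaN to False, since any comparison with NaN is False. A sketch:

```python
import numpy as np

scores = np.array([0.0, 0.4, 0.5, 0.9, np.nan])
# 0.5 passes the >= 0.5 test; NaN fails every comparison, so it becomes False.
as_bool = np.where(scores >= 0.5, True, False)
print(as_bool)
```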
# Your code here
#
# HINT: you can query dataframes for identities using code like:
# train_df.query('black == True')
# and
# train_df.query('toxicity == True')
#
# You can print the identity_columns variable to see the full list of identities
# labeled by crowd raters.
#
# Pandas Dataframe documentation is available at https://pandas.pydata.org/pandas-docs/stable/api.html#dataframe
Explanation: Exercise #1
Count the number of comments in the training set which are labeled as referring to the "female" group.
What percentage of comments which are labeled as referring to the "female" group are toxic?
How does this percentage compare to other identity groups in the training set?
How does this compare to the percentage of toxic comments in the entire training set?
End of explanation
def print_count_and_percent_toxic(df, identity):
# Query all training comments where the identity column equals True.
identity_comments = train_df.query(identity + ' == True')
# Query which of those comments also have "toxicity" equals True
toxic_identity_comments = identity_comments.query('toxicity == True')
# Alternatively you could also write a query using & (and), e.g.:
# toxic_identity_comments = train_df.query(identity + ' == True & toxicity == True')
# Print the results.
num_comments = len(identity_comments)
percent_toxic = len(toxic_identity_comments) / num_comments
print('%d comments refer to the %s identity, %.2f%% are toxic' % (
num_comments,
identity,
# multiply percent_toxic by 100 for easier reading.
100 * percent_toxic))
# Print values for comments labeled as referring to the female identity
print_count_and_percent_toxic(train_df, 'female')
# Compare this with comments labeled as referring to the male identity
print_count_and_percent_toxic(train_df, 'male')
# Print the percent toxicity for the entire training set
all_toxic_df = train_df.query('toxicity == True')
print('%.2f%% of all comments are toxic' %
(100 * len(all_toxic_df) / len(train_df)))
Explanation: Solution (click to expand)
End of explanation
MAX_NUM_WORDS = 10000
TOXICITY_COLUMN = 'toxicity'
TEXT_COLUMN = 'comment_text'
# Create a text tokenizer.
tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(train_df[TEXT_COLUMN])
# All comments must be truncated or padded to be the same length.
MAX_SEQUENCE_LENGTH = 250
def pad_text(texts, tokenizer):
return pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_SEQUENCE_LENGTH)
# Load the first model from disk.
model = load_model('model_2_3_4.h5')
Explanation: Define a text classification model
This code creates and trains a convolutional neural net using the Keras framework. This neural net accepts a text comment, encoded using GloVe embeddings, and outputs a probability that the comment is toxic. Don't worry if you do not understand all of this code, as we will be treating this neural net as a black box later in the tutorial.
Note that for this colab, we will be loading pretrained models from disk, rather than using this code to train a new model which would take over 30 minutes.
End of explanation
EMBEDDINGS_PATH = 'glove.6B.100d.txt'
EMBEDDINGS_DIMENSION = 100
DROPOUT_RATE = 0.3
LEARNING_RATE = 0.00005
NUM_EPOCHS = 10
BATCH_SIZE = 128
def train_model(train_df, validate_df, tokenizer):
# Prepare data
train_text = pad_text(train_df[TEXT_COLUMN], tokenizer)
train_labels = to_categorical(train_df[TOXICITY_COLUMN])
validate_text = pad_text(validate_df[TEXT_COLUMN], tokenizer)
validate_labels = to_categorical(validate_df[TOXICITY_COLUMN])
# Load embeddings
embeddings_index = {}
with open(EMBEDDINGS_PATH) as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
num_words_in_embedding += 1
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
# Create model layers.
def get_convolutional_neural_net_layers():
"""Returns (input_layer, output_layer)"""
sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
embedding_layer = Embedding(len(tokenizer.word_index) + 1,
EMBEDDINGS_DIMENSION,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False)
x = embedding_layer(sequence_input)
x = Conv1D(128, 2, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 3, activation='relu', padding='same')(x)
x = MaxPooling1D(5, padding='same')(x)
x = Conv1D(128, 4, activation='relu', padding='same')(x)
x = MaxPooling1D(40, padding='same')(x)
x = Flatten()(x)
x = Dropout(DROPOUT_RATE)(x)
x = Dense(128, activation='relu')(x)
preds = Dense(2, activation='softmax')(x)
return sequence_input, preds
# Compile model.
input_layer, output_layer = get_convolutional_neural_net_layers()
model = Model(input_layer, output_layer)
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(lr=LEARNING_RATE),
metrics=['acc'])
# Train model.
model.fit(train_text,
train_labels,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(validate_text, validate_labels),
verbose=2)
return model
# Uncomment this code to run model training
# model = train_model(train_df, validate_df, tokenizer)
Explanation: Optional: dive into model architecture
Expand this code to see how our text classification model is defined, and optionally train your own model. Warning: training a new model maybe take over 30 minutes.
End of explanation
# Use the model to score the test set.
test_comments_padded = pad_text(test_df[TEXT_COLUMN], tokenizer)
MODEL_NAME = 'fat_star_tutorial'
test_df[MODEL_NAME] = model.predict(test_comments_padded)[:, 1]
Explanation: Score test set with our text classification model
Using our new model, we can score the set of test comments for toxicity.
End of explanation
# Print some records to compare our model results with the correct labels
pd.concat([
test_df.query('toxicity == False').sample(3, random_state=RANDOM_STATE),
test_df.query('toxicity == True').sample(3, random_state=RANDOM_STATE)])[[TOXICITY_COLUMN, MODEL_NAME, TEXT_COLUMN]]
Explanation: Let's see how our model performed against the test set. We can compare the model's predictions against the actual labels, and calculate the overall ROC-AUC for the model.
End of explanation
def calculate_overall_auc(df, model_name):
true_labels = df[TOXICITY_COLUMN]
predicted_labels = df[model_name]
return metrics.roc_auc_score(true_labels, predicted_labels)
calculate_overall_auc(test_df, MODEL_NAME)
Explanation: Evaluate the overall ROC-AUC
This calculates the model's performance on the entire test set using the ROC-AUC metric.
End of explanation
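ROC-AUC can be read as the probability that a randomly chosen toxic comment outscores a randomly chosen non-toxic one; a toy sketch with hand-made labels and scores:

```python
from sklearn import metrics

true_labels = [False, False, True, True]
# A model that ranks every toxic example above every non-toxic one gets AUC 1.0.
perfect_scores = [0.1, 0.2, 0.8, 0.9]
# One mis-ranked pair out of the four toxic/non-toxic pairs drops it to 0.75.
imperfect_scores = [0.1, 0.5, 0.8, 0.4]

perfect = metrics.roc_auc_score(true_labels, perfect_scores)
imperfect = metrics.roc_auc_score(true_labels, imperfect_scores)
print(perfect, imperfect)
```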
# Get a list of identity columns that have >= 100 True records. This will remove groups such
# as "other_disability" which do not have enough records to calculate meaningful metrics.
identities_with_over_100_records = []
for identity in identity_columns:
num_records = len(test_df.query(identity + '==True'))
if num_records >= 100:
identities_with_over_100_records.append(identity)
SUBGROUP_AUC = 'subgroup_auc'
BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC = 'background_positive_subgroup_negative_auc'
BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC = 'background_negative_subgroup_positive_auc'
def compute_auc(y_true, y_pred):
try:
return metrics.roc_auc_score(y_true, y_pred)
except ValueError:
return np.nan
def compute_subgroup_auc(df, subgroup, label, model_name):
subgroup_examples = df[df[subgroup]]
return compute_auc(subgroup_examples[label], subgroup_examples[model_name])
def compute_background_positive_subgroup_negative_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup negative examples and the background positive examples."""
subgroup_negative_examples = df[df[subgroup] & ~df[label]]
non_subgroup_positive_examples = df[~df[subgroup] & df[label]]
examples = pd.concat([subgroup_negative_examples, non_subgroup_positive_examples])
return compute_auc(examples[label], examples[model_name])
def compute_background_negative_subgroup_positive_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup positive examples and the background negative examples."""
subgroup_positive_examples = df[df[subgroup] & df[label]]
non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]]
examples = pd.concat([subgroup_positive_examples, non_subgroup_negative_examples])
return compute_auc(examples[label], examples[model_name])
def compute_bias_metrics_for_model(dataset,
subgroups,
model,
label_col,
include_asegs=False):
"""Computes per-subgroup metrics for all subgroups and one model."""
records = []
for subgroup in subgroups:
record = {
'subgroup': subgroup,
'subgroup_size': len(dataset[dataset[subgroup]])
}
record[SUBGROUP_AUC] = compute_subgroup_auc(
dataset, subgroup, label_col, model)
record[BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC] = compute_background_positive_subgroup_negative_auc(
dataset, subgroup, label_col, model)
record[BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC] = compute_background_negative_subgroup_positive_auc(
dataset, subgroup, label_col, model)
records.append(record)
return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)
bias_metrics_df = compute_bias_metrics_for_model(test_df, identities_with_over_100_records, MODEL_NAME, TOXICITY_COLUMN)
Explanation: Compute Bias Metrics
Using metrics based on ROC-AUC, we can measure our model for biases against different identity groups. We only calculate bias metrics on identities that are referred to in 100 or more comments, to minimize noise.
The 3 bias metrics compare different subsets of the data as illustrated in the following image:
End of explanation
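The subgroup metric can be illustrated on a tiny synthetic frame (all values invented) where the model separates the background perfectly but mis-ranks the subgroup:

```python
import pandas as pd
from sklearn import metrics

toy = pd.DataFrame({
    'in_subgroup': [True, True, True, True, False, False, False, False],
    'toxic':       [True, True, False, False, True, True, False, False],
    'score':       [0.4, 0.3, 0.6, 0.7, 0.9, 0.8, 0.1, 0.2],
})

# Subgroup AUC: toxic vs non-toxic comments within the subgroup only.
sub = toy[toy['in_subgroup']]
subgroup_auc = metrics.roc_auc_score(sub['toxic'], sub['score'])

# Background AUC: the same separation outside the subgroup.
bg = toy[~toy['in_subgroup']]
background_auc = metrics.roc_auc_score(bg['toxic'], bg['score'])
print(subgroup_auc, background_auc)
```

A low subgroup AUC alongside a high background AUC is exactly the pattern the heatmap in the next step is designed to surface.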
def plot_auc_heatmap(bias_metrics_results, models):
metrics_list = [SUBGROUP_AUC, BACKGROUND_POSITIVE_SUBGROUP_NEGATIVE_AUC, BACKGROUND_NEGATIVE_SUBGROUP_POSITIVE_AUC]
df = bias_metrics_results.set_index('subgroup')
columns = []
vlines = [i * len(models) for i in range(len(metrics_list))]
for metric in metrics_list:
for model in models:
columns.append(metric)
num_rows = len(df)
num_columns = len(columns)
fig = plt.figure(figsize=(num_columns, 0.5 * num_rows))
ax = sns.heatmap(df[columns], annot=True, fmt='.2', cbar=True, cmap='Reds_r',
vmin=0.5, vmax=1.0)
ax.xaxis.tick_top()
plt.xticks(rotation=90)
ax.vlines(vlines, *ax.get_ylim())
return ax
plot_auc_heatmap(bias_metrics_df, [MODEL_NAME])
Explanation: Plot a heatmap of bias metrics
Plot a heatmap of the bias metrics. Higher scores indicate better results.
* Subgroup AUC measures the ability to separate toxic and non-toxic comments for this identity.
* Negative cross AUC measures the ability to separate non-toxic comments for this identity from toxic comments from the background distribution.
* Positive cross AUC measures the ability to separate toxic comments for this identity from non-toxic comments from the background distribution.
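As a toy illustration of the metric family (assuming scikit-learn's `roc_auc_score` underneath the `compute_auc` helper): subgroup AUC is ordinary ROC-AUC restricted to the subgroup's rows, and the two cross AUCs are computed the same way on the mixed subsets described above.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

toy = pd.DataFrame({
    'label':    [1, 0, 1, 0, 1, 0],
    'score':    [0.9, 0.8, 0.7, 0.2, 0.6, 0.1],
    'subgroup': [True, True, False, False, False, False],
})
# Subgroup AUC: can the model separate toxic from non-toxic *within* the subgroup?
sub = toy[toy['subgroup']]
subgroup_auc = roc_auc_score(sub['label'], sub['score'])
```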
End of explanation
def plot_histogram(non_toxic_scores, toxic_scores, description):
NUM_BINS=10
sns.distplot(non_toxic_scores, norm_hist=True, bins=NUM_BINS, color="skyblue", label='non-toxic ' + description, kde=False)
ax = sns.distplot(toxic_scores, norm_hist=True, bins=NUM_BINS, color="red", label='toxic ' + description, kde=False)
ax.set(xlabel='model toxicity score', ylabel='relative % of comments', yticklabels=[])
plt.legend()
plt.figure()
# Plot toxicity distributions of different identities to visualize bias.
def plot_histogram_for_identity(df, identity):
toxic_scores = df.query(identity + ' == True & toxicity == True')[MODEL_NAME]
non_toxic_scores = df.query(identity + ' == True & toxicity == False')[MODEL_NAME]
plot_histogram(non_toxic_scores, toxic_scores, 'labeled for ' + identity)
def plot_background_histogram(df):
toxic_scores = df.query('toxicity == True')[MODEL_NAME]
non_toxic_scores = df.query('toxicity == False')[MODEL_NAME]
plot_histogram(non_toxic_scores, toxic_scores, 'for all test data')
# Plot the histogram for the background data, and for a few identities
plot_background_histogram(test_df)
plot_histogram_for_identity(test_df, 'heterosexual')
plot_histogram_for_identity(test_df, 'transgender')
plot_histogram_for_identity(test_df, 'homosexual_gay_or_lesbian')
plot_histogram_for_identity(test_df, 'atheist')
plot_histogram_for_identity(test_df, 'christian')
plot_histogram_for_identity(test_df, 'asian')
Explanation: Exercise #2
Examine the bias heatmap above - what biases can you spot? Do the biases appear to be false positives (non-toxic comments incorrectly classified as toxic) or false negatives (toxic comments incorrectly classified as non-toxic)?
Solution (click to expand)
Some groups have lower subgroup AUC scores, for example the groups "heterosexual", "transgender", and "homosexual_gay_or_lesbian". Because the "Negative Cross AUC" is lower than the "Positive Cross AUC" for these groups, they appear to have more false positives, i.e. many non-toxic comments about these identities score higher for toxicity than actually toxic comments about other topics.
Plot histograms showing comment scores
We can graph a histogram of comment scores for each identity. In the following graphs, the X axis represents the toxicity score given by our new model, and the Y axis represents the comment count. Blue values are comments whose true label is non-toxic, while red values are those whose true label is toxic.
End of explanation
# Your code here
#
# HINT: you can display the background distribution by running:
# plot_background_histogram(test_df)
#
# You can plot the distribution for a given identity by running
# plot_histogram_for_identity(test_df, identity_name)
# e.g. plot_histogram_for_identity(test_df, 'male')
Explanation: Exercise #3
By comparing the toxicity histograms for comments that refer to different groups with each other, and with the background distribution, what additional information can we learn about bias in our model?
End of explanation
plot_histogram_for_identity(test_df, 'asian')
plot_histogram_for_identity(test_df, 'black')
plot_histogram_for_identity(test_df, 'white')
Explanation: Solution (click to expand)
This is one possible interpretation of the data. We encourage you to explore other identity categories and come up with your own conclusions.
We can see that for some identities such as Asian, the model scores most non-toxic comments as less than 0.2 and most toxic comments as greater than 0.2. This indicates that for the Asian identity, our model is able to distinguish between toxic and non-toxic comments. However, for the black identity, there are many non-toxic comments with scores over 0.5, along with many toxic comments with scores of less than 0.5. This shows that for the black identity, our model will be less accurate at separating toxic comments from non-toxic comments. We can see that the model also has difficulty separating toxic from non-toxic data for comments labeled as applying to the "white" identity.
End of explanation |
9,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Data Types
Step1: Dropping Data
Step2: Sorting
Step3: Getting Data Information
Step4: Data Summary
Step5: Selection
Step6: Data Aggregation
Step7: Help
help(pd.Series.loc)
Applying Functions
Step8: Data Alignment
Step9: Input and Output
Reading and writing CSV
pd.read_csv('file.csv', header=None, nrows=5)
df.to_csv('myDataFrame.csv')
Reading and writing Excel
pd.read_excel('file.xlsx')
df.to_excel('dir/myDataFrame.xlsx', sheet_name='Sheet1')
Reading data from multiple sheets
xlsx = pd.ExcelFile('file.xls')
df = pd.read_excel(xlsx, 'Sheet1')
Reading from and writing to SQL queries or database tables
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
Step10: join | Python Code:
s = pd.Series([3, -5, 7, 4], index=['a', 'b', 'c', 'd'])
data = {'Country': ['Belgium', 'India', 'Brazil'],
'Capital': ['Brussels', 'New Delhi', 'Brasília'],
'Population': [11190846, 1303171035, 207847528]}
df = pd.DataFrame(data,
columns=['Country', 'Capital', 'Population'])
# Pivot
data = {'Date': ['2016-03-01', '2016-03-02', '2016-03-01','2016-03-03','2016-03-02',
'2016-03-03'],
'Type': ['a', 'b', 'c','a','a','c'],
'Value': [11.432, 13.031, 20.784,99.906,1.303,20.784]}
df2 = pd.DataFrame(data,
columns=['Date', 'Type', 'Value'])
df3= df2.pivot(index='Date',
columns='Type',
values='Value')
print(df2)
print(df3)
df4 = pd.pivot_table(df2,values='Value',index='Date',columns=['Type'])
print(df2)
print(df4)
df4 = pd.pivot_table(df2,
values='Value',
index='Date',
columns=['Type'])
print(df4)
df5=pd.melt(df2,
id_vars=["Date"],
value_vars=["Type", "Value"],
value_name="Observations")
print(df5)
Explanation: Pandas Data Types
End of explanation
s.drop(['a','c'])
df.drop('Country', axis=1)
Explanation: Dropping Data
End of explanation
df.sort_index()
df.sort_values(by='Country')
df.rank()
Explanation: Sorting
End of explanation
df.shape
df.index
df.columns
df.info()
df.count()
Explanation: Getting Data Information
End of explanation
df.sum()
df.cumsum()
#df.min()/df.max()
#df.idxmin()/df.idxmax()
df.describe()
df.mean()
df.median()
Explanation: Data Summary
End of explanation
s['b']
df[1:]
df.iloc[[0],[0]]
df.iat[0, 0]
#df.loc[[0], ['Country']]
#df.at([0], ['Country'])
df.ix[2]
df.ix[:,'Capital']
df.ix[1,'Capital']
#Boolean Indexing
s[~(s > 1)]
s[(s < -1) | (s > 2)]
df[df['Population']>1200000000]
s['a'] = 6
#Selecting
df3.loc[:,(df3>1).any()]
df3.loc[:,(df3>1).all()]
df3.loc[:,df3.isnull().any()]
df3.loc[:,df3.notnull().all()]
#Indexing With isin
df[(df.Country.isin(df2.Type))]
df3.filter(items=["a","b"])
df.select(lambda x: not x%5)
#Where
s.where(s > 0)
#Query
#df.query('second > first')
#Setting/Resetting Index
df.set_index('Country')
df4 = df.reset_index()
df = df.rename(index=str,columns={"Country":"cntry","Capital":"cptl",
"Population":"ppltn"})
s2 = s.reindex(['a','c','d','e','b'])
ss= df.reindex(range(4),method='ffill')
print(ss)
s3 = s.reindex(range(5),method='bfill')
print(s3)
Explanation: Selection
End of explanation
#Aggregation
df2.groupby(by=['Date','Type']).mean()
df4.groupby(level=0).sum()
print(df4)
#df4.groupby(level=0).agg({'Capital':lambda x:sum(x)/len(x), 'Population': np.sum})
#Transformation
#customSum = lambda x: (x+x%2)
#df4.groupby(level=0).transform(customSum)
Explanation: Data Aggregation
End of explanation
f=lambda x:x*2
df.apply(f)
df.applymap(f)
Explanation: Help
help(pd.Series.loc)
Applying Functions
End of explanation
s3 = pd.Series([7, -2, 3], index=['a', 'c', 'd'])
s + s3
Explanation: Data Alignment
End of explanation
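Alignment inserts NaN for index labels present in only one operand (here 'b'); pandas' arithmetic methods accept a fill value when that is not desired. A small sketch with the same two series:

```python
import pandas as pd

s = pd.Series([3, -5, 7, 4], index=['a', 'b', 'c', 'd'])
s3 = pd.Series([7, -2, 3], index=['a', 'c', 'd'])
# s + s3 would give NaN at 'b'; fill_value=0 treats the missing entry as 0.
result = s.add(s3, fill_value=0)
```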
dict1 = {'X1': ['a', 'b', 'c'],
'X2': ['11.432', '1.303', '99.906']}
dict2 = {'X1': ['a', 'b', 'd'],
'X3': ['20.784', 'NaN', '20.784']}
data1 = pd.DataFrame(dict1,
columns=['X1', 'X2'])
data2 = pd.DataFrame(dict2,
columns=['X1', 'X3'])
pd.merge(data1,
data2,
how='left',
on='X1')
pd.merge(data1,
data2,
how='right',
on='X1')
pd.merge(data1,
data2,
how='inner',
on='X1')
pd.merge(data1,
data2,
how='outer',
on='X1')
Explanation: Input and Output
Reading and writing CSV
pd.read_csv('file.csv', header=None, nrows=5)
df.to_csv('myDataFrame.csv')
Reading and writing Excel
pd.read_excel('file.xlsx')
df.to_excel('dir/myDataFrame.xlsx', sheet_name='Sheet1')
Reading data from multiple sheets
xlsx = pd.ExcelFile('file.xls')
df = pd.read_excel(xlsx, 'Sheet1')
Reading from and writing to SQL queries or database tables
from sqlalchemy import create_engine
engine = create_engine('sqlite:///:memory:')
pd.read_sql("SELECT * FROM my_table;", engine)
pd.read_sql_table('my_table', engine)
pd.read_sql_query("SELECT * FROM my_table;", engine)
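The snippets above use placeholder filenames; a self-contained round trip that runs anywhere (writing to a temporary directory) could look like:

```python
import os
import tempfile
import pandas as pd

df_io = pd.DataFrame({'Country': ['Belgium', 'India'],
                      'Population': [11190846, 1303171035]})
# Write to a temporary directory, then read the file back.
path = os.path.join(tempfile.mkdtemp(), 'myDataFrame.csv')
df_io.to_csv(path, index=False)
df_back = pd.read_csv(path)
```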
Data Merging
End of explanation
#help(df.join)
caller = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
'B': ['B0', 'B1', 'B2']})
caller.join(other, lsuffix='_caller', rsuffix='_other')
print(caller)
#data1.set_index('X1')
#data2.set_index('X1')
#data1.join(data2, lsuffix='data1', rsuffix='data2', how='right')
caller = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
'B': ['B0', 'B1', 'B2']})
caller.set_index('key').join(other.set_index('key'))
print(caller)
caller = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'],
'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
other = pd.DataFrame({'key': ['K0', 'K1', 'K2'],
'B': ['B0', 'B1', 'B2']})
caller.join(other.set_index('key'), on='key')
print(caller)
Explanation: join
End of explanation |
9,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Sprints, Formally
We present an example use of Montre on a data set obtained by tracking positions of players in a real soccer match. In this example, we find all sprints performed by a single player where a sprint is formally specified by a timed regular expression over speed and acceleration behaviors. The data are obtained by a computer vision algorithm with a frame rate of 10 Hz so we have a xy-coordinate for each player on the field at every 100 milliseconds. Therefore we use milliseconds as our base time unit for behaviors and expressions.
Writing patterns
In order to specify a pattern for sprints, we need to address two issues in order
Step1: Often a sprint effort is characterized by any movement above a certain speed
threshold for a limited time. This gives us our first sprint pattern such that
a period of high speed between 1-10 seconds, formally written as follows
Step2: Above we use anchor operators from both sides on the proposition s to obtain only maximal periods that satisfy s; otherwise, any sub-period satisfies the pattern as well. The operator % specifies that the duration is restricted to be in 1000 and 10000 milliseconds. Alternatively we may want to find other efforts starting with high acceleration but not reaching top speeds necessarily. This gives us our second sprint pattern such that a period of high acceleration followed by a period of medium or high speed between 1-10 seconds, formally written as follows
Step3: Notice that we do not use the right-anchor on g. This allows a medium or high speed period to overlap with a high acceleration period as it is usually the case that they are concurrent. Writing an equivalent pattern using classical regular expressions over a product alphabet would be a very tedious task partly due to a requirement to handle such interleavings explicitly (and the lack of timing constraints). For timed regular expressions all propositions are considered to be concurrent by definition, which results in concise and intuitive expressions. Finally we give a third pattern to find rather short but intense sprints such that
Step4: Then we visualize all sprints found by Montre for patterns P1-P3 in Figure 3 over the behavior of a single player during one half of the game (45 min.) containing 27K data points that reduces to timed behaviors of 5K segments after pre-processing.
Timed Pattern Matching, Pre- and Post-Processing
Step5: For this tutorial we use the data from a real soccer match, Galatasaray v Sivasspor, played on January 16th, 2016. My favorite team Galatasaray won the match so that's cool. More info about the match (like the jersey numbers of players, which we'll use next) can be found here. These data are obtained by a sport analytics company, Sentio. Their soccer player tracking technology tracks players by using state-of-the-art computer vision and machine learning algorithms. But here we just use the output of these algorithms
Step6: Now I choose a player to find his sprints during the match. My usual suspect is Olcan Adin (Jersey Number #29) from Galatasaray (Team #1). I choose him because he is a winger and usually sprints more than others. Also we specify whether we want to find sprints in the first half or the second half.
Step7: First we sort datapoints according to timestamps/seconds/minutes and then introduce a single time axis. Besides, the tracking technology sometimes gets confused and misses some samples; therefore, we fill in these values by interpolation in the following.
Step8: If you would like to see the entire movement of the player, it's here
Step9: We calculate displacement, speed and acceleration from xy-coordinates. Here is some Euclidean geometry and Newtonian mechanics
Step10: Finally we apply the speed and acceleration categories and obtain symbolic speed and acceleration behaviors.
Step11: Then we write the symbolic behaviors into a file named INPUT.txt.
Step12: Now we call Montre to perform timed pattern matching for one of the given patterns (P1, P2, P3) and the file INPUT.txt.
Step13: The output containing a list of zones is in the OUTPUT.txt. The zone from the first line is as follows. | Python Code:
# Speed (m/s) and acceleration (m/s^2) categories
accel_desc = ['nhigh','nmedium', 'nlow','around_zero', 'low', 'medium', 'high']
accel_syms = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
accel_bins = [-100, -1.60, -1.17,-0.57, 0.57, 1.17, 1.60, 100]
speed_desc = ['low', 'medium', 'high', 'very_high']
speed_syms = ['p', 'q', 'r', 's']
speed_bins = [-1.0, 2, 3.7, 6, 20]
Explanation: Finding Sprints, Formally
We present an example use of Montre on a data set obtained by tracking positions of players in a real soccer match. In this example, we find all sprints performed by a single player where a sprint is formally specified by a timed regular expression over speed and acceleration behaviors. The data are obtained by a computer vision algorithm with a frame rate of 10 Hz so we have a xy-coordinate for each player on the field at every 100 milliseconds. Therefore we use milliseconds as our base time unit for behaviors and expressions.
Writing patterns
In order to specify a pattern for sprints, we need to address two issues in order: (1) how to categorize continuous speed and acceleration axes, and (2) which composition of these categories defines a sprinting effort best. Clearly, there are no universal answers for these questions so we rely on the study in the following. First, we partition the speed axis into four categories (near-zero, low, medium, and high) and the acceleration axis into seven (from negative high, through around-zero, to high), and we associate a letter with each category below. For example, a period of medium speed, denoted by r, means the speed value resides between 3.7 and 6 m/s during the period.
End of explanation
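The categorization itself is done later with `pd.cut`; as a quick check of the binning, a 4.2 m/s speed sample falls into the (3.7, 6] bin and therefore gets the symbol r:

```python
import pandas as pd

speed_syms = ['p', 'q', 'r', 's']
speed_bins = [-1.0, 2, 3.7, 6, 20]
# pd.cut assigns each value the label of the half-open bin it falls into.
label = pd.cut([4.2], bins=speed_bins, labels=speed_syms)[0]
```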
P1 = "\'(<:s:>)%(1000,10000)\'"
Explanation: Often a sprint effort is characterized by any movement above a certain speed
threshold for a limited time. This gives us our first sprint pattern such that
a period of high speed between 1-10 seconds, formally written as follows:
End of explanation
P2 = "\'(<:g);(<:(r||s):>)%(1000,10000)\'"
Explanation: Above we use anchor operators from both sides on the proposition s to obtain only maximal periods that satisfy s; otherwise, any sub-period satisfies the pattern as well. The operator % specifies that the duration is restricted to be in 1000 and 10000 milliseconds. Alternatively we may want to find other efforts starting with high acceleration but not reaching top speeds necessarily. This gives us our second sprint pattern such that a period of high acceleration followed by a period of medium or high speed between 1-10 seconds, formally written as follows:
End of explanation
P3 = "\'(<:(f||g));((<:s:>)%(1000,2000))\'"
Explanation: Notice that we do not use the right-anchor on g. This allows a medium or high speed period to overlap with a high acceleration period, as it is usually the case that they are concurrent. Writing an equivalent pattern using classical regular expressions over a product alphabet would be a very tedious task, partly due to a requirement to handle such interleavings explicitly (and the lack of timing constraints). For timed regular expressions all propositions are considered to be concurrent by definition, which results in concise and intuitive expressions. Finally we give a third pattern to find rather short but intense sprints, such that a period of medium or high acceleration is followed by a period of high speed between 1-2 seconds, formally written as follows:
End of explanation
%matplotlib inline
# In order to process data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# In order to call montre
import subprocess, shlex
Explanation: Then we visualize all sprints found by Montre for patterns P1-P3 in Figure 3 over the behavior of a single player during one half of the game (45 min.) containing 27K data points that reduces to timed behaviors of 5K segments after pre-processing.
Timed Pattern Matching, Pre- and Post-Processing
End of explanation
# Sentio constants
FRAME_PER_SECOND = 10
# Read Sentio data
df = pd.read_csv('data/match.csv', index_col=["MINUTE", "SECOND", 'TIMESTAMP'], header=0,
names=["TIMESTAMP", "HALF", "MINUTE", "SECOND", "TEAM_ID",
"JERSEY_NUMBER", "X_POS", "Y_POS", "DISTANCE", "SPEED"])
Explanation: For this tutorial we use the data from a real soccer match, Galatasaray v Sivasspor, played on January 16th, 2016. My favorite team Galatasaray won the match so that's cool. More info about the match (like the jersey numbers of players, which we'll use next) can be found here. These data are obtained by a sport analytics company, Sentio. Their soccer player tracking technology tracks players by using state-of-the-art computer vision and machine learning algorithms. But here we just use the output of these algorithms: Raw coordinates of players monitored with a rate of 10 samples per second for 90 minutes. We read the data from a csv file.
End of explanation
# Usual Suspect
PLAYER_TEAM = 1 # Galatasaray
PLAYER_NUMBER = 29 # Olcan Adin
HALF = 1
# Select the data for the specific player
data = df[(df['TEAM_ID'] == PLAYER_TEAM) &
(df['JERSEY_NUMBER'] == PLAYER_NUMBER) &
(df['HALF'] == HALF)].copy(deep=True)
Explanation: Now I choose a player to find his sprints during the match. My usual suspect is Olcan Adin (Jersey Number #29) from Galatasaray (Team #1). I choose him because he is a winger and usually sprints more than others. Also we specify whether we want to find sprints in the first half or the second half.
End of explanation
data['TIME'] = pd.Series(range(len(data)), index=data.index)
data = data.set_index(['TIME'])
data = data.drop(['HALF', 'TEAM_ID', "JERSEY_NUMBER", "DISTANCE", "SPEED"], axis=1).copy(deep=True)
data.loc[data['X_POS'] < 0, 'X_POS'] = np.nan; data['X_POS'] = data['X_POS'].interpolate()
data.loc[data['Y_POS'] < 0, 'Y_POS'] = np.nan; data['Y_POS'] = data['Y_POS'].interpolate()
Explanation: First we sort datapoints according to timestamps/seconds/minutes and then introduce a single time axis. Besides, the tracking technology sometimes gets confused and misses some samples; therefore, we fill in these values by interpolation in the following.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
# Draw a simple soccer pitch (105x68m)
ax.set_ylim(0,68)
ax.set_xlim(0,105)
plt.plot([52.5,52.5], [0,68], color='black', linewidth=2)
plt.plot([0,16.5,16.5,0], [54,54, 14,14], color='black', linewidth=2)
plt.plot([105,105-16.5,105-16.5,105], [54,54,14,14], color='black', linewidth=2)
circle1=plt.Circle((52.5,34),10, color='black', linewidth=2, fill=False)
ax.add_artist(circle1)
plt.plot(data['X_POS'],data['Y_POS'])
Explanation: If you would like to see the entire movement of the player, it's here:
End of explanation
data['DISPLACEMENT'] = np.sqrt((data['X_POS'].diff())**2 + (data['Y_POS'].diff())**2)
data['DISPLACEMENT'][0] = 0
data['SPEED'] = data['DISPLACEMENT'] * FRAME_PER_SECOND
data['ACCEL'] = data['SPEED'].diff() * FRAME_PER_SECOND
data['ACCEL'][0] = 0
data['ACCEL'][1] = 0
# In case more smoothing is needed.
# data['SPEED'][2:] = data['SPEED'].rolling(center=False,window=3).mean()[2:]
# data['ACCEL'][2:] = data['ACCEL'].rolling(center=False,window=3).mean()[2:]
Explanation: We calculate displacement, speed and acceleration from xy-coordinates. Here is some Euclidean geometry and Newtonian mechanics:
End of explanation
data['SPEEDSYM'] = pd.cut(data['SPEED'], bins=speed_bins, labels=speed_syms)
data['ACCELSYM'] = pd.cut(data['ACCEL'], bins=accel_bins, labels=accel_syms)
Explanation: Finally we apply speed and acceleration categories and I obtain symbolic speed and acceleration behaviors.
End of explanation
def collect(x, *ys):
y = [''.join(str(i) for i in e) for e in zip(*ys)]
xp = x[0]
yp = y[0]
for (xi, yi) in zip(x,y):
if yi != yp:
yield (xi-xp,yp)
xp = xi
yp = yi
yield (xi-xp,yp)
with open('INPUT.txt', 'w') as f:
for (xi, yi) in collect(data.index.values, list(data['ACCELSYM'].values), list(data['SPEEDSYM'].values)):
f.write('{0} {1}\n'.format(xi*100, yi))
Explanation: Then we write the symbolic behaviors into a file named INPUT.txt.
End of explanation
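To see what `collect` produces, here is a toy run (the generator is repeated here so the sketch is self-contained): it run-length-encodes consecutive identical (acceleration, speed) symbol pairs into (duration, symbols) segments.

```python
# Same collect generator as above, compacted slightly.
def collect(x, *ys):
    y = [''.join(str(i) for i in e) for e in zip(*ys)]
    xp, yp = x[0], y[0]
    for (xi, yi) in zip(x, y):
        if yi != yp:
            yield (xi - xp, yp)
            xp, yp = xi, yi
    yield (xi - xp, yp)

frames = [0, 1, 2, 3, 4]
accel = ['d', 'd', 'g', 'g', 'g']
speed = ['p', 'p', 'p', 'r', 'r']
segments = list(collect(frames, accel, speed))
# segments == [(2, 'dp'), (1, 'gp'), (1, 'gr')]
```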
command = shlex.split("montre" + " " + "-b" + " " + P1 + " " + "-f" + " " + "INPUT.txt" + " " + "-o" + " " + "OUTPUT.txt")
subprocess.call(command)
Explanation: Now we call Montre to perform timed pattern matching for one of the given patterns (P1, P2, P3) and the file INPUT.txt.
End of explanation
%%latex
\begin{equation*}
144800 \leq t \leq 148800\\
144800 \leq t' \leq 148800\\
4000 \leq t' -t \leq 4000
\end{equation*}
Explanation: The output containing a list of zones is in the OUTPUT.txt. The zone from the first line is as follows.
End of explanation |
9,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP Example
Step1: Overview
Here we aim to reproduce the experimental results from
Step2: Run Simulation
Step3: Plot Results
Step4: Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
Explanation: QuTiP Example: Birth and Death of Photons in a Cavity
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
N=5
a=destroy(N)
H=a.dag()*a # Simple oscillator Hamiltonian
psi0=basis(N,1) # Initial Fock state with one photon
kappa=1.0/0.129 # Coupling rate to heat bath
nth= 0.063 # Temperature with <n>=0.063
# Build collapse operators for the thermal bath
c_ops = []
c_ops.append(np.sqrt(kappa * (1 + nth)) * a)
c_ops.append(np.sqrt(kappa * nth) * a.dag())
Explanation: Overview
Here we aim to reproduce the experimental results from:
<blockquote>
Gleyzes et al., "Quantum jumps of light recording the birth and death of a photon in a cavity", [Nature **446**, 297 (2007)](http://dx.doi.org/10.1038/nature05589).
</blockquote>
In particular, we will simulate the creation and annihilation of photons inside the optical cavity due to the thermal environment when the initial cavity is a single-photon Fock state $ |1\rangle$, as presented in Fig. 3 from the article.
System Setup
End of explanation
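As a NumPy-only sanity check (a sketch, not part of the original notebook): with these two collapse operators, detailed balance forces a thermal steady state, $P(n) \propto (n_{th}/(1+n_{th}))^n$, whose mean photon number is nth — so the long-time limit of the simulated curves should settle near 0.063.

```python
import numpy as np

nth, N = 0.063, 5
r = nth / (1 + nth)
p = r ** np.arange(N)               # unnormalized thermal distribution
p /= p.sum()
n_mean = np.sum(np.arange(N) * p)   # ~= nth, up to truncation at N levels
```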
ntraj = [1,5,15,904] # number of MC trajectories
tlist = np.linspace(0,0.8,100)
mc = mcsolve(H,psi0,tlist,c_ops,[a.dag()*a],ntraj)
me = mesolve(H,psi0,tlist,c_ops, [a.dag()*a])
Explanation: Run Simulation
End of explanation
fig = plt.figure(figsize=(8, 8), frameon=False)
plt.subplots_adjust(hspace=0.0)
# Results for a single trajectory
ax1 = plt.subplot(4,1,1)
ax1.xaxis.tick_top()
ax1.plot(tlist,mc.expect[0][0],'b',lw=2)
ax1.set_xticks([0,0.2,0.4,0.6])
ax1.set_yticks([0,0.5,1])
ax1.set_ylim([-0.1,1.1])
ax1.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for five trajectories
ax2 = plt.subplot(4,1,2)
ax2.plot(tlist,mc.expect[1][0],'b',lw=2)
ax2.set_yticks([0,0.5,1])
ax2.set_ylim([-0.1,1.1])
ax2.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for fifteen trajectories
ax3 = plt.subplot(4,1,3)
ax3.plot(tlist,mc.expect[2][0],'b',lw=2)
ax3.plot(tlist,me.expect[0],'r--',lw=2)
ax3.set_yticks([0,0.5,1])
ax3.set_ylim([-0.1,1.1])
ax3.set_ylabel(r'$\langle P_{1}(t)\rangle$')
# Results for 904 trajectories
ax4 = plt.subplot(4,1,4)
ax4.plot(tlist,mc.expect[3][0],'b',lw=2)
ax4.plot(tlist,me.expect[0],'r--',lw=2)
plt.xticks([0,0.2,0.4,0.6])
plt.yticks([0,0.5,1])
ax4.set_xlim([0,0.8])
ax4.set_ylim([-0.1,1.1])
ax4.set_xlabel(r'Time (s)')
ax4.set_ylabel(r'$\langle P_{1}(t)\rangle$')
xticklabels = ax2.get_xticklabels()+ax3.get_xticklabels()
plt.setp(xticklabels, visible=False);
Explanation: Plot Results
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
9,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyScopus
Step1: <hr>
General Search
Step2: Full text link
Step3: For those with full text links, you are able to get all the text by calling scopus.retrieve_full_text()
Step4: <hr>
Search for a specific author
Step5: Then we can retrieve more detailed info about the author we are looking for using his/her author_id
Step6: Search for his publications explicitly
Step7: Abstract retrieval
If the 2nd argument download_path is not given, the JSON response would not be saved
Step8: <hr>
Note that Searching for articles in specific journals (venues) is not supported anymore since this can be easily done by general search.
<hr>
Citation count retrieval
Note that the use of citation overview API needs to be approved by Elsevier.
Step9: Serial Title Metadata
If interested in meta information and metrics at publication venue level (e.g., journal/conference), we can now use search_serial or retrieve_serial
Search by title
Step10: See more about CiteScore
Step11: The last dataframe below lists the rank/percentile of this serial in each subject area it is assigned to across years
- More about subject area code in Scopus link
Step12: Retrieve by ISSN
Given a ISSN, we can use retrieve_serial
Step13: Affiliation | Python Code:
import pyscopus
pyscopus.__version__
from pyscopus import Scopus
key = 'YOUR_OWN_API'
scopus = Scopus(key)
Explanation: PyScopus: Quick Start
PyScopus is a Python wrapper of Elsevier Scopus API. More details of this Python package can be found here.
<hr>
Import Scopus class and initialize with your own API Key
End of explanation
search_df = scopus.search("KEY(topic modeling)", count=30)
print(search_df.head(10))
Explanation: <hr>
General Search
End of explanation
full_text_link_arr = search_df.full_text.values
full_text_link_arr
Explanation: Full text link
End of explanation
full_text = scopus.retrieve_full_text(full_text_link_arr[2])
start = 39500
full_text[start:start+10000]
Explanation: For those with full text links, you are able to get all the text by calling scopus.retrieve_full_text()
End of explanation
author_result_df = scopus.search_author("AUTHLASTNAME(Zuo) and AUTHFIRST(Zhiya) and AFFIL(Iowa)")
print(author_result_df)
Explanation: <hr>
Search for a specific author
End of explanation
zuo_info_dict = scopus.retrieve_author('57189222659')
zuo_info_dict.keys()
print('\n'.join(zuo_info_dict['affiliation-history'].name.values))
Explanation: Then we can retrieve more detailed info about the author we are looking for using his/her author_id:
End of explanation
zuo_pub_df = scopus.search_author_publication('57189222659')
zuo_pub_df[['title', 'cover_date', 'publication_name', 'scopus_id']].sort_values('cover_date').reset_index(drop=True)
Explanation: Search for his publications explicitly
End of explanation
pub_info = scopus.retrieve_abstract('85049552190', './')
pub_info
cat 85049552190.json
Explanation: Abstract retrieval
If the 2nd argument download_path is not given, the JSON response would not be saved
End of explanation
pub_citations_df = scopus.retrieve_citation(scopus_id_array=['85049552190', '85004154180'],
year_range=[2016, 2018])
print(pub_citations_df)
Explanation: <hr>
Note that Searching for articles in specific journals (venues) is not supported anymore since this can be easily done by general search.
<hr>
Citation count retrieval
Note that the use of citation overview API needs to be approved by Elsevier.
End of explanation
meta_df, citescore_df, sj_rank_df = scopus.search_serial('informetrics')
meta_df
Explanation: Serial Title Metadata
If interested in meta information and metrics at publication venue level (e.g., journal/conference), we can now use search_serial or retrieve_serial
Search by title
End of explanation
citescore_df
Explanation: See more about CiteScore
End of explanation
sj_rank_df.head(2)
Explanation: The last dataframe below lists the rank/percentile of this serial in each subject area it is assigned to across years
- More about subject area code in Scopus link
End of explanation
meta_df, citescore_df, sj_rank_df = scopus.retrieve_serial('2330-1643')
meta_df
citescore_df
sj_rank_df.head(2)
Explanation: Retrieve by ISSN
Given a ISSN, we can use retrieve_serial:
End of explanation
uiowa = scopus.retrieve_affiliation('60024324')
uiowa
Explanation: Affiliation
End of explanation |
9,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing and Pipelines
Step1: For cross-validated pipelines including scaling, we need to estimate the mean and standard deviation separately for each fold.
To do that, we build a pipeline.
Step2: Cross-validation with a pipeline
Step3: Grid Search with a pipeline | Python Code:
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
Explanation: Preprocessing and Pipelines
End of explanation
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
standard_scaler = StandardScaler()
standard_scaler.fit(X_train)
X_train_scaled = standard_scaler.transform(X_train)
svm = SVC()
svm.fit(X_train_scaled, y_train)
X_test_scaled = standard_scaler.transform(X_test)
svm.score(X_test_scaled, y_test)
pipeline = make_pipeline(StandardScaler(), SVC())
pipeline.fit(X_train, y_train)
pipeline.score(X_test, y_test)
pipeline.predict(X_test)
Explanation: For cross-validated pipelines including scaling, we need to estimate the mean and standard deviation separately for each fold.
To do that, we build a pipeline.
End of explanation
from sklearn.cross_validation import cross_val_score
cross_val_score(pipeline, X_train, y_train)
Explanation: Cross-validation with a pipeline
End of explanation
import numpy as np
from sklearn.grid_search import GridSearchCV
param_grid = {'svc__C': 10. ** np.arange(-3, 3),
'svc__gamma' : 10. ** np.arange(-3, 3)
}
grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid)
grid_pipeline.fit(X_train, y_train)
grid_pipeline.score(X_test, y_test)
Explanation: Grid Search with a pipeline
End of explanation |
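The `svc__C` / `svc__gamma` keys follow scikit-learn's `<step name>__<parameter>` convention; `make_pipeline` names each step after its lowercased class name, which you can inspect:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = make_pipeline(StandardScaler(), SVC())
step_names = sorted(pipe.named_steps)
# Parameters are then addressed as '<step name>__<parameter>', e.g. 'svc__C'.
```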
9,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supervised Learning
Let's begin our study of machine learning with the type of learning called Supervised Learning.
Supervised Learning
Step1: The first step is to load the dataset. It is available from the link
Step2: The dataset has 3 attributes
Step3: Our goal is to analyze the data and draw certain conclusions from it. Basically, we want to answer the following questions
Step4: Next, let's instantiate scikit-learn's Linear Regression model and train it with the data.
Step5: As mentioned before, the model learned, based on the dataset, values for $\beta_0$ and $\beta_1$. Let's look at the values it found.
Step6: These values represent $\beta_0$ and $\beta_1$ in the equation of a simple linear regression model, which takes only one attribute into account.
With these values it is possible to estimate how much will be sold given a certain spend on TV advertising. Moreover, the coefficient $\beta_1$ tells us more about the problem.
The value $0.047536640433$ indicates that each unit increase in TV advertising implies an increase of $0.047536640433$ in sales. In other words, every $1,000$ spent on TV is associated with an increase of 47.537 units in sales.
Let's use these values to estimate how much will be sold if we spend $50000$ on TV.
$y = 7.03259354913 + 0.047536640433 \times 50$
Step7: This way, we could predict sales of 9409 units.
However, our goal is not to do this by hand. The idea is to build the model and use it to estimate values. For that, we will use the predict method.
We can estimate for a single input
Step8: Or several
Step9: To better understand how Linear Regression works, let's visualize the model we built on a plot.
Step10: The red line represents the linear regression model built from the given data.
Evaluating the Model
To evaluate the model we will use a metric called $R^2$ (R-squared, or coefficient of determination).
(From Wikipedia)
The coefficient of determination, also called R², is a goodness-of-fit measure of a generalized linear statistical model, such as linear regression, with respect to the observed values. R² ranges from 0 to 1, indicating, as a percentage, how much of the observed values the model can explain. The higher the R², the more explanatory the model is and the better it fits the sample. For example, if a model's R² is 0.8234, it means that 82.34\% of the dependent variable can be explained by the regressors present in the model.
To better understand the metric, let's look at the following plot
Step11: On its own this value does not tell us much. However, it will be quite useful when we compare this model with others further on.
Multiple Linear Regression
We can extend the model seen earlier to work with more than one attribute, the so-called Multiple Linear Regression. Mathematically, we would have
Step12: The model built was
Step13: Evaluating the model, we get the $R^2$ value
Step14: Understanding the results
Let's analyze some results obtained with the two models built earlier. The first thing is to check the value of $\beta_1$. It is positive for the TV and Radio attributes and negative for Newspaper. This means that advertising spend is positively related to sales for the first two attributes, unlike what happens with Newspaper
Step15: The data represent height and weight information collected from men and women. Plotting this information gives
Step16: Suppose that, based on these data, we want to classify a new instance. Consider an instance with height 1.70 and weight 50. If we plot this point on the chart, we get (the new instance is marked with an x)
Step17: KNN will classify the new instance based on its nearest neighbors. In this case, the new instance would be classified as a woman. This comparison is made with the $k$ nearest neighbors.
For example, if we consider the 3 nearest neighbors and, of these 3, two are women and 1 is a man, the instance would be classified as a woman, since that is the class of the majority of the neighbors.
The distance between two points can be computed in several ways. The scikit-learn library lists a number of distance metrics that can be used. Let's take a new point and simulate what the KNN algorithm does.
Step18: Let's work with the point {'altura'
Step19: Once we have computed the distance from the new point to all other points in the dataset, we must check the $k$ nearest points and see which class predominates among them. Considering the 3 nearest neighbors ($k=3$), we get
Step20: Notice that the value of $k$ has a strong influence on how objects are classified. Further on we will use the model's accuracy to determine the best value of $k$. When $k$ is too small, the model is more sensitive to noise points in the dataset. When $k$ is too large, the neighborhood may include elements of another class. Note that we usually choose odd values of $k$ to avoid ties.
A special case of KNN is when we use K = 1. Consider the example in the following image
Step21: When instantiating the KNN model we must pass the n_neighbors parameter, which corresponds to the value $k$, the number of nearest neighbors that will be considered.
Step22: With the model instantiated, let's train it with the training data.
Step23: Just as we did with linear regression, we can use the trained model to predict data that has not yet been analyzed.
Step24: Evaluating and choosing the best model
Before we can choose the best model, we must first evaluate the candidates. A classification model is evaluated using a metric called accuracy. Accuracy is the model's hit rate. A model with $90\%$ accuracy got the class right in $90\%$ of the analyzed cases.
Note that choosing the best model depends on many factors, which is why we need to test different models on different datasets with different parameters. This will be covered in more depth later in our course. To keep things simple, let's work with two KNN models on the Iris dataset: the first with K=3 and the other with K = 10. | Python Code:
# Imports needed for the Linear Regression part
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
%matplotlib inline
Explanation: Supervised Learning
Let's begin our study of machine learning with the type of learning called Supervised Learning.
Supervised Learning: A training set of examples with the correct responses (targets) is provided and, based on this training set, the algorithm generalises to respond correctly to all possible inputs. This is also called learning from exemplars.
A supervised algorithm is a function that, given a set of labeled examples, builds a predictor. The labels assigned to the examples are defined over a known domain. If this domain is a set of nominal values, we are dealing with a classification problem. If this domain is an infinite, ordered set of values, we are dealing with a regression problem. The predictor that is built goes by different names depending on the task: we call it a classifier (for the first kind of label) or a regressor (for the second).
A classifier (or regressor) is also a function: it receives an unlabeled example and assigns it a label from the set of possible values. In a regression problem, this label lies within the real interval assumed by the problem. In a classification task, it is one of the defined classes.
We can define this formally as follows, according to (FACELI, et. al, 2011):
A formal definition would be: given a set of observed pairs $D={(x_i, f(x_i)), i = 1, ..., n}$, where $f$ represents an unknown function, a predictive (supervised) ML algorithm learns an approximation $f'$ of the unknown function $f$. This approximate function, $f'$, allows us to estimate the value of $f$ for new observations of $x$.
There are two situations for $f$:
Classification: $y_i = f(x_i) \in {c_1,...,c_m}$, i.e., $f(x_i)$ takes values in a discrete, unordered set;
Regression: $y_i = f(x_i) \in R$, i.e., $f(x_i)$ takes values in an infinite, ordered set of values.
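As a hedged illustration with made-up values, the two target types look like this in code: the same inputs paired either with discrete labels or with real-valued responses.

```python
# Toy illustration (made-up values): the same inputs X with the two
# kinds of target described above.
X = [[1.2], [3.4], [5.6]]
y_classification = ['c1', 'c2', 'c1']   # discrete, unordered labels
y_regression = [10.5, 20.1, 30.7]       # values from an ordered, infinite set

n_classes = len(set(y_classification))
print(n_classes)  # -> 2
```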
Linear Regression
We will show how regression works through a method called linear regression. This tutorial is based on the following materials:
Linear Regression tutorial: https://github.com/justmarkham/DAT4/blob/master/notebooks/08_linear_regression.ipynb
Slides on Linear Regression: http://pt.slideshare.net/perone/intro-ml-slides20min
Chapter 3 of the book "An Introduction to Statistical Learning", available at: http://www-bcf.usc.edu/~gareth/ISL/
The book "Inteligência Artificial - Uma Abordagem de Aprendizado de Máquina", available at: https://www.amazon.com.br/dp/8521618808/ref=cm_sw_r_tw_dp_x_MiGdybV5B9TTT
For our work, we will use the Advertising dataset made available by the book "An Introduction to Statistical Learning". This dataset consists of 3 attributes representing the advertising spend (in thousands of dollars) for a given product on TV, Radio and Newspaper. In addition, the number of sales (in thousands of units) is known for each instance. Let's explore the dataset below:
End of explanation
# Load the dataset and print its first ten records
data = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", index_col=0)
print(data.head(10))
Explanation: The first step is to load the dataset. It is available from: http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv. To load it we will use the Pandas library. Details of this library are beyond the scope of these tutorials, so we will simply use it without going into the operations performed. Basically, we will use it to load the dataset from files and to plot data on charts. More information can be found in the library's documentation.
End of explanation
fig, axs = plt.subplots(1, 3, sharey=True)
data.plot(kind='scatter', x='TV', y='sales', ax=axs[0], figsize=(16, 8))
data.plot(kind='scatter', x='radio', y='sales', ax=axs[1])
data.plot(kind='scatter', x='newspaper', y='sales', ax=axs[2])
Explanation: The dataset has 3 attributes: TV, Radio and Newspaper. Each corresponds to the number of dollars spent on advertising in that medium for a specific product. The response (Sales) is the number of units sold of each product. This dataset has 200 instances.
For better visualization, let's plot the information in the dataset.
End of explanation
# Load the training data and labels
feature_cols = ['TV']
X = data[feature_cols] # Training data
y = data.sales # Labels for the training data
Explanation: Our goal is to analyze the data and draw certain conclusions from it. Basically, we want to answer the following questions:
Based on these data, how should we spend the advertising budget in the future?
In other words:
Is there a relationship between money spent on advertising and the number of sales?
How strong is that relationship?
Which advertising media contribute to sales?
What is the effect of each advertising medium on sales?
Given a specific advertising spend, can we predict how much will be sold?
To explore these and other questions, we will initially use Simple Linear Regression.
Simple Linear Regression
As the name says, simple linear regression is a very (very++) simple method for predicting values (Y) from a single variable (X). This model assumes there is an approximately linear relationship between X and Y. Mathematically, we can write this relationship with the following function:
$Y \approx \beta_0 + \beta_1X$, where $\approx$ can be read as approximately.
$\beta_0$ and $\beta_1$ are two unknown constants representing the intercept of the line with the vertical axis ($\beta_0$) and the slope (angular coefficient) of the line ($\beta_1$). The two constants are known as the coefficients or parameters of the model. The purpose of linear regression is to use the known dataset to estimate the values of these two variables and define the approximate model:
$\hat{y} = \hat{\beta_0} + \hat{\beta_1}x$,
where $\hat{y}$ indicates an estimated value of $Y$ given $X = x$. With this equation we can predict, in this case, the sales of a given product based on a specific spend on TV advertising.
But how can we estimate these values?
Estimating the Values
In practice, $\beta_0$ and $\beta_1$ are unknown. To make estimates, we must determine the values of these parameters. For that, we will use the already known data.
Consider
$(x_1,y_1), (x_2,y_2), ..., (x_n, y_n)$ $n$ pairs of instances observed in a dataset. The first value is an observation of $X$ and the second of $Y$. In the advertising dataset, these are the 200 values seen earlier.
The goal when building the linear regression model is to estimate the values of $\beta_0$ and $\beta_1$ such that the resulting linear model represents the given data as well as possible. In other words, we want to find the values of the coefficients such that the resulting line is as close as possible to the data used.
Basically, we will consider several lines and analyze which one comes closest to the given data. There are several ways to measure this "closeness". One of them is the RSS (residual sum of squares), represented by the equation:
$\sum_{i=1}^{N}{(\hat{y_i}-y_i)^2}$, where $\hat{y_i}$ is the estimated value of y and $y_i$ the actual value.
The following figure presents an example showing the estimated values and the residual differences.
The red points represent the observed data; the blue line, the model built; and the gray lines, the residual difference between what was estimated and the actual value.
Let's estimate these parameters using scikit-learn.
Applying the linear regression model
The first step is to separate the data (features) from the classes (labels) in the data that will be used to train our model.
End of explanation
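To make the estimation concrete, here is a minimal pure-Python sketch (toy numbers, not the Advertising data, with a hypothetical helper name) of the closed-form least-squares solution that minimizes the RSS: $\hat{\beta_1} = \sum(x_i-\bar{x})(y_i-\bar{y}) / \sum(x_i-\bar{x})^2$ and $\hat{\beta_0} = \bar{y} - \hat{\beta_1}\bar{x}$.

```python
# Closed-form least-squares estimates for simple linear regression.
def fit_simple_ols(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # beta_1 = covariance(x, y) / variance(x); beta_0 follows from the means
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    den = sum((x - x_mean) ** 2 for x in xs)
    beta_1 = num / den
    beta_0 = y_mean - beta_1 * x_mean
    return beta_0, beta_1

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 1 + 2x
b0, b1 = fit_simple_ols(xs, ys)
print(b0, b1)              # -> 1.0 2.0
```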
lm = LinearRegression() # Instantiate the model
lm.fit(X, y) # Train it with the training data
Explanation: Next, let's instantiate scikit-learn's Linear Regression model and train it with the data.
End of explanation
# Print beta_0
print("Value of Beta_0: " + str(lm.intercept_))
# Print beta_1
print("Value of Beta_1: " + str(lm.coef_[0]))
Explanation: As mentioned before, the model learned, based on the dataset, values for $\beta_0$ and $\beta_1$. Let's look at the values it found.
End of explanation
7.03259354913+0.047536640433*50
Explanation: These values represent $\beta_0$ and $\beta_1$ in the equation of a simple linear regression model, which takes only one attribute into account.
With these values it is possible to estimate how much will be sold given a certain spend on TV advertising. Moreover, the coefficient $\beta_1$ tells us more about the problem.
The value $0.047536640433$ indicates that each unit increase in TV advertising implies an increase of $0.047536640433$ in sales. In other words, every $1,000$ spent on TV is associated with an increase of 47.537 units in sales.
Let's use these values to estimate how much will be sold if we spend $50000$ on TV.
$y = 7.03259354913 + 0.047536640433 \times 50$
End of explanation
lm.predict([[50]])
Explanation: This way, we could predict sales of 9409 units.
However, our goal is not to do this by hand. The idea is to build the model and use it to estimate values. For that, we will use the predict method.
We can estimate for a single input:
End of explanation
lm.predict([[50], [200], [10]])
Explanation: Or several:
End of explanation
'''
The code below makes predictions for the smallest and largest values of X in the training set. These values
are used to build a line that is plotted over the training data.
'''
X_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]}) # Smallest and largest values of X in the training set
preds = lm.predict(X_new) # Predictions for these values
data.plot(kind='scatter', x='TV', y='sales') # Plot the training data
plt.plot(X_new, preds, c='red', linewidth=2) # Plot the line
Explanation: To better understand how Linear Regression works, let's visualize the model we built on a plot.
End of explanation
lm.score(X, y)
Explanation: The red line represents the linear regression model built from the given data.
Evaluating the Model
To evaluate the model we will use a metric called $R^2$ (R-squared, or coefficient of determination).
(From Wikipedia)
The coefficient of determination, also called R², is a goodness-of-fit measure of a generalized linear statistical model, such as linear regression, with respect to the observed values. R² ranges from 0 to 1, indicating, as a percentage, how much of the observed values the model can explain. The higher the R², the more explanatory the model is and the better it fits the sample. For example, if a model's R² is 0.8234, it means that 82.34\% of the dependent variable can be explained by the regressors present in the model.
To better understand the metric, let's look at the following plot:
*Image source: https://github.com/justmarkham/DAT4/ *
Note that the function shown in red fits the data better than the blue and green lines. Visually, we can see that the red curve indeed describes the distribution of the plotted data better.
Let's compute the R-squared value for the model we built using the score method, which takes the training data as parameters.
End of explanation
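As a hedged aside with toy numbers (hypothetical helper name), the definition above can be computed directly as $R^2 = 1 - RSS/TSS$, where TSS is the total sum of squares around the mean:

```python
# R^2 = 1 - RSS/TSS: fraction of the variance in y explained by the predictions.
def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    rss = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    tss = sum((yt - mean_y) ** 2 for yt in y_true)
    return 1 - rss / tss

print(r_squared([3, 5, 7], [2.9, 5.2, 6.9]))
```

A perfect fit gives exactly 1.0, since the RSS vanishes.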
# Load X and y from the dataset
feature_cols = ['TV','radio','newspaper']
X = data[feature_cols]
y = data.sales
# Instantiate and train the linear regression model
lm = LinearRegression()
lm.fit(X, y)
# Print the coefficients found
print("Value of Beta_0: ")
print(str(lm.intercept_))
print()
print("Values of Beta_1, Beta_2, ..., Beta_n: ")
print(list(zip(feature_cols, lm.coef_)))
Explanation: On its own this value does not tell us much. However, it will be quite useful when we compare this model with others further on.
Multiple Linear Regression
We can extend the model seen earlier to work with more than one attribute, the so-called Multiple Linear Regression. Mathematically, we would have:
$y \approx \beta_0 + \beta_1 x_1 + ... + \beta_n x_n$
Each $x$ represents an attribute, and each attribute has its own coefficient. For our dataset, we would have:
$y \approx \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$
Let's build our model for this case:
End of explanation
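Before running scikit-learn, here is a hedged sketch of what multiple linear regression computes under the hood: the normal-equation solution $\beta = (X^TX)^{-1}X^Ty$, on tiny made-up data (not the Advertising set).

```python
import numpy as np

# Normal equations on a small synthetic problem with known coefficients
# y = 1 + 2*x1 + 0.5*x2, so the recovered beta should be [1, 2, 0.5].
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1]
Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend an intercept column
beta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(beta)
```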
lm.predict([[100, 25, 25], [200, 10, 10]])
Explanation: The model built was:
$y \approx 2.93888936946 + 0.04576464546 \times TV + 0.18853001692 \times Radio - 0.00103749304 \times Newspaper$
Just as we did in the first example, we can use the predict method to predict unknown values.
End of explanation
lm.score(X, y)
Explanation: Evaluating the model, we get the $R^2$ value:
End of explanation
data = pd.read_csv("http://www.data2learning.com/datasets/basehomemulher.csv", index_col=0)
data
Explanation: Understanding the results
Let's analyze some results obtained with the two models built earlier. The first thing is to check the value of $\beta_1$. It is positive for the TV and Radio attributes and negative for Newspaper. This means that advertising spend is positively related to sales for the first two attributes, unlike what happens with Newspaper: there, spend is negatively associated with sales.
Another thing we can notice is that R-squared increased when we increased the number of attributes. This usually happens with this metric. Basically, we can conclude that this last model has a higher R-squared value than the previous model, which considered only TV as an attribute. This means that this last model provides a better "fit" to the given data.
However, R-squared is not the best metric for evaluating such models. If we perform a deeper statistical analysis (such analysis is beyond the scope of this course; details can be found here), we will see that the Newspaper attribute does not (statistically) influence total sales. In theory, we could discard this attribute. However, if we compute the R-squared value for a model without Newspaper and for the model with Newspaper, the second will be higher than the first.
This task is left as an exercise ;)
KNN: k-Nearest Neighbors
At the beginning of this tutorial we approached supervised learning from two points of view. The first was regression: we showed how to use linear regression to predict values in an interval. The second is the problem of classifying instances into classes. To illustrate this problem, we will work with KNN, one of the simplest classification techniques.
The basic idea of KNN is that we can classify an unknown instance based on information from its nearest neighbors. To do this, we view the data as points in a Cartesian coordinate system and use the distance between points to identify which ones are closest.
To understand KNN a bit more, watch this video
To get started, let's look at the following dataset:
End of explanation
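The observation that R-squared never drops when a feature is added can be previewed on synthetic data. This is a hedged sketch (made-up data, not the Advertising set, and using plain scikit-learn rather than the deeper statistical analysis the text alludes to):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Adding an irrelevant feature never decreases training R^2,
# even though the feature carries no signal: one reason R^2 alone can mislead.
rng = np.random.RandomState(0)
X_useful = rng.rand(100, 2)
y = 3 * X_useful[:, 0] + 2 * X_useful[:, 1] + rng.normal(0, 0.1, 100)
X_plus_noise = np.hstack([X_useful, rng.rand(100, 1)])  # irrelevant column

r2_small = LinearRegression().fit(X_useful, y).score(X_useful, y)
r2_big = LinearRegression().fit(X_plus_noise, y).score(X_plus_noise, y)
print(r2_big >= r2_small)
```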
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
Explanation: The data represent height and weight information collected from men and women. If we plot this information, we get:
End of explanation
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
plt.plot([50], [1.70], 'x', c='green')
Explanation: Suppose that, based on these data, we want to classify a new instance. Consider an instance with height 1.70 and weight 50. If we plot this point on the chart, we get (the new instance is marked with an x):
End of explanation
colors = {0:'red', 1:'blue'}
# Plot the training data
data.plot(kind='scatter', x='peso', y='altura',c=data['classe'].apply(lambda x: colors[x]))
plt.plot([77], [1.68], 'x', c='green')
Explanation: KNN will classify the new instance based on its nearest neighbors. In this case, the new instance would be classified as a woman. This comparison is made with the $k$ nearest neighbors.
For example, if we consider the 3 nearest neighbors and, of these 3, two are women and 1 is a man, the instance would be classified as a woman, since that is the class of the majority of the neighbors.
The distance between two points can be computed in several ways. The scikit-learn library lists a number of distance metrics that can be used. Let's take a new point and simulate what the KNN algorithm does.
End of explanation
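The 2-of-3 vote described above is just a majority count. A minimal sketch (toy labels):

```python
from collections import Counter

# Classes of the 3 nearest neighbours; the majority class wins.
neighbor_classes = ['mulher', 'mulher', 'homem']
predicted = Counter(neighbor_classes).most_common(1)[0][0]
print(predicted)  # -> mulher
```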
import math
# Compute the euclidean distance between two points
def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow((instance1[x] - instance2[x]), 2)
    return math.sqrt(distance)
# For display only: convert the numeric values to String labels
def convert_label(value):
    if value == 0.0: return 'Mulher'
    else: return 'Homem'
# 0 = mulher (woman), 1 = homem (man)
for index, row in data.iterrows():
    print(convert_label(row['classe']), '%0.2f' % euclideanDistance([row['peso'], row['altura']], [77, 1.68], 2))
Explanation: Let's work with the point {'altura': 1.68, 'peso': 77} and compute its distance to all the other points. In the example we will use the euclidean distance: $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$. To keep things simple, we will use our own implementation of the euclidean distance.
End of explanation
from collections import Counter

# Distances from every point in the dataset to the new instance [peso=77, altura=1.68]
new_point = [77, 1.68]
distances = []
for index, row in data.iterrows():
    d = euclideanDistance([row['peso'], row['altura']], new_point, 2)
    distances.append((d, convert_label(row['classe'])))

# Sort by distance and take a majority vote among the k nearest neighbors
for k in (3, 5):
    neighbors = sorted(distances)[:k]
    votes = Counter(label for _, label in neighbors)
    print('k =', k, '->', votes.most_common(1)[0][0])
Explanation: Once we have computed the distance from the new point to all other points in the dataset, we must check the $k$ nearest points and see which class predominates among them. Considering the 3 nearest neighbors ($k=3$), we get:
Homem: 10.0
Homem: 14.0
Mulher: 15.0
So the selected instance would be classified as Homem (man).
What if we considered $k=5$?
Homem: 10.0
Homem: 14.0
Mulher: 15.0
Mulher: 17.0
Mulher: 24.0
In this case, the instance would be classified as Mulher (woman).
End of explanation
# Import the dataset
from sklearn.datasets import load_iris
data_iris = load_iris()
X = data_iris.data
y = data_iris.target
Explanation: Notice that the value of $k$ has a strong influence on how objects are classified. Further on we will use the model's accuracy to determine the best value of $k$. When $k$ is too small, the model is more sensitive to noise points in the dataset. When $k$ is too large, the neighborhood may include elements of another class. Note that we usually choose odd values of $k$ to avoid ties.
A special case of KNN is when we use K = 1. Consider the example in the following image:
Training dataset
With K = 1, we can build a classification map, as shown below:
Classification map for KNN (K=1)
Image Credits: Data3classes, Map1NN, Map5NN by Agor153. Licensed under CC BY-SA 3.0
A new instance will be classified according to the region in which it falls.
To wrap up, two points are worth noting. The first is that in some cases it is necessary to normalize the values in the training set because of the discrepancy between attribute scales. For example, we may have height in the range 1.50 to 1.90, weight in the range 60 to 100, and salary in the range 800 to 1500. This difference in scales can cause the distance measures to be dominated by a single attribute.
The other point concerns the advantages and disadvantages of this technique. The main advantage of KNN is that it is a simple model to implement. However, it has a certain computational cost in calculating the distances between points. Another problem is that classification quality can be severely harmed by the presence of noise in the dataset.
Implementing KNN with scikit-learn
Let's implement KNN using scikit-learn and perform classification tasks on the Iris dataset.
End of explanation
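On the normalization point above, a minimal sketch (toy weights, hypothetical helper name) of min-max scaling, which maps each attribute to [0, 1] so no single attribute dominates the distances:

```python
# Min-max scaling: attributes on different scales (e.g. altura ~1.5-1.9
# vs peso ~60-100) become comparable contributors to euclidean distances.
def min_max_scale(column):
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

pesos = [60, 70, 80, 100]
print(min_max_scale(pesos))   # -> [0.0, 0.25, 0.5, 1.0]
```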
# Import and instantiate the KNN model with k = 1
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
Explanation: When instantiating the KNN model we must pass the n_neighbors parameter, which corresponds to the value $k$, the number of nearest neighbors that will be considered.
End of explanation
knn.fit(X, y)
Explanation: With the model instantiated, let's train it with the training data.
End of explanation
# The predict method returns the class assigned to each instance
predict_value = knn.predict([[3, 5, 4, 2],[1,2,3,4]])
print(predict_value)
print(data_iris.target_names[predict_value[0]])
print(data_iris.target_names[predict_value[1]])
Explanation: Just as we did with linear regression, we can use the trained model to predict data that has not yet been analyzed.
End of explanation
knn_3 = KNeighborsClassifier(n_neighbors=3)
knn_3.fit(X, y)
knn_10 = KNeighborsClassifier(n_neighbors=10)
knn_10.fit(X, y)
accuracy_3 = knn_3.score(X, y)
accuracy_10 = knn_10.score(X, y)
print('Accuracy with k = 3: ', '%0.4f'% accuracy_3)
print('Accuracy with k = 10: ', '%0.4f'% accuracy_10)
Explanation: Evaluating and choosing the best model
Before we can choose the best model, we must first evaluate the candidates. A classification model is evaluated using a metric called accuracy. Accuracy is the model's hit rate. A model with $90\%$ accuracy got the class right in $90\%$ of the analyzed cases.
Note that choosing the best model depends on many factors, which is why we need to test different models on different datasets with different parameters. This will be covered in more depth later in our course. To keep things simple, let's work with two KNN models on the Iris dataset: the first with K=3 and the other with K = 10.
End of explanation |
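The accuracy metric described above reduces to a hit rate. A minimal pure-Python sketch (hypothetical helper name, toy labels):

```python
# Accuracy = fraction of predictions that match the true labels.
def accuracy(y_true, y_pred):
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))   # -> 0.75
```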