# jorisvandenbossche/geopandas | doc/source/gallery/choro_legends.ipynb | bsd-3-clause
import geopandas
from geopandas import read_file
import mapclassify
mapclassify.__version__
import libpysal
libpysal.__version__
libpysal.examples.available()
_ = libpysal.examples.load_example('South')
pth = libpysal.examples.get_path('south.shp')
df = read_file(pth)
"""
Explanation: Choro legends
End of explanation
"""
%matplotlib inline
ax = df.plot(column='HR60', scheme='QUANTILES', k=4,
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5)})
labels = [t.get_text() for t in ax.get_legend().get_texts()]
labels
q4 = mapclassify.Quantiles(df.HR60, k=4)
q4
labels == q4.get_legend_classes()
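What `scheme='QUANTILES'` with `k=4` does can be sketched with the standard library alone. This is a conceptual illustration on hypothetical sample data, not mapclassify's exact algorithm (mapclassify has its own interpolation and tie handling):

```python
import statistics

# hypothetical sample data standing in for df.HR60
rates = [0.0, 2.5, 4.1, 5.0, 6.3, 7.8, 9.2, 11.0, 14.5, 20.1]

# quartile breaks: three interior cut points splitting the data into k=4 classes
breaks = statistics.quantiles(rates, n=4)

# format the class intervals the way a legend would
labels = [f"{lo:.2f} - {hi:.2f}"
          for lo, hi in zip([min(rates)] + breaks, breaks + [max(rates)])]
print(labels)
```

The middle break coincides with the median, just as `mapclassify.Quantiles` places half the observations below its second cut point.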
"""
Explanation: Default legend formatting
End of explanation
"""
ax = df.plot(column='HR60', scheme='QUANTILES', k=4,
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5)},
             )
ax = df.plot(column='HR60', scheme='QUANTILES', k=4,
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5), 'fmt': "{:.4f}"})
ax = df.plot(column='HR60', scheme='QUANTILES', k=4,
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5), 'fmt': "{:.0f}"})
"""
Explanation: Note that in this case, the first interval is closed on the minimum value in the dataset. The other intervals have an open lower bound. This can now be displayed in the legend using legend_kwds={'interval': True}.
Overriding numerical format
End of explanation
"""
ax = df.plot(column='HR60', scheme='BoxPlot',
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5),
                          'fmt': "{:.0f}"})
bp = mapclassify.BoxPlot(df.HR60)
bp
bp.get_legend_classes(fmt="{:.0f}")
"""
Explanation: The new legend_kwds arg fmt takes a string to set the numerical formatting.
When first class lower bound < y.min()
End of explanation
"""
ax = df.plot(column='HR60', scheme='BoxPlot',
             cmap='BuPu', legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5),
                          'interval': True})
"""
Explanation: With some classifiers, the user should be aware that the lower (upper) bound of the first (last) interval is not equal to the minimum (maximum) of the attribute values. This is useful for detecting extreme values and highly skewed distributions.
Show interval bracket
End of explanation
"""
ax = df.plot(column='STATE_NAME', categorical=True, legend=True,
             legend_kwds={'loc': 'center left', 'bbox_to_anchor': (1, 0.5),
                          'fmt': "{:.0f}"})  # fmt is ignored for categorical data
"""
Explanation: Categorical Data
End of explanation
"""
# ethen8181/machine-learning | python/class.ipynb | mit
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
%watermark -a 'Ethen' -d -t -v
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Working-with-Python-Classes" data-toc-modified-id="Working-with-Python-Classes-1"><span class="toc-item-num">1 </span>Working with Python Classes</a></span><ul class="toc-item"><li><span><a href="#Public,-Private,-Protected" data-toc-modified-id="Public,-Private,-Protected-1.1"><span class="toc-item-num">1.1 </span>Public, Private, Protected</a></span></li><li><span><a href="#Class-Decorators" data-toc-modified-id="Class-Decorators-1.2"><span class="toc-item-num">1.2 </span>Class Decorators</a></span><ul class="toc-item"><li><span><a href="#@Property" data-toc-modified-id="@Property-1.2.1"><span class="toc-item-num">1.2.1 </span>@Property</a></span></li><li><span><a href="#@classmethod-and-@staticmethod" data-toc-modified-id="@classmethod-and-@staticmethod-1.2.2"><span class="toc-item-num">1.2.2 </span>@classmethod and @staticmethod</a></span></li></ul></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
class A:
    def __init__(self):
        self.__priv = "I am private"
        self._prot = "I am protected"
        self.pub = "I am public"
x = A()
print(x.pub)
# Whenever we assign or retrieve any object attribute
# Python searches it in the object's __dict__ dictionary
print(x.__dict__)
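The double-underscore attribute is not gone, only renamed: Python mangles `__priv` to `_A__priv`, which the `__dict__` printout reveals. A minimal sketch (repeating the class definition so the snippet stands alone):

```python
# repeating the class definition so this snippet stands alone
class A:
    def __init__(self):
        self.__priv = "I am private"

x = A()

# direct access fails: the name was mangled at class-creation time
try:
    x.__priv
except AttributeError as err:
    print(err)

# the mangled name _<ClassName>__<attr> is still reachable
print(x._A__priv)
```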
"""
Explanation: Working with Python Classes
Encapsulation is the bundling of data with the methods that operate on that data. It is often accomplished by providing two kinds of methods for attributes: getter methods, which retrieve or access the values of attributes without changing them, and setter methods, which change the values of attributes.
Public, Private, Protected
There are two ways to restrict the access to class attributes:
protected. First, we can prefix an attribute name with a leading underscore "_". This marks the attribute as protected. It tells users of the class not to use this attribute unless they are writing a subclass.
private. Second, we can prefix an attribute name with two leading underscores "__". The attribute is now inaccessible and invisible from outside. It's not possible to read or write those attributes except inside the class definition itself.
End of explanation
"""
class Celsius:
    def __init__(self, temperature=0):
        self.set_temperature(temperature)

    def to_fahrenheit(self):
        return (self.get_temperature() * 1.8) + 32

    def get_temperature(self):
        return self._temperature

    def set_temperature(self, value):
        if value < -273:
            raise ValueError('Temperature below -273 is not possible')
        self._temperature = value
# c = Celsius(-277) # this returns an error
c = Celsius(37)
c.get_temperature()
"""
Explanation: When the Python compiler sees a private attribute, it transforms the name to _[Class name]__[private attribute name]. However, this still does not prevent the end-user from accessing the attribute. Thus in Python land, it is more common to use public and protected attributes, write proper docstrings, and assume that everyone is a consenting adult, i.e. won't touch the protected attributes unless they know what they are doing.
Class Decorators
@property The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them.
@classmethod To add additional constructor to the class.
@staticmethod To attach functions to classes so people won't misuse them in wrong places.
@Property
Let's assume one day we decide to make a class that stores the temperature in degrees Celsius. The temperature will be kept in a non-public attribute, so our end-users won't have direct access to it.
The class will also implement a method to convert the temperature into degrees Fahrenheit. We also want to enforce a value constraint on the temperature, so that it cannot go below -273 degrees Celsius. One way of doing this is to define getter and setter methods to manipulate it.
End of explanation
"""
class Celsius:
    def __init__(self, temperature=0):
        self._temperature = temperature

    def to_fahrenheit(self):
        return (self.temperature * 1.8) + 32

    # access the value like an attribute instead of a method call
    @property
    def temperature(self):
        return self._temperature

    # assignment goes through an extra layer of error checking
    @temperature.setter
    def temperature(self, value):
        if value < -273:
            raise ValueError('Temperature below -273 is not possible')
        print('Setting value')
        self._temperature = value
c = Celsius(37)
# much easier to access than the getter/setter way
print(c.temperature)
# note that you can still access the non-public attribute directly
# and bypass the temperature check,
# but then it's the user's fault, not yours
c._temperature = -300
print(c._temperature)
# assigning through the property will raise a ValueError
# c.temperature = -300
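A property defined without a setter is effectively read-only: assigning to it raises an AttributeError. A minimal sketch (the class name and the Kelvin conversion are invented for illustration, not part of the original):

```python
class Celsius2:
    def __init__(self, temperature=0):
        self._temperature = temperature

    @property
    def kelvin(self):
        # derived, read-only view: no .setter is defined
        return self._temperature + 273.15

c2 = Celsius2(37)
print(c2.kelvin)

try:
    c2.kelvin = 300  # no setter -> AttributeError
except AttributeError as err:
    print(err)
```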
"""
Explanation: Now, instead, the property way: we define @property for the getter and @[attribute name].setter for the setter.
End of explanation
"""
print(dict.fromkeys(['raymond', 'rachel', 'mathew']))
import time
class Date:
    # Primary constructor
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    # Alternate constructor
    @classmethod
    def today(cls):
        t = time.localtime()
        return cls(t.tm_year, t.tm_mon, t.tm_mday)
# Primary
a = Date(2012, 12, 21)
print(a.__dict__)
# Alternate
b = Date.today()
print(b.__dict__)
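Any number of alternate constructors can coexist on one class. A stand-alone sketch of the same pattern with one more constructor; the `from_iso` name and the 'YYYY-MM-DD' format are illustrative assumptions, not part of the original:

```python
import time

# a stand-alone copy of the Date pattern with an extra alternate constructor
class Date2:
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    @classmethod
    def today(cls):
        t = time.localtime()
        return cls(t.tm_year, t.tm_mon, t.tm_mday)

    # hypothetical alternate constructor parsing a 'YYYY-MM-DD' string
    @classmethod
    def from_iso(cls, text):
        year, month, day = (int(part) for part in text.split('-'))
        return cls(year, month, day)

e = Date2.from_iso('2012-12-21')
print(e.__dict__)
```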
"""
Explanation: @classmethod and @staticmethod
@classmethod creates alternative constructors for the class. An example of this behavior is the different ways to construct a dictionary, such as dict.fromkeys above.
End of explanation
"""
class NewDate(Date):
    pass
# Creates an instance of Date (cls=Date)
c = Date.today()
print(c.__dict__)
# Creates an instance of NewDate (cls=NewDate)
d = NewDate.today()
print(d.__dict__)
"""
Explanation: The cls argument is critical, as it holds the class itself rather than an instance. This makes classmethods work with inheritance.
End of explanation
"""
class Date:
    # Primary constructor
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    # Alternate constructor
    @classmethod
    def today(cls):
        t = time.localtime()
        return cls(t.tm_year, t.tm_mon, t.tm_mday)

    # the logic belongs with the date class
    @staticmethod
    def show_tomorrow_date():
        t = time.localtime()
        # note: naive, tm_mday + 1 does not roll over at month ends
        return t.tm_year, t.tm_mon, t.tm_mday + 1
Date.show_tomorrow_date()
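Note that show_tomorrow_date is naive: tm_mday + 1 overflows at month ends (e.g. January 31 would yield day 32). A sketch of a rollover-safe version using the standard library datetime module:

```python
from datetime import date, timedelta

def show_tomorrow_date():
    # timedelta handles month and year rollover correctly
    tomorrow = date.today() + timedelta(days=1)
    return tomorrow.year, tomorrow.month, tomorrow.day

print(show_tomorrow_date())
```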
"""
Explanation: The purpose of @staticmethod is to attach functions to classes. We do this to improve the findability of the function and to make sure that people are using the function in the appropriate context.
End of explanation
"""
# ES-DOC/esdoc-jupyterhub | notebooks/csiro-bom/cmip6/models/sandbox-3/land.ipynb | gpl-3.0
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csiro-bom', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIRO-BOM
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:56
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo depends on.*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
jurajmajor/ltl3tela | Experiments/Evaluation_FOSSACS19.ipynb | gpl-3.0 | from ltlcross_runner import LtlcrossRunner
from IPython.display import display
import pandas as pd
import spot
import sys
spot.setup(show_default='.a')
pd.options.display.float_format = '{: .0f}'.format
pd.options.display.latex.multicolumn_format = 'c'
"""
Explanation: Experiments for FOSSACS'19
Paper: LTL to Smaller Self-Loop Alternating Automata
Authors: František Blahoudek, Juraj Major, Jan Strejček
End of explanation
"""
import os
os.environ['SPOT_HOA_TOLERANT']='TRUE'
%%bash
ltl3ba -v
ltl3tela -v
ltl2tgba --version
# If there are already files with results, and rerun is False, ltlcross is not run again.
rerun = False
"""
Explanation: Hack that allows parsing of ltl3ba automata without universal branching.
End of explanation
"""
def is_mergable(f, level=3):
'''Runs ltl3tela with the -m argument to detect
whether the given formula `f` is mergable.
level 1: F-mergeable
level 2: G-mergeable
level 3: F,G-mergeable
'''
if level == 3:
return is_mergable(f,2) or is_mergable(f,1)
res = !ltl3tela -m{level} -f "{f}"
return res[0] == '1'
is_mergable('FGa',2)
"""
Explanation: $\newcommand{\F}{\mathsf{F}}$
$\newcommand{\G}{\mathsf{G}}$
$\newcommand{\FG}{\mathsf{F,G}}$
Formulae
Detect mergable formulae
End of explanation
"""
tmp_file = 'formulae/tmp.ltl'
lit_pref = 'formulae/literature'
lit_file = lit_pref + '.ltl'
lit_merg_file = 'formulae/lit.ltl'
# The well-known set of formulae from literature
!genltl --dac-patterns --eh-patterns --sb-patterns --beem-patterns --hkrss-patterns > $tmp_file
# We add also negation of all the formulae.
# We remove all M and W operators as LTL3BA does not understand them.
# The `relabel-bool` option renames `G(a | b)` into `G a`.
!ltlfilt --negate $tmp_file | \
ltlfilt $tmp_file -F - --unique -r3 --remove-wm --relabel-bool=abc | \
ltlfilt -v --equivalent-to=0 | ltlfilt -v --equivalent-to=1> $lit_file
"""
Explanation: Literature
End of explanation
"""
lit_f_mergable = [is_mergable(l,1) for l in open(lit_file)]
lit_mergable = [is_mergable(l,3) for l in open(lit_file)]
counts = '''Out of {} formulae known from literature, there are:
{} with F-merging,
{} with F,G-merging, and
{} with no merging possibility
'''
print(counts.format(
len(lit_mergable),
lit_f_mergable.count(True),
lit_mergable.count(True),
lit_mergable.count(False)))
with open(lit_merg_file,'w') as out:
for l in open(lit_file):
if is_mergable(l):
out.write(l)
"""
Explanation: Mergeable formulae
We first count the numbers of formulae with $\F$- and $\FG$-merging. After that we save the $\FG$-mergeable formulae into a separate file.
End of explanation
"""
def generate(n=100,func=(lambda x: True),filename=None,priorities='M=0,W=0,xor=0',ap=['a','b','c','d','e']):
if filename is not None:
if filename is sys.stdout:
file_h = filename
else:
file_h = open(filename,'w')
f = spot.randltl(ap,
ltl_priorities=priorities,
simplify=3,tree_size=15).relabel_bse(spot.Abc)\
.unabbreviate('WM')
i = 0
printed = set()
while(i < n):
form = next(f)
if form in printed:
continue
if func(form) and not form.is_tt() and not form.is_ff():
if filename is not None:
print(form,file=file_h)
printed.add(form)
i += 1
return list(printed)
def measure_rand(n=1000,priorities='M=0,W=0,xor=0',ap=['a','b','c','d','e']):
rand = generate(n,priorities=priorities,ap=ap)
rand_mergable = [is_mergable(l,3) for l in rand]
rand_f_mergable = [is_mergable(l,1) for l in rand]
counts = '''Out of {} random formulae, there are:
{} with F-merging,
{} with F,G-merging, and
{} with no merging possibility
'''
print(counts.format(
len(rand_mergable),
rand_f_mergable.count(True),
rand_mergable.count(True),
rand_mergable.count(False)))
return rand, rand_f_mergable, rand_mergable
def get_priorities(n):
'''Returns the `priority string` for ltlcross
where `n` is the priority of both F and G. The
operators W,M,xor have priority 0 and the rest
has the priority 1.
'''
return 'M=0,W=0,xor=0,G={0},F={0}'.format(n)
measure_rand();
measure_rand(priorities=get_priorities(2));
rand4 = measure_rand(priorities=get_priorities(4))
randfg = measure_rand(priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=2,G=2')
"""
Explanation: Random
End of explanation
"""
fg_priorities = [1,2,4]
!mkdir -p formulae
#generate(total_r,filename=fg_f,priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=3,G=3');
for i in fg_priorities:
generate(1000,func=lambda x:is_mergable(x,3),
filename='formulae/rand{}.ltl'.format(i),
priorities=get_priorities(i))
generate(1000,func=lambda x:is_mergable(x,3),
filename='formulae/randfg.ltl'.format(i),
priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=2,G=2');
"""
Explanation: Generate 1000 mergeable formulae with priorities 1,2,4
End of explanation
"""
resfiles = {}
runners = {}
### Tools' setting ###
# a dict of a form (name : ltlcross cmd)
ltl3tela_shared = "ltl3tela -p1 -t0 -n0 -a3 -f %f "
#end = " | awk '!p;/^--END--/{p=1}' > %O"
end = " > %O"
tools = {"FG-merging" : ltl3tela_shared + end,
#"FG-merging+compl" : ltl3tela_shared + "-n1" + end,
"F-merging" : ltl3tela_shared + "-G0" + end,
#"G-merging" : ltl3tela_shared + "-F0" + end,
"basic" : ltl3tela_shared + "-F0 -G0" + end,
"LTL3BA" : "ltl3ba -H1 -f %s" + end,
}
### Order in which we want to sort the translations
MI_order = ["LTL3BA",
"basic","F-merging","FG-merging"]
### Files with measured statistics ###
resfiles['lit'] = 'MI_alt-lit.csv'
resfiles['randfg'] = 'MI_alt-randfg.csv'
for i in fg_priorities:
resfiles['rand{}'.format(i)] = 'MI_alt-rand{}.csv'.format(i)
### Measures to be measured
cols = ["states","transitions","nondet_states","nondet_aut","acc"]
for name,rfile in resfiles.items():
runners[name] = LtlcrossRunner(tools,res_filename=rfile,
formula_files=['formulae/{}.ltl'.format(name)],
cols=cols)
for r in runners.values():
if rerun:
r.run_ltlcross()
r.parse_results()
t1 = {}
for name,r in runners.items():
tmp = r.cummulative(col=cols).unstack(level=0).loc[MI_order,cols]
t1_part = tmp.loc[:,['states','acc']]
t1_part["det. automata"] = len(r.values)-tmp.nondet_aut
t1[name] = t1_part
t1_merged = pd.concat(t1.values(),axis=1,keys=t1.keys()).loc[MI_order,:]
t1_merged
row_map={"basic" : 'basic',
"F-merging" : '$\F$-merging',
"G-merging" : '$\G$-merging',
"FG-merging" : '$\FG$-merging',
"FG-merging+compl" : "$\FG$-merging + complement"}
t1_merged.rename(row_map,inplace=True);
t1 = t1_merged.rename_axis(['',"translation"],axis=1)
t1.index.name = None
t1
rand = t1.copy()
rand.columns = rand.columns.swaplevel()
rand.sort_index(axis=1,level=1,inplace=True,sort_remaining=False,ascending=True)
idx = pd.IndexSlice
corder = ['states','acc']
parts = [rand.loc[:,idx[[c]]] for c in corder]
rand = pd.concat(parts,names=corder,axis=1)
rand
print(rand.to_latex(escape=False,bold_rows=False),file=open('fossacs_t1.tex','w'))
cp fossacs_t1.tex /home/xblahoud/research/ltl3tela_papers/
"""
Explanation: Evaluating the impact of $\F$- and $\FG$-merging
We compare the $\F$- and $\FG$-merging translation to the basic one. We compare the sizes of SLAA (alternating). We use a wrapper script ltlcross_runner for ltlcross that uses the pandas library to manipulate data. It requires some settings.
End of explanation
"""
def fix_tools(tool):
return tool.replace('FG-','$\\FG$-').replace('F-','$\\F$-')
def sc_plot(r,t1,t2,filename=None,include_equal = True,col='states',log=None,size=(5.5,5),kw=None,clip=None, add_count=True):
merged = isinstance(r,list)
if merged:
vals = pd.concat([run.values[col] for run in r])
vals.index = vals.index.droplevel(0)
vals = vals.groupby(vals.index).first()
else:
vals = r.values[col]
to_plot = vals.loc(axis=1)[[t1,t2]] if include_equal else\
vals[vals[t1] != vals[t2]].loc(axis=1)[[t1,t2]]
to_plot['count'] = 1
to_plot.dropna(inplace=True)
to_plot = to_plot.groupby([t1,t2]).count().reset_index()
if filename is not None:
print(scatter_plot(to_plot, log=log, size=size,kw=kw,clip=clip, add_count=add_count),file=open(filename,'w'))
else:
return scatter_plot(to_plot, log=log, size=size,kw=kw,clip=clip, add_count=add_count)
def scatter_plot(df, short_toolnames=True, log=None, size=(5.5,5),kw=None,clip=None,add_count = True):
t1, t2, _ = df.columns.values
if short_toolnames:
t1 = fix_tools(t1.split('/')[0])
t2 = fix_tools(t2.split('/')[0])
vals = ['({},{}) [{}]\n'.format(v1,v2,c) for v1,v2,c in df.values]
plots = '''\\addplot[
scatter, scatter src=explicit,
only marks, fill opacity=0.5,
draw opacity=0] coordinates
{{{}}};'''.format(' '.join(vals))
start_line = 0 if log is None else 1
line = '\\addplot[darkgreen,domain={}:{}]{{x}};'.format(start_line, min(df.max(axis=0)[:2])+1)
axis = 'axis'
mins = 'xmin=0,ymin=0,'
clip_str = ''
if clip is not None:
clip_str = '\\draw[red,thick] ({},{}) rectangle ({},{});'.format(*clip)
if log:
if log == 'both':
axis = 'loglogaxis'
mins = 'xmin=1,ymin=1,'
else:
axis = 'semilog{}axis'.format(log)
mins = mins + '{}min=1,'.format(log)
args = ''
if kw is not None:
if 'title' in kw and add_count:
kw['title'] = '{{{} ({})}}'.format(kw['title'],df['count'].sum())
args = ['{}={},\n'.format(k,v) for k,v in kw.items()]
args = ''.join(args)
res = '''%\\begin{{tikzpicture}}
\\pgfplotsset{{every axis legend/.append style={{
cells={{anchor=west}},
draw=none,
}}}}
\\pgfplotsset{{colorbar/width=.3cm}}
\\pgfplotsset{{title style={{align=center,
font=\\small}}}}
\\pgfplotsset{{compat=1.14}}
\\begin{{{0}}}[
{1}
colorbar,
colormap={{example}}{{
color(0)=(blue)
color(500)=(green)
color(1000)=(red)
}},
%thick,
axis x line* = bottom,
axis y line* = left,
width={2}cm, height={3}cm,
xlabel={{{4}}},
ylabel={{{5}}},
cycle list={{%
{{darkgreen, solid}},
{{blue, densely dashed}},
{{red, dashdotdotted}},
{{brown, densely dotted}},
{{black, loosely dashdotted}}
}},
{6}%
]
{7}%
{8}%
{9}%
\\end{{{0}}}
%\\end{{tikzpicture}}
'''.format(axis,mins,
size[0],size[1],t1,t2,
args,plots,line,
clip_str)
return res
ltl3ba = 'LTL3BA'
fgm = 'FG-merging'
fm = 'F-merging'
basic = 'basic'
size = (4,4)
clip_names = ('xmin','ymin','xmax','ymax')
kw = {}
sc_plot(runners['lit'],basic,fgm,'sc_lit.tex',size=size,kw=kw.copy())
size = (4.3,4.5)
kw['title'] = 'literature'
sc_plot(runners['lit'],basic,fgm,'sc_lit.tex',size=size,kw=kw.copy())
for suff in ['1','2','4','fg']:
kw['title'] = 'rand'+suff
sc_plot(runners['rand'+suff],basic,fgm,'sc_rand{}.tex'.format(suff),size=size,kw=kw.copy())
cp sc_lit.tex sc_rand*.tex ~/research/ltl3tela_papers
r = runners['rand4']
r.smaller_than('basic','F-merging')
"""
Explanation: Scatter plots
End of explanation
"""
|
Britefury/deep-learning-tutorial-pydata2016 | INTRO ML 02 - gradient descent for machine learning.ipynb | mit | import numpy as np
import pandas as pd
"""
Explanation: Gradient descent for machine learning - a quick introduction
In this notebook we are going to use gradient descent to estimate the parameters of a model. In this case we are going to compute the parameters to convert temperatures from Fahrenheit to Kelvin.
Given that we are approaching this from a machine learning perspective, we are going to determine our scaling factor and offset value by gradient descent, given some example temperatures on both scales. In other words, we are going to learn the parameters from the data.
First, some imports:
End of explanation
"""
INDEX = ['Boiling point of He',
'Boiling point of N',
'Melting point of H2O',
'Body temperature',
'Boiling point of H2O']
X = np.array([-452.1, -320.4, 32.0, 98.6, 212.0])
Y = np.array([4.22, 77.36, 273.2, 310.5, 373.2])
"""
Explanation: Our data set:
End of explanation
"""
pd.DataFrame(np.stack([X, Y]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)'])
"""
Explanation: Show our data set in a table:
End of explanation
"""
# Let's initialise `a` to between 1.0 and 2.0; it is therefore impossible
# for it to choose a (nearly) correct value at the start, forcing our model to do some work.
a = np.random.uniform(1.0, 2.0, size=())
b = 0.0
print('a={}, b={}'.format(a, b))
Y_pred = X * a + b
pd.DataFrame(np.stack([X, Y, Y_pred]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$'])
"""
Explanation: Model - linear regression
Temperatures can be converted using a linear model of the form $y=ax+b$.
$x$ and $y$ are samples in our dataset; $X = \{x_0, \dots, x_N\}$ and $Y = \{y_0, \dots, y_N\}$, while $a$ and $b$ are the model parameters.
Initialise model
Let's try initialising the parameters $a$ randomly and $b$ to 0 and see what it predicts:
End of explanation
"""
sqr_err = (Y_pred - Y)**2
pd.DataFrame(np.stack([X, Y, Y_pred, sqr_err]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$', 'squared err ($\epsilon$)'])
"""
Explanation: How good is our guess?
To estimate the accuracy of our model, let's compute the squared error. We use the squared error since its value is always positive and larger errors incur a greater cost due to being squared:
End of explanation
"""
def iterative_gradient_descent_step(a, b, lr):
"""
A single gradient descent iteration
:param a: current value of `a`
:param b: current value of `b`
:param lr: learning rate
:return: a tuple `(a_next, b_next)` that are the values of `a` and `b` after the iteration.
"""
# Derivatives of epsilon w.r.t. a and b:
da_depsilon = (2 * a * X**2 + 2 * b * X - 2 * X * Y).mean()
db_depsilon = (2 * b + 2 * a * X - 2 * Y).mean()
# Gradient descent:
a = a - da_depsilon * lr
b = b - db_depsilon * lr
# Return new values
return a, b
def state_as_table(a, b):
"""
Helper function to generate a Pandas DataFrame showing the current state, including predicted values and errors
:param a: current value of `a`
:param b: current value of `b`
:return: tuple `(df, mean_sqr_err)` where `df` is the Pandas DataFrame and `mean_sqr_err` is the mean squared error
"""
Y_pred = X * a + b
sqr_err = (Y_pred - Y)**2
df = pd.DataFrame(np.stack([X, Y, Y_pred, sqr_err]).T, index=INDEX,
columns=['Fahrenheit ($x$)', 'Kelvin ($y$)', '$y_{pred}$', 'squared err ($\epsilon$)'])
return df, sqr_err.mean()
"""
Explanation: Reducing the error
We reduce the error by taking the gradient of the squared error with respect to the parameters $a$ and $b$ and iteratively modifying the values of $a$ and $b$ in the direction of the negated gradient.
Let's determine the expressions for the gradient of the squared error $\epsilon$ with respect to $a$ and $b$:
$\epsilon_i = (ax_i + b - y_i)^2 = a^2x_i^2 + 2abx_i - 2ax_iy_i + b^2 + y_i^2 - 2by_i$
In terms of $a$: $\epsilon_i = a^2x_i^2 + a(2bx_i - 2x_iy_i) + b^2 + y_i^2 - 2by_i$
So ${d\epsilon_i\over{da}} = 2ax_i^2 + 2bx_i - 2x_iy_i$
In terms of $b$: $\epsilon_i = b^2 + b(2ax_i - 2y_i) + a^2x_i^2 - 2ax_iy_i + y_i^2$
So ${d\epsilon_i\over{db}} = 2b + 2ax_i - 2y_i$
The above expressions apply to single samples only. To apply them to all of our 5 data points, we need to use the mean squared error. The mean squared error is the sum of the individual errors divided by the number of data points $N$. The derivative of the mean squared error w.r.t. $a$ and $b$ will also be the sum of the individual derivatives, divided by $N$.
Gradient descent
Gradient descent is performed iteratively; each parameter is modified independently as follows:
$a' = a - \gamma {d\epsilon_i\over{da}}$
$b' = b - \gamma {d\epsilon_i\over{db}}$
where $\gamma$ is the learning rate.
Implementation
We now have all we need to define some gradient descent helper functions:
End of explanation
"""
LEARNING_RATE = 0.00001
N_ITERATIONS = 50000
df, mean_sqr_err = state_as_table(a, b)
print('a = {}, b = {}, mean sqr. err. = {}'.format(a, b, mean_sqr_err))
df
"""
Explanation: Define learning rate and show initial state:
End of explanation
"""
for i in range(N_ITERATIONS):
a, b = iterative_gradient_descent_step(a, b, LEARNING_RATE)
df, mean_sqr_err = state_as_table(a, b)
print('a = {}, b = {}, mean sqr. err. = {}'.format(a, b, mean_sqr_err))
df
"""
Explanation: Gradient descent
Run this cell repeatedly to see gradient descent in action:
End of explanation
"""
|
mspieg/dynamical-systems | .ipynb_checkpoints/Bifurcations-checkpoint.ipynb | cc0-1.0 | %matplotlib inline
import numpy
import matplotlib.pyplot as plt
"""
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
End of explanation
"""
def bifurcation_plot(f,f_x,r,x,rlabel='r'):
""" produce a bifurcation diagram for a function f(r,x) given
f and its partial derivative f_x(r,x) over a domain given by numpy arrays r and x
f(r,x) : RHS function of autonomous ode dx/dt = f(r,x)
f_x(r,x): partial derivative of f with respect to x
r : numpy array giving r coordinates of domain
x : numpy array giving x coordinates of domain
rlabel : string for x axis parameter label
"""
# set up a mesh grid and extract the 0 level set of f
R,X = numpy.meshgrid(r,x)
plt.figure()
CS = plt.contour(R,X,f(R,X),[0],colors='k')
plt.clf()
c0 = CS.collections[0]
# for each path in the contour extract vertices and mask by the sign of df/dx
for path in c0.get_paths():
vertices = path.vertices
vr = vertices[:,0]
vx = vertices[:,1]
mask = numpy.sign(f_x(vr,vx))
stable = mask < 0.
unstable = mask > 0.
# plot the stable and unstable branches for each path
plt.plot(vr[stable],vx[stable],'b')
# plt.hold was removed in matplotlib 3.0; repeated plot() calls accumulate by default
plt.plot(vr[unstable],vx[unstable],'b--')
plt.xlabel('parameter {0}'.format(rlabel))
plt.ylabel('x')
plt.legend(('stable','unstable'),loc='best')
plt.xlim(r[0],r[-1])
plt.ylim(x[0],x[-1])
"""
Explanation: Plotting Bifurcations
GOAL: Find where $f(x,r) = 0$ and label the stable and unstable branches.
A bifurcation diagram provides information on how the fixed points of a dynamical system $f(x,r)=0$ vary as a function of the control parameter $r$
Here we write a snazzy little python function to extract the zero contour and label it with respect to whether it is a stable branch with ($\partial f/\partial x < 0$) or an unstable branch ($\partial f/\partial x > 0 $)
End of explanation
"""
f = lambda r,x: r + x*x
f_x = lambda r,x: 2.*x
x = numpy.linspace(-4,4,100)
r = numpy.linspace(-4,4,100)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: Example #1: Saddle node bifurcation
consider the problem
$$ f(r,x) = r + x^2$$
and we will define $f$ and $\partial f/\partial x$ using inlined python lambda functions
End of explanation
"""
f = lambda h,x: x*(1-x) - h
f_x = lambda h,x: 1. - 2.*x
x = numpy.linspace(0,1.,100)
h = numpy.linspace(0,.5,100)
bifurcation_plot(f,f_x,h,x,rlabel='h')
"""
Explanation: Example #2: Logistic equation with constant harvesting
$$ dx/dt = x(1-x) - h $$
End of explanation
"""
f = lambda r,x: r*x - x*x
f_x = lambda r,x: r - 2.*x
x = numpy.linspace(-1.,1.,100)
r = numpy.linspace(-1.,1.,100)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: Example #3: transcritical bifurcation
$$f(r,x) = rx - x^2$$
End of explanation
"""
f = lambda r,x: r*x - x**3
f_x = lambda r,x: r - 3.*x**2
x = numpy.linspace(-1.,1.,100)
r = numpy.linspace(-1.,1.,100)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: Example #4 super-critical pitchfork bifurcation
$$f(r,x) = rx - x^3$$
End of explanation
"""
f = lambda r,x: r*x + x**3
f_x = lambda r,x: r + 3.*x**2
x = numpy.linspace(-1.,1.,100)
r = numpy.linspace(-1.,1.,100)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: Example #5 sub-critical pitchfork bifurcation
$$f(r,x) = rx + x^3$$
End of explanation
"""
f = lambda r,x: r*x + x**3 - x**5
f_x = lambda r,x: r + 3.*x**2 -5.*x**4
x = numpy.linspace(-2.,2.,100)
r = numpy.linspace(-.5,.5,100)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: Example #6 subcritical pitchfork bifurcation with stabilization
$$f(r,x) = rx + x^3 - x^5 $$
End of explanation
"""
f = lambda r,x: r - x**2    # < replace with your own function >
f_x = lambda r,x: -2.*x     # < replace with its partial derivative w.r.t. x >
# Adjust your domain and resolution
x = numpy.linspace(-10.,10.,100)
r = numpy.linspace(-10.,10.,100)
#plot and pray (and watch out for glitches)
bifurcation_plot(f,f_x,r,x)
"""
Explanation: FIXME: this plot needs to mask out the spurious stable branch which is a plotting error
And now you can play with your own function
End of explanation
"""
|
cdawei/digbeta | dchen/tour/data_stats.ipynb | gpl-3.0 | plt.figure(figsize=[15, 5])
ax = plt.subplot()
ax.set_xlabel('#Trajectories')
ax.set_ylabel('#Queries')
ax.set_title('Histogram of #Trajectories')
queries = sorted(dat_obj.TRAJID_GROUP_DICT.keys())
X = [len(dat_obj.TRAJID_GROUP_DICT[q]) for q in queries]
pd.Series(X).hist(ax=ax, bins=20)
"""
Explanation: Plot the histogram of the number of trajectories over queries.
End of explanation
"""
dat_obj.poi_all.index
startPOI = 20
X = [len(dat_obj.traj_dict[tid]) for tid in dat_obj.trajid_set_all \
if dat_obj.traj_dict[tid][0] == startPOI and len(dat_obj.traj_dict[tid]) >= 2]
if len(X) > 0:
plt.figure(figsize=[15, 5])
ax = plt.subplot()
ax.set_xlabel('Trajectory Length')
ax.set_ylabel('#Trajectories')
ax.set_title('Histogram of Trajectory Length (startPOI: %d)' % startPOI)
pd.Series(X).hist(ax=ax, bins=20)
print('Trajectory Length:', X)
"""
Explanation: Plot the histogram of the length of trajectory given a start point.
End of explanation
"""
multi_label_queries = [q for q in dat_obj.TRAJID_GROUP_DICT if len(dat_obj.TRAJID_GROUP_DICT[q]) > 1]
nqueries = len(dat_obj.TRAJID_GROUP_DICT)
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), nqueries, 100 * len(multi_label_queries) / nqueries))
"""
Explanation: Compute the ratio of multi-label when query=(start, length).
End of explanation
"""
dat_obj.traj_user['userID'].unique().shape
query_dict = dict()
for tid in dat_obj.trajid_set_all:
t = dat_obj.traj_dict[tid]
if len(t) >= 2:
query = (t[0], dat_obj.traj_user.loc[tid, 'userID'])
try: query_dict[query].add(tid)
except: query_dict[query] = set({tid})
multi_label_queries = [q for q in query_dict.keys() if len(query_dict[q]) > 1]
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), len(query_dict), 100 * len(multi_label_queries) / len(query_dict)))
"""
Explanation: Compute the ratio of multi-label when query=(start, user).
End of explanation
"""
query_dict = dict()
for tid in dat_obj.trajid_set_all:
t = dat_obj.traj_dict[tid]
if len(t) >= 2:
query = (t[0], dat_obj.traj_user.loc[tid, 'userID'], len(t))
try: query_dict[query].add(tid)
except: query_dict[query] = set({tid})
multi_label_queries = [q for q in query_dict.keys() if len(query_dict[q]) > 1]
print('%d/%d ~ %.1f%%' % (len(multi_label_queries), len(query_dict), 100 * len(multi_label_queries) / len(query_dict)))
"""
Explanation: Compute the ratio of multi-label when query=(start, user, length).
End of explanation
"""
|
queirozfcom/python-sandbox | python3/notebooks/number-formatting-post/main.ipynb | mit | '{:.2f}'.format(8.499)
"""
Explanation: View the original blog post at http://queirozf.com/entries/python-number-formatting-examples
round to 2 decimal places
End of explanation
"""
'{:.2f}%'.format(10.12345)
"""
Explanation: format float as percentage
End of explanation
"""
import re
def truncate(num,decimal_places):
dp = str(decimal_places)
return re.sub(r'^(\d+\.\d{,'+re.escape(dp)+r'})\d*$',r'\1',str(num))
truncate(8.499,decimal_places=2)
truncate(8.49,decimal_places=2)
truncate(8.4,decimal_places=2)
truncate(8,decimal_places=2)
"""
Explanation: truncate to at most 2 decimal places
turn it into a string then replace everything after the second digit after the point
End of explanation
"""
# make the total string size AT LEAST 9 (including digits and points), fill with zeros to the left
'{:0>9}'.format(3.499)
# make the total string size AT LEAST 2 (all included), fill with zeros to the left
'{:0>2}'.format(3)
"""
Explanation: left padding with zeros
End of explanation
"""
# make the total string size AT LEAST 11 (including digits and points), fill with zeros to the RIGHT
'{:<011}'.format(3.499)
"""
Explanation: right padding with zeros
End of explanation
"""
"{:,}".format(100000)
"""
Explanation: comma separators
End of explanation
"""
num = 1.12745
formatted = f"{num:.2f}"
formatted
"""
Explanation: variable interpolation and f-strings
End of explanation
"""
|
t-vi/candlegp | notebooks/mcmc.ipynb | apache-2.0 | import sys, os
sys.path.append(os.path.join(os.getcwd(),'..'))
import candlegp
import candlegp.training.hmc
import numpy
import torch
from torch.autograd import Variable
from matplotlib import pyplot
pyplot.style.use('ggplot')
%matplotlib inline
X = Variable(torch.linspace(-3,3,20,out=torch.DoubleTensor()))
Y = Variable(torch.from_numpy(numpy.random.exponential(((X.data.sin())**2).numpy())))
"""
Explanation: Fully Bayesian inference for generalized GP models with HMC
James Hensman, 2015-16
Converted to candlegp Thomas Viehmann
It's possible to construct very flexible models with Gaussian processes by combining them with different likelihoods (sometimes called 'families' in the GLM literature). This makes inference of the GP intractable since the likelihood is not generally conjugate to the Gaussian process. The general form of the model is
$$\theta \sim p(\theta)\\ f \sim \mathcal{GP}\big(m(x; \theta),\, k(x, x'; \theta)\big)\\ y_i \sim p(y \mid g(f(x_i)))\,.$$
To perform inference in this model, we'll run MCMC using Hamiltonian Monte Carlo (HMC) over the function-values and the parameters $\theta$ jointly. Key to an effective scheme is rotation of the field using the Cholesky decomposition. We write
$$\theta \sim p(\theta)\\ v \sim \mathcal{N}(0,\, I)\\ LL^\top = K\\ f = m + Lv\\ y_i \sim p(y \mid g(f(x_i)))\,.$$
Joint HMC over v and the function values is not widely adopted in the literature becate of the difficulty in differentiating $LL^\top=K$. We've made this derivative available in tensorflow, and so application of HMC is relatively straightforward.
Exponential Regression example
The first illustration in this notebook is 'Exponential Regression'. The model is
$$\theta \sim p(\theta)\f \sim \mathcal {GP}(0, k(x, x'; \theta))\f_i = f(x_i)\y_i \sim \mathcal {Exp} (e^{f_i})$$
We'll use MCMC to deal with both the kernel parameters $\theta$ and the latent function values $f$. first, generate a data set.
End of explanation
"""
#build the model
k = candlegp.kernels.Matern32(1,ARD=False).double() + candlegp.kernels.Bias(1).double()
l = candlegp.likelihoods.Exponential()
m = candlegp.models.GPMC(X[:,None], Y[:,None], k, l)
m
"""
Explanation: GPflow's model for fully-Bayesian MCMC is called GPMC. It's constructed like any other model, but contains a parameter V which represents the centered values of the function.
End of explanation
"""
m.kern.kern_list[0].lengthscales.prior = candlegp.priors.Gamma(1., 1., ttype=torch.DoubleTensor)
m.kern.kern_list[0].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m.kern.kern_list[1].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m.V.prior = candlegp.priors.Gaussian(0.,1., ttype=torch.DoubleTensor)
m
"""
Explanation: The V parameter already has a prior applied. We'll add priors to the parameters also (these are rather arbitrary, for illustration).
End of explanation
"""
# start near MAP
opt = torch.optim.LBFGS(m.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.data[0])
m
res = candlegp.training.hmc.hmc_sample(m,500,0.2,burn=50, thin=10)
xtest = torch.linspace(-4,4,100).double().unsqueeze(1)
f_samples = []
for i in range(len(res[0])):
for j,mp in enumerate(m.parameters()):
mp.set(res[j+1][i])
f_samples.append(m.predict_f_samples(Variable(xtest), 5).squeeze(0).t())
f_samples = torch.cat(f_samples, dim=0)
rate_samples = torch.exp(f_samples)
pyplot.figure(figsize=(12, 6))
line, = pyplot.plot(xtest.numpy(), rate_samples.data.mean(0).numpy(), lw=2)
pyplot.fill_between(xtest[:,0], numpy.percentile(rate_samples.data.numpy(), 5, axis=0), numpy.percentile(rate_samples.data.numpy(), 95, axis=0), color=line.get_color(), alpha = 0.2)
pyplot.plot(X.data.numpy(), Y.data.numpy(), 'kx', mew=2)
pyplot.ylim(-0.1, numpy.max(numpy.percentile(rate_samples.data.numpy(), 95, axis=0)))
import pandas
df = pandas.DataFrame(res[1:],index=[n for n,p in m.named_parameters()]).transpose()
df[:10]
df["kern.kern_list.1.variance"].apply(lambda x: x[0]).hist(bins=20)
"""
Explanation: Running HMC is as easy as hitting m.sample(). GPflow only has HMC sampling for the moment, and it's a relatively vanilla implementation (no NUTS, for example). There are two settings to tune, the step size (epsilon) and the maximum number of steps Lmax. Each proposal will take a random number of steps between 1 and Lmax, each of length epsilon.
We'll use the verbose setting so that we can see the acceptance rate.
End of explanation
"""
Z = torch.linspace(-3,3,5).double().unsqueeze(1)
k2 = candlegp.kernels.Matern32(1,ARD=False).double() + candlegp.kernels.Bias(1).double()
l2 = candlegp.likelihoods.Exponential()
m2 = candlegp.models.SGPMC(X[:,None], Y[:,None], k2, l2, Z)
m2.kern.kern_list[0].lengthscales.prior = candlegp.priors.Gamma(1., 1., ttype=torch.DoubleTensor)
m2.kern.kern_list[0].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m2.kern.kern_list[1].variance.prior = candlegp.priors.Gamma(1.,1., ttype=torch.DoubleTensor)
m2.V.prior = candlegp.priors.Gaussian(0.,1., ttype=torch.DoubleTensor)
m2
# start near MAP
opt = torch.optim.LBFGS(m2.parameters(), lr=1e-2, max_iter=40)
def eval_model():
obj = m2()
opt.zero_grad()
obj.backward()
return obj
for i in range(50):
obj = m2()
opt.zero_grad()
obj.backward()
opt.step(eval_model)
if i%5==0:
print(i,':',obj.data[0])
m2
res = candlegp.training.hmc.hmc_sample(m2, 500, 0.2, burn=50, thin=10)
xtest = torch.linspace(-4,4,100).double().unsqueeze(1)
f_samples = []
for i in range(len(res[0])):
    for j, mp in enumerate(m2.parameters()):
        mp.set(res[j+1][i])
    f_samples.append(m2.predict_f_samples(Variable(xtest), 5).squeeze(0).t())
f_samples = torch.cat(f_samples, dim=0)
rate_samples = torch.exp(f_samples)
pyplot.figure(figsize=(12, 6))
line, = pyplot.plot(xtest.numpy(), rate_samples.data.mean(0).numpy(), lw=2)
pyplot.fill_between(xtest[:,0], numpy.percentile(rate_samples.data.numpy(), 5, axis=0), numpy.percentile(rate_samples.data.numpy(), 95, axis=0), color=line.get_color(), alpha = 0.2)
pyplot.plot(X.data.numpy(), Y.data.numpy(), 'kx', mew=2)
pyplot.plot(m2.Z.get().data.numpy(),numpy.zeros(m2.num_inducing),'o')
pyplot.ylim(-0.1, numpy.max(numpy.percentile(rate_samples.data.numpy(), 95, axis=0)))
"""
Explanation: Sparse Version
Do the same with sparse:
End of explanation
"""
|
ALEXKIRNAS/DataScience | CS231n/assignment2/Dropout.ipynb | mit | # As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
"""
Explanation: Dropout
Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.
[1] Geoffrey E. Hinton et al, "Improving neural networks by preventing co-adaptation of feature detectors", arXiv 2012
End of explanation
"""
np.random.seed(231)
x = np.random.randn(500, 500) + 10
for p in [0.3, 0.6, 0.75]:
out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})
print('Running tests with p = ', p)
print('Mean of input: ', x.mean())
print('Mean of train-time output: ', out.mean())
print('Mean of test-time output: ', out_test.mean())
print('Fraction of train-time output set to zero: ', (out == 0).mean())
print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
print()
"""
Explanation: Dropout forward pass
In the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.
Once you have done so, run the cell below to test your implementation.
End of explanation
"""
np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)
dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)
print('dx relative error: ', rel_error(dx, dx_num))
"""
Explanation: Dropout backward pass
In the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.
End of explanation
"""
np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))
for dropout in [0, 0.25, 0.5]:
print('Running check with dropout = ', dropout)
model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
weight_scale=5e-2, dtype=np.float64,
dropout=dropout, seed=123)
loss, grads = model.loss(X, y)
print('Initial loss: ', loss)
for name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
print()
"""
Explanation: Fully-connected nets with Dropout
In the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. After doing so, run the following to numerically gradient-check your implementation.
End of explanation
"""
# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
model = FullyConnectedNet([500], dropout=dropout)
print(dropout)
solver = Solver(model, small_data,
num_epochs=25, batch_size=100,
update_rule='adam',
optim_config={
'learning_rate': 5e-4,
},
verbose=True, print_every=100)
solver.train()
solvers[dropout] = solver
# Plot train and validation accuracies of the two models
train_accs = []
val_accs = []
for dropout in dropout_choices:
solver = solvers[dropout]
train_accs.append(solver.train_acc_history[-1])
val_accs.append(solver.val_acc_history[-1])
plt.subplot(3, 1, 1)
for dropout in dropout_choices:
plt.plot(solvers[dropout].train_acc_history, 'o-', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
plt.plot(solvers[dropout].val_acc_history, 'o-', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
plt.gcf().set_size_inches(15, 15)
plt.show()
"""
Explanation: Regularization experiment
As an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.
End of explanation
"""
|
netodeolino/TCC | TCC 02/Resultados/Maio/Maio.ipynb | mit | all_crime_tipos.head(10)
all_crime_tipos_top10 = all_crime_tipos.head(10)
all_crime_tipos_top10.plot(kind='barh', figsize=(12,6), color='#3f3fff')
plt.title('Top 10 crimes por tipo (Mai 2017)')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: Filtering the 10 crimes with the most occurrences in May
End of explanation
"""
all_crime_tipos
"""
Explanation: All criminal occurrences in May
End of explanation
"""
group_df_maio = df_maio.groupby('CLUSTER')
crimes = group_df_maio['NATUREZA DA OCORRÊNCIA'].count()
crimes.plot(kind='barh', figsize=(10,7), color='#3f3fff')
plt.title('Número de crimes por região (Mai 2017)')
plt.xlabel('Número')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: Number of crimes per region
End of explanation
"""
regioes = df_maio.groupby('CLUSTER').count()
grupo_de_regioes = regioes.sort_values('NATUREZA DA OCORRÊNCIA', ascending=False)
grupo_de_regioes['TOTAL'] = grupo_de_regioes.ID
top_5_regioes_qtd = grupo_de_regioes.TOTAL.head(6)
top_5_regioes_qtd.plot(kind='barh', figsize=(10,4), color='#3f3fff')
plt.title('Top 5 regiões com mais crimes')
plt.xlabel('Número de crimes')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: The 5 regions with the most occurrences
End of explanation
"""
regiao_4_detalhe = df_maio[df_maio['CLUSTER'] == 4]
regiao_4_detalhe
"""
Explanation: Above we can see that region 4 had the highest number of criminal occurrences
We can now look at those occurrences in more detail
End of explanation
"""
crime_types = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = regiao_4_detalhe[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
crimes_top_5 = all_crime_types.head(5)
crimes_top_5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes na região 4')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: An analysis of the 5 most common occurrences
End of explanation
"""
horas_mes = df_maio.HORA.value_counts()
horas_mes_top10 = horas_mes.head(10)
horas_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Crimes por hora (Mai 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: Filtering the 10 hours of the day with the most occurrences in May
End of explanation
"""
crime_hours = regiao_4_detalhe[['HORA']]
crime_hours_total = crime_hours.groupby('HORA').size()
crime_hours_counts = regiao_4_detalhe[['HORA']].groupby('HORA').sum()
crime_hours_counts['TOTAL'] = crime_hours_total
all_hours_types = crime_hours_counts.sort_values(by='TOTAL', ascending=False)
all_hours_types.head(5)
all_hours_types_top5 = all_hours_types.head(5)
all_hours_types_top5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes por hora na região 4')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: Filtering the 5 hours with the most occurrences in region 4 (the region with the most occurrences in May)
End of explanation
"""
crimes_mes = df_maio.BAIRRO.value_counts()
crimes_mes_top10 = crimes_mes.head(10)
crimes_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Top 10 Bairros com mais crimes (Mai 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: Filtering the 10 neighborhoods with the most occurrences in May
End of explanation
"""
jangurussu = df_maio[df_maio['BAIRRO'] == 'JANGURUSSU']
crime_types = jangurussu[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = jangurussu[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes no Jangurussú')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: The neighborhood with the highest number of occurrences in May was Jangurussú
We can now look in more detail at what these crimes were
End of explanation
"""
crime_types_bairro = regiao_4_detalhe[['BAIRRO']]
crime_type_total_bairro = crime_types_bairro.groupby('BAIRRO').size()
crime_type_counts_bairro = regiao_4_detalhe[['BAIRRO']].groupby('BAIRRO').sum()
crime_type_counts_bairro['TOTAL'] = crime_type_total_bairro
all_crime_types_bairro = crime_type_counts_bairro.sort_values(by='TOTAL', ascending=False)
crimes_top_5_bairro = all_crime_types_bairro.head(5)
crimes_top_5_bairro.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 bairros na região 4')
plt.xlabel('Quantidade')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: The 5 most common neighborhoods in region 4
End of explanation
"""
jangurussu = df_maio[df_maio['BAIRRO'] == 'JANGURUSSU']
crime_types = jangurussu[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = jangurussu[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes no Jangurussú')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
"""
Explanation: An analysis of the Jangurussú neighborhood
End of explanation
"""
|
dsolanno/BarcelonaRentsStatus | airbnb data exploration/host_until/calculate_host_until.ipynb | mit | df1 = pd.read_csv('listings/30042015/30042015.csv', sep = ";")
df2 = pd.read_csv('listings/17072015/17072015.csv', sep = ";")
df3 = pd.read_csv('listings/02102015/02102015.csv', sep = ";")
df4 = pd.read_csv('listings/03012016/03012016.csv', sep = ";")
df5 = pd.read_csv('listings/08122016/08122016.csv', sep = ";")
df6 = pd.read_csv('listings/08042017/08042017.csv', sep = ";")
dfs_l = (df1, df2, df3, df4, df5, df6)
# convert to datetime in each df
for df in dfs_l:
df.host_since = pd.to_datetime(df.host_since, format="%Y-%m-%d")
df.last_scraped = pd.to_datetime(df.last_scraped, format="%Y-%m-%d")
"""
Explanation: We read the reduced .csv files from each scrape. They live in the following directory structure:
bash
.
├── 02102015
│ ├── 02102015.csv
│ └── listings.csv
├── 03012016
│ ├── 03012016.csv
│ └── listings.csv
├── 08042017
│ ├── 08042017.csv
│ └── listings.csv
├── 08122016
│ ├── 08122016.csv
│ └── listings.csv
├── 17072015
│ ├── 17072015.csv
│ └── listings.csv
└── 30042015
├── 30042015.csv
└── listings.csv
on "listing.csv" és l'arxiu original de descarregar d'inside airbnb i "[0-9].csv" és l'arxiu reduït amb una subselecció de columnes
End of explanation
"""
l_hosts = [df['host_id'].values for df in dfs_l]
df_hosts = pd.DataFrame(l_hosts)
df_hosts = df_hosts.T
df_hosts.columns = ['2015-04-30','2015-07-17','2015-10-02','2016-01-03','2016-12-08','2017-04-08']
df_hosts = df_hosts.apply(lambda x: x.sort_values().values)
print ([len(x) for x in l_hosts])
df_hosts.head()
"""
Explanation: We build a DataFrame where each column contains the host_ids of one scrape, using the date of the scrape as the column name
End of explanation
"""
uniq_id=np.sort(np.unique(np.hstack(l_hosts)))
id_df = pd.DataFrame(uniq_id)
id_df.set_index(0, inplace=True)
# much room for improvement here
## Ignasi, don't look :/
for date in tqdm_notebook(df_hosts.columns):
id_df[date]=''
for i in tqdm_notebook(id_df.index):
if np.any(df_hosts[date].isin([i])):
id_df[date].loc[i] = i
else:
id_df[date].loc[i] = np.nan
id_df.head()
"""
Explanation: We build a dataframe indexed by the unique IDs across all dataframes and fill in the values from the other lists at the corresponding positions, leaving gaps where a host_id was not found
End of explanation
"""
last_seen = id_df.apply(lambda x: x.last_valid_index(), axis=1) #magic function last_valid_index!
last_seen = pd.DataFrame(last_seen, columns=['host_until'])
last_seen.host_until = pd.to_datetime(last_seen.host_until, format="%Y-%m-%d")
last_seen_dict = pd.Series(last_seen, index = last_seen.index).to_dict()
# map the value of the last valid entry onto host_id to obtain "host_until"
listing_tot = pd.concat(dfs_l)
listing_tot['host_until'] = listing_tot.host_id.map(last_seen_dict)
listing_tot.head()
listing_tot.to_csv('listings_host_until.csv',sep=';', index=False)
"""
Explanation: The last valid entry of each row tells us the last time that host was seen in a scrape:
End of explanation
"""
|
samgoodgame/sf_crime | iterations/KK_scripts/W207_Final_Project_logisticRegressionOnly_updated_08_20_1230.ipynb | mit | # Additional Libraries
%matplotlib inline
import matplotlib.pyplot as plt
# Import relevant libraries:
import time
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn import preprocessing
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import log_loss
from sklearn.linear_model import LogisticRegression
from sklearn import svm
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Import Meta-estimators
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Import Calibration tools
from sklearn.calibration import CalibratedClassifierCV
# Set random seed and format print output:
np.random.seed(0)
np.set_printoptions(precision=3)
"""
Explanation: Kaggle San Francisco Crime Classification
Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore
Environment and Data
End of explanation
"""
# Data path to your local copy of Kalvin's "x_data.csv", which was produced by the negated cell above
data_path = "./data/x_data_3.csv"
df = pd.read_csv(data_path, header=0)
x_data = df.drop('category', 1)
y = df.category.as_matrix()
# Impute missing values with mean values:
#x_complete = df.fillna(df.mean())
x_complete = x_data.fillna(x_data.mean())
X_raw = x_complete.as_matrix()
# Scale the data between 0 and 1:
X = MinMaxScaler().fit_transform(X_raw)
# Shuffle data to remove any underlying pattern that may exist. Must re-run random seed step each time:
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, y = X[shuffle], y[shuffle]
test_data, test_labels = X[800000:], y[800000:]
dev_data, dev_labels = X[700000:800000], y[700000:800000]
train_data, train_labels = X[:700000], y[:700000]
mini_train_data, mini_train_labels = X[:200000], y[:200000]
mini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]
crime_labels = list(set(y))
crime_labels_mini_train = list(set(mini_train_labels))
crime_labels_mini_dev = list(set(mini_dev_labels))
print(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))
print(len(train_data),len(train_labels))
print(len(dev_data),len(dev_labels))
print(len(mini_train_data),len(mini_train_labels))
print(len(mini_dev_data),len(mini_dev_labels))
print(len(test_data),len(test_labels))
"""
Explanation: Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.
End of explanation
"""
#lr_param_grid_1 = {'C': [0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0]}
lr_param_grid_1 = {'C': [7.5, 10.0, 12.5, 15.0, 20.0, 30.0, 50.0, 100.0]}
LR_l1 = GridSearchCV(LogisticRegression(penalty='l1'), param_grid=lr_param_grid_1, scoring='neg_log_loss')
LR_l1.fit(train_data, train_labels)
print('L1: best C value:', str(LR_l1.best_params_['C']))
LR_l1_prediction_probabilities = LR_l1.predict_proba(dev_data)
LR_l1_predictions = LR_l1.predict(dev_data)
print("L1 Multi-class Log Loss:", log_loss(y_true = dev_labels, y_pred = LR_l1_prediction_probabilities, labels = crime_labels), "\n\n")
#create an LR-L1 classifier with the best params
bestL1 = LogisticRegression(penalty='l1', C=LR_l1.best_params_['C'])
bestL1.fit(train_data, train_labels)
#L1weights = bestL1.coef_
"""
Explanation: Logistic Regression
Hyperparameter tuning:
For the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')
Model calibration:
See above
LR with L1-Penalty Hyperparameter Tuning
End of explanation
"""
#methods = ['sigmoid', 'isotonic']
#bestL1 = LogisticRegression(penalty='l1', C=LR_l1.best_params_['C'])
#for m in methods:
#    ccvL1 = CalibratedClassifierCV(bestL1, method=m, cv=3)  # cv takes an int, not '3'
#    ccvL1.fit(train_data, train_labels)
"""
Explanation: Calibration for LR with L1-Penalty
End of explanation
"""
columns = ['hour_of_day','dayofweek',\
'x','y','bayview','ingleside','northern',\
'central','mission','southern','tenderloin',\
'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
'HOURLYRelativeHumidity','HOURLYWindSpeed',\
'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
'Daylight']
allCoefsL1 = pd.DataFrame(index=columns)
for a in range(len(bestL1.coef_)):
allCoefsL1[crime_labels[a]] = bestL1.coef_[a]
allCoefsL1
"""
Explanation: Dataframe for Coefficients
End of explanation
"""
f = plt.figure(figsize=(15,8))
allCoefsL1.plot(kind='bar', figsize=(15,8))
plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
plt.show()
"""
Explanation: Plot for Coefficients
End of explanation
"""
lr_param_grid_2 = {'C': [0, 0.0001, 0.001, 0.01, 0.1, 0.5, 1.0, 5.0, 10.0], \
'solver':['liblinear','newton-cg','lbfgs', 'sag']}
LR_l2 = GridSearchCV(LogisticRegression(penalty='l2'), param_grid=lr_param_grid_2, scoring='neg_log_loss')
LR_l2.fit(train_data, train_labels)
print('L2: best C value:', str(LR_l2.best_params_['C']))
print('L2: best solver:', str(LR_l2.best_params_['solver']))
LR_l2_prediction_probabilities = LR_l2.predict_proba(dev_data)
LR_l2_predictions = LR_l2.predict(dev_data)
print("L2 Multi-class Log Loss:", log_loss(y_true = dev_labels, y_pred = LR_l2_prediction_probabilities, labels = crime_labels), "\n\n")
#create an LR-L2 classifier with the best params
bestL2 = LogisticRegression(penalty='l2', solver=LR_l2.best_params_['solver'], C=LR_l2.best_params_['C'])
bestL2.fit(train_data, train_labels)
#L2weights = bestL2.coef_
"""
Explanation: LR with L2-Penalty Hyperparameter Tuning
End of explanation
"""
columns = ['hour_of_day','dayofweek',\
'x','y','bayview','ingleside','northern',\
'central','mission','southern','tenderloin',\
'park','richmond','taraval','HOURLYDRYBULBTEMPF',\
'HOURLYRelativeHumidity','HOURLYWindSpeed',\
'HOURLYSeaLevelPressure','HOURLYVISIBILITY',\
'Daylight']
allCoefsL2 = pd.DataFrame(index=columns)
for a in range(len(bestL2.coef_)):
allCoefsL2[crime_labels[a]] = bestL2.coef_[a]
allCoefsL2
"""
Explanation: Dataframe for Coefficients
End of explanation
"""
f = plt.figure(figsize=(15,8))
allCoefsL2.plot(kind='bar', figsize=(15,8))
plt.legend(loc='center left', bbox_to_anchor=(1.0,0.5))
plt.show()
"""
Explanation: Plot of Coefficients
End of explanation
"""
|
malogrisard/NTDScourse | algorithms/02_ex_clustering.ipynb | mit | # Load libraries
# Math
import numpy as np
# Visualization
%matplotlib notebook
import matplotlib.pyplot as plt
plt.rcParams.update({'figure.max_open_warning': 0})
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import ndimage
# Print output of LFR code
import subprocess
# Sparse matrix
import scipy.sparse
import scipy.sparse.linalg
# 3D visualization
import pylab
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot
# Import data
import scipy.io
# Import functions in lib folder
import sys
sys.path.insert(1, 'lib')
# Import helper functions
%load_ext autoreload
%autoreload 2
from lib.utils import construct_kernel
from lib.utils import compute_kernel_kmeans_EM
from lib.utils import compute_kernel_kmeans_spectral
from lib.utils import compute_purity
# Import distance function
import sklearn.metrics.pairwise
# Remove warnings
import warnings
warnings.filterwarnings("ignore")
# Load MNIST raw data images
mat = scipy.io.loadmat('datasets/mnist_raw_data.mat')
X = mat['Xraw']
n = X.shape[0]
d = X.shape[1]
Cgt = mat['Cgt'] - 1; Cgt = Cgt.squeeze()
nc = len(np.unique(Cgt))
print('Number of data =',n)
print('Data dimensionality =',d);
print('Number of classes =',nc);
"""
Explanation: A Network Tour of Data Science
Xavier Bresson, Winter 2016/17
Exercise 4 - Code 2 : Unsupervised Learning
Unsupervised Clustering with Kernel K-Means
End of explanation
"""
Ker=construct_kernel(X,'linear')
Theta= np.ones(n)
n_clusters = 8
[C_solution, En_solution]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_clusters,Ker,Theta,10)
C_computed = C_kmeans
print('C_kmeans',C_kmeans);
print('En_kmeans=',En_kmeans);
accuracy = compute_purity(C_computed,Cgt,nc)
print('accuracy = ',accuracy);
"""
Explanation: Question 1a: What is the clustering accuracy of standard/linear K-Means?<br>
Hint: You may use functions Ker=construct_kernel(X,'linear') to compute the
linear kernel and [C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_classes,Ker,Theta,10) with Theta= np.ones(n) to run the standard K-Means algorithm, and accuracy = compute_purity(C_computed,C_solution,n_clusters) that returns the
accuracy.
End of explanation
"""
Ker=construct_kernel(X,'gaussian')
Theta= np.ones(n)
n_clusters = 8
[C_solution, En_solution]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_clusters,Ker,Theta,10)
C_computed = C_kmeans
print('C_kmeans',C_kmeans);
print('En_kmeans=',En_kmeans);
accuracy = compute_purity(C_computed,Cgt,nc)
print('accuracy = ',accuracy);
Ker=construct_kernel(X,'polynomial',[1,0,2])
Theta= np.ones(n)
n_clusters = 8
[C_solution, En_solution]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(n_clusters,Ker,Theta,10)
C_computed = C_kmeans
print('C_kmeans',C_kmeans);
print('En_kmeans=',En_kmeans);
accuracy = compute_purity(C_computed,C_solution,n_clusters)
print('accuracy = ',accuracy);
"""
Explanation: Question 1b: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) Polynomial Kernel for the EM approach and the Spectral approach?<br>
Hint: You may use functions Ker=construct_kernel(X,'gaussian') and Ker=construct_kernel(X,'polynomial',[1,0,2]) to compute the non-linear kernels<br>
Hint: You may use functions C_kmeans,__ = compute_kernel_kmeans_EM(K,Ker,Theta,10) for the EM kernel KMeans algorithm and C_kmeans,__ = compute_kernel_kmeans_spectral(K,Ker,Theta,10) for the Spectral kernel K-Means algorithm.<br>
End of explanation
"""
KNN_kernel = 50;
Ker = construct_kernel(X,'kNN_gaussian',50)
Theta= np.ones(n)
n_clusters = 8
[C_solution, En_solution]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
C_computed = C_kmeans
print('C_kmeans',C_kmeans);
print('En_kmeans=',En_kmeans);
accuracy = compute_purity(C_computed,Cgt,nc)
print('accuracy = ',accuracy);
KNN_kernel = 50;
Ker = construct_kernel(X,'kNN_cosine_binary',KNN_kernel)
[C_kmeans, En_kmeans]=compute_kernel_kmeans_EM(nc,Ker,Theta,10)
C_computed = C_kmeans
accuracy = compute_purity(C_computed,Cgt,nc)
print('accuracy = ',accuracy);
"""
Explanation: Question 1c: What is the clustering accuracy for the kernel K-Means algorithm with<br>
(1) KNN_Gaussian Kernel for the EM approach and the Spectral approach?<br>
(2) KNN_Cosine_Binary Kernel for the EM approach and the Spectral approach?<br>
You can test for the value KNN_kernel=50.<br>
Hint: You may use functions Ker = construct_kernel(X,'kNN_gaussian',KNN_kernel)
and Ker = construct_kernel(X,'kNN_cosine_binary',KNN_kernel) to compute the
non-linear kernels.
End of explanation
"""
|
PWhiddy/kbmod | notebooks/kbmod_CNN.ipynb | bsd-2-clause | import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.utils import np_utils
%matplotlib inline
"""
Explanation: Creating a CNN to identify real objects in kbmod data
End of explanation
"""
data = np.genfromtxt('../data/postage_stamp_training.dat')
"""
Explanation: Training Set
Here we are going to use Keras to create a neural network to identify real asteroids from noise in the kbmod results. We have a dataset generated from a kbmod run that contains real objects and false detections. We are going to split it into a training set and a test set and train a neural network to filter between the two.
End of explanation
"""
for idx in range(len(data)):
data[idx] -= np.min(data[idx])
data[idx] /= np.max(data[idx])
"""
Explanation: We will first normalize the data to be between 0.0 and 1.0
End of explanation
"""
fig = plt.figure(figsize=(50, 25))
set_on = 1
print 'Starting at %i' % int((set_on - 1)*100)
for i in range((set_on-1)*100,set_on*100):
fig.add_subplot(10,10,i-(set_on-1)*100+1)
plt.imshow(data[i].reshape(25,25), cmap=plt.cm.Greys_r,
interpolation=None)
plt.title(str(i))
classes = np.zeros(1000)
"""
Explanation: First we need to classify the data by eye. 0 will be a false image and 1 will be a true detection. We only show the first of 10 sets of 100 here. To iterate just change the "set_on" parameter.
End of explanation
"""
classes[[29, 37, 58, 79, 86, 99, 115, 118, 123, 130, 131,
135, 138, 142, 149, 157, 160, 165, 166, 172, 177,
227, 262, 347, 369, 393,
426, 468, 478, 530, 560, 567, 602, 681]] = 1.
plt.imshow(data[138].reshape(25,25), cmap=plt.cm.Greys_r,
interpolation=None)
"""
Explanation: The following are the positive identifications from our training set as classified by eye.
End of explanation
"""
np.random.seed(42)
assignments = np.random.choice(np.arange(1000), replace=False, size=1000)
train = assignments[:700]
test = assignments[700:]
train_set = data[train]
test_set = data[test]
train_classes = classes[train]
test_classes = classes[test]
"""
Explanation: Now we will divide the data into training and test sets (70/30 split).
End of explanation
"""
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
"""
Explanation: Creating and Training Keras Neural Network
End of explanation
"""
model.add(Dense(128, input_shape=(625,), activation='sigmoid'))
model.add(Dense(1, input_shape=(128,), activation='sigmoid'))
model.output_shape
"""
Explanation: We are currently using a simple neural network model with a 128 unit hidden layer with a sigmoid activation function. The 25 x 25 pixel postage stamps are passed in as a 1-d array for a total of 625 input features. The final output is a binary classification where 1 means a positive identification as an asteroid-like object.
End of explanation
"""
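As a sanity check on the architecture just described, the parameter count of this two-layer network can be worked out by hand (each Dense layer has inputs × units weights plus one bias per unit); a quick back-of-the-envelope calculation:

```python
# Dense layer parameters = inputs * units + units (one bias per unit)
hidden_params = 625 * 128 + 128   # 625 input features into the 128-unit hidden layer
output_params = 128 * 1 + 1       # 128 hidden units into the single output unit
total_params = hidden_params + output_params
print(total_params)  # 80257
```

This should match what `model.summary()` reports for the model above.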
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
"""
Explanation: The loss function we choose is the binary cross-entropy function.
End of explanation
"""
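For a single prediction, binary cross-entropy is $-(y \log p + (1-y) \log(1-p))$, averaged over the batch. A minimal NumPy sketch (not part of the original notebook) of what Keras computes here:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0)
    p = np.clip(y_pred, eps, 1 - eps)
    # Mean of -(y*log(p) + (1-y)*log(1-p)) over the batch
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0])
y_pred = np.array([0.8, 0.2])
loss = binary_cross_entropy(y_true, y_pred)  # both terms equal -log(0.8)
print(loss)
```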
model.fit(train_set, train_classes, batch_size=32, verbose=0, nb_epoch=100)
"""
Explanation: We fit our model and iterate 100 times in the optimization process.
End of explanation
"""
score = model.evaluate(test_set, test_classes, verbose=0)
print score, model.metrics_names
"""
Explanation: Evaluating Results
Once the model is fit we evaluate how it does on the test set and calculate the accuracy. This fit gets to 100% accuracy with our current training data.
End of explanation
"""
class_results = model.predict_classes(test_set, batch_size=32)
pos_results = np.where(class_results==1.)[0]
print pos_results
plt.imshow(test_set[pos_results[6]].reshape(25,25), cmap=plt.cm.Greys_r,
interpolation=None)
"""
Explanation: Here we take a look at the results with the class predictor and plot one of the positive identifications.
End of explanation
"""
model.save('../data/kbmod_model.h5')
"""
Explanation: Save Model
Satisfied with our results we save the model to use in real analysis.
End of explanation
"""
|
hailing-li/hailing-li.github.io | HW4/Frequent Itemset.ipynb | mit | import sqlite3
import pandas as pd
from pprint import pprint
from pandas import DataFrame
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import math
import numpy as np
conn = sqlite3.connect('bicycle.db')
c=conn.cursor()
c.execute('SELECT LoTemp, Precip, Manhattan FROM bicycle')
data=c.fetchall()
"""
Explanation: The New York Bike Load dataset contains bike load information for 4 different bridges, the Brooklyn Bridge, Manhattan Bridge, Williamsburg Bridge and Queensboro Bridge, along with the date, day, high temperature, low temperature and precipitation.
In my frequent itemset analysis, I would like to examine the relationship between temperature, precipitation and the bike load on the Manhattan Bridge. I would like to know what, in most cases, causes the bike load on the bridge to increase or decrease. For example, when the temperature and the precipitation both go up, does the bike load go up or down in most cases?
To examine these relationships, we need to transform the data so that it represents the change between 2 consecutive days. If the temperature goes up, we write 1; if it goes down, we write -1; if it does not change, we write 0. After that we can use the PyMining package to determine which combinations of -1, 0 and 1 occur most frequently.
Step1: Get the data needed from the database
End of explanation
"""
Data=DataFrame(data, columns=[ 'LoTemp', 'Precip', 'Manhattan'])
n=len(Data)
newData = DataFrame( index=range(0,n-1),columns=[ 'LoTemp', 'Precip', 'Manhattan'])
Data.loc[1,'LoTemp']
newData.loc[1,'LoTemp']=1
for x in range(0,n-1):
for y in [ 'LoTemp', 'Precip', 'Manhattan']:
if (Data.loc[x,y]<Data.loc[x+1,y]):
newData.loc[x, y]=1
elif (Data.loc[x,y]>Data.loc[x+1,y]):
newData.loc[x, y]=-1
else:
newData.loc[x, y]=0
datalist=newData.values.tolist()
"""
Explanation: Step2: Transform the Data
In this step, I need to create another dataframe and use a for loop to fill it in based on the old dataframe.
End of explanation
"""
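The loop above can also be expressed in vectorized form: `np.diff` gives the day-to-day change and `np.sign` maps it to -1, 0 or 1. A small sketch with made-up temperature values (this shortcut is not in the original notebook):

```python
import numpy as np

values = np.array([61, 58, 58, 65])   # e.g. a column of daily low temperatures
changes = np.sign(np.diff(values))    # +1 if next day is higher, -1 if lower, 0 if unchanged
print(changes)                        # [-1  0  1]
```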
from pymining import seqmining
freq_seqs = seqmining.freq_seq_enum(datalist, 20)
sorted(freq_seqs)
conn.close()
"""
Explanation: Step3: PyMining
In my data, each column represents a different attribute, so the order of values within each row actually matters. I decided to use the sequence mining method.
End of explanation
"""
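If PyMining is not available, the basic idea of counting how often each change pattern occurs can be sketched with the standard library. This simplified version only counts whole rows, whereas seqmining enumerates all frequent subsequences; the rows here are hypothetical (LoTemp, Precip, Manhattan) change triples:

```python
from collections import Counter

rows = [(1, -1, 1), (1, -1, 1), (-1, 0, -1), (1, -1, 1)]  # hypothetical change triples
counts = Counter(rows)
most_common_pattern, frequency = counts.most_common(1)[0]
print(most_common_pattern, frequency)  # (1, -1, 1) 3
```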
|
hashiprobr/redes-sociais | encontro04.ipynb | gpl-3.0 | import numpy as np
import socnet as sn
import easyplot as ep
"""
Explanation: Session 04: Support for Spectral Graph Analysis
This guide was written to help you reach the following objectives:
recall basic concepts of analytic geometry and linear algebra;
explain basic concepts of the adjacency matrix.
The following libraries will be used:
End of explanation
"""
from random import randint, uniform
from math import pi, cos, sin
NUM_PAIRS = 10
NUM_FRAMES = 10
# returns a random positive unit vector
def random_pair():
angle = uniform(0, pi / 2)
return np.array([cos(angle), sin(angle)])
# devolve uma cor aleatória
def random_color():
r = randint(0, 255)
g = randint(0, 255)
b = randint(0, 255)
return (r, g, b)
# matrix whose eigenvector we want to find
A = np.array([
[ 2, -1],
[-1, 2]
])
# random positive unit vectors and random colors
pairs = []
colors = []
for i in range(NUM_PAIRS):
pairs.append(random_pair())
colors.append(random_color())
frames = []
for i in range(NUM_FRAMES):
frames.append(ep.frame_vectors(pairs, colors))
# multiply each vector by A
pairs = [A.dot(pair) for pair in pairs]
ep.show_animation(frames, xrange=[-5, 5], yrange=[-5, 5])
"""
Explanation: Terminology and notation
A scalar $\alpha \in \mathbb{R}$ is denoted by a lowercase Greek letter.
A vector $a \in \mathbb{R}^n$ is denoted by a lowercase Roman letter.
A matrix $A \in \mathbb{R}^{n \times m}$ is denoted by an uppercase Roman letter.
Analytic geometry
Consider two vectors, $a = (\alpha_0, \ldots, \alpha_{n-1})$ and $b = (\beta_0, \ldots, \beta_{n-1})$. The inner product of these vectors is denoted by $a \cdot b$ and defined as
$\sum^{n-1}_{i = 0} \alpha_i \beta_i$.
We say that $a$ and $b$ are orthogonal if $a \cdot b = 0$.
The norm of $a$ is denoted by $\|a\|$ and defined as $\sqrt{a \cdot a}$, that is, $\sqrt{\sum^{n-1}_{i = 0} \alpha^2_i}$.
We say that $a$ is a unit vector if $\|a\| = 1$.
Normalizing $a$ means taking the unit vector $\frac{a}{\|a\|}$.
Linear algebra
Consider a set of vectors $a_0, \ldots, a_{m-1}$. A linear combination of these vectors is a sum $\gamma_0 a_0 + \cdots + \gamma_{m-1} a_{m-1}$.
We say that $a_0, \ldots, a_{m-1}$ is a basis if every vector in $\mathbb{R}^n$ is a linear combination of these vectors.
Consider a matrix $A$. Its transpose is denoted by $A^t$ and defined as the matrix such that, for every $i$ and $j$, the element in row $i$ and column $j$ of $A$ is the element in row $j$ and column $i$ of $A^t$.
In multiplications, a vector is by default "standing", that is, a matrix with a single column. It follows that the product $Ab$ is a linear combination of the columns of $A$.
As a consequence, the transpose of a vector is by default "lying down", that is, a matrix with a single row. It follows that the product $b^t A$ is the transpose of a linear combination of the rows of $A$.
Eigenvectors and eigenvalues
Consider a vector $b$ and a matrix $A$. We say that $b$ is an eigenvector of $A$ if there exists $\lambda$ such that
$$Ab = \lambda b.$$
In that case, we say that $\lambda$ is the eigenvalue of $A$ corresponding to $b$.
Note that multiplication by the matrix can change the magnitude of an eigenvector, but cannot change its direction. This geometric interpretation allows us to visualize a surprisingly simple algorithm for obtaining an eigenvector.
End of explanation
"""
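The definition can be checked numerically for the matrix $A$ used above: assuming its eigenpairs are $(1, [1, 1])$ and $(3, [1, -1])$, a quick NumPy verification that $Ab = \lambda b$ holds for both:

```python
import numpy as np

A = np.array([[2, -1],
              [-1, 2]])
b1 = np.array([1.0, 1.0])    # eigenvector with eigenvalue 1
b2 = np.array([1.0, -1.0])   # eigenvector with eigenvalue 3

ok1 = np.allclose(A.dot(b1), 1 * b1)
ok2 = np.allclose(A.dot(b2), 3 * b2)
print(ok1, ok2)  # True True
```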
# normalize a vector
def normalize(a):
return a / np.linalg.norm(a)
# random positive unit vectors and random colors
pairs = []
colors = []
for i in range(NUM_PAIRS):
pairs.append(random_pair())
colors.append(random_color())
frames = []
for i in range(NUM_FRAMES):
frames.append(ep.frame_vectors(pairs, colors))
# multiply each vector by A and normalize
pairs = [normalize(A.dot(pair)) for pair in pairs]
ep.show_animation(frames, xrange=[-1, 1], yrange=[-1, 1])
"""
Explanation: Note that the multiplications by $A$ make the magnitude of the vectors grow indefinitely, while the direction converges. To make this clearer, let us normalize after each multiplication.
"""
sn.graph_width = 320
sn.graph_height = 180
g = sn.load_graph('encontro02/3-bellman.gml', has_pos=True)
for n in g.nodes():
g.node[n]['label'] = str(n)
sn.show_graph(g, nlab=True)
matrix = sn.build_matrix(g)
print(matrix)
"""
Explanation: Therefore, the algorithm converges to a direction that multiplication by $A$ cannot change. This matches the definition of an eigenvector given above!
It is worth emphasizing, however, that not every matrix guarantees convergence.
Adjacency matrix
Consider a graph $(N, E)$ and a matrix $A \in \{0, 1\}^{|N| \times |N|}$. Denoting by $\alpha_{ij}$ the element in row $i$ and column $j$, we say that $A$ is the adjacency matrix of the graph $(N, E)$ if:
$$\textrm{assuming } (N, E) \textrm{ undirected}, \alpha_{ij} = 1 \textrm{ if } \{i, j\} \in E \textrm{ and } \alpha_{ij} = 0 \textrm{ otherwise};$$
$$\textrm{assuming } (N, E) \textrm{ directed}, \alpha_{ij} = 1 \textrm{ if } (i, j) \in E \textrm{ and } \alpha_{ij} = 0 \textrm{ otherwise}.$$
Let us build the adjacency matrix of a graph from the previous sessions.
End of explanation
"""
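For reference, the adjacency matrix of a small undirected graph can also be built directly with NumPy (a toy 4-node graph, not the socnet example above); row sums of the matrix then give the node degrees:

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # undirected edges of a 4-node graph
n = 4
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected: the matrix is symmetric

degrees = A.sum(axis=1)
print(degrees)  # [2 2 3 1]
```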
|
uwbmrb/BMRB-API | documentation/notebooks/Vicinal Disulfides.ipynb | gpl-3.0 | %%capture
!pip install requests;
import requests
"""
Explanation: Example notebook for using the PDB and BMRB APIs for structural biology data science applications
Introduction
This notebook is designed to walk through some sample queries of both the PDB and BMRB in order to correlate NMR parameters with structure. It is hoped that this will give some guidance as to the utility of the wwPDB API's as well as to an overall strategy of how to gather data from the different databases. This is not intended to be a tutorial on Python and no claim is made about the efficiency or correctness of the code.
Research Problem
For this example we will explore vicinal disulfide bonds in proteins - disulfide bonds between adjacent cysteines in a protein. Vicinal disulfide bonds are rare in nature but can be biologically important<sup>1</sup>. As the protein backbone is strained from such a linkage, the hypothetical research question for this notebook is whether there are any abnormal NMR chemical shifts associated with such a structure.
Figure 1. This illustration shows a comparison of the abnormal dihedral angles observed for vicinal disulfides (right). This figure is from the poster presented at the 46th Experimental NMR Conference in Providence, RI. Susan Fox-Erlich, Heidi J.C. Ellis, Timothy O. Martyn, & Michael R. Gryk. (2005) StrucCheck: a JAVA Application to Derive Geometric Attributes from Arbitrary Subsets of Spatial Coordinates Obtained from the PDB.
<sup>1</sup>Xiu-hong Wang, Mark Connor, Ross Smith, Mark W. Maciejewski, Merlin E.H. Howden, Graham M. Nicholson, Macdonald J. Christie & Glenn F. King. Discovery and characterization of a family of insecticidal neurotoxins with a rare vicinal disulfide bridge. Nat Struct Mol Biol 7, 505–513 (2000). https://www.nature.com/articles/nsb0600_505 https://doi.org/10.1038/nsb0600_505
Strategy
Our overall strategy will be to query the RCSB PDB for all entries which have vicinal disulfide bonds. We will then cross-reference those entries with the BMRB in order to get available chemical shifts. Since we are interested in NMR chemical shifts, when we query the PDB it will be useful to limit our search to structures determined by NMR.
First we need to install and import the REST module which will be required for the PDB and BMRB.
https://www.rcsb.org/pages/webservices
https://github.com/uwbmrb/BMRB-API
End of explanation
"""
pdbAPI = "https://search.rcsb.org/rcsbsearch/v1/query?json="
disulfide_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "greater_or_equal", "value": 1, "attribute": "rcsb_entry_info.disulfide_bond_count"}}'
NMR_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "exact_match", "value": "NMR", "attribute": "rcsb_entry_info.experimental_method"}}'
GK_filter = '{"type": "terminal", "service": "text", "parameters": {"operator": "exact_match", "value": "King, G.F.", "attribute": "audit_author.name"}}'
"""
Explanation: Building the PDB Query (Search API)
In order to find all PDB entries with vicinal disulfides, we will first search for all entries with at least one disulfide bond. This is the disulfide_filter portion of the query.
In addition, as we are interested in the chemical shifts for vicinal disulfides, we will also restrict the results to only solution NMR studies.
Finally, as this is an example for illustration purposes and we want to keep the number of results small, we will further restrict the results to stuctures determined by Glenn King. Hi Glenn!
This section makes use of the Search API at PDB. Later, we will use the Data API.
End of explanation
"""
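Instead of writing the JSON filters as raw strings, they can be built as Python dicts and serialized with `json.dumps`, which avoids quoting mistakes; a sketch of the disulfide filter above built this way:

```python
import json

disulfide_filter_dict = {
    "type": "terminal",
    "service": "text",
    "parameters": {
        "operator": "greater_or_equal",
        "value": 1,
        "attribute": "rcsb_entry_info.disulfide_bond_count",
    },
}
disulfide_filter = json.dumps(disulfide_filter_dict)
print(disulfide_filter)
```

The same approach extends to the combined query: build the whole node tree as nested dicts, then serialize once.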
filters = '{"type": "group", "logical_operator": "and", "nodes": [' + disulfide_filter + ',' + NMR_filter + ',' + GK_filter + ']}'
"""
Explanation: Now we can combine these three filters together using AND
End of explanation
"""
full_query = '{"query": ' + filters + ', "request_options": {"return_all_hits": true}, "return_type": "polymer_instance"}'
"""
Explanation: And add the return information. Note that we are specifying the polymer_instance ID's as that is where the disulfide bonds are noted.
End of explanation
"""
response = requests.get(pdbAPI + full_query)
print(response) # should return 200
print(type(response.json()))
#print(response.json()) #uncomment this line if you want to see the results
"""
Explanation: And finally submit the requst to the PDB. The response should be 200 if the query was successful.
End of explanation
"""
pdb_results = response.json()
pdb_list = []
for x in pdb_results['result_set']:
pdb_list.append (x['identifier'])
print (pdb_list)
"""
Explanation: Next we will extract just the PDB codes from our results and build a list.
End of explanation
"""
data_query_base = "https://data.rcsb.org/rest/v1/core/polymer_entity_instance/"
def swapSymbols(iter):
return iter.replace(".","/")
pdb_list2 = list(map(swapSymbols,pdb_list))
print(pdb_list2)
"""
Explanation: PDB Data API
The basics of the data API are illustrated with this link:
https://data.rcsb.org/rest/v1/core/polymer_entity_instance/1DL0/A
This illustrates the REST query string, as well as how we need to append the PDB entry ID and polymer instance to the end.
End of explanation
"""
data_response = requests.get(data_query_base + "1DL0/A")
print(data_response) # should return 200
vds_list = []
for instance in pdb_list2:
data_response = requests.get(data_query_base + instance)
if data_response.status_code == 200:
data_result = data_response.json()
for x in data_result['rcsb_polymer_struct_conn']:
if (x['connect_type'] == 'disulfide bridge' and x['connect_partner']['label_seq_id']-x['connect_target']['label_seq_id']==1):
vds_list.append (data_result['rcsb_polymer_entity_instance_container_identifiers']['entry_id'])
print(vds_list)
"""
Explanation: Now we can loop through each PDB entry and request the polymer_entity_instance information. We will only care about disulfide bridges between adjacent residues.
End of explanation
"""
BMRB_LookupString = 'http://api.bmrb.io/v2/search/get_bmrb_ids_from_pdb_id/'
BMRB_ID_List = []
for PDB_ID in vds_list:
BMRB_response = requests.get(BMRB_LookupString + PDB_ID)
if BMRB_response.status_code == 200:
BMRB_result = BMRB_response.json()
for x in BMRB_result:
for y in x['match_types']:
if y == 'Author Provided':
BMRB_ID_List.append (x['bmrb_id'])
print(BMRB_ID_List)
chemical_shifts_list = []
for ID in BMRB_ID_List:
x = requests.get("http://api.bmrb.io/v2/entry/" + ID + "?saveframe_category=assigned_chemical_shifts")
chemical_shifts_list.append (x.json())
#print(chemical_shifts_list)
"""
Explanation: Our list is small (intentionally) but we can now use it to fetch chemical shifts from the BMRB.
BMRB API
Our first step is to find the corresponding BMRB entries for the PDB entries in our list. The query we want is shown below:
http://api.bmrb.io/v2/search/get_bmrb_ids_from_pdb_id/
End of explanation
"""
bmrb_link = "https://bmrb.io/ftp/pub/bmrb/nmr_pdb_integrated_data/adit_nmr_matched_pdb_bmrb_entry_ids.csv"
"""
Explanation: Alternate Approach
Look up through the BMRB adit_nmr_match csv file
loop_
_Assembly_db_link.Author_supplied
_Assembly_db_link.Database_code
_Assembly_db_link.Accession_code
_Assembly_db_link.Entry_mol_code
_Assembly_db_link.Entry_mol_name
_Assembly_db_link.Entry_experimental_method
_Assembly_db_link.Entry_structure_resolution
_Assembly_db_link.Entry_relation_type
_Assembly_db_link.Entry_details
_Assembly_db_link.Entry_ID
_Assembly_db_link.Assembly_ID
yes PDB 1AXH . . . . .
End of explanation
"""
|
Leguark/pynoddy | docs/notebooks/.ipynb_checkpoints/Feature-Sampling-checkpoint.ipynb | gpl-2.0 | from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
import sys, os
import matplotlib.pyplot as plt
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
%matplotlib inline
reload(pynoddy.history)
reload(pynoddy.events)
reload(pynoddy)
history = "feature_model.his"
output_name = "feature_out"
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 5,
'layer_names' : ['layer 1', 'layer 2', 'layer 3',
'layer 4', 'layer 5'],
'layer_thickness' : [1500, 500, 500, 1500, 500]}
nm.add_event('stratigraphy', strati_options )
fold_options = {'name' : 'Fold',
'pos' : (4000, 3500, 5000),
'amplitude' : 200,
'dip_dir' : 135.0,
'wavelength' : 5000}
nm.add_event('fold', fold_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_W',
'pos' : (4000, 3500, 5000),
'dip_dir' : 90,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 3500, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
nm.write_history(history)
# Change cube size
nm1 = pynoddy.history.NoddyHistory(history)
nm1.change_cube_size(200)
nm1.write_history(history)
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
import pynoddy.output
reload(pynoddy.output)
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('x', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False, fig_filename = "ex01_faults_combined.eps",
cmap = 'YlOrRd') # note: YlOrRd colourmap should be suitable for colorblindness!
nout.export_to_vtk(vtk_filename = "feature_model")
import os
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4.his")
his_gipps = pynoddy.history.NoddyHistory(history_file)
his_gipps.events[2].properties
import numpy as np
np.unique(nout.block)
"""
Explanation: Using pynoddy to generate features in geological model space
End of explanation
"""
cov = [[0.08, 0.0, 0.],
[0.0, 0.001, 0.],
[0., 0., 0.05]]
# define mean values for features
feature1_means = [1.0, 1.5, 1.2, 1.1, 1.9]
feature2_means = [5.1, 5.15, 5.12, 5.02, 5.07]
feature3_means = [1.0, 1.2, 1.4, 1.2, 1.0]
# resort into unit means
means_units = [[m1, m2, m3] for m1, m2, m3 in
zip(feature1_means, feature2_means, feature3_means)]
print means_units
f1, f2, f3 = np.random.multivariate_normal(means_units[0], cov, 1000).T
n1 = int(np.sum(nout.block[nout.block == 1.0]))
# sample for geological unit 1
f1, f2, f3 = np.random.multivariate_normal(means_units[0], cov, n1).T
tmp = np.copy(nout.block)
tmp[tmp == 1.0] = f1
plt.imshow(tmp[0,:,:].T, origin = 'lower', interpolation = 'nearest')  # matplotlib's imshow accepts 'lower' or 'upper' for origin
"""
Explanation: Adding features to geological layers
The first step is to define the covariance matrix and mean values for all features and for all geological units. Then, for each cell in the model, a random feature value is generated.
For this test, we consider the following model:
- each layer has a different feature mean value
the covariance matrix is identical for all layers (assuming some physical relationship, for example between porosity/permeability, or density and vp)
End of explanation
"""
# create empty feature fields:
feature_field_1 = np.copy(nout.block)
feature_field_2 = np.copy(nout.block)
feature_field_3 = np.copy(nout.block)
for unit_id in np.unique(nout.block):
print unit_id
n_tmp = int(np.sum(nout.block == unit_id))
f1, f2, f3 = np.random.multivariate_normal(means_units[int(unit_id-1)], cov, n_tmp).T
feature_field_1[feature_field_1 == unit_id] = f1
feature_field_2[feature_field_2 == unit_id] = f2
feature_field_3[feature_field_3 == unit_id] = f3
# Export feature fields to VTK (via pynoddy output file)
nout.block = feature_field_1
nout.export_to_vtk(vtk_filename = "feature_field_1")
nout.block = feature_field_2
nout.export_to_vtk(vtk_filename = "feature_field_2")
nout.block = feature_field_3
nout.export_to_vtk(vtk_filename = "feature_field_3")
# write to feature file for Jack
feature_file = open("features_lowres.csv", 'w')
feature_file.write("x, y, z, f1, f2, f3\n")
for zz in range(nout.nz):
for yy in range(nout.ny):
for xx in range(nout.nx):
feature_file.write("%d, %d, %d, %.5f, %.5f, %.5f\n" %
(xx, yy, zz, feature_field_1[xx, yy, zz],
feature_field_2[xx, yy, zz], feature_field_3[xx, yy, zz]))
feature_file.close()
nout.n_total
"""
Explanation: OK, that seems to work. Now for all units:
End of explanation
"""
|
Upward-Spiral-Science/spect-team | Code/Assignment-9/Independent Analysis-3.ipynb | apache-2.0 | # Standard
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Dimensionality reduction and Clustering
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn import manifold, datasets
from sklearn import preprocessing
from itertools import cycle
# Plotting tools and classifiers
from matplotlib.colors import ListedColormap
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn import cross_validation
from sklearn.cross_validation import LeaveOneOut
# Let's read the data in and clean it
def get_NaNs(df):
columns = list(df.columns.get_values())
row_metrics = df.isnull().sum(axis=1)
rows_with_na = []
for i, x in enumerate(row_metrics):
if x > 0: rows_with_na.append(i)
return rows_with_na
def remove_NaNs(df):
rows_with_na = get_NaNs(df)
cleansed_df = df.drop(df.index[rows_with_na], inplace=False)
return cleansed_df
initial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced_2.csv')
cleansed_df = remove_NaNs(initial_data)
# Let's also get rid of nominal data
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
X = cleansed_df.select_dtypes(include=numerics)
print X.shape
"""
Explanation: Independent Analysis 3 - Srinivas (handle: thewickedaxe)
PLEASE SCROLL TO THE BOTTOM OF THE NOTEBOOK TO FIND THE QUESTIONS AND THEIR ANSWERS AND OTHER GENERAL FINDINGS
In this notebook we explore throwing out everything but the basal ganglia, performing PCA, and trying our classification experiments
Initial Data Cleaning
End of explanation
"""
# Let's extract ADHd and Bipolar patients (mutually exclusive)
ADHD = X.loc[X['ADHD'] == 1]
ADHD = ADHD.loc[ADHD['Bipolar'] == 0]
BP = X.loc[X['Bipolar'] == 1]
BP = BP.loc[BP['ADHD'] == 0]
print ADHD.shape
print BP.shape
# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions
ADHD = pd.DataFrame(ADHD.drop(['Patient_ID', 'Age', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
BP = pd.DataFrame(BP.drop(['Patient_ID', 'Age', 'ADHD', 'Bipolar'], axis = 1, inplace = False))
print ADHD.shape
print BP.shape
"""
Explanation: We've now dropped the last of the inexplicable discrete numerical data, and removed children from the mix.
Extracting the samples we are interested in
End of explanation
"""
ADHD_clust = pd.DataFrame(ADHD)
BP_clust = pd.DataFrame(BP)
# This is a consequence of how we dropped columns, I apologize for the hacky code
data = pd.concat([ADHD_clust, BP_clust])
"""
Explanation: Clustering and other grouping experiments
End of explanation
"""
kmeans = KMeans(n_clusters=2)
kmeans.fit(data.get_values())
labels = kmeans.labels_
cluster_centers = kmeans.cluster_centers_
print('Estimated number of clusters: %d' % len(cluster_centers))
print data.shape
for label in [0, 1]:
ds = data.get_values()[np.where(labels == label)]
plt.plot(ds[:,0], ds[:,1], '.',alpha=0.6)
lines = plt.plot(cluster_centers[label,0], cluster_centers[label,1], 'o', alpha=0.7)
"""
Explanation: K-Means clustering
End of explanation
"""
ADHD_iso = pd.DataFrame(ADHD_clust)
BP_iso = pd.DataFrame(BP_clust)
BP_iso['ADHD-Bipolar'] = 0
ADHD_iso['ADHD-Bipolar'] = 1
print BP_iso.columns
data = pd.DataFrame(pd.concat([ADHD_iso, BP_iso]))
class_labels = data['ADHD-Bipolar']
data = data.drop(['ADHD-Bipolar'], axis = 1, inplace = False)
print data.shape
data_backup = data.copy(deep = True)
data = data.get_values()
# Based on instructor suggestion i'm going to run a PCA to examine dimensionality reduction
pca = PCA(n_components = 2, whiten = "True").fit(data)
data = pca.transform(data)
print sum(pca.explained_variance_ratio_)
# Leave one Out cross validation
def leave_one_out(classifier, values, labels):
leave_one_out_validator = LeaveOneOut(len(values))
classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)
accuracy = classifier_metrics.mean()
deviation = classifier_metrics.std()
return accuracy, deviation
svc = SVC(gamma = 2, C = 1)
bc = BaggingClassifier(n_estimators = 22)
gb = GradientBoostingClassifier()
dt = DecisionTreeClassifier(max_depth = 22)
qda = QDA()
gnb = GaussianNB()
vc = VotingClassifier(estimators=[('gb', gb), ('bc', bc), ('gnb', gnb)],voting='hard')
classifier_accuracy_list = []
classifiers = [(gnb, "Gaussian NB"), (qda, "QDA"), (svc, "SVM"), (bc, "Bagging Classifier"),
(vc, "Voting Classifier"), (dt, "Decision Trees")]
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, data, class_labels)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
"""
Explanation: Classification Experiments
Let's experiment with a bunch of classifiers
End of explanation
"""
X = data_backup
y = class_labels
X1 = sm.add_constant(X)
est = sm.OLS(y, X1).fit()
est.summary()
"""
Explanation: given the number of people who have ADHD and Bipolar disorder the chance line would be at around 0.6. The classifiers fall between 0.7 and 0.75 which makes them just barely better than chance. This is still an improvement over last time.
Running Multivariate regression
On data pre dimensionality reduction
End of explanation
"""
X = data
y = class_labels
X1 = sm.add_constant(X)
est = sm.OLS(y, X1).fit()
est.summary()
"""
Explanation: On data post dimensionality reduction
End of explanation
"""
|
caromedellin/Python-notes | python-intro/Untitled1.ipynb | mit | import csv
import requests
"""
Explanation: APIs
There are a few cases where a static data set isn't a good enough solution.
An Application Programming Interface (API) is an alternative; it allows you to dynamically query and retrieve data.
End of explanation
"""
response = requests.get("http://api.open-notify.org/iss-now.json")
response.status_code
"""
Explanation: Status Codes
200 -- everything went okay, and the result has been returned (if any)
301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.
401 -- the server thinks you're not authenticated. This happens when you don't send the right credentials to access an API (we'll talk about this in a later mission).
400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things.
403 -- the resource you're trying to access is forbidden -- you don't have the right permissions to see it.
404 -- the resource you tried to access wasn't found on the server.
End of explanation
"""
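The standard library already knows the official reason phrase for each of these codes; a small sketch using `http.HTTPStatus` to interpret a code without hitting the network:

```python
from http import HTTPStatus

for code in (200, 301, 400, 401, 403, 404):
    status = HTTPStatus(code)
    print(code, status.phrase)
# 200 OK
# 301 Moved Permanently
# 400 Bad Request
# 401 Unauthorized
# 403 Forbidden
# 404 Not Found
```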
# Set up the parameters we want to pass to the API.
# This is the latitude and longitude of New York City.
parameters = {"lat": 40.71, "lon": -74}
# Make a get request with the parameters.
response = requests.get("http://api.open-notify.org/iss-pass.json", params=parameters)
# Print the content of the response (the data the server returned)
print(response.content)
# This gets the same data as the command above
response = requests.get("http://api.open-notify.org/iss-pass.json?lat=40.71&lon=-74")
print(response.content)
"""
Explanation: Query Parameters
A 400 status code indicates a bad request, in this case it means that we need to add some parameters to the request.
End of explanation
"""
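The two request forms above are equivalent because requests simply URL-encodes the params dict into the query string; the same string can be produced with the standard library:

```python
from urllib.parse import urlencode

parameters = {"lat": 40.71, "lon": -74}
query_string = urlencode(parameters)
print(query_string)  # lat=40.71&lon=-74
```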
|
taesiri/noteobooks | old:misc/graph_analysis/check_planarity.ipynb | mit | # generate random graph
G = nx.generators.fast_gnp_random_graph(10, 0.4)
# check planarity and draw the graph
print("The graph is {0} planar".format("" if planarity.is_planar(G) else "not"))
if(planarity.is_planar(G)):
planarity.draw(G)
nx.draw(G)
"""
Explanation: Hello Networkx and Planarity
NetworkX Homepage
Planarity's Github Page
Pre-generated Graph DB - On Google Drive
End of explanation
"""
# for the sake of this experiment, I used a fixed 32x32 grid for the adjacency matrix. This is a huge assumption, but this is just a stupid test!
def create_db():
# create empty list
planar_list = []
non_planar_list = []
# generate random graphs and store their adjacency lists
for p in range(1, 95):
for _ in range(50000):
G = nx.generators.fast_gnp_random_graph(32, p/100.0)
if planarity.is_planar(G):
planar_list.append(nx.to_numpy_array(G))
else:
if(len(planar_list) > len(non_planar_list)):
non_planar_list.append(nx.to_numpy_array(G))
# let see how many graph we've got
print(len(planar_list))
print(len(non_planar_list))
# save planar graphs as numpy nd-array
planar_db = np.array(planar_list)
np.save("planar_db.npy", planar_db)
# save non planar graphs as numpy nd-array
non_planar_db = np.array(non_planar_list)
np.save("not_planar_db.npy", non_planar_db)
"""
Explanation: Naïve Database creator script
End of explanation
"""
planar_db = np.load("planar_db.npy")
non_planar_db = np.load("not_planar_db.npy")
np.random.shuffle(planar_db)
np.random.shuffle(non_planar_db)
all_data = np.append(planar_db, non_planar_db, axis=0)
all_labels = np.append(np.ones(len(planar_db)), np.zeros(len(non_planar_db))).astype(int)
# shuffling ...
permutation = np.random.permutation(all_data.shape[0])
all_data = all_data[permutation]
all_labels = all_labels[permutation]
training_set = all_data[:250000]
training_label = all_labels[:250000]
test_set = all_data[250000:]
test_label = all_labels[250000:]
"""
Explanation: Create a new database or load one!
End of explanation
"""
import torch
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 2)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
# Let's go CUDA!
net.cuda()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
running_loss = 0.0
batch_size = 5000
train_size = training_set.shape[0]
batch_in_epoch = int(train_size/batch_size) + 1
for epoch in range(500):
for i in range(batch_in_epoch):
batch_data = training_set[i*batch_size: (i+1)*batch_size]
if(batch_data.shape[0] == 0):
continue
batch_data = torch.from_numpy(training_set[i*batch_size: (i+1)*batch_size]).float().unsqueeze_(1)
batch_label = torch.from_numpy(training_label[i*batch_size: (i+1)*batch_size]).long()
inputs, labels = Variable(batch_data.cuda()), Variable(batch_label.cuda())
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.8f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
test_tensor = torch.from_numpy(test_set).float().unsqueeze_(1)
output = net(Variable(test_tensor.cuda()))
_, predicted = torch.max(output.data, 1)
predicted = predicted.cpu().numpy()
correct = (predicted == test_label).sum()
print("prediction accuracy is {0}".format(correct / test_label.size))
wrong_predictions = test_set[predicted != test_label]
"""
Explanation: Create a Simple ConvNet using PyTorch
End of explanation
"""
samples = np.random.choice(len(wrong_predictions), 3)
for i in samples:
G = nx.from_numpy_matrix(wrong_predictions[i])
isolated_edges = list(nx.isolates(G))
G.remove_nodes_from(isolated_edges)
plt.figure(figsize=(21, 5))
plt.subplot(1, 4, 1)
nx.draw(G, pos=nx.circular_layout (G) )
plt.title("Circular Layout")
plt.draw()
plt.subplot(1, 4, 2)
plt.imshow(wrong_predictions[i], cmap='Greys', interpolation='nearest')
plt.title("Adjacency Matrix")
plt.subplot(1, 4, 3)
if(planarity.is_planar(G)):
planarity.draw(G)
else:
plt.text(0.5, 0.5,
'not planar',
ha='center', va='center',
fontsize=20,
color="b")
plt.title("Planar Embedding")
plt.subplot(1, 4, 4)
nx.draw(G, pos=nx.spring_layout (G) )
plt.draw()
plt.title("Spring Layout")
plt.show()
"""
Explanation: Let's plot some graphs that our ConvNet failed to classify correctly!
End of explanation
"""
|
maubarsom/biotico-tools | ipython_nb/blast_hits_visualization.ipynb | apache-2.0 | blast_cols = ["query_id","subject_id","pct_id","ali_len","mism","gap_open","q_start","q_end","s_start","s_end","e_value","bitscore"]
pax_hits = pd.read_csv("PAXhs_vs_Pw.tblastn.txt",sep="\t",header=None,names=blast_cols)
print( "Size of dataframe: {}".format(pax_hits.shape ))
pax_hits.head()
"""
Explanation: 1. Read Tblastn hits from the human PAX proteins to the salamander scaffolds
End of explanation
"""
def get_seq_len_from_fasta(fasta_file):
seqs = defaultdict(int)
current_id = ""
with open(fasta_file) as fh:
for line in fh:
if line.startswith(">"):
current_id = line.lstrip(">").rstrip("\n")
else:
seqs[current_id] += len(line.rstrip("\n"))
return seqs
pax_lengths = get_seq_len_from_fasta("PAX.Hsapiens.aa.fasta")
print(pax_lengths)
print(pax_hits.shape)
print(pax_hits[pax_hits.e_value < 1e-10].shape)
pax_hits.groupby("query_id")["subject_id"].size()
lol = pax_hits.groupby("query_id")["subject_id"].unique()
lol.apply(lambda x:len(x))
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
gene_name = "PAX7_Hsapiens"
df = pax_hits[pax_hits.query_id == gene_name]
#Order contigs by query start
contig_idx = dict( (ctg_id, idx+1 ) for idx,ctg_id in enumerate(
df[["subject_id","q_start"]].sort_values(by="q_start").drop_duplicates("subject_id",keep="first").subject_id
))
lines = [ [(q_st,contig_idx[s_id]),(q_end,contig_idx[s_id])] for s_id,q_st,q_end in
df[["subject_id","q_start","q_end"]].sort_values(by="q_start").apply(lambda row: tuple(row),axis=1)
]
lc1 = mc.LineCollection(lines[0::3],linewidths=2,color="red")
lc2 = mc.LineCollection(lines[1::3],linewidths=2,color="blue")
lc3 = mc.LineCollection(lines[2::3],linewidths=2,color="green")
fig,ax = plt.subplots()
ax.add_collection(lc1)
ax.add_collection(lc2)
ax.add_collection(lc3)
ax.axvline(pax_lengths[gene_name])
ax.autoscale()
"""
Explanation: 2. Calculate protein lenghts from human PAX proteins fasta
End of explanation
"""
!mkdir out/
#Filter by gene
df = pax_hits[pax_hits.query_id == "PAX7_Hsapiens"]
#Filter by gene region
df = df[(df.q_start > 180) & (df.q_start < 250)]
#Write to file
df[["subject_id","s_start","s_end"]].to_csv("out/pax7_middle_hits.bed",sep="\t",index=None,header=False)
#Print table
df.sort_values(by="q_start").head()
"""
Explanation: Hit extraction
End of explanation
"""
def calculate_cov(contig_set,gene_length):
bitmap = np.zeros(gene_length)
for hit in contig_set:
bitmap[hit[1]-1:hit[2]] = 1
return bitmap.sum()
gene_name = "PAX6_Hsapiens"
df = pax_hits[pax_hits.query_id == gene_name]
#Extract lists of hits for protein
hits_list = df[["subject_id","q_start","q_end"]].apply(lambda row:tuple(row),axis=1).tolist()
#Greedy Algorithm starts here
contig_set = tuple()
last_it_cvg= 0
it_cvg_delta = 11
min_cov_increase = 10
while hits_list and (it_cvg_delta > min_cov_increase):
it_best_hit = tuple()
it_best_cvg = 0
#Try which combination of current contig_set + a hit gives the largest coverage increase
for hit in hits_list:
cvg = calculate_cov( contig_set + (hit,) , pax_lengths[gene_name])
if cvg > it_best_cvg:
it_best_hit = hit
it_best_cvg = cvg
print("Max_it_cvg : {}".format(it_best_cvg))
#Calculate coverage increase
it_cvg_delta = it_best_cvg - last_it_cvg
#IF delta if better than the minimun, add the hit
if it_cvg_delta > min_cov_increase:
last_it_cvg = it_best_cvg
contig_set = contig_set + (it_best_hit,)
hits_list.remove(it_best_hit)
print(contig_set)
"""
Explanation: Greedy strategy to extract longest contigs for each gene
End of explanation
"""
|
google-research/google-research | linear_identifiability/identifiability_of_GPT_2_models.ipynb | apache-2.0 | import numpy as np
import torch
import matplotlib.pyplot as plt
from tqdm import tqdm
num_layers = 13
num_sentences = 2000
# Install and import Huggingface Transformer models
!pip install transformers ftfy spacy
from transformers import *
def get_model(model_id):
print('Loading model: ', model_id)
models = {
'gpt2': (GPT2Model, GPT2Tokenizer, 'gpt2'),
'gpt2-medium': (GPT2Model, GPT2Tokenizer, 'gpt2-medium'),
'gpt2-large': (GPT2Model, GPT2Tokenizer, 'gpt2-large'),
'gpt2-xl': (GPT2Model, GPT2Tokenizer, 'gpt2-xl'),
}
model_class, tokenizer_class, pretrained_weights = models[model_id]
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights, output_hidden_states=True, cache_dir=cache_dir)
def text2features(text):
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)]) # Add special tokens takes care of adding [CLS], [SEP], <s>... tokens in the right way for each model.
with torch.no_grad():
hidden_states = model(input_ids=input_ids)
return hidden_states
return text2features
# Compute model activations of data for an architecture.
# Returns a 2D numpy array with [n_datapoints, n_features].
# Each word is one datapoint, and converts to one feature vector.
def get_activations(f_model, data, num_layers=num_layers):
print('Computing activations...')
h = []
for i in tqdm(range(len(data))):
_data = data[i]
hiddens = f_model(_data)[2] # Get all hidden layers
hiddens = [h.numpy() for h in hiddens]
hiddens = np.concatenate(hiddens, axis=0)
hiddens = hiddens[-num_layers:]
# hiddens.shape = (num_layers, num_datapoints, num_features)
h.append(hiddens)
h = np.concatenate(h, axis=1)
print('Activations shape: ', h.shape)
return h
# Load data: a subset of the Penn Treebank dataset.
# Returns a list of strings
def get_data():
import pandas as pd
data = pd.read_json('https://raw.githubusercontent.com/nlp-compromise/penn-treebank/f96fffb8e78a9cc924240c27b25fb1dcd8974ebf/penn-data.json')
data = data[0].tolist() # First element contains raw text
return data
def compute_and_save_activations(data, model_id):
# Get models
f_model = get_model(model_id)
# Compute activations for model
h = get_activations(f_model, data)
# Save to drive
path = data_folder+'h_'+model_id+'.npy'
print("Saving to path:", path)
np.save(path, h)
def load_activations(model_id):
print("Loading activations from disk...")
x = np.load(data_folder+'h_'+model_id+'.npy')
return x
def compute_svd(x, normalize=True):
print("Normalizing and computing SVD...")
results = []
for i in range(x.shape[0]):
print(i)
_x = x[i]
if normalize:
_x -= _x.mean(axis=0, keepdims=True)
_x /= _x.std(axis=0, keepdims=True)
_x = np.linalg.svd(_x, full_matrices=False)[0]
results.append(_x)
x = np.stack(results)
return x
def save_svd(model_id, x):
np.save(data_folder+'svd_'+model_id+'.npy', x)
def load_svd(model_id):
return np.load(data_folder+'svd_'+model_id+'.npy')
def compute_and_save_svd(model_id):
x = load_activations(model_id)
x = x[-num_layers:]
print(x.shape)
h = compute_svd(x)
save_svd(model_id, h)
def get_cca_similarity_fast(x, y, i1, i2, k=None):
"""Performs singular vector CCA on two matrices.
Args:
X: numpy matrix, activations from first network (num_samples x num_units1).
Y: numpy matrix, activations from second network (num_samples x num_units2).
k: int or None, number of components to keep before doing CCA (optional:
default=None, which means all components are kept).
Returns:
corr_coeff: numpy vector, canonical correlation coefficients.
Reference:
Raghu M, Gilmer J, Yosinski J, Sohl-Dickstein J. "SVCCA: Singular Vector
Canonical Correlation Analysis for Deep Learning Dynamics and
Interpretability." NIPS 2017.
"""
return np.linalg.svd(np.dot(x[i1, :, :k].T, y[i2, :, :k]), compute_uv=False)
def compute_correlations(x, y, ks):
num_layers = min(x.shape[0], y.shape[0])
results = {}
for k in ks:
# Calculate all correlation coefficients
rs = np.asarray([get_cca_similarity_fast(x, y, i, i, k) for i in range(-num_layers, 0)])
results[k] = rs
return results
def plot_correlations(results):
i = 1
plt.figure(figsize=[len(results)*5,4])
for k in results:
plt.subplot(1,len(results),i)
plt.ylim(0, 1)
plt.xlabel('CCA coefficient index')
plt.ylabel('$r$ (correlation coefficient)')
for j in [0,3,6,9,12]:
label = "last"
if j < 12: label += " - "+str(12-j)
plt.plot(results[k][j], label=label)
plt.title(str(k)+' principal components')
plt.legend()
i += 1
"""
Explanation: Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
"""
# Get data
all_data = get_data()
print(len(all_data))
data = all_data[:num_sentences]
# Compute and save activations
compute_and_save_activations(data, 'gpt2')
compute_and_save_activations(data, 'gpt2-medium')
compute_and_save_activations(data, 'gpt2-large')
compute_and_save_activations(data, 'gpt2-xl')
"""
Explanation: Compute and save activations
End of explanation
"""
compute_and_save_svd('gpt2')
compute_and_save_svd('gpt2-medium')
compute_and_save_svd('gpt2-large')
compute_and_save_svd('gpt2-xl')
"""
Explanation: Compute and save SVD
End of explanation
"""
# Load SVDs
x_small = load_svd('gpt2')
x_medium = load_svd('gpt2-medium')
x_large = load_svd('gpt2-large')
x_xl = load_svd('gpt2-xl')
ks = [16,64,256,768] # Layers to compute correlations over
r_s_m = compute_correlations(x_small, x_medium, ks)
r_m_l = compute_correlations(x_medium, x_large, ks)
r_l_xl = compute_correlations(x_large, x_xl, ks)
r_s_xl = compute_correlations(x_small, x_xl, ks)
r_m_xl = compute_correlations(x_medium, x_xl, ks)
r_s_l = compute_correlations(x_small, x_large, ks)
results = [r_s_m, r_m_l, r_l_xl, r_s_xl, r_m_xl, r_s_l]
def get_mean_results(results, k):
x = np.asarray([result[k] for result in results])
x = np.mean(x, axis=0)
return x[::-1]
r_16 = get_mean_results(results, 16)[0::4]
r_64 = get_mean_results(results, 64)[0::4]
r_256 = get_mean_results(results, 256)[0::4]
r_768 = get_mean_results(results, 768)[0::4]
rs = {16: r_16, 64: r_64, 256: r_256, 768: r_768}
i = 1
plt.figure(figsize=[len(rs)*5,4])
for k in rs:
plt.subplot(1,len(rs),i)
plt.ylim(0, 1)
plt.xlabel('CCA coefficient index')
plt.ylabel('Mean correlation coefficient')
for j in range(len(rs[k])):
if j == 0: label = "last layer"
if j > 0: label = "(last - "+str(4*j)+")th layer"
plt.plot(rs[k][j], label=label)
plt.title(str(k)+' principal components')
plt.legend()
i += 1
plt.savefig(root_folder+"fig_layers.pdf")
"""
Explanation: Compute all pairwise correlations
End of explanation
"""
|
abatula/MachineLearningIntro | Diabetes_DataSet.ipynb | gpl-2.0 | # Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets # Import datasets from scikit-learn
import matplotlib.cm as cm
from matplotlib.colors import Normalize
"""
Explanation: What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the color of the walls is not, even though they both describe the house.
Features are sometimes specified as a single element of an example, $x_i$
What is a label?
A label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house.
Labels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color, species of plant/animal). They are typically specified by the letter $y$.
The Diabetes Dataset
Here, we use the Diabetes dataset, available through scikit-learn. This dataset contains information related to specific patients and disease progression of diabetes.
Examples
The dataset consists of 442 examples, each representing an individual diabetes patient.
Features
The dataset contains 10 features: Age, sex, body mass index, average blood pressure, and 6 blood serum measurements.
Target
The target is a quantitative measure of disease progression after one year.
Our goal
The goal, for this dataset, is to train a computer to predict the progression of diabetes after one year.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), and datasets (to download the iris dataset from scikit-learn). Also import colormaps to customize plot coloring and Normalize to normalize data for use with colormaps.
End of explanation
"""
# Import some data to play with
diabetes = datasets.load_diabetes()
# List the data keys
print('Keys: ' + str(diabetes.keys()))
print('Feature names: ' + str(diabetes.feature_names))
print('')
# Store the labels (y), features (X), and feature names
y = diabetes.target # Labels are stored in y as numbers
X = diabetes.data
featureNames = diabetes.feature_names
# Show the first five examples
X[:5,:]
"""
Explanation: Import the dataset
Import the dataset and store it to a variable called diabetes. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target', 'data', 'feature_names']
The data features are stored in diabetes.data, where each row is an example from a single patient, and each column is a single feature. The feature names are stored in diabetes.feature_names. Target values are stored in diabetes.target.
End of explanation
"""
norm = Normalize(vmin=y.min(), vmax=y.max()) # need to normalize target to [0,1] range for use with colormap
plt.scatter(X[:, 4], X[:, 9], c=norm(y), cmap=cm.bone_r)
plt.colorbar()
plt.xlabel('Serum Measurement 1 (s1)')
plt.ylabel('Serum Measurement 6 (s6)')
plt.show()
"""
Explanation: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of serum measurement 1 (x-axis) vs serum measurement 6 (y-axis). The level of diabetes progression has been mapped to fit in the [0,1] range and is shown as a color scale.
End of explanation
"""
# Put your code here!
"""
Explanation: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, comparing different pairs of features. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print('Original dataset size: ' + str(X.shape))
print('Training dataset size: ' + str(X_train.shape))
print('Test dataset size: ' + str(X_test.shape))
"""
Explanation: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
* Training Set - Portion of the data used to train a machine learning algorithm. These are the examples that the computer will learn from in order to try to predict data labels.
* Testing Set - Portion of the data (usually 10-30%) not used in training, used to evaluate performance. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
* Validation Set - (Optional) A third section of data used for parameter tuning or classifier selection. When selecting among many classifiers, or when a classifier parameter must be adjusted (tuned), this data is used like a test set to select the best parameter value(s). The final performance is then evaluated on the remaining, previously unused, testing set.
Creating training and testing sets
Below, we create a training and testing set from the diabetes dataset using the train_test_split() function.
End of explanation
"""
from sklearn.model_selection import KFold
# Older versions of scikit learn used n_folds instead of n_splits
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
print("%s %s" % (X_tr.shape, X_val.shape))
"""
Explanation: Create validation set using crossvalidation
Crossvalidation allows us to use as much of our data as possible for training without training on our test data. We use it to split our training set into training and validation sets.
* Divide data into multiple equal sections (called folds)
* Hold one fold out for validation and train on the other folds
* Repeat using each fold as validation
The KFold() function returns an iterable with pairs of indices for training and testing data.
End of explanation
"""
|
philiptromans/mapswipe-ml-dataset-generator | 1 - Analysing InceptionV3 results.ipynb | apache-2.0 | from mapswipe_analysis import *
all_projects_solution = Solution(
ground_truth_solutions_file_to_map('../experiment_1/all_projects_dataset/test/solutions.csv'),
predictions_file_to_map('../experiment_1/inception_v3_all_layers.results')
)
all_projects_solution.accuracy
"""
Explanation: Much of the world isn't mapped. This seems odd at first, but it basically comes down to a question of cash, and a large chunk of the world doesn't have enough of it. Maps are important, and when big charities like the Red Cross, or Médecins Sans Frontières try to respond to crises, or run public health projects, the lack of mapping is a serious problem. This is why the Missing Maps project came into existence. It's a volunteer project with the goal of putting the world's most vulnerable people on the map. In more concrete terms, volunteers spend time poring over satellite imagery, tracing over things like roads and buildings (you can learn more here), and this data's then available for anyone to use. This is a time-consuming process, and much of the world is pretty empty (you don't see many buildings in the rainforest, or the desert). The MapSwipe app was created to help accelerate the mapping process, by pre-filtering the tiles. MapSwipe users scroll through bits of satellite imagery (in a mobile app), and identify images with buildings and other features in (depending on the project). Once this data has been gathered, it means that the mapping volunteers can maximize their productivity, by going straight to the tiles that need mapping and not wasting their time poring over large expanses of forest (say).
When I first heard about this, I thought that it sounded like a machine learning problem. I'm not necessarily looking to automate MapSwipe - that might well be quite hard. A good chunk of the tiles in a MapSwipe problem are pretty easy to identify though, and it makes sense for humans to be principally involved in the more difficult ones. A good ML solution could also be used to partially verify the output of the human mappers - it might help notice missing buildings or roads for example. It's also a useful exercise in trying to solve the eventual MissingMaps problem - generating maps straight from the raw satellite imagery. Before we continue, we need to properly define the MapSwipe problem. MapSwipe is a classification problem - users classify a single tile of satellite imagery as either:
| Example | Class
| :-------------: |:-------------:|
| | Bad Imagery means that something on the ground can't be seen. This is often because of cloud cover obstructing the satellite's view, or sometimes because something seems to be broken with the satellite. |
| | Built imagery means that there are buildings in view. |
| | Empty imagery contains no buildings. |
To make life a little easier, I chose to only consider the projects that are solely focussed on finding buildings (roads can be tackled another day).
For my first attempt at using machine learning to solve the MapSwipe problem, I followed the approach laid out in the first few lectures of the fast.ai course. Basically, you take a neural network that has already been trained to solve the ImageNet problem, and adapt it for your own computer vision problem. The next section outlines exactly what I did, but feel free to skip to the results section.
My first experiment
All scripts used are present in my mapswipe-ml repository.
I started by generating a dataset. There's a fuller explanation of the generate_dataset.py script in the repository, but essentially it downloads as many examples as possible of the three categories: bad imagery, built and empty, whilst keeping the sizes of the three groups the same. The projects that I selected were all that had their lookFor property set to buildings only. (It now transpires that there's a similar category, which some of the newer projects fall into, which is just buildings - these were not included). This is approximately 1.4 million images. They are split 80-10-10 into a training set, a validation set and a test set.
python3 generate_dataset.py 124 303 407 692 1166 1333 1440 1599 1788 1901 2020 2158 2293 2473 2644 2671 2809 2978 3121 3310 3440 3610 3764 3906 4103 4242 4355 4543 4743 4877 5061 5169 5291 5368 5519 5688 5870 5990 6027 6175 6310 6498 6628 6637 6646 6794 6807 6918 6930 7049 7056 7064 7108 7124 7125 7260 7280 7281 7605 7738 7871 8059 8324 -k <bing maps api key> -o experiment_1/all_projects_dataset --inner-test-dir-for-keras
To actually create the model, I used Keras to fine-tune Google's InceptionV3 model. This means removing its top layer of output neurons, and replacing them with three fully connected output neurons (one for each class), with a Softmax output (see the script for exact details - I've omitted a couple of layers for brevity). During the training process, only the top (newly added) layers are trained.
python3 train.py --dataset-dir experiment_1/all_projects_dataset --output-dir experiment_1/inception_v3_fine_tuned --fine-tune --num-epochs 1
After one epoch of training, you get a model with a validation accuracy of approximately 54%. With extra epochs of fine tuning this increases slightly, but I didn't feel that it was particularly worth doing. Instead, I thought about the ImageNet problem. ImageNet is primarily concerned with identifying the one object that dominates the foreground of any particular photo. MapSwipe is fundamentally different, in that it's more about considering the whole image, and any piece of the image may either have something obscuring it (in the case of bad imagery), or a building, which changes the entire image's classification. The objects being identified are less complex than ImageNet (where you need to be able to, say, differentiate between a cat's face and a dog's), but the whole image is more important in the MapSwipe problem (whereas ImageNet has a better separation of foreground and background). Considering this hypothesis, I decided to train all layers of ImageNet for several epochs:
python3 train.py --dataset-dir experiment_1/all_projects_dataset --output-dir experiment_1/inception_v3_all_layers --num-epochs 10 --start_model experiment_1/inception_v3_fine_tuned/model.01-0.906-0.539.hdf5
I let it train for 9 epochs before stopping it (I was using an Amazon AWS P3.2xlarge instance, which isn't cheap) to see how it was progressing. The final trained model had a validation accuracy of 65%. The accuracy was always increasing, but the rate of increase had slowed significantly. I suspect that there's more improvement to be made by training for longer, but I wanted to start analysing the results.
To classify the test set:
python3 test.py --dataset-dir experiment_1/all_projects_dataset/test/ -m experiment_1/inception_v3_all_layers/model.01-0.906-0.539.hdf5.09-0.737-0.649.hdf5 -o experiment_1/inception_v3_all_layers.results
Results
The first question on your mind is probably, "How accurate was it?".
End of explanation
"""
category_accuracies_df = pd.DataFrame(all_projects_solution.category_accuracies, index=class_names, columns=['Test dataset'])
display(HTML(category_accuracies_df.transpose().to_html()))
"""
Explanation: So, we're about 64% accurate. This means that 64% of the time, we select the right class for the tile (bad imagery, built, or empty). If we guessed at random, we'd expect to be 33% accurate (there are three classes, so we have a one in three chance of being correct). Let's break down that accuracy in to a per-category accuracy:
End of explanation
"""
conf_matrix_df = pd.DataFrame(all_projects_solution.confusion_matrix, index=class_names, columns=class_names)
display(HTML(conf_matrix_df.to_html()))
"""
Explanation: It seems almost suspicious that our bad image detection accuracy is so much lower than the other categories. Let's break down this accuracy data further into a confusion matrix:
End of explanation
"""
quadkeys = [x[0] for x in all_projects_solution.classified_as(predicted_class='empty', solution_class='bad_imagery')[0:9]]
tableau(quadkeys, all_projects_solution)
"""
Explanation: The rows correspond to what our model predicted, and the columns correspond to the official solution. If our model was perfect, we'd expect to have non-zero entries on the main diagonal (top left to bottom right), and zeroes everywhere else. The biggest non-zero entry corresponds to examples that officially (according to the MapSwipe data) are bad imagery, but our model has classified as empty. Let's take a look at the examples where we were most confident that the imagery was empty, but was actually bad (according to the official solution).
End of explanation
"""
quadkeys = [x[0] for x in all_projects_solution.classified_as(predicted_class='built', solution_class='empty')[0:9]]
tableau(quadkeys, all_projects_solution)
"""
Explanation: (note that the prediction vectors have the form $(\mathbb{P}(\text{bad_imagery}), \mathbb{P}(\text{built}), \mathbb{P}(\text{empty}))$, where $\mathbb{P}$ denotes a probability)
As you can see, all of these images seem perfectly fine, and all in fact show land with no buildings. Now, we've only looked at the 9 that the model's most confident about, but I've skimmed through a large number of them (not included here for brevity) and whilst the occasional one has a small amount of cloud cover, the vast majority are absolutely fine.
I'm not sure why this is happening, but I have a few hypotheses:
* A significant number of users may be mistaken about the definition of bad imagery, or unsure about what to do for empty tiles (and are triple tapping to feed back that the images are empty, when they should just be ignoring them).
* Bing may have updated the imagery since the feedback was gained from the users.
It's also interesting to review some other scenarios. Here are some images that the solution defines as empty, but the model believes that they contain buildings:
End of explanation
"""
import json
from os.path import isdir, join
import os
import urllib.request
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import HoverTool
from bokeh.io import output_notebook, show
with urllib.request.urlopen("http://api.mapswipe.org/projects.json") as url:
projects = json.loads(url.read().decode())
individual_projects_dir = '../individual_projects/'
project_dirs = [d for d in os.listdir(individual_projects_dir) if isdir(join(individual_projects_dir, d))]
project_dirs.sort(key=int)
project_ids = []
accuracies = []
names = []
tile_counts = []
for project_id in project_dirs:
solutions_csv = join(individual_projects_dir, project_id, 'test', 'solutions.csv')
if (os.path.getsize(solutions_csv) > 0):
solution = Solution(
ground_truth_solutions_file_to_map(solutions_csv),
predictions_file_to_map(join(individual_projects_dir, project_id, 'initial_inception_v3_all_layers.out'))
)
project_ids.append(project_id)
accuracies.append(solution.accuracy * 100)
names.append(projects[project_id]['name'])
tile_counts.append(solution.tile_count)
output_notebook()
source = ColumnDataSource(data=dict(
x=project_ids,
y=accuracies,
names=names,
tile_counts=tile_counts
))
hover = HoverTool(tooltips=[
("Project ID", "@x"),
("Accuracy", "@y%"),
("Name", "@names"),
("Tile count", "@tile_counts")
])
p = figure(plot_width=800, plot_height=600, tools=[hover],
title="Test accuracy for each MapSwipe project")
p.circle('x', 'y', size=10, source=source)
show(p)
"""
Explanation: So, it's not quite as open-and-shut as the previous set of examples, but it still helps build confidence in the model, and support the hypothesis that the MapSwipe data is far from accurate.
Individual Project Accuracy
Everything we've done so far has considered one giant dataset, composed of a large number of projects (where each project corresponds to a relatively small geographic area). It's interesting to see if the model's accuracy varies between the individual projects. To do this, I generated individual datasets for each project (using a similar workflow to that described previously), and then used the same model as before to grade each individual project's test dataset.
End of explanation
"""
|
Britefury/deep-learning-tutorial-pydata2016 | TUTORIAL 05 - Dogs vs cats with transfer learning and data augmentation.ipynb | mit | %matplotlib inline
"""
Explanation: Dogs vs Cats with Transfer Learning and Data Augmentation
In this Notebook we're going to use transfer learning to attempt to crack the Dogs vs Cats Kaggle competition.
We add data augmentation and assess its effectiveness.
We are going to downsample the images to 64x64; that's pretty small, but should be enough (I hope). Furthermore, large images means longer training time and I'm too impatient for that. ;)
--- Changes for data augmentation ---
The modifications to the code for data augmentation are marked in blocks with italic headings.
Lets have plots appear inline:
End of explanation
"""
import os, time, glob, tqdm
import numpy as np
from matplotlib import pyplot as plt
import torch, torch.nn as nn, torch.nn.functional as F
import torchvision
import skimage.transform, skimage.util
from skimage.util import montage
from sklearn.model_selection import StratifiedShuffleSplit
import cv2
from batchup import work_pool, data_source
import utils
import imagenet_classes
torch_device = torch.device('cuda:0')
"""
Explanation: We're going to need os, numpy, matplotlib, skimage, OpenCV, and PyTorch (torch and torchvision). We also import BatchUp for mini-batch handling, along with some local utility modules.
End of explanation
"""
TRAIN_PATH = r'E:\datasets\dogsvscats\train'
TEST_PATH = r'E:\datasets\dogsvscats\test1'
# Get the paths of the images
trainval_image_paths = glob.glob(os.path.join(TRAIN_PATH, '*.jpg'))
tests_image_paths = glob.glob(os.path.join(TEST_PATH, '*.jpg'))
"""
Explanation: Data loading
We are loading images from a folder of files, so we could approach this a number of ways.
Our dataset consists of 25,000 images so we could load them all into memory then access them from there. It would work, but it wouldn't scale. I'd prefer to demonstrate an approach that is more scalable and useful outside of this notebook, so we are going to load them on the fly.
Loading images on the fly poses a challenge as we may find that the GPU is waiting doing nothing while the CPU is loading images in order to build the next mini-batch to train with. It would therefore be desirable to load images in background threads so that mini-batches of images are ready to process when the GPU is able to take one. Luckily my batchup library can help here.
We must provide the logic for:
getting a list of paths where we can find the image files
given a list of indices identifying the images that are to make up this mini-batch, for each image in the mini-batch:
load each one
scale each one to the fixed size that we need
standardise each image (subtract mean, divide by standard deviation)
gather them in a mini-batch of shape (sample, channel, height, width)
Getting a list of paths where we can find the image files
Join the Kaggle competition and download the training and test data sets. Unzip them into a directory of your choosing, and modify the path definitions below to point to the appropriate location.
We split the images into training and validation later on, so we call them trainval for now.
End of explanation
"""
# The ground truth classifications are given by the filename having either a 'dog.' or 'cat.' prefix
# Use:
# 0: cat
# 1: dog
trainval_y = [(1 if os.path.basename(p).lower().startswith('dog.') else 0) for p in trainval_image_paths]
trainval_y = np.array(trainval_y).astype(np.int32)
"""
Explanation: Okay. We have our image paths. Now we need to create our ground truths. Luckily the filename of each file starts with either cat. or dog. indicating which it is. We will assign dogs a class of 1 and cats a class of 0.
End of explanation
"""
# We only want one split, with 10% of the data for validation
splitter = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=12345)
# Get the training set and validation set sample indices
train_ndx, val_ndx = next(splitter.split(trainval_y, trainval_y))
print('{} training, {} validation'.format(len(train_ndx), len(val_ndx)))
"""
Explanation: Split into training and validation
We use Scikit-Learn StratifiedShuffleSplit for this.
End of explanation
"""
MODEL_MEAN = np.array([0.485, 0.456, 0.406])
MODEL_STD = np.array([0.229, 0.224, 0.225])
TARGET_SIZE = 80
def img_to_net(img):
"""
Convert an image from
image format; shape (height, width, channel) range [0-1]
to
network format; shape (channel, height, width), standardised by mean MODEL_MEAN and std-dev MODEL_STD
"""
    # Standardise
    img = (img - MODEL_MEAN) / MODEL_STD
    # (H, W, C) -> (C, H, W)
    img = img.transpose(2, 0, 1)
return img.astype(np.float32)
def net_to_img(img):
"""
Convert an image from
    network format; shape (channel, height, width), standardised by mean MODEL_MEAN and std-dev MODEL_STD
to
image format; shape (height, width, channel) range [0-1]
"""
# (C, H, W) -> (H, W, C)
img = img.transpose(1, 2, 0)
img = img * MODEL_STD + MODEL_MEAN
return img.astype(np.float32)
def load_image(path):
"""
Load an image from a given path and convert to network format (4D tensor)
"""
# Read
img = cv2.imread(path)
# OpenCV loads images in BGR channel order; reverse to RGB
img = img[:, :, ::-1]
# Compute scaled dimensions, while preserving aspect ratio
# py0, py1, px0, px1 are the padding required to get the image to `TARGET_SIZE` x `TARGET_SIZE`
if img.shape[0] >= img.shape[1]:
height = TARGET_SIZE
width = int(img.shape[1] * float(TARGET_SIZE) / float(img.shape[0]) + 0.5)
py0 = py1 = 0
px0 = (TARGET_SIZE - width) // 2
px1 = (TARGET_SIZE - width) - px0
else:
width = TARGET_SIZE
height = int(img.shape[0] * float(TARGET_SIZE) / float(img.shape[1]) + 0.5)
px0 = px1 = 0
py0 = (TARGET_SIZE - height) // 2
py1 = (TARGET_SIZE - height) - py0
# Resize the image using OpenCV resize
# We use OpenCV as it is fast
# We also resize *before* converting from uint8 type to float type as uint8 is significantly faster
img = cv2.resize(img, (width, height))
# Convert to float
img = skimage.util.img_as_float(img)
# Convert to network format
img = img_to_net(img)
# Apply padding to get it to a fixed size
img = np.pad(img, [(0, 0), (py0, py1), (px0, px1)], mode='constant')
return img
"""
Explanation: Define a function for loading a mini-batch of images
Given a list of indices into the train_image_paths list we must:
load each one
scale each one to the fixed size that we need
standardise each image (subtract mean, divide by standard deviation)
<< ONE SMALL CHANGE HERE >>
One change here; we scale to a fixed size of 80x80 rather than 64x64 so that we can randomly crop 64x64 regions in a data augmentation function.
End of explanation
"""
plt.imshow(net_to_img(load_image(trainval_image_paths[0])))
plt.show()
"""
Explanation: Show an image to check our code so far:
End of explanation
"""
class ImageAccessor (object):
def __init__(self, paths):
"""
Constructor
paths - the list of paths of the images that we are to access
"""
self.paths = paths
def __len__(self):
"""
The length of this array
"""
return len(self.paths)
def __getitem__(self, item):
"""
Get images identified by item
item can be:
- an index as an integer
        - an array of indices
"""
if isinstance(item, int):
# item is an integer; get a single item
path = self.paths[item]
return load_image(path)
elif isinstance(item, np.ndarray):
# item is an array of indices
# Get the paths of the images in the mini-batch
paths = [self.paths[i] for i in item]
# Load each image
images = [load_image(path) for path in paths]
# Stack in axis 0 to make an array of shape `(sample, channel, height, width)`
return np.stack(images, axis=0)
"""
Explanation: Looks okay.
Make a BatchUp data source
BatchUp can extract mini-batches from data sources that have an array-like interface.
We must first define an image accessor that looks like an array. We do this by implementing __len__ and __getitem__ methods:
End of explanation
"""
# image accessor
trainval_X = ImageAccessor(trainval_image_paths)
train_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=train_ndx)
val_ds = data_source.ArrayDataSource([trainval_X, trainval_y], indices=val_ndx)
"""
Explanation: Now we make ArrayDataSource instances for the training and validation sets. These provide methods for getting mini-batches that we will use for training.
End of explanation
"""
def augment_train_batch(batch_X, batch_y):
n = len(batch_X)
    # Random crop; valid offsets are 0..16 inclusive (80 - 64 = 16),
    # and np.random.randint's `high` bound is exclusive
    crop = np.random.randint(low=0, high=17, size=(n, 2))
batch_X_cropped = np.zeros((n, 3, 64, 64), dtype=np.float32)
for i in range(n):
batch_X_cropped[i, :, :, :] = batch_X[i, :, crop[i, 0]:crop[i, 0]+64, crop[i, 1]:crop[i, 1]+64]
batch_X = batch_X_cropped
# Random horizontal flip
flip_h = np.random.randint(low=0, high=2, size=(n,)) == 1
batch_X[flip_h, :, :, :] = batch_X[flip_h, :, :, ::-1]
# Random colour offset; normally distributed, std-dev 0.1
colour_offset = np.random.normal(scale=0.1, size=(n, 3))
batch_X += colour_offset[:, :, None, None]
return batch_X, batch_y
"""
Explanation: << CHANGES START HERE >>
Data augmentation function
We now define a function to apply random data augmentation to a mini-batch of images.
End of explanation
"""
train_ds = train_ds.map(augment_train_batch)
"""
Explanation: Apply the augmentation function; the map method of the data source will pass each mini-batch through the provided function, in this case augment_train_batch. Note that the augmentation will also be performed in background threads.
End of explanation
"""
# A pool with 4 threads
pool = work_pool.WorkerThreadPool(4)
"""
Explanation: << CHANGES END HERE >>
Process mini-batches in background threads
We want to do all the image loading in background threads so that the images are ready for the main thread that must feed the GPU with data to work on.
BatchUp provides worker pools for this purpose.
End of explanation
"""
train_ds = pool.parallel_data_source(train_ds)
val_ds = pool.parallel_data_source(val_ds)
"""
Explanation: Wrap our training and validation data sources so that they generate mini-batches in parallel background threads
End of explanation
"""
class XferPetClassifier (nn.Module):
def __init__(self, pretrained_vgg16):
super(XferPetClassifier, self).__init__()
self.features = pretrained_vgg16.features
# Size at this point will be 512 channels, 2x2
self.fc6 = nn.Linear(512 * 2 * 2, 256)
self.drop = nn.Dropout()
self.fc7 = nn.Linear(256, 2)
def forward(self, x):
x = self.features(x)
x = x.view(x.shape[0], -1)
x = F.relu(self.fc6(x))
x = self.drop(x)
x = self.fc7(x)
return x
# Build it
vgg16 = torchvision.models.vgg.vgg16(pretrained=True)
pet_net = XferPetClassifier(vgg16).to(torch_device)
"""
Explanation: Build the network using the convolutional layers from VGG-16
Now we will define a class for the pet classifier network.
End of explanation
"""
loss_function = nn.CrossEntropyLoss()
# Get a list of all of the parameters
all_params = list(pet_net.parameters())
# Get a list of pre-trained parameters
pretrained_params = list(pet_net.features.parameters())
# Get their IDs and use to get a list of new parameters
pretrained_param_ids = set([id(p) for p in pretrained_params])
new_params = [p for p in all_params if id(p) not in pretrained_param_ids]
# Build optimizer with separate learning rates for pre-trained and new parameters
optimizer = torch.optim.Adam([dict(params=new_params, lr=1e-3),
dict(params=pretrained_params, lr=1e-4)])
"""
Explanation: Set up loss and optimizer
We separate the pre-trained parameters from the new parameters. We train the pre-trained parameters using a learning rate that is 10 times smaller.
End of explanation
"""
NUM_EPOCHS = 25
BATCH_SIZE = 128
"""
Explanation: Train the network
Define settings for training; note we only need 25 epochs here:
End of explanation
"""
print('Training...')
for epoch_i in range(NUM_EPOCHS):
t1 = time.time()
# TRAIN
pet_net.train()
train_loss = 0.0
n_batches = 0
# Ask train_ds for batches of size `BATCH_SIZE` and shuffled in random order
for i, (batch_X, batch_y) in enumerate(train_ds.batch_iterator(batch_size=BATCH_SIZE, shuffle=True)):
t_x = torch.tensor(batch_X, dtype=torch.float, device=torch_device)
t_y = torch.tensor(batch_y, dtype=torch.long, device=torch_device)
# Clear gradients
optimizer.zero_grad()
# Predict logits
pred_logits = pet_net(t_x)
# Compute loss
loss = loss_function(pred_logits, t_y)
# Back-prop
loss.backward()
# Optimizer step
optimizer.step()
# Accumulate training loss
train_loss += float(loss)
n_batches += 1
    # Divide by number of batches to get mean loss
train_loss /= float(n_batches)
# VALIDATE
pet_net.eval()
    val_err = 0.0
# For each batch:
with torch.no_grad():
for batch_X, batch_y in val_ds.batch_iterator(batch_size=BATCH_SIZE, shuffle=False):
t_x = torch.tensor(batch_X, dtype=torch.float, device=torch_device)
# Predict logits
pred_logits = pet_net(t_x).detach().cpu().numpy()
pred_cls = np.argmax(pred_logits, axis=1)
val_err += (batch_y != pred_cls).sum()
    # Divide by number of samples to get the mean error
val_err /= float(len(val_ndx))
t2 = time.time()
# REPORT
print('Epoch {} took {:.2f}s: train loss={:.6f}; val err={:.2%}'.format(
epoch_i, t2 - t1, train_loss, val_err))
"""
Explanation: The training loop:
End of explanation
"""
# Number of samples to try
N_TEST = 15
# Shuffle test sample indices
rng = np.random.RandomState(12345)
test_ndx = rng.permutation(len(tests_image_paths))
# Select first `N_TEST` samples
test_ndx = test_ndx[:N_TEST]
for test_i in test_ndx:
# Load the image
X = load_image(tests_image_paths[test_i])
with torch.no_grad():
t_x = torch.tensor(X[None, ...], dtype=torch.float, device=torch_device)
# Predict class probabilities
pred_logits = pet_net(t_x)
pred_prob = F.softmax(pred_logits, dim=1).detach().cpu().numpy()
# Get predicted class
pred_y = np.argmax(pred_prob, axis=1)
# Get class name
pred_cls = 'dog' if pred_y[0] == 1 else 'cat'
# Report
print('Sample {}: predicted as {}, confidence {:.2%}'.format(test_i, pred_cls, pred_prob[0,pred_y[0]]))
# Show the image
plt.figure()
plt.imshow(net_to_img(X))
plt.show()
"""
Explanation: Apply to some example images from the test set
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.18/_downloads/e79896208a72b920b6d32cefb5c9c4b8/plot_point_spread.ipynb | bsd-3-clause | import os.path as op
import numpy as np
from mayavi import mlab
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.simulation import simulate_stc, simulate_evoked
"""
Explanation: Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at a
desired location(s) in a :class:mne.SourceEstimate and then corrupt the
signal with point-spread by applying a forward and inverse solution.
End of explanation
"""
seed = 42
# parameters for inverse method
method = 'sLORETA'
snr = 3.
lambda2 = 1.0 / snr ** 2
# signal simulation parameters
# do not add extra noise to the known signals
nave = np.inf
T = 100
times = np.linspace(0, 1, T)
dt = times[1] - times[0]
# Paths to MEG data
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_fwd = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-fwd.fif')
fname_inv = op.join(data_path, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
fname_evoked = op.join(data_path, 'MEG', 'sample',
'sample_audvis-ave.fif')
"""
Explanation: First, we set some parameters.
End of explanation
"""
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True,
use_cps=False)
fwd['info']['bads'] = []
inv_op = read_inverse_operator(fname_inv)
raw = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw.fif'))
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True)
epochs.info['bads'] = []
evoked = epochs.average()
labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir)
label_names = [l.name for l in labels]
n_labels = len(labels)
"""
Explanation: Load the MEG data
End of explanation
"""
cov = mne.compute_covariance(epochs, tmin=None, tmax=0.)
"""
Explanation: Estimate the background noise covariance from the baseline period
End of explanation
"""
# The known signal is all zero-s off of the two labels of interest
signal = np.zeros((n_labels, T))
idx = label_names.index('inferiorparietal-lh')
signal[idx, :] = 1e-7 * np.sin(5 * 2 * np.pi * times)
idx = label_names.index('rostralmiddlefrontal-rh')
signal[idx, :] = 1e-7 * np.sin(7 * 2 * np.pi * times)
"""
Explanation: Generate sinusoids in two spatially distant labels
End of explanation
"""
hemi_to_ind = {'lh': 0, 'rh': 1}
for i, label in enumerate(labels):
# The `center_of_mass` function needs labels to have values.
labels[i].values.fill(1.)
# Restrict the eligible vertices to be those on the surface under
# consideration and within the label.
surf_vertices = fwd['src'][hemi_to_ind[label.hemi]]['vertno']
restrict_verts = np.intersect1d(surf_vertices, label.vertices)
com = labels[i].center_of_mass(subject='sample',
subjects_dir=subjects_dir,
restrict_vertices=restrict_verts,
surf='white')
# Convert the center of vertex index from surface vertex list to Label's
# vertex list.
cent_idx = np.where(label.vertices == com)[0][0]
# Create a mask with 1 at center vertex and zeros elsewhere.
labels[i].values.fill(0.)
labels[i].values[cent_idx] = 1.
"""
Explanation: Find the center vertices in source space of each label
We want the known signal in each label to only be active at the center. We
create a mask for each label that is 1 at the center vertex and 0 at all
other vertices in the label. This mask is then used when simulating
source-space data.
End of explanation
"""
stc_gen = simulate_stc(fwd['src'], labels, signal, times[0], dt,
value_fun=lambda x: x)
"""
Explanation: Create source-space data with known signals
Put known signals onto surface vertices using the array of signals and
the label masks (stored in labels[i].values).
End of explanation
"""
kwargs = dict(subjects_dir=subjects_dir, hemi='split', smoothing_steps=4,
time_unit='s', initial_time=0.05, size=1200,
views=['lat', 'med'])
clim = dict(kind='value', pos_lims=[1e-9, 1e-8, 1e-7])
figs = [mlab.figure(1), mlab.figure(2), mlab.figure(3), mlab.figure(4)]
brain_gen = stc_gen.plot(clim=clim, figure=figs, **kwargs)
"""
Explanation: Plot original signals
Note that the original signals are highly concentrated (point) sources.
End of explanation
"""
evoked_gen = simulate_evoked(fwd, stc_gen, evoked.info, cov, nave,
random_state=seed)
# Map the simulated sensor-space data to source-space using the inverse
# operator.
stc_inv = apply_inverse(evoked_gen, inv_op, lambda2, method=method)
"""
Explanation: Simulate sensor-space signals
Use the forward solution and add Gaussian noise to simulate sensor-space
(evoked) data from the known source-space signals. The amount of noise is
controlled by nave (higher values imply less noise).
End of explanation
"""
figs = [mlab.figure(5), mlab.figure(6), mlab.figure(7), mlab.figure(8)]
brain_inv = stc_inv.plot(figure=figs, **kwargs)
"""
Explanation: Plot the point-spread of corrupted signal
Notice that after applying the forward- and inverse-operators to the known
point sources that the point sources have spread across the source-space.
This spread is due to the minimum norm solution so that the signal leaks to
nearby vertices with similar orientations so that signal ends up crossing the
sulci and gyri.
End of explanation
"""
|
GraysonR/titanic-data-analysis | 2015-12-24-titanic-gender-grouping.ipynb | mit | # Import magic
%matplotlib inline
# More imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
#Set general plot properties
sns.set_style("white")
sns.set_context({"figure.figsize": (18, 8)})
# Load CSV data
titanic_data = pd.read_csv('titanic_data.csv')
survived = titanic_data[titanic_data['Survived'] == 1]
died = titanic_data[titanic_data['Survived'] == 0]
gender_grouped = titanic_data.groupby('Sex')
gender_died = died.groupby('Sex')
gender_survived = survived.groupby('Sex')
gender_grouped.hist(column=['Fare', 'Age', 'Pclass'])
"""
Explanation: Exploring Titanic Dataset
Questions:
How does gender affect different aspects of survivorship?
End of explanation
"""
# Not null ages
female_survived_nn = gender_survived.get_group('female')[pd.notnull(gender_survived.get_group('female')['Age'])]
female_died_nn = gender_died.get_group('female')[pd.notnull(gender_died.get_group('female')['Age'])]
male_survived_nn = gender_survived.get_group('male')[pd.notnull(gender_survived.get_group('male')['Age'])]
male_died_nn = gender_died.get_group('male')[pd.notnull(gender_died.get_group('male')['Age'])]
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True)
sns.distplot(female_survived_nn['Age'], kde=False, ax=ax1, bins=20)
sns.distplot(male_survived_nn['Age'], kde=False, ax=ax2, bins=20)
sns.distplot(female_died_nn['Age'], kde=False, ax=ax1, color='r', bins=20)
sns.distplot(male_died_nn['Age'], kde=False, ax=ax2, color='r', bins=20)
"""
Explanation: Not really what I was looking for. Was hoping to see survived and died side by side.
End of explanation
"""
|
fdion/infographics_research | Figure1.6.ipynb | mit | !wget 'http://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/2_Fertility/WPP2015_FERT_F04_TOTAL_FERTILITY.XLS'
"""
Explanation: Reproducible visualization
In "The Functional Art: An introduction to information graphics and visualization" by Alberto Cairo, on page 12 we are presented with a visualization of UN data time series of Fertility rate (average number of children per woman) per country:
Figure 1.6 Highlighting the relevant, keeping the secondary in the background.
Let's try to reproduce this.
Getting the data
The visualization was done in 2012, but limited the visualization to 2010. This should make it easy, in theory, to get the data, since it is historical. These are directly available as excel spreadsheets now, we'll just ignore the last bucket (2010-2015).
Pandas allows loading an excel spreadsheet straight from a URL, but here we will download it first so we have a local copy.
End of explanation
"""
import pandas as pd

df = pd.read_excel('WPP2015_FERT_F04_TOTAL_FERTILITY.XLS', skiprows=16, index_col = 'Country code')
df = df[df.index < 900]
len(df)
df.head()
"""
Explanation: World Population Prospects: The 2015 Revision
File FERT/4: Total fertility by major area, region and country, 1950-2100 (children per woman)
Estimates, 1950 - 2015
POP/DB/WPP/Rev.2015/FERT/F04
July 2015 - Copyright © 2015 by United Nations. All rights reserved
Suggested citation: United Nations, Department of Economic and Social Affairs, Population Division (2015). World Population Prospects: The 2015 Revision, DVD Edition.
End of explanation
"""
df.rename(columns={df.columns[2]:'Description'}, inplace=True)
df.drop(df.columns[[0, 1, 3, 16]], axis=1, inplace=True) # drop what we dont need
df.head()
highlight_countries = ['Niger','Yemen','India',
'Brazil','Norway','France','Sweden','United Kingdom',
'Spain','Italy','Germany','Japan', 'China'
]
# Subset only countries to highlight, transpose for timeseries
df_high = df[df.Description.isin(highlight_countries)].T[1:]
# Subset the rest of the countries, transpose for timeseries
df_bg = df[~df.Description.isin(highlight_countries)].T[1:]
"""
Explanation: First problem... The book states on page 8:
-- <cite>"Using the filters the site offers, I asked for a table that included the more than 150 countries on which the UN has complete research."</cite>
Yet we have 201 countries (codes 900+ are regions) with complete data. We do not have an easy way to identify which countries were added to this. Still, let's move forward and prep our data.
End of explanation
"""
# background
ax = df_bg.plot(legend=False, color='k', alpha=0.02, figsize=(12,12))
ax.xaxis.tick_top()
# highlighted countries
df_high.plot(legend=False, ax=ax)
# replacement level line
ax.hlines(y=2.1, xmin=0, xmax=12, color='k', alpha=1, linestyle='dashed')
# Average over time on all countries
df.mean().plot(ax=ax, color='k', label='World\naverage')
# labels for highlighted countries on the right side
for country in highlight_countries:
ax.text(11.2,df[df.Description==country].values[0][12],country)
# start y axis at 1
ax.set_ylim(ymin=1)
"""
Explanation: Let's make some art
End of explanation
"""
df.describe()
df[df['1995-2000']<1.25]
df[df['2000-2005']<1.25]
"""
Explanation: For one thing, the line for China doesn't look like the one in the book. Concerning. The other issue is that there are some lines that are going lower than Italy or Spain in 1995-2000 and in 2000-2005 (majority in the Balkans) and that were not on the graph in the book, AFAICT:
End of explanation
"""
|
rsignell-usgs/notebook | NEXRAD/.ipynb_checkpoints/THREDDS_Radar_Server-checkpoint.ipynb | mit | import matplotlib
import warnings
warnings.filterwarnings("ignore", category=matplotlib.cbook.MatplotlibDeprecationWarning)
%matplotlib inline
"""
Explanation: Using Python to Access NCEI Archived NEXRAD Level 2 Data
This notebook shows how to access the THREDDS Data Server (TDS) instance that is serving up archived NEXRAD Level 2 data hosted on Amazon S3. The TDS provides a mechanism to query for available data files, as well as provides access to the data as native volume files, through OPeNDAP, and using its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers.
NOTE: Due to data charges, the TDS instance in AWS only allows access to .edu domains. For other users interested in using Siphon to access radar data, you can access recent (2 weeks') data by changing the server URL below to: http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/
But first!
Bookmark these resources for when you want to use Siphon later!
+ latest Siphon documentation
+ Siphon github repo
+ TDS documentation
Downloading the single latest volume
Just a bit of initial set-up to use inline figures and quiet some warnings.
End of explanation
"""
# The S3 URL did not work for me, despite the .edu domain
#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'
#Trying motherlode URL
url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'
from siphon.radarserver import RadarServer
rs = RadarServer(url)
"""
Explanation: First we'll create an instance of RadarServer to point to the appropriate radar server access URL.
End of explanation
"""
from datetime import datetime, timedelta
query = rs.query()
query.stations('KLVX').time(datetime.utcnow())
"""
Explanation: Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
End of explanation
"""
rs.validate_query(query)
"""
Explanation: We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s)
End of explanation
"""
catalog = rs.get_catalog(query)
"""
Explanation: Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.
End of explanation
"""
catalog.datasets
"""
Explanation: We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.
End of explanation
"""
ds = list(catalog.datasets.values())[0]
ds.access_urls
"""
Explanation: We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).
End of explanation
"""
from siphon.cdmr import Dataset
data = Dataset(ds.access_urls['CdmRemote'])
"""
Explanation: We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.
End of explanation
"""
import numpy as np
def raw_to_masked_float(var, data):
# Values come back signed. If the _Unsigned attribute is set, we need to convert
    # from the range [-128, 127] to [0, 255].
if var._Unsigned:
data = data & 255
# Mask missing points
data = np.ma.array(data, mask=data==0)
# Convert to float using the scale and offset
return data * var.scale_factor + var.add_offset
def polar_to_cartesian(az, rng):
az_rad = np.deg2rad(az)[:, None]
x = rng * np.sin(az_rad)
y = rng * np.cos(az_rad)
return x, y
"""
Explanation: We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).
End of explanation
"""
sweep = 0
ref_var = data.variables['Reflectivity_HI']
ref_data = ref_var[sweep]
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
"""
Explanation: The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.
End of explanation
"""
ref = raw_to_masked_float(ref_var, ref_data)
x, y = polar_to_cartesian(az, rng)
"""
Explanation: Then convert the raw data to floating point values and the polar coordinates to Cartesian.
End of explanation
"""
from metpy.plots import ctables # For NWS colortable
ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)
"""
Explanation: MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.
End of explanation
"""
import matplotlib.pyplot as plt
import cartopy
def new_map(fig, lon, lat):
# Create projection centered on the radar. This allows us to use x
# and y relative to the radar.
proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)
# New axes with the specified projection
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add coastlines
ax.coastlines('50m', 'black', linewidth=2, zorder=2)
# Grab state borders
state_borders = cartopy.feature.NaturalEarthFeature(
category='cultural', name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3)
return ax
"""
Explanation: Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.
End of explanation
"""
query = rs.query()
#dt = datetime(2012, 10, 29, 15) # Our specified time
dt = datetime(2016, 6, 8, 18) # Our specified time
query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))
"""
Explanation: Download a collection of historical data
This time we'll make a query based on a longitude, latitude point and using a time range.
End of explanation
"""
cat = rs.get_catalog(query)
cat.datasets
"""
Explanation: The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.
End of explanation
"""
ds = list(cat.datasets.values())[0]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
"""
Explanation: Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.
End of explanation
"""
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)
# Set limits in lat/lon space
ax.set_extent([-77, -70, 38, 42])
# Add ocean and land background
ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['water'])
land = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['land'])
ax.add_feature(ocean, zorder=-1)
ax.add_feature(land, zorder=-1)
#ax = new_map(fig, data.StationLongitude, data.StationLatitude)
ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0);
"""
Explanation: Use the function to make a new map and plot a colormapped view of the data
End of explanation
"""
meshes = []
for item in sorted(cat.datasets.items()):
    # For each (name, dataset) item in the sorted list, pull the Dataset
    # object out and access it over CDMRemote
ds = item[1]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
# Plot the data and the timestamp
mesh = ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0)
text = ax.text(0.65, 0.03, data.time_coverage_start, transform=ax.transAxes,
fontdict={'size':16})
# Collect the things we've plotted so we can animate
meshes.append((mesh, text))
"""
Explanation: Now we can loop over the collection of returned datasets and plot them. As we plot, we collect the returned plot objects so that we can use them to make an animated plot. We also add a timestamp for each plot.
End of explanation
"""
# Set up matplotlib to do the conversion to HTML5 video
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
# Create an animation
from matplotlib.animation import ArtistAnimation
ArtistAnimation(fig, meshes)
"""
Explanation: Using matplotlib, we can take a collection of Artists that have been plotted and turn them into an animation. With matplotlib 1.5 (1.5-rc2 is available now!), this animation can be converted to HTML5 video viewable in the notebook.
End of explanation
"""
PyDataMadrid2016/Conference-Info | workshops_materials/20160408_1100_Pandas_for_beginners/tutorial/EN - Tutorial 04 - Selecting data.ipynb | mit
# first, the imports
import os
import datetime as dt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display
np.random.seed(19760812)
%matplotlib inline
# We read the data in the file 'mast.txt'
ipath = os.path.join('Datos', 'mast.txt')
def dateparse(date, time):
YY = 2000 + int(date[:2])
MM = int(date[2:4])
DD = int(date[4:])
hh = int(time[:2])
mm = int(time[2:])
return dt.datetime(YY, MM, DD, hh, mm, 0)
cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',
'x1', 'x2', 'x3', 'x4', 'x5',
'wspd_std']
wind = pd.read_csv(ipath, sep = r"\s*", names = cols,
parse_dates = [[0, 1]], index_col = 0,
date_parser = dateparse)
"""
Explanation: Again, first of all, lets read some data
Read some wind data
End of explanation
"""
wind[0:10]
"""
Explanation: Selecting data
Accessing elements as if it were a numpy array
We can access elements using indexing, just as we would with a numpy array or any Python sequence:
In Python, indexing starts at 0 and the last element of a slice is not included.
End of explanation
"""
wind['2013-09-04 00:00:00':'2013-09-04 01:30:00']
"""
Explanation: Indexes in a numpy array can only be integers.
Accessing elements by the labels of the index
Also, unlike numpy, we can index using labels that are not integers:
End of explanation
"""
wind['wspd'].head(3)
"""
Explanation: In this second example, indexing is done with strings, which are the labels of the index. We can also highlight that, in this case, the last element of the slice IS INCLUDED.
Basic selection of a column (DataFrame)
In previous examples, we have also seen that we can select a column by its name:
End of explanation
"""
# This is similar to what we did in the previous code cell
wind.wspd.head(3)
# An example that can raise an error
df1 = pd.DataFrame(np.random.randn(5,2), columns = [1, 2])
df1
# This will fail: an integer is not a valid attribute name
df1.1
# To select that column we have to use indexing instead
df1[1]
"""
Explanation: Depending on how the column names are defined, we can access the column values using dot notation, but this does not always work, so I strongly recommend not using it:
End of explanation
"""
# Create a Series
wspd = wind['wspd']
# Access the elements located at positions 0, 100 and 1000
print(wspd[[0, 100, 1000]])
print('\n' * 3)
# Using indexes at locations 0, 100 and 1000
idx = wspd[[0, 100, 1000]].index
print(idx)
print('\n' * 3)
# Access the same elements as before, but using labels instead of positions
print(wspd[idx])
"""
Explanation: Fancy indexing with Series
You can also use fancy indexing with a Series, passing a list of positions/labels or a boolean array:
End of explanation
"""
# Try it...
"""
Explanation: With DataFrames this kind of fancy indexing is ambiguous and will raise an IndexError.
End of explanation
"""
idx = wind['wspd'] > 35
wind[idx]
"""
Explanation: Boolean indexing
As with numpy, we can select values using boolean indexing:
End of explanation
"""
idx = (wind['wspd'] > 35) & (wind['wdir'] > 225)
wind[idx]
"""
Explanation: We can combine several conditions. For instance, let's refine the previous result:
End of explanation
"""
# To make it more efficient you should install 'numexpr',
# which is the default engine. If you don't have it installed
# and you don't set the engine to 'python' you will get an ImportError
wind.query('wspd > 35 and wdir > 225', engine = 'python')
"""
Explanation: Chained boolean conditions can be hard to read. Since version 0.13 you can use the query method to make the expression more readable.
End of explanation
"""
s1 = pd.Series(np.arange(0,10), index = np.arange(0,10))
s2 = pd.Series(np.arange(10,20), index = np.arange(5,15))
print(s1)
print(s2)
"""
Explanation: These ways of selecting can be ambiguous in some cases. Let's make a parenthetical remark and come back later to more advanced ways of selecting.
Remark: Alignment of data when we operate with pandas data structures
When we perform an operation between two pandas data structures we get a very practical alignment effect. Let's see this with examples:
End of explanation
"""
s1 + s2
"""
Explanation: Now, if we perform an operation between the two Series, the operation is carried out wherever an index label is present on both sides; where a label exists on only one side, that label is kept in the result, but the operation cannot be performed, so a NaN is returned instead of an error:
End of explanation
"""
wind['wspd_std']
"""
Explanation: Coming back to indexing (recap)
One of the basic features of pandas is the labeling of rows and columns, which can make indexing more complex than in numpy. We have to distinguish between:
selecting by label
selecting by position (numpy)
Indexing a Series is simpler: since there is only one column, labels always refer to rows. As we have been seeing informally, for a DataFrame basic indexing selects columns.
To select only a column, as we have seen previously:
End of explanation
"""
wind[['wspd', 'wspd_std']]
"""
Explanation: Or we can select several columns:
End of explanation
"""
wind['2015/01/01 00:00':'2015/01/01 02:00']
"""
Explanation: But slicing operates on the row index:
End of explanation
"""
wind['wspd':'wdir']
wind[['wspd':'wdir']]
"""
Explanation: So the following will raise an error:
End of explanation
"""
wind.loc['2013-09-04 00:00:00':'2013-09-04 00:20:00', 'wspd':'wspd_max']
wind.iloc[0:3, 0:2] # similar to indexing a numpy arrays wind.values[0:3, 0:2]
wind.ix[0:3, 'wspd':'wspd_max']
"""
Explanation: Uh, what a mess!!
Indexing (à la pandas)
We have several available methods to index in a pandas data structure:
loc: used when we index with row and column labels (it also accepts boolean arrays).
iloc: based on element positions (as if it were a numpy array).
ix: a combination of the two previous methods.
These methods are also available for Series, but there they are less useful since indexing is not ambiguous.
Let's see how these methods work in a DataFrame...
Select the first three items in columns 'wspd' and 'wspd_max':
End of explanation
"""
wind[0:3][['wspd', 'wspd_max']]
wind[['wspd', 'wspd_max']][0:3]
"""
Explanation: A fourth way not seen before would be:
End of explanation
"""
wind.between_time('00:00', '00:30').head(20)
# It also works with series:
wind['wspd'].between_time('00:00', '00:30').head(20)
"""
Explanation: Let's practice all of this
Return all the January 2014 values
Compute the mean wind speed during february 2014
Use the query method to obtain all records with wind coming from the North (within ±10° of North, taking 0° as North) and wind speed above 10 m/s
The same as before, but using a boolean array
All the previous problems can also be solved with loc, iloc and/or ix. Practice all the possibilities.
One last curiosity, in case you work with time series
pandas data structures have a method to select between times:
End of explanation
"""
danielgoncalvesti/BIGDATA2017 | Atividade03/Lab4a_regressao_linear.ipynb | gpl-3.0
sc = SparkContext.getOrCreate()
# load the dataset
from test_helper import Test
import os.path
baseDir = os.path.join('Data')
inputPath = os.path.join('millionsong.txt')
fileName = os.path.join(baseDir, inputPath)
numPartitions = 2
rawData = sc.textFile(fileName, numPartitions)
# EXERCISE
numPoints = rawData.count()
print numPoints
samplePoints = rawData.take(5)
print samplePoints
# TEST Load and check the data (1a)
Test.assertEquals(numPoints, 6724, 'incorrect value for numPoints')
Test.assertEquals(len(samplePoints), 5, 'incorrect length for samplePoints')
"""
Explanation: Linear Regression
This notebook shows a basic implementation of Linear Regression and the use of PySpark's MLlib library for the regression task on the Million Song Dataset from the UCI Machine Learning Repository. Our goal is to predict the year of a song from its audio features.
In this notebook:
Part 1: Reading and parsing the dataset
Visualization 1: Features
Visualization 2: Shifting the variables of interest
Part 2: Creating a baseline predictor
Visualization 3: Predicted vs. Actual values
Part 3: Training and evaluating a linear regression model
Visualization 4: Training Error
Part 4: Training with MLlib and tuning the hyperparameters
Visualization 5: Best model's predictions
Visualization 6: Hyperparameter heat map
Part 5: Adding interactions between features
Part 6: Applying it to the San Francisco crime dataset
For reference, see the relevant PySpark methods in Spark's Python API and NumPy methods in the NumPy Reference
Part 1: Reading and parsing the dataset
(1a) Checking the available data
The data we will use is stored in a text file. In the first step we will turn the text data into an RDD and check its formatting. Modify the second cell to find out how many samples there are in this dataset using the count method.
Note that the label of this dataset is the first field, representing the year.
End of explanation
"""
from pyspark.mllib.regression import LabeledPoint
import numpy as np
# Here is a sample raw data point:
# '2001.0,0.884,0.610,0.600,0.474,0.247,0.357,0.344,0.33,0.600,0.425,0.60,0.419'
# In this raw data point, 2001.0 is the label, and the remaining values are features
# EXERCISE
def parsePoint(line):
"""Converts a comma separated unicode string into a `LabeledPoint`.
Args:
line (unicode): Comma separated unicode string where the first element is the label and the
remaining elements are features.
Returns:
LabeledPoint: The line is converted into a `LabeledPoint`, which consists of a label and
features.
"""
Point = line.split(",")
return LabeledPoint(Point[0], Point[1:])
parsedSamplePoints = map(parsePoint,samplePoints)
firstPointFeatures = parsedSamplePoints[0].features
firstPointLabel = parsedSamplePoints[0].label
print firstPointFeatures, firstPointLabel
d = len(firstPointFeatures)
print d
# TEST Using LabeledPoint (1b)
Test.assertTrue(isinstance(firstPointLabel, float), 'label must be a float')
expectedX0 = [0.8841,0.6105,0.6005,0.4747,0.2472,0.3573,0.3441,0.3396,0.6009,0.4257,0.6049,0.4192]
Test.assertTrue(np.allclose(expectedX0, firstPointFeatures, 1e-4, 1e-4),
'incorrect features for firstPointFeatures')
Test.assertTrue(np.allclose(2001.0, firstPointLabel), 'incorrect label for firstPointLabel')
Test.assertTrue(d == 12, 'incorrect number of features')
"""
Explanation: (1b) Using LabeledPoint
In MLlib, labeled datasets must be stored using the LabeledPoint object. Write the parsePoint function that takes a data sample as input, transforms it using unicode.split, and returns a LabeledPoint.
Apply this function to the samplePoints variable from the previous cell and print the features and label using the LabeledPoint.features and LabeledPoint.label attributes. Finally, compute the number of features in this dataset.
End of explanation
"""
#insert a graphic inline
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
sampleMorePoints = rawData.take(50)
parsedSampleMorePoints = map(parsePoint, sampleMorePoints)
dataValues = map(lambda lp: lp.features.toArray(), parsedSampleMorePoints)
#print dataValues
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
"""Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot
fig, ax = preparePlot(np.arange(.5, 11, 1), np.arange(.5, 49, 1), figsize=(8,7), hideLabels=True,
gridColor='#eeeeee', gridWidth=1.1)
image = plt.imshow(dataValues,interpolation='nearest', aspect='auto', cmap=cm.Greys)
for x, y, s in zip(np.arange(-.125, 12, 1), np.repeat(-.75, 12), [str(x) for x in range(12)]):
plt.text(x, y, s, color='#999999', size='10')
plt.text(4.7, -3, 'Feature', color='#999999', size='11'), ax.set_ylabel('Observation')
pass
"""
Explanation: Visualization 1: Features
The next cell shows one way to visualize the features with a heat map. The map shows the first 50 objects and their features represented in shades of gray, with white representing the value 0 and black representing the value 1.
This kind of visualization helps in perceiving the variation of the feature values.
End of explanation
"""
# EXERCISE
parsedDataInit = rawData.map(lambda x: parsePoint(x))
onlyLabels = parsedDataInit.map(lambda x: x.label)
minYear = onlyLabels.min()
maxYear = onlyLabels.max()
print maxYear, minYear
# TEST Find the range (1c)
Test.assertEquals(len(parsedDataInit.take(1)[0].features), 12,
'unexpected number of features in sample point')
sumFeatTwo = parsedDataInit.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedDataInit has unexpected values')
yearRange = maxYear - minYear
Test.assertTrue(yearRange == 89, 'incorrect range for minYear to maxYear')
# Debug
parsedDataInit.take(1)
# EXERCISE
parsedData = parsedDataInit.map(lambda x: LabeledPoint(x.label - minYear, x.features))
# Should be a LabeledPoint
print type(parsedData.take(1)[0])
# View the first point
print '\n{0}'.format(parsedData.take(1))
# TEST Shift labels (1d)
oldSampleFeatures = parsedDataInit.take(1)[0].features
newSampleFeatures = parsedData.take(1)[0].features
Test.assertTrue(np.allclose(oldSampleFeatures, newSampleFeatures),
'new features do not match old features')
sumFeatTwo = parsedData.map(lambda lp: lp.features[2]).sum()
Test.assertTrue(np.allclose(sumFeatTwo, 3158.96224351), 'parsedData has unexpected values')
minYearNew = parsedData.map(lambda lp: lp.label).min()
maxYearNew = parsedData.map(lambda lp: lp.label).max()
Test.assertTrue(minYearNew == 0, 'incorrect min year in shifted data')
Test.assertTrue(maxYearNew == 89, 'incorrect max year in shifted data')
"""
Explanation: (1c) Shifting the labels
To better visualize the obtained solutions, compute the prediction error, and visualize the relationship between features and labels, labels are usually shifted so they start at zero.
So we will check the range of the label values and then subtract the minimum value found from each label. In some cases it may also be useful to normalize these values by dividing by the maximum label.
End of explanation
"""
# EXERCISE
weights = [.8, .1, .1]
seed = 42
parsedTrainData, parsedValData, parsedTestData = parsedData.randomSplit(weights, seed)
parsedTrainData.cache()
parsedValData.cache()
parsedTestData.cache()
nTrain = parsedTrainData.count()
nVal = parsedValData.count()
nTest = parsedTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print parsedData.count()
# TEST Training, validation, and test sets (1e)
Test.assertEquals(parsedTrainData.getNumPartitions(), numPartitions,
'parsedTrainData has wrong number of partitions')
Test.assertEquals(parsedValData.getNumPartitions(), numPartitions,
'parsedValData has wrong number of partitions')
Test.assertEquals(parsedTestData.getNumPartitions(), numPartitions,
'parsedTestData has wrong number of partitions')
Test.assertEquals(len(parsedTrainData.take(1)[0].features), 12,
'parsedTrainData has wrong number of features')
sumFeatTwo = (parsedTrainData
.map(lambda lp: lp.features[2])
.sum())
sumFeatThree = (parsedValData
.map(lambda lp: lp.features[3])
.reduce(lambda x, y: x + y))
sumFeatFour = (parsedTestData
.map(lambda lp: lp.features[4])
.reduce(lambda x, y: x + y))
Test.assertTrue(np.allclose([sumFeatTwo, sumFeatThree, sumFeatFour],
2526.87757656, 297.340394298, 184.235876654),
'parsed Train, Val, Test data has unexpected values')
Test.assertTrue(nTrain + nVal + nTest == 6724, 'unexpected Train, Val, Test data set size')
Test.assertEquals(nTrain, 5371, 'unexpected value for nTrain')
Test.assertEquals(nVal, 682, 'unexpected value for nVal')
Test.assertEquals(nTest, 671, 'unexpected value for nTest')
"""
Explanation: (1d) Training, validation, and test sets
As a next step, we will split our dataset into training, validation, and test sets as discussed in class. Use the randomSplit method with the weights and random seed specified in the cell below to create the split. Then pre-cache the parsed dataset using the cache() method.
This command runs the transformations over the dataset and stores the result in a new RDD that can be kept in memory, if it fits, or in a temporary file.
End of explanation
"""
# EXERCISE
averageTrainYear = (parsedTrainData
.map(lambda x: x.label)
.mean()
)
print averageTrainYear
# TEST Average label (2a)
Test.assertTrue(np.allclose(averageTrainYear, 53.9316700801),
'incorrect value for averageTrainYear')
"""
Explanation: Part 2: Creating the baseline model
(2a) Average label
The baseline is useful to verify that our regression model is working. It should be a very simple model that any algorithm can beat.
A commonly used baseline is to make the same prediction regardless of the observed data, using the average label of the training set. Compute the average of the shifted labels on the training set; we will use this value later to compare prediction errors. Use an appropriate method for this task; consult the RDD API.
End of explanation
"""
# EXERCISE
def squaredError(label, prediction):
"""Calculates the the squared error for a single prediction.
Args:
label (float): The correct value for this observation.
prediction (float): The predicted value for this observation.
Returns:
float: The difference between the `label` and `prediction` squared.
"""
return np.square(label - prediction)
def calcRMSE(labelsAndPreds):
"""Calculates the root mean squared error for an `RDD` of (label, prediction) tuples.
Args:
labelsAndPred (RDD of (float, float)): An `RDD` consisting of (label, prediction) tuples.
Returns:
float: The square root of the mean of the squared errors.
"""
return np.sqrt(labelsAndPreds.map(lambda (x,y): squaredError(x,y)).mean())
labelsAndPreds = sc.parallelize([(3., 1.), (1., 2.), (2., 2.)])
# RMSE = sqrt[((3-1)^2 + (1-2)^2 + (2-2)^2) / 3] = 1.291
exampleRMSE = calcRMSE(labelsAndPreds)
print exampleRMSE
# TEST Root mean squared error (2b)
Test.assertTrue(np.allclose(squaredError(3, 1), 4.), 'incorrect definition of squaredError')
Test.assertTrue(np.allclose(exampleRMSE, 1.29099444874), 'incorrect value for exampleRMSE')
"""
Explanation: (2b) Root mean squared error
To compare performance in regression problems, the Root Mean Squared Error (RMSE) is generally used. Implement a function that computes the RMSE from an RDD of (label, prediction) tuples.
End of explanation
"""
#Debug
parsedTrainData.take(1)
# EXERCISE -> (label, prediction)
labelsAndPredsTrain = parsedTrainData.map(lambda x:(x.label, averageTrainYear))
rmseTrainBase = calcRMSE(labelsAndPredsTrain)
labelsAndPredsVal = parsedValData.map(lambda x:(x.label, averageTrainYear))
rmseValBase = calcRMSE(labelsAndPredsVal)
labelsAndPredsTest = parsedTestData.map(lambda x:(x.label, averageTrainYear))
rmseTestBase = calcRMSE(labelsAndPredsTest)
print 'Baseline Train RMSE = {0:.3f}'.format(rmseTrainBase)
print 'Baseline Validation RMSE = {0:.3f}'.format(rmseValBase)
print 'Baseline Test RMSE = {0:.3f}'.format(rmseTestBase)
# TEST Training, validation and test RMSE (2c)
Test.assertTrue(np.allclose([rmseTrainBase, rmseValBase, rmseTestBase],
[21.305869, 21.586452, 22.136957]), 'incorrect RMSE value')
"""
Explanation: (2c) Baseline RMSE for the training, validation, and test sets
Let's compute the RMSE of our baseline. First create an RDD of (label, prediction) tuples for each set, and then call the calcRMSE function.
End of explanation
"""
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
cmap = get_cmap('YlOrRd')
norm = Normalize()
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, lp.label))
.map(lambda (l, p): squaredError(l, p))
.collect())
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 100, 20), np.arange(0, 100, 20))
plt.scatter(actual, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.5)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
pass
predictions = np.asarray(parsedValData
.map(lambda lp: averageTrainYear)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, averageTrainYear))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(53.0, 55.0, 0.5), np.arange(0, 100, 20))
ax.set_xlim(53, 55)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=0.3)
ax.set_xlabel('Predicted'), ax.set_ylabel('Actual')
"""
Explanation: Visualization 2: Predicted vs. actual
Let's visualize the predictions on the validation set. The scatter plots below place each point with its X coordinate at the value predicted by the model and its Y coordinate at the actual label.
The first plot shows the ideal situation: a model that gets every label right. The second plot shows the performance of the baseline model. The color of each point represents the squared error of that prediction; the closer to orange, the larger the error.
End of explanation
"""
from pyspark.mllib.linalg import DenseVector
# EXERCICIO
def gradientSummand(weights, lp):
"""Calculates the gradient summand for a given weight and `LabeledPoint`.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably
within this function. For example, they both implement the `dot` method.
Args:
weights (DenseVector): An array of model weights (betas).
lp (LabeledPoint): The `LabeledPoint` for a single observation.
Returns:
DenseVector: An array of values the same length as `weights`. The gradient summand.
"""
return (weights.dot(lp.features) - lp.label) * lp.features
exampleW = DenseVector([1, 1, 1])
exampleLP = LabeledPoint(2.0, [3, 1, 4])
summandOne = gradientSummand(exampleW, exampleLP)
print summandOne
exampleW = DenseVector([.24, 1.2, -1.4])
exampleLP = LabeledPoint(3.0, [-1.4, 4.2, 2.1])
summandTwo = gradientSummand(exampleW, exampleLP)
print summandTwo
# TEST Gradient summand (3a)
Test.assertTrue(np.allclose(summandOne, [18., 6., 24.]), 'incorrect value for summandOne')
Test.assertTrue(np.allclose(summandTwo, [1.7304,-5.1912,-2.5956]), 'incorrect value for summandTwo')
"""
Explanation: Part 3: Training and evaluating the linear regression model
(3a) Error gradient
We will implement linear regression via gradient descent.
Recall that the linear regression weight update is: $$ \scriptsize \mathbf{w}_{i+1} = \mathbf{w}_i - \alpha_i \sum_j (\mathbf{w}_i^\top\mathbf{x}_j - y_j) \mathbf{x}_j \,.$$ where $ \scriptsize i $ is the iteration of the algorithm, and $ \scriptsize j $ is the object currently being observed.
First, implement a function that computes this error gradient for a given object: $ \scriptsize (\mathbf{w}^\top \mathbf{x} - y) \mathbf{x} \, ,$ and test the function on two examples. Use the DenseVector dot method to represent the feature list (it behaves much like np.array()).
End of explanation
"""
# EXERCISE
def getLabeledPrediction(weights, observation):
"""Calculates predictions and returns a (label, prediction) tuple.
Note:
The labels should remain unchanged as we'll use this information to calculate prediction
error later.
Args:
weights (np.ndarray): An array with one weight for each features in `trainData`.
observation (LabeledPoint): A `LabeledPoint` that contain the correct label and the
features for the data point.
Returns:
tuple: A (label, prediction) tuple.
"""
return ( observation.label, weights.dot(observation.features) )
weights = np.array([1.0, 1.5])
predictionExample = sc.parallelize([LabeledPoint(2, np.array([1.0, .5])),
LabeledPoint(1.5, np.array([.5, .5]))])
labelsAndPredsExample = predictionExample.map(lambda lp: getLabeledPrediction(weights, lp))
print labelsAndPredsExample.collect()
# TEST Use weights to make predictions (3b)
Test.assertEquals(labelsAndPredsExample.collect(), [(2.0, 1.75), (1.5, 1.25)],
'incorrect definition for getLabeledPredictions')
"""
Explanation: (3b) Use the weights to make predictions
Now implement the getLabeledPrediction function, which takes the weight vector and a LabeledPoint as parameters and returns a (label, prediction) tuple. Remember that we can predict a label by computing the dot product of the weights with the features.
End of explanation
"""
# EXERCISE
def linregGradientDescent(trainData, numIters):
"""Calculates the weights and error for a linear regression model trained with gradient descent.
Note:
`DenseVector` behaves similarly to a `numpy.ndarray` and they can be used interchangably
within this function. For example, they both implement the `dot` method.
Args:
trainData (RDD of LabeledPoint): The labeled data for use in training the model.
numIters (int): The number of iterations of gradient descent to perform.
Returns:
(np.ndarray, np.ndarray): A tuple of (weights, training errors). Weights will be the
final weights (one weight per feature) for the model, and training errors will contain
an error (RMSE) for each iteration of the algorithm.
"""
# The length of the training data
n = trainData.count()
# The number of features in the training data
d = len(trainData.take(1)[0].features)
w = np.zeros(d)
alpha = 1.0
# We will compute and store the training error after each iteration
errorTrain = np.zeros(numIters)
for i in range(numIters):
# Use getLabeledPrediction from (3b) with trainData to obtain an RDD of (label, prediction)
# tuples. Note that the weights all equal 0 for the first iteration, so the predictions will
# have large errors to start.
labelsAndPredsTrain = trainData.map(lambda x: getLabeledPrediction(w, x))
errorTrain[i] = calcRMSE(labelsAndPredsTrain)
# Calculate the `gradient`. Make use of the `gradientSummand` function you wrote in (3a).
# Note that `gradient` sould be a `DenseVector` of length `d`.
gradient = trainData.map(lambda x: gradientSummand(w, x)).sum()
# Update the weights
alpha_i = alpha / (n * np.sqrt(i+1))
w -= alpha_i*gradient
return w, errorTrain
# create a toy dataset with n = 10, d = 3, and then run 5 iterations of gradient descent
# note: the resulting model will not be useful; the goal here is to verify that
# linregGradientDescent is working properly
exampleN = 10
exampleD = 3
exampleData = (sc
.parallelize(parsedTrainData.take(exampleN))
.map(lambda lp: LabeledPoint(lp.label, lp.features[0:exampleD])))
print exampleData.take(2)
exampleNumIters = 5
exampleWeights, exampleErrorTrain = linregGradientDescent(exampleData, exampleNumIters)
print exampleWeights
# TEST Gradient descent (3c)
expectedOutput = [48.88110449, 36.01144093, 30.25350092]
Test.assertTrue(np.allclose(exampleWeights, expectedOutput), 'value of exampleWeights is incorrect')
expectedError = [79.72013547, 30.27835699, 9.27842641, 9.20967856, 9.19446483]
Test.assertTrue(np.allclose(exampleErrorTrain, expectedError),
'value of exampleErrorTrain is incorrect')
"""
Explanation: (3c) Gradient descent
Finally, implement the gradient descent algorithm for linear regression and test the function on an example.
End of explanation
"""
# EXERCICIO
numIters = 50
weightsLR0, errorTrainLR0 = linregGradientDescent(parsedTrainData, numIters)
labelsAndPreds = parsedValData.map(lambda x: getLabeledPrediction(weightsLR0, x))
rmseValLR0 = calcRMSE(labelsAndPreds)
print 'Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}'.format(rmseValBase,
rmseValLR0)
# TEST Train the model (3d)
expectedOutput = [22.64535883, 20.064699, -0.05341901, 8.2931319, 5.79155768, -4.51008084,
15.23075467, 3.8465554, 9.91992022, 5.97465933, 11.36849033, 3.86452361]
Test.assertTrue(np.allclose(weightsLR0, expectedOutput), 'incorrect value for weightsLR0')
"""
Explanation: (3d) Training the model on the dataset
Now we will train the linear regression model on our training set and compute the RMSE on the validation set. Remember that we must not use the test set until the best model parameter has been chosen.
For this task we will use the previously implemented functions linregGradientDescent, getLabeledPrediction, and calcRMSE.
End of explanation
"""
norm = Normalize()
clrs = cmap(np.asarray(norm(np.log(errorTrainLR0))))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(2, 6, 1))
ax.set_ylim(2, 6)
plt.scatter(range(0, numIters), np.log(errorTrainLR0), s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xlabel('Iteration'), ax.set_ylabel(r'$\log_e(errorTrainLR0)$')
pass
norm = Normalize()
clrs = cmap(np.asarray(norm(errorTrainLR0[6:])))[:,0:3]
fig, ax = preparePlot(np.arange(0, 60, 10), np.arange(17, 22, 1))
ax.set_ylim(17.8, 21.2)
plt.scatter(range(0, numIters-6), errorTrainLR0[6:], s=14**2, c=clrs, edgecolors='#888888', alpha=0.75)
ax.set_xticklabels(map(str, range(6, 66, 10)))
ax.set_xlabel('Iteration'), ax.set_ylabel(r'Training Error')
pass
"""
Explanation: Visualization 3: Training Error
Let's examine the behavior of the algorithm across iterations. To do so, we will plot a chart where the x-axis represents the iteration and the y-axis the log of the RMSE. The first chart shows the first 50 iterations while the second shows the last 44 iterations. Note that the error initially drops quickly, after which gradient descent makes only small adjustments.
End of explanation
"""
from pyspark.mllib.regression import LinearRegressionWithSGD
# Values to use when training the linear regression model
numIters = 500 # iterations
alpha = 1.0 # step
miniBatchFrac = 1.0 # miniBatchFraction
reg = 1e-1 # regParam
regType = 'l2' # regType
useIntercept = True # intercept
# EXERCICIO
firstModel = LinearRegressionWithSGD.train(parsedTrainData, iterations=numIters, step=alpha, miniBatchFraction=miniBatchFrac,
                                           regParam=reg, regType=regType, intercept=useIntercept)
# weightsLR1 stores the model weights; interceptLR1 stores the model intercept
weightsLR1 = firstModel.weights
interceptLR1 = firstModel.intercept
print weightsLR1, interceptLR1
# TEST LinearRegressionWithSGD (4a)
expectedIntercept = 13.3335907631
expectedWeights = [16.682292427, 14.7439059559, -0.0935105608897, 6.22080088829, 4.01454261926, -3.30214858535,
11.0403027232, 2.67190962854, 7.18925791279, 4.46093254586, 8.14950409475, 2.75135810882]
Test.assertTrue(np.allclose(interceptLR1, expectedIntercept), 'incorrect value for interceptLR1')
Test.assertTrue(np.allclose(weightsLR1, expectedWeights), 'incorrect value for weightsLR1')
"""
Explanation: Part 4: Training with MLlib and Grid Search
(4a) LinearRegressionWithSGD
Our initial attempt already performed better than the baseline, but let's see whether we can do better by introducing the intercept of the line along with other adjustments to the algorithm. MLlib's LinearRegressionWithSGD implements the same algorithm as part (3b), but more efficiently for the distributed setting and with several additional features.
First, use the LinearRegressionWithSGD function to train a model with L2 (Ridge) regularization and an intercept. This method returns a LinearRegressionModel.
Then, use the weights and intercept attributes to print the model that was found.
End of explanation
"""
# EXERCICIO
samplePoint = parsedTrainData.take(1)[0]
samplePrediction = firstModel.predict(samplePoint.features)
print samplePrediction
# TEST Predict (4b)
Test.assertTrue(np.allclose(samplePrediction, 56.8013380112),
'incorrect value for samplePrediction')
"""
Explanation: (4b) Prediction
Now use the LinearRegressionModel.predict() method to make a prediction for a data point. Pass the features attribute of a LabeledPoint as the parameter.
End of explanation
"""
# EXERCICIO
labelsAndPreds = parsedValData.map(lambda x: (x.label, firstModel.predict(x.features)))
rmseValLR1 = calcRMSE(labelsAndPreds)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}' +
'\n\tLR1 = {2:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1)
# TEST Evaluate RMSE (4c)
Test.assertTrue(np.allclose(rmseValLR1, 19.691247), 'incorrect value for rmseValLR1')
"""
Explanation: (4c) Evaluate the RMSE
Now evaluate this model's performance on the validation set. Use the predict() method to create the labelsAndPreds RDD, and then use the calcRMSE() function from Part (2b) to compute the RMSE.
End of explanation
"""
# EXERCICIO
bestRMSE = rmseValLR1
bestRegParam = reg
bestModel = firstModel
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
for reg in [1e-10, 1e-5, 1]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda x: (x.label, model.predict(x.features)))
rmseValGrid = calcRMSE(labelsAndPreds)
print rmseValGrid
if rmseValGrid < bestRMSE:
bestRMSE = rmseValGrid
bestRegParam = reg
bestModel = model
rmseValLRGrid = bestRMSE
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n' +
'\tLRGrid = {3:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1, rmseValLRGrid)
# TEST Grid search (4d)
Test.assertTrue(np.allclose(17.017170, rmseValLRGrid), 'incorrect value for rmseValLRGrid')
"""
Explanation: (4d) Grid search
We are already beating the baseline by at least two years on average; let's see whether we can find an even better set of parameters. Perform a grid search to find a good regularization parameter. Try values for regParam from the set 1e-10, 1e-5, and 1.
End of explanation
"""
predictions = np.asarray(parsedValData
.map(lambda lp: bestModel.predict(lp.features))
.collect())
actual = np.asarray(parsedValData
.map(lambda lp: lp.label)
.collect())
error = np.asarray(parsedValData
.map(lambda lp: (lp.label, bestModel.predict(lp.features)))
.map(lambda (l, p): squaredError(l, p))
.collect())
norm = Normalize()
clrs = cmap(np.asarray(norm(error)))[:,0:3]
fig, ax = preparePlot(np.arange(0, 120, 20), np.arange(0, 120, 20))
ax.set_xlim(15, 82), ax.set_ylim(-5, 105)
plt.scatter(predictions, actual, s=14**2, c=clrs, edgecolors='#888888', alpha=0.75, linewidths=.5)
ax.set_xlabel('Predicted'), ax.set_ylabel(r'Actual')
pass
"""
Explanation: Visualization 5: Best model's predictions
Now let's create a chart to check the performance of the best model. Note in this chart that the number of darker points has decreased considerably compared to the baseline.
End of explanation
"""
# EXERCICIO
reg = bestRegParam
modelRMSEs = []
for alpha in [1e-5, 10]:
for numIters in [500, 5]:
model = LinearRegressionWithSGD.train(parsedTrainData, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPreds = parsedValData.map(lambda lp: (lp.label, model.predict(lp.features)))
rmseVal = calcRMSE(labelsAndPreds)
print 'alpha = {0:.0e}, numIters = {1}, RMSE = {2:.3f}'.format(alpha, numIters, rmseVal)
modelRMSEs.append(rmseVal)
# TEST Vary alpha and the number of iterations (4e)
expectedResults = sorted([56.969705, 56.892949, 355124752.221221])
Test.assertTrue(np.allclose(sorted(modelRMSEs)[:3], expectedResults), 'incorrect value for modelRMSEs')
"""
Explanation: (4e) Grid search over alpha and the number of iterations
Now let's try different values for alpha and the number of iterations to see the impact of these parameters on our model. Specifically, try the values 1e-5 and 10 for alpha and the values 500 and 5 for the number of iterations. Evaluate all models on the validation set. Note that with a low value of alpha, the algorithm needs many more iterations to converge to the optimum, while a very high value of alpha can prevent the algorithm from finding a solution at all.
End of explanation
"""
# EXERCICIO
import itertools
def twoWayInteractions(lp):
"""Creates a new `LabeledPoint` that includes two-way interactions.
Note:
For features [x, y] the two-way interactions would be [x^2, x*y, y*x, y^2] and these
would be appended to the original [x, y] feature list.
Args:
lp (LabeledPoint): The label and features for this observation.
Returns:
LabeledPoint: The new `LabeledPoint` should have the same label as `lp`. Its features
should include the features from `lp` followed by the two-way interaction features.
"""
    newfeats = [x * y for x, y in itertools.product(lp.features, repeat=2)]
    return LabeledPoint(lp.label, np.hstack((lp.features, newfeats)))
print twoWayInteractions(LabeledPoint(0.0, [2, 3]))
# Transform the existing train, validation, and test sets to include two-way interactions.
trainDataInteract = parsedTrainData.map(twoWayInteractions)
valDataInteract = parsedValData.map(twoWayInteractions)
testDataInteract = parsedTestData.map(twoWayInteractions)
# TEST Add two-way interactions (5a)
twoWayExample = twoWayInteractions(LabeledPoint(0.0, [2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayExample.features),
sorted([2.0, 3.0, 4.0, 6.0, 6.0, 9.0])),
'incorrect features generated by twoWayInteractions')
twoWayPoint = twoWayInteractions(LabeledPoint(1.0, [1, 2, 3]))
Test.assertTrue(np.allclose(sorted(twoWayPoint.features),
sorted([1.0,2.0,3.0,1.0,2.0,3.0,2.0,4.0,6.0,3.0,6.0,9.0])),
'incorrect features generated by twoWayInteractions')
Test.assertEquals(twoWayPoint.label, 1.0, 'incorrect label generated by twoWayInteractions')
Test.assertTrue(np.allclose(sum(trainDataInteract.take(1)[0].features), 40.821870576035529),
'incorrect features in trainDataInteract')
Test.assertTrue(np.allclose(sum(valDataInteract.take(1)[0].features), 45.457719932695696),
'incorrect features in valDataInteract')
Test.assertTrue(np.allclose(sum(testDataInteract.take(1)[0].features), 35.109111632783168),
'incorrect features in testDataInteract')
"""
Explanation: Part 5: Adding non-linear features
(5a) Two-way interactions
As mentioned in class, linear regression models can capture the linear relationships between the features and the label. However, in many cases the relationship between them is non-linear.
One way to address this problem is to create more features with non-linear characteristics, for example the quadratic expansion of the original features. Write a twoWayInteractions function that takes a LabeledPoint and generates a new LabeledPoint containing the original features plus the two-way interactions between them. Note that an observation with 3 features will have nine ( $ \scriptsize 3^2 $ ) two-way interactions.
To make this easier, use the itertools.product method to generate tuples for each possible interaction. Also use np.hstack to concatenate two vectors.
End of explanation
"""
# EXERCICIO
numIters = 500
alpha = 1.0
miniBatchFrac = 1.0
reg = 1e-10
modelInteract = LinearRegressionWithSGD.train(trainDataInteract, numIters, alpha,
miniBatchFrac, regParam=reg,
regType='l2', intercept=True)
labelsAndPredsInteract = valDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseValInteract = calcRMSE(labelsAndPredsInteract)
print ('Validation RMSE:\n\tBaseline = {0:.3f}\n\tLR0 = {1:.3f}\n\tLR1 = {2:.3f}\n\tLRGrid = ' +
'{3:.3f}\n\tLRInteract = {4:.3f}').format(rmseValBase, rmseValLR0, rmseValLR1,
rmseValLRGrid, rmseValInteract)
# TEST Build interaction model (5b)
Test.assertTrue(np.allclose(rmseValInteract, 15.6894664683), 'incorrect value for rmseValInteract')
"""
Explanation: (5b) Building a new model
Now build a new model using these new features. Note that ideally, with new features, you should perform a new grid search to determine the new optimal parameters, since the previous model's parameters will not necessarily work here.
For this exercise, the parameters have already been optimized.
End of explanation
"""
# EXERCICIO
labelsAndPredsTest = testDataInteract.map(lambda lp: (lp.label, modelInteract.predict(lp.features)))
rmseTestInteract = calcRMSE(labelsAndPredsTest)
print ('Test RMSE:\n\tBaseline = {0:.3f}\n\tLRInteract = {1:.3f}'
.format(rmseTestBase, rmseTestInteract))
# TEST Evaluate interaction model on test data (5c)
Test.assertTrue(np.allclose(rmseTestInteract, 16.3272040537),
'incorrect value for rmseTestInteract')
"""
Explanation: (5c) Evaluating the interaction model
Finally, the best model on the validation set was the interaction model. In practice this would be the model chosen to apply to unlabeled data. Let's see how this choice would fare by evaluating both this model and the baseline on the test set.
End of explanation
"""
|
deadbeatfour/notebooks | SWE_square_well/swe_square_well.ipynb | mit | delta = 1 # The spacing between neighboring lattice points
L = 1000 # The ends of the lattice
length = 10
momentum = 1
lattice = arange(-L,L,delta)
v = vectorize(potential)
plot(v(lattice))
hamiltonian = np.zeros((lattice.shape[0],lattice.shape[0]))
for row in range(lattice.shape[0]):
for col in range(lattice.shape[0]):
hamiltonian[row,col] = (kronecker_delta(col+1, row)\
+ kronecker_delta(col-1, row)\
- 2*kronecker_delta(col,row))/-(delta)**2\
+ potential(row)*kronecker_delta(col,row)
hamiltonian
"""
Explanation: Next we generate the lattice for the simulation. For now we consider 2000 lattice points.
End of explanation
"""
eigenvalues, eigenvectors = linalg.eigh(hamiltonian)
plot(lattice, absolute(eigenvectors[:, 0]))  # eigh returns eigenvectors as columns
# eigenvectors[:, 0].shape
"""
Explanation: Now that we've computed the hamiltonian matrix, let's diagonalize it
End of explanation
"""
wavefunction = (1/sqrt(2)*length)*exp(-((lattice)**2)/2000)*exp(1j*momentum*lattice)
normalize = sum(absolute(wavefunction))*delta
wavefunction /= normalize
print(sum(absolute(wavefunction))*delta)
plot(lattice,absolute(wavefunction))
wavefunction
"""
Explanation: Let's define the initial state of our system now.
End of explanation
"""
coeff = empty_like(eigenvalues)
for index in range(eigenvalues.shape[0]):
coeff[index] = vdot(eigenvectors[:, index], wavefunction)
"""
Explanation: Now we decompose the state into the projections on the basis vectors
End of explanation
"""
func = zeros_like(eigenvalues,dtype=complex128)
for index in range(eigenvalues.shape[0]):
func += coeff[index]*eigenvectors[:, index]
plot(lattice,absolute(func))
time_steps = arange(0,200,2)
evolution = []
for time in time_steps:
wavefunc = zeros_like(eigenvalues,dtype=complex128)
for index in range(eigenvalues.shape[0]):
wavefunc += coeff[index]*eigenvectors[:, index]*exp(-1j*eigenvalues[index]*time)
evolution.append(wavefunc)
fig = plt.figure()
ax = plt.axes(xlim=(-L, L), ylim=(0, 0.2))
line, = ax.plot([], [], lw=2)
def init():
line.set_data([], [])
return line,
def animate(i):
line.set_data(lattice,absolute(evolution[i]))
return line,
from matplotlib import animation
anim = animation.FuncAnimation(fig,animate,init_func=init,frames=100,interval=50,blit=True)
HTML(anim.to_html5_video())
"""
Explanation: Now that we know the wavefunction as a linear combination of basis vectors, the time evolution of the complete wavefuction is just the linear combination of the time evolutions of the basis states. Lets do this for 10 time steps.
End of explanation
"""
|
quantumlib/ReCirq | docs/qaoa/example_problems.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 Google
End of explanation
"""
try:
import recirq
except ImportError:
!pip install git+https://github.com/quantumlib/ReCirq
"""
Explanation: QAOA example problems
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/example_problems"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/example_problems.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
The shallowest depth version of the Quantum Approximate Optimization Algorithm (QAOA) consists of the application of two unitary operators: the problem unitary and the driver unitary. The first of these depends on the parameter $\gamma$ and applies a phase to pairs of bits according to the problem-specific cost operator $C$:
$$
U_C\!\left(\gamma \right) = e^{-i \gamma C } = \prod_{j < k} e^{-i \gamma w_{jk} Z_j Z_k}
$$
whereas the driver unitary depends on the parameter $\beta$, is problem-independent, and serves to drive transitions between bitstrings within the superposition state:
$$
\newcommand{\gammavector}{\boldsymbol{\gamma}}
\newcommand{\betavector}{\boldsymbol{\beta}}
U_B\!\left(\beta \right) = e^{-i \beta B} = \prod_j e^{- i \beta X_j},
\quad \qquad
B = \sum_j X_j
$$
where $X_j$ is the Pauli $X$ operator on qubit $j$. These operators can be implemented by sequentially evolving under each term of the product; specifically the problem unitary is applied with a sequence of two-body interactions while the driver unitary is a single qubit rotation on each qubit. For higher-depth versions of the algorithm the two unitaries are sequentially re-applied each with their own $\beta$ or $\gamma$. The number of applications of the pair of unitaries is represented by the hyperparameter $p$ with parameters $\gammavector = (\gamma_1, \dots, \gamma_p)$ and $\betavector = (\beta_1, \dots, \beta_p)$. For $n$ qubits, we prepare the parameterized state
$$
\newcommand{\bra}[1]{\langle #1|}
\newcommand{\ket}[1]{|#1\rangle}
| \gammavector , \betavector \rangle = U_B(\beta_p) U_C(\gamma_p ) \cdots U_B(\beta_1) U_C(\gamma_1 ) \ket{+}^{\otimes n},
$$
where $\ket{+}^{\otimes n}$ is the symmetric superposition of computational basis states.
<img src="./images/qaoa_circuit.png" alt="QAOA circuit"/>
The optimization problems we study in this work are defined through a cost function with a corresponding quantum operator C given by
$$
C = \sum_{j < k} w_{jk} Z_j Z_k
$$
where $Z_j$ denotes the Pauli $Z$ operator on qubit $j$, and the $w_{jk}$ correspond to scalar weights with values ${0, \pm1}$. Because these clauses act on at most two qubits, we are able to associate a graph with a given problem instance with weighted edges given by the $w_{jk}$ adjacency matrix.
Setup
Install the ReCirq package:
End of explanation
"""
import networkx as nx
import numpy as np
import scipy.optimize
import cirq
import recirq
%matplotlib inline
from matplotlib import pyplot as plt
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
"""
Explanation: Now import Cirq, ReCirq and the module dependencies:
End of explanation
"""
from recirq.qaoa.problems import get_all_hardware_grid_problems
import cirq.contrib.routing as ccr
hg_problems = get_all_hardware_grid_problems(
device_graph=ccr.gridqubits_to_graph_device(recirq.get_device_obj_by_name('Sycamore23').qubits),
central_qubit=cirq.GridQubit(6,3),
n_instances=10,
rs=np.random.RandomState(5)
)
instance_i = 0
n_qubits = 23
problem = hg_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = {i: coord for i, coord in enumerate(problem.coordinates)}
nx.draw_networkx(problem.graph, pos=pos, with_labels=False, node_color=QBLUE)
if True: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
"""
Explanation: Hardware grid
First, we study problem graphs which match the connectivity of our hardware, which we term "Hardware Grid problems". Despite results showing that problems on such graphs are efficient to solve on average, we study these problems as they do not require routing. This family of problems is composed of random instances generated by sampling $w_{ij}$ to be $\pm 1$ for edges in the device topology or a subgraph thereof.
End of explanation
"""
from recirq.qaoa.problems import get_all_sk_problems
n_qubits = 17
all_sk_problems = get_all_sk_problems(max_n_qubits=17, n_instances=10, rs=np.random.RandomState(5))
sk_problem = all_sk_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = nx.circular_layout(sk_problem.graph)
nx.draw_networkx(sk_problem.graph, pos=pos, with_labels=False, node_color=QRED)
if False: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in sk_problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(sk_problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
"""
Explanation: Sherrington-Kirkpatrick model
Next, we study instances of the Sherrington-Kirkpatrick (SK) model, defined on the complete graph with $w_{ij}$ randomly chosen to be $\pm 1$. This is a canonical example of a frustrated spin glass and is most penalized by routing, which can be performed optimally using linear swap networks at the cost of a linear increase in circuit depth.
End of explanation
"""
from recirq.qaoa.problems import get_all_3_regular_problems
n_qubits = 22
instance_i = 0
threereg_problems = get_all_3_regular_problems(max_n_qubits=22, n_instances=10, rs=np.random.RandomState(5))
threereg_problem = threereg_problems[n_qubits, instance_i]
fig, ax = plt.subplots(figsize=(6,5))
pos = nx.spring_layout(threereg_problem.graph, seed=11)
nx.draw_networkx(threereg_problem.graph, pos=pos, with_labels=False, node_color=QGOLD)
if False: # toggle edge labels
edge_labels = {(i1, i2): f"{weight:+d}"
for i1, i2, weight in threereg_problem.graph.edges.data('weight')}
nx.draw_networkx_edge_labels(threereg_problem.graph, pos=pos, edge_labels=edge_labels)
ax.axis('off')
fig.tight_layout()
"""
Explanation: 3-regular MaxCut
Finally, we study instances of the MaxCut problem on 3-regular graphs. This is a prototypical discrete optimization problem with a low, fixed node degree but a high dimension which cannot be trivially mapped to a planar architecture. It more closely matches problems of industrial interest. For these problems, we use an automated routing algorithm to heuristically insert SWAP operations.
End of explanation
"""
|
AaronCWong/phys202-2015-work | assignments/assignment05/InteractEx02.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 2
Imports
End of explanation
"""
def plot_sine1(a,b):
plt.figure(figsize=(15,2))
x = np.linspace(0,4 * np.pi,200)
plt.plot(x,np.sin((a*x)+b))
plt.title('Sine Plot')
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi],['0','$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plt.grid(True)
plt.box(False)
plt.xlabel('Interval of pi')
plt.ylabel('Sin(ax + b)')
plt.xlim(0,4*np.pi);
plot_sine1(5,3.4)
"""
Explanation: Plotting with parameters
Write a plot_sine1(a, b) function that plots $\sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
"""
interact(plot_sine1, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1));
assert True # leave this for grading the plot_sine1 exercise
"""
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
"""
def plot_sine2(a,b,style='b'):
plt.figure(figsize=(15,2))
x = np.linspace(0,4 * np.pi,200)
plt.plot(x,np.sin((a*x)+b),style)
plt.title('Sine Plot')
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi],['0','$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plt.grid(True)
plt.box(False)
plt.xlabel('Interval of pi')
plt.ylabel('Sin(ax + b)')
plt.xlim(0,4*np.pi);
plot_sine2(4.0, -1.0,'r.')
"""
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
"""
interact(plot_sine2 , a = (0.0,5.0,0.1) , b = (-5.0,5.0,0.1) , style = {'Blue dotted': 'b.' , 'Black circles': 'ko' , 'Red triangles': 'r^'});
assert True # leave this for grading the plot_sine2 exercise
"""
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop-down menu for selecting the line style between a dotted blue line, black circles, and red triangles.
End of explanation
"""
|
qutip/qutip-notebooks | examples/piqs_introduction.ipynb | lgpl-3.0 | import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
from qutip import *
from qutip.piqs import *
from qutip.cy.piqs import j_min
"""
Explanation: Introducing the Permutational Invariant Quantum Solver (PIQS)
Notebook Author: Nathan Shammah (nathan.shammah@gmail.com)
PIQS code: Nathan Shammah (nathan.shammah@gmail.com), Shahnawaz Ahmed (shahnawaz.ahmed95@gmail.com)
The Permutational Invariant Quantum Solver (PIQS) is an open-source Python solver to study the exact Lindbladian dynamics of open quantum systems consisting of identical qubits. It is integrated in QuTiP and can be imported as a module.
Using this library, the Liouvillian of an ensemble of $N$ qubits, or two-level systems (TLSs), $\mathcal{D}_{TLS}(\rho)$, can be built using only polynomial – instead of exponential – resources. This has many applications for the study of realistic quantum optics models of many TLSs and in general as a tool in cavity QED [1].
Consider a system evolving according to the equation
\begin{eqnarray}
\dot{\rho} = \mathcal{D}_\text{TLS}(\rho) &=&
-\frac{i}{\hbar}\lbrack H,\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}}[\rho]
+\frac{\gamma_\text{CD}}{2}\mathcal{L}_{J_{z}}[\rho]
+\frac{\gamma_\text{CP}}{2}\mathcal{L}_{J_{+}}[\rho]\nonumber\\
&&+\sum_{n=1}^{N}\left(
\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}}[\rho]
+\frac{\gamma_\text{D}}{2}\mathcal{L}_{J_{z,n}}[\rho]
+\frac{\gamma_\text{P}}{2}\mathcal{L}_{J_{+,n}}[\rho]\right)
\end{eqnarray}
where $J_{\alpha,n}=\frac{1}{2}\sigma_{\alpha,n}$ are SU(2) Pauli spin operators, with ${\alpha=x,y,z}$ and $J_{\pm,n}=\sigma_{\pm,n}$. The collective spin operators are $J_{\alpha} = \sum_{n}J_{\alpha,n}$. The Lindblad super-operators are $\mathcal{L}_{A} = 2A\rho A^\dagger - A^\dagger A \rho - \rho A^\dagger A$.
The inclusion of local processes in the dynamics lead to using a Liouvillian space of dimension $4^N$. By exploiting the permutational invariance of identical particles [2-8], the Liouvillian $\mathcal{D}_\text{TLS}(\rho)$ can be built as a block-diagonal matrix in the basis of Dicke states $|j, m \rangle$.
The system under study is defined by creating an object of the $\texttt{Dicke}$ class, e.g. simply named $\texttt{system}$, whose first attribute is
$\texttt{system.N}$, the number of TLSs of the system $N$.
The rates for collective and local processes are simply defined as
$\texttt{collective}_ \texttt{emission}$ defines $\gamma_\text{CE}$, collective (superradiant) emission
$\texttt{collective}_ \texttt{dephasing}$ defines $\gamma_\text{CD}$, collective dephasing
$\texttt{collective}_ \texttt{pumping}$ defines $\gamma_\text{CP}$, collective pumping.
$\texttt{emission}$ defines $\gamma_\text{E}$, incoherent emission (losses)
$\texttt{dephasing}$ defines $\gamma_\text{D}$, local dephasing
$\texttt{pumping}$ defines $\gamma_\text{P}$, incoherent pumping.
Then the $\texttt{system.lindbladian()}$ creates the total TLS Lindbladian superoperator matrix.
Similarly, $\texttt{system.hamiltonian}$ defines the TLS hamiltonian of the system $H_\text{TLS}$.
The system's Liouvillian can be built using $\texttt{system.liouvillian()}$. The properties of a $\texttt{Dicke}$ object can be visualized by simply calling $\texttt{system}$.
We give two basic examples on the use of PIQS. In the first example the incoherent emission of $N$ driven TLSs is considered.
End of explanation
"""
N = 20
system = Dicke(N = N)
[jx, jy, jz] = jspin(N)
jp = jspin(N,"+")
jm = jp.dag()
w0 = 0.5
wx = 1.0
system.hamiltonian = w0 * jz + wx * jx
system.emission = 0.05
D_tls = system.liouvillian()
"""
Explanation: $1$. $N$ Qubits Dynamics
We study a driven ensemble of $N$ TLSs emitting incoherently,
\begin{eqnarray}
H_\text{TLS}&=&\hbar\omega_{0} J_{z}+\hbar\omega_{x} J_{x}
\end{eqnarray}
\begin{eqnarray}
\dot{\rho} &=& \mathcal{D}_\text{TLS}(\rho)= -\frac{i}{\hbar}\lbrack H_\text{TLS},\rho \rbrack+\sum_{n=1}^{N}\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}}[\rho]
\end{eqnarray}
End of explanation
"""
system.dephasing = 0.1
D_tls = system.liouvillian()
"""
Explanation: The properties of a given object can be updated dynamically, such that local dephasing can be added to the object $\texttt{'system'}$ simply with
End of explanation
"""
steady_tls = steadystate(D_tls, method="direct")
jz_ss = expect(jz, steady_tls)
jpjm_ss = expect(jp*jm, steady_tls)
"""
Explanation: Calculating the TLS Steady state and steady expectation values is straightforward with QuTiP's $\texttt{steadystate}()$ and $\texttt{expect}()$ [9].
End of explanation
"""
rho0_tls = dicke(N, N/2, -N/2)
t = np.linspace(0, 100, 1000)
result = mesolve(D_tls, rho0_tls, t, [], e_ops = [jz])
rhot_tls = result.states
jzt = result.expect[0]
"""
Explanation: Calculating the TLS time evolution can be done with QuTiP's $\texttt{mesolve}()$
End of explanation
"""
j_max = N/2.
label_size = 20
fig1 = plt.figure(1)
plt.rc('text', usetex = True)
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
plt.plot(t, jzt/j_max, 'k-', label=r'$\langle J_{z}\rangle(t)$')
plt.plot(t, t * 0 + jz_ss/j_max, 'g--', label = r'Steady-state $\langle J_{z}\rangle_\mathrm{ss}$')
plt.title(r'Total inversion', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle J_{z}\rangle$', fontsize = label_size)
plt.legend( fontsize = 0.8 * label_size)
#plt.yticks([-1, -0.99])
plt.show()
plt.close()
"""
Explanation: Visualization
End of explanation
"""
# TLS parameters
n_tls = 5
N = n_tls
system = Dicke(N = n_tls)
[jx, jy, jz] = jspin(n_tls)
jp = jspin(n_tls, "+")
jm = jp.dag()
w0 = 1.
wx = 0.1
system.hamiltonian = w0 * jz + wx * jx
system.emission = 0.5
D_tls = system.liouvillian()
# Light-matter coupling parameters
wc = 1.
g = 0.9
kappa = 1
pump = 0.1
nphot = 16
a = destroy(nphot)
h_int = g * tensor(a + a.dag(), jx)
# Photonic Liouvillian
c_ops_phot = [np.sqrt(kappa) * a, np.sqrt(pump) * a.dag()]
D_phot = liouvillian(wc * a.dag()*a , c_ops_phot)
# Identity super-operators
nds = num_dicke_states(n_tls)
id_tls = to_super(qeye(nds))
id_phot = to_super(qeye(nphot))
# Define the total Liouvillian
D_int = -1j* spre(h_int) + 1j* spost(h_int)
D_tot = D_int + super_tensor(D_phot, id_tls) + super_tensor(id_phot, D_tls)
# Define operator in the total space
nphot_tot = tensor(a.dag()*a, qeye(nds))
"""
Explanation: $2$. Dynamics of $N$ Qubits in a Bosonic Cavity
Now we consider an ensemble of spins in a driven, leaky cavity
\begin{eqnarray}
\dot{\rho} &=& \mathcal{D}_\text{TLS}(\rho) +\mathcal{D}_\text{phot}(\rho) -\frac{i}{\hbar}\lbrack H_\text{int}, \rho\rbrack\nonumber\\
&=& -i\lbrack \omega_{0} J_{z} + \omega_{c} a^\dagger a + g\left(a^\dagger+a\right)J_{x},\rho \rbrack+\frac{w}{2}\mathcal{L}_{a^\dagger}[\rho]+\frac{\kappa}{2}\mathcal{L}_{a}[\rho]+\sum_{n=1}^{N}\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}}[\rho]
\end{eqnarray}
where now the full system density matrix is defined on a tensor Hilbert space $\rho \in \mathcal{H}_\text{TLS}\otimes\mathcal{H}_\text{phot}$, and the dimension of $\mathcal{H}_\text{TLS}$ is reduced from the $2^N$ of the uncoupled basis to $O(N^2)$ using $PIQS$.
Thanks to QuTiP's $\texttt{super\_tensor}()$ function, we can add the two independently built Liouvillians, being careful to place the light-matter interaction Hamiltonian in the total Hilbert space and to create the corresponding "left" and "right" superoperators with $\texttt{spre}()$ and $\texttt{spost}()$.
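The scale of this reduction is easy to quantify. As an illustrative, self-contained sketch (the closed-form count of Dicke states $|j,m\rangle$ is a standard result, and matches what $\texttt{num\_dicke\_states}()$ returns in the code above):

```python
# Illustrative only: Dicke-basis size vs. the full 2^N uncoupled basis.
# Closed form: (N/2 + 1)^2 for even N, (N + 1)(N + 3)/4 for odd N.
def n_dicke_states(N):
    return (N // 2 + 1) ** 2 if N % 2 == 0 else (N + 1) * (N + 3) // 4

for N in (5, 10, 20):
    print(N, 2 ** N, n_dicke_states(N))
```

For $N=20$ the Dicke basis has 121 states versus $2^{20} = 1048576$ in the uncoupled basis.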
End of explanation
"""
rho_ss = steadystate(D_tot)
nphot_ss = expect(nphot_tot, rho_ss)
psi = rho_ss.ptrace(0)
xvec = np.linspace(-6, 6, 100)
W = wigner(psi, xvec, xvec)
"""
Explanation: Wigner function and steady state $\rho_\text{ss}$
End of explanation
"""
jmax = (0.5 * N)
j2max = (0.5 * N + 1) * (0.5 * N)
plt.rc('text', usetex = True)
label_size = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
wmap = wigner_cmap(W) # Generate Wigner colormap
nrm = mpl.colors.Normalize(0, W.max())
max_cb =np.max(W)
min_cb =np.min(W)
fig2 = plt.figure(2)
plotw = plt.contourf(xvec, xvec, W, 100, cmap=wmap, norm=nrm)
plt.title(r"Wigner Function", fontsize=label_size);
plt.xlabel(r'$x$', fontsize = label_size)
plt.ylabel(r'$p$', fontsize = label_size)
cb = plt.colorbar()
cb.set_ticks( [min_cb, max_cb])
cb.set_ticklabels([r'$0$',r'max'])
plt.show()
plt.close()
"""
Explanation: Visualization
End of explanation
"""
excited_state = excited(N)
ground_phot = ket2dm(basis(nphot,0))
rho0 = tensor(ground_phot, excited_state)
t = np.linspace(0, 20, 1000)
result2 = mesolve(D_tot, rho0, t, [], e_ops = [nphot_tot])
rhot_tot = result2.states
nphot_t = result2.expect[0]
"""
Explanation: Time evolution of $\rho(t)$
End of explanation
"""
fig3 = plt.figure(3)
plt.plot(t, nphot_t, 'k-', label='Time evolution')
plt.plot(t, t*0 + nphot_ss, 'g--', label = 'Steady-state value')
plt.title(r'Cavity photon population', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle a^\dagger a\rangle(t)$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
"""
Explanation: Visualization
End of explanation
"""
B = nphot_tot
rhoA = B * rho_ss
result3 = mesolve(D_tot, rhoA, t, [], e_ops = B)
g2_t = result3.expect[0]
"""
Explanation: Steady-state correlations: $g^{(2)}(\tau)$ for $\rho_\text{ss}$
We define the $g^{(2)}(\tau)$ of the system as the two-time correlation function of the intracavity photons,
\begin{eqnarray}
g^{(2)}(\tau) &=& \frac{\langle: a^\dagger(\tau) a^\dagger(0) a(\tau) a(0) :\rangle}{|\langle: a^\dagger(0) a(0) :\rangle|^2}\nonumber.
\end{eqnarray}
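In the code above, the two-time correlation is obtained via the quantum regression theorem: $\texttt{mesolve}$ evolves $B\rho_\text{ss}$ and returns the unnormalized correlation, which is then divided by the squared steady-state photon number when plotting. A minimal numpy sketch of that normalization step, with stand-in values rather than the actual simulation output:

```python
import numpy as np

# Stand-in values for result3.expect[0] and nphot_ss; the real arrays
# come from the mesolve call and steady state computed earlier.
corr = np.array([4.0 + 0j, 3.0 + 0j, 2.5 + 0j])  # <n(tau) n(0)>
nphot_ss = 1.5                                    # <n>_ss
g2 = np.real(corr) / nphot_ss ** 2
print(g2)
```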
End of explanation
"""
fig4 = plt.figure(4)
plt.plot(t, np.real(g2_t)/nphot_ss**2, '-')
plt.plot(t, 0*t + 1, '--')
plt.title(r'Photon correlation function', fontsize = label_size)
plt.xlabel(r'$\tau$', fontsize = label_size)
plt.ylabel(r'$g^{(2)}(\tau)$', fontsize = label_size)
plt.show()
plt.close()
"""
Explanation: Visualization
End of explanation
"""
N = 6
#Dicke Basis
dicke_blocks = np.real(block_matrix(N).todense())
#Dicke states
excited_state = dicke(N, N/2, N/2)
superradiant_state = dicke(N, N/2, j_min(N))
subradiant_state = dicke(N, j_min(N), -j_min(N))
ground_state = dicke(N, N/2, -N/2)
N = 7
#GHZ state
ghz_state = ghz(N)
#CSS states
a = 1/np.sqrt(2)
b = 1/np.sqrt(2)
css_symmetric = css(N, a, b)
css_antisymmetric = css(N, a, -b)
"""
Explanation: $3$. Initial States
$PIQS$ allows the user to quickly define initial states as density matrices in the Dicke basis of dimension $O(N^2)$ (by default) or in the uncoupled TLS basis $2^N$ (by setting the basis specification as $\texttt{basis='uncoupled'}$). Below we give an overview of
Dicke states with "$\texttt{dicke}()$",
Greenberger–Horne–Zeilinger (GHZ), called by "$\texttt{ghz}()$",
Coherent Spin States (CSS) called by "$\texttt{css}()$",
hereafter all expressed in the compact Dicke basis.
End of explanation
"""
label_size = 15
c_map = 'bwr'
# Convert to real-valued dense matrices
rho1 = np.real(css_antisymmetric.full())
rho3b = np.real(ghz_state.full())
rho4b = np.real(css_symmetric.full())
rho5 = np.real(excited_state.full())
rho6 = np.real(superradiant_state.full())
rho7 = np.real(ground_state.full())
rho8 = np.real(subradiant_state.full())
rho9 = dicke_blocks
# Dicke basis
plt.rc('text', usetex = True)
label_size = 25
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig, axes = plt.subplots(nrows=2, ncols=3, figsize=(20, 12))
fig1 = axes[0,0].imshow(rho9, cmap = c_map)
axes[0,0].set_title(r"$\rho=\sum_{j,m,m'}|j,m\rangle\langle j,m'|$",
fontsize = label_size)
plt.setp(axes, xticks=[],
yticks=[])
#Excited
fig2 = axes[0,1].imshow(rho9+rho5, cmap = c_map)
axes[0,1].set_title(r"Fully excited, $|\frac{N}{2},\frac{N}{2}\rangle\langle \frac{N}{2},\frac{N}{2}|$", fontsize = label_size)
#Ground
fig3 = axes[0,2].imshow(rho9+rho7, cmap = c_map)
axes[0,2].set_title(r"Ground state, $|\frac{N}{2},-\frac{N}{2}\rangle\langle \frac{N}{2},-\frac{N}{2}|$", fontsize = label_size)
#Classical Mixture
fig4 = axes[1,0].imshow(rho9+(rho8+rho5), cmap = c_map)
axes[1,0].set_title(r"Mixture, $|0,0\rangle\langle 0,0|+|\frac{N}{2},\frac{N}{2}\rangle\langle \frac{N}{2},\frac{N}{2}|$", fontsize = label_size)
#Superradiant
fig5 = axes[1,1].imshow(rho9+rho6, cmap = c_map)
axes[1,1].set_title(r"Superradiant state, $|\frac{N}{2},0\rangle\langle \frac{N}{2},0|$", fontsize = label_size)
#Subradiant
fig6 = axes[1,2].imshow(rho9+rho8, cmap = c_map)
axes[1,2].set_title(r"Subradiant state, $|0,0\rangle\langle 0,0|$", fontsize = label_size)
plt.show()
plt.close()
# Plots for density matrices that are not diagonal in the Dicke basis
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
# Antisymmetric Coherent Spin State
fig1 = axes[0].imshow(rho1, cmap = c_map)
axes[0].set_title(r'$\rho=|\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\rangle\langle\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}|_\mathrm{CSS}$', fontsize = label_size)
cb = plt.colorbar(fig1,ax=axes[0])
fig1.set_clim([np.min(rho1),np.max(rho1)])
cb.set_ticks( [np.min(rho1),0, np.max(rho1)])
cb.set_ticklabels([r'min',r'$0$',r'max'])
plt.setp(axes, xticks=[],yticks=[])
# Symmetric Coherent Spin State
fig2 = axes[1].imshow(rho4b, cmap = c_map)
cb2 = plt.colorbar(fig2,ax=axes[1])
fig2.set_clim([0,np.max(rho4b)])
cb2.set_ticks( [0, np.max(rho4b)])
cb2.set_ticklabels([r'$0$',r'max'])
axes[1].set_title(r'$\rho=|\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\rangle\langle\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}|_\mathrm{CSS}$', fontsize = label_size)
# GHZ state
fig3 = axes[2].imshow(rho3b, cmap = c_map)
axes[2].set_title(r'$\rho=|\mathrm{GHZ}\rangle\langle\mathrm{GHZ}|$', fontsize = label_size)
cb3 = plt.colorbar(fig3,ax=axes[2])
fig3.set_clim([0,np.max(rho3b)])
cb3.set_ticks( [np.min(rho3b), np.max(rho3b)])
cb3.set_ticklabels([r'$0$',r'max'])
plt.show()
plt.close()
"""
Explanation: Visualization
End of explanation
"""
qutip.about()
"""
Explanation: References
[1] B.A. Chase and J.M. Geremia, Phys. Rev. A 78, 052101 (2008)
[2] M. Xu, D.A. Tieri, and M.J. Holland, Phys. Rev. A 87, 062101 (2013)
[3] S. Hartmann, Quantum Inf. Comput. 16, 1333 (2016)
[4] F. Damanet, D. Braun, and J. Martin, Phys. Rev. A 94, 033838 (2016)
[5] P. Kirton and J. Keeling, Phys. Rev. Lett. 118, 123602 (2017) https://github.com/peterkirton/permutations
[6] N. Shammah, N. Lambert, F. Nori, and S. De Liberato, Phys. Rev. A 96, 023863 (2017)
[7] M. Gegg and M. Richter, Sci. Rep. 7, 16304 (2017) https://github.com/modmido/psiquasp
[8] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http://qutip.org
[8] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http://qutip.org
End of explanation
"""
DTOcean/dtocean-core | notebooks/DTOcean Mooring and Foundations Example.ipynb | gpl-3.0 | %matplotlib inline
from IPython.display import display, HTML
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (14.0, 8.0)
import numpy as np
from dtocean_core import start_logging
from dtocean_core.core import Core
from dtocean_core.menu import ModuleMenu, ProjectMenu
from dtocean_core.pipeline import Tree
def html_list(x):
message = "<ul>"
for name in x:
message += "<li>{}</li>".format(name)
message += "</ul>"
return message
def html_dict(x):
message = "<ul>"
    for name, status in x.items():
message += "<li>{}: <b>{}</b></li>".format(name, status)
message += "</ul>"
return message
# Bring up the logger
start_logging()
"""
Explanation: DTOcean Mooring and Foundations Example
Note, this example assumes the DTOcean database and the Electrical Sub-Systems Module have been installed
End of explanation
"""
new_core = Core()
project_menu = ProjectMenu()
module_menu = ModuleMenu()
pipe_tree = Tree()
"""
Explanation: Create the core, menus and pipeline tree
The core object carries all the system information and is operated on by the other classes
End of explanation
"""
project_title = "DTOcean"
new_project = project_menu.new_project(new_core, project_title)
"""
Explanation: Create a new project
End of explanation
"""
options_branch = pipe_tree.get_branch(new_core, new_project, "System Type Selection")
variable_id = "device.system_type"
my_var = options_branch.get_input_variable(new_core, new_project, variable_id)
my_var.set_raw_interface(new_core, "Tidal Floating")
my_var.read(new_core, new_project)
"""
Explanation: Set the device type
End of explanation
"""
project_menu.initiate_pipeline(new_core, new_project)
"""
Explanation: Initiate the pipeline
This step will be important when the database is incorporated into the system, as it will affect the operation of the pipeline.
End of explanation
"""
names = module_menu.get_available(new_core, new_project)
message = html_list(names)
HTML(message)
"""
Explanation: Discover available modules
End of explanation
"""
module_name = 'Mooring and Foundations'
module_menu.activate(new_core, new_project, module_name)
"""
Explanation: Activate a module
Note that the order of activation is important and that we can't deactivate yet!
End of explanation
"""
mooring_branch = pipe_tree.get_branch(new_core, new_project, 'Mooring and Foundations')
input_status = mooring_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
"""
Explanation: Check the status of the module inputs
End of explanation
"""
project_menu.initiate_dataflow(new_core, new_project)
"""
Explanation: Initiate the dataflow
This indicates that the filtering and module / theme selections are complete
End of explanation
"""
%run test_data/inputs_wp4.py
mooring_branch.read_test_data(new_core,
new_project,
"test_data/inputs_wp4.pkl")
"""
Explanation: Load test data
Prepare the test data for loading. The test_data directory of the source code should be copied to the directory that the notebook is running in. When the Python file is run, a pickle file is generated containing a dictionary of inputs.
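As a self-contained illustration of the mechanism (with a hypothetical file name and value, not the actual test data), the pickle file is simply a serialized dictionary mapping variable identifiers to prepared input values:

```python
import os
import pickle
import tempfile

# Hypothetical example entry; the real file maps many DTOcean variable
# identifiers to their prepared input values.
example = {"device.system_type": "Tidal Floating"}
path = os.path.join(tempfile.gettempdir(), "example_inputs.pkl")
with open(path, "wb") as f:
    pickle.dump(example, f)
with open(path, "rb") as f:
    loaded = pickle.load(f)
print(loaded["device.system_type"])
```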
End of explanation
"""
can_execute = module_menu.is_executable(new_core, new_project, module_name)
display(can_execute)
input_status = mooring_branch.get_input_status(new_core, new_project)
message = html_dict(input_status)
HTML(message)
"""
Explanation: Check if the module can be executed
End of explanation
"""
module_menu.execute_current(new_core, new_project)
"""
Explanation: Execute the current module
The "current" module refers to the next module to be executed in the chain (pipeline) of modules. This command will only execute that module and another will be used for executing all of the modules at once.
Note, any data supplied by the module will be automatically copied into the active data state.
End of explanation
"""
output_status = mooring_branch.get_output_status(new_core, new_project)
message = html_dict(output_status)
HTML(message)
economics = mooring_branch.get_output_variable(new_core, new_project, "project.moorings_foundations_economics_data")
economics.get_value(new_core, new_project)
moorings = mooring_branch.get_output_variable(new_core, new_project, "project.moorings_component_data")
moorings.get_value(new_core, new_project)
lines = mooring_branch.get_output_variable(new_core, new_project, "project.moorings_line_data")
lines.get_value(new_core, new_project)
umbilicals = mooring_branch.get_output_variable(new_core, new_project, "project.umbilical_cable_data")
umbilicals.get_value(new_core, new_project)
dimensions = mooring_branch.get_output_variable(new_core, new_project, "project.moorings_dimensions")
dimensions.get_value(new_core, new_project)
network = mooring_branch.get_output_variable(new_core, new_project, "project.moorings_foundations_network")
net_val = network.get_value(new_core, new_project)
import pprint
pprint.pprint(net_val)
"""
Explanation: Examine the results
Currently, there is no robustness built into the core, so the assumption is that the module executed successfully. This will have to be improved towards deployment of the final software.
Let's examine the mooring and foundation outputs, such as the economics, component and network data, using just information in the data object.
End of explanation
"""
sgrindy/Bayesian-estimation-of-relaxation-spectra | Double_Maxwell_Lognormal_prior.ipynb | mit | def H(tau):
h1 = 1; tau1 = 0.03; sd1 = 0.5;
h2 = 7; tau2 = 10; sd2 = 0.5;
term1 = h1/np.sqrt(2*sd1**2*np.pi) * np.exp(-(np.log10(tau/tau1)**2)/(2*sd1**2))
term2 = h2/np.sqrt(2*sd2**2*np.pi) * np.exp(-(np.log10(tau/tau2)**2)/(2*sd2**2))
return term1 + term2
Nfreq = 50
Nmodes = 30
w = np.logspace(-4,4,Nfreq).reshape((1,Nfreq))
tau = np.logspace(-np.log10(w.max()),-np.log10(w.min()),Nmodes).reshape((Nmodes,1))
# get equivalent discrete spectrum
delta_log_tau = np.log10(tau[1]/tau[0])
g_true = (H(tau) * delta_log_tau).reshape((1,Nmodes))
plt.loglog(tau,H(tau), label='Continuous spectrum')
plt.plot(tau.ravel(),g_true.ravel(), 'or', label='Equivalent discrete spectrum')
plt.legend(loc=4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'$H(\tau)$ or $g$')
plt.savefig('Original_relax_spec.png',dpi=500)
"""
Explanation: First, we need to set up our test data. We'll use two relaxation modes that are themselves log-normally distributed.
End of explanation
"""
wt = tau*w
Kp = wt**2/(1+wt**2)
Kpp = wt/(1+wt**2)
noise_level = 0.02
Gp_true = np.dot(g_true,Kp)
Gp_noise = Gp_true + Gp_true*noise_level*np.random.randn(Nfreq)
Gpp_true = np.dot(g_true,Kpp)
Gpp_noise = Gpp_true + Gpp_true*noise_level*np.random.randn(Nfreq)
plt.loglog(w.ravel(),Gp_true.ravel(),label="True G'")
plt.plot(w.ravel(),Gpp_true.ravel(), label='True G"')
plt.plot(w.ravel(),Gp_noise.ravel(),'xr',label="Noisy G'")
plt.plot(w.ravel(),Gpp_noise.ravel(),'+r',label='Noisy G"')
plt.xlabel(r'$\omega$')
plt.ylabel("Moduli")
plt.legend(loc=4)
plt.savefig('Original_Moduli_spec.png',dpi=500)
"""
Explanation: Now, let's calculate the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance.
End of explanation
"""
noisyModel = pm.Model()
with noisyModel:
g = pm.Lognormal('g', mu=0, tau=0.1, shape=g_true.shape)
sd1 = pm.HalfNormal('sd1',tau=1)
sd2 = pm.HalfNormal('sd2',tau=1)
# we'll log-weight the moduli as in other fitting methods
logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)),
sd=sd1, observed=np.log(Gp_noise))
logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)),
sd=sd2, observed=np.log(Gpp_noise))
trueModel = pm.Model()
with trueModel:
g = pm.Lognormal('g', mu=0, tau=0.1, shape=g_true.shape)
sd1 = pm.HalfNormal('sd1',tau=1)
sd2 = pm.HalfNormal('sd2',tau=1)
# we'll log-weight the moduli as in other fitting methods
logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)),
sd=sd1, observed=np.log(Gp_true))
logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)),
sd=sd2, observed=np.log(Gpp_true))
"""
Explanation: Now, we can build the model with PyMC3. I'll make 2: one with noise, and one without.
End of explanation
"""
Nsamples = 2000
trueMapEstimate = pm.find_MAP(model=trueModel)
with trueModel:
trueTrace = pm.sample(Nsamples, start=trueMapEstimate)
pm.backends.text.dump('./Double_Maxwell_v3_true', trueTrace)
noisyMapEstimate = pm.find_MAP(model=noisyModel)
with noisyModel:
noisyTrace = pm.sample(Nsamples, start=noisyMapEstimate)
pm.backends.text.dump('./Double_Maxwell_v3_noisy', noisyTrace)
burn = 500
trueQ = pm.quantiles(trueTrace[burn:])
noisyQ = pm.quantiles(noisyTrace[burn:])
"""
Explanation: Now we can sample the models to get our parameter distributions:
End of explanation
"""
def plot_quantiles(Q,ax):
ax.fill_between(tau.ravel(), y1=Q['g'][2.5], y2=Q['g'][97.5], color='c',
alpha=0.25)
ax.fill_between(tau.ravel(), y1=Q['g'][25], y2=Q['g'][75], color='c',
alpha=0.5)
ax.plot(tau.ravel(), Q['g'][50], 'b-')
# sampling localization lines:
ax.axvline(x=np.exp(np.pi/2)/w.max(), color='k', linestyle='--')
ax.axvline(x=(np.exp(np.pi/2)*w.min())**-1, color='k', linestyle='--')
fig,ax = plt.subplots(nrows=2, sharex=True,
subplot_kw={'xscale':'log','yscale':'log',
'ylabel':'$g_i$'})
plot_quantiles(trueQ,ax[0])
plot_quantiles(noisyQ,ax[1])
# true spectrum
trueSpectrumline0 = ax[0].plot(tau.ravel(), g_true.ravel(),'xr',
label='True Spectrum')
trueSpectrumline1 = ax[1].plot(tau.ravel(), g_true.ravel(),'xr',
label='True Spectrum')
ax[0].legend(loc=4)
ax[0].set_title('Using True Moduli')
ax[1].set_xlabel(r'$\tau$')
ax[1].legend(loc=4)
ax[1].set_title('Using Noisy Moduli')
fig.set_size_inches(5,8)
fig.savefig('True,Noisy_moduli.png',dpi=500)
noisySample = pm.sample_ppc(noisyTrace[burn:],model=noisyModel,samples=250)
fig,ax = plt.subplots()
for logg1,logg2 in zip(noisySample['logGp'].reshape(250,50),
noisySample['logGpp'].reshape(250,50)):
ax.plot(w.ravel(),np.exp(logg1),'b-',alpha=0.05)
ax.plot(w.ravel(),np.exp(logg2),'r-',alpha=0.01)
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel(r'$\omega$')
ax.set_ylabel('G\',G"')
ax.plot(w.ravel(), Gp_true.ravel(),'xk', label='True G\'' )
ax.plot(w.ravel(), Gpp_true.ravel(), '+k', label='True G"')
ax.set_title('Moduli estimated from noisy sample')
plt.legend(loc=4)
plt.savefig('Re-estimated_moduli.png',dpi=500)
"""
Explanation: Plotting the quantiles gives us a sense of the uncertainty in our estimation of $g_i$:
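For reference, pm.quantiles returns a dictionary keyed by percentile (by default 2.5, 25, 50, 75 and 97.5), which is what plot_quantiles indexes above. The same quantities can be sketched with plain numpy on dummy samples (illustrative only; the shape mirrors the trace of g):

```python
import numpy as np

# Dummy "posterior" draws: 1500 samples of a 30-mode spectrum.
rng = np.random.default_rng(0)
samples = rng.lognormal(size=(1500, 30))
q = {p: np.percentile(samples, p, axis=0) for p in (2.5, 25, 50, 75, 97.5)}
print(q[50].shape)
```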
End of explanation
"""
julienchastang/unidata-python-workshop | notebooks/CF Conventions/NetCDF and CF - The Basics.ipynb | mit | # Import some useful Python tools
from datetime import datetime, timedelta
import numpy as np
# Twelve hours of hourly output starting at 22Z today
start = datetime.utcnow().replace(hour=22, minute=0, second=0, microsecond=0)
times = np.array([start + timedelta(hours=h) for h in range(13)])
# 3km spacing in x and y
x = np.arange(-150, 153, 3)
y = np.arange(-100, 100, 3)
# Standard pressure levels in hPa
press = np.array([1000, 925, 850, 700, 500, 300, 250])
temps = np.random.randn(times.size, press.size, y.size, x.size)
"""
Explanation: <div style="width:1000 px">
<div style="float:left; width:98 px; height:98px;">
<img src="https://www.unidata.ucar.edu/images/logos/netcdf-150x150.png" alt="netCDF Logo" style="height: 98px;">
</div>
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<div style="text-align:center;">
<h1>NetCDF and CF: The Basics</h1>
</div>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
Overview
This workshop will teach some of the basics of Climate and Forecasting metadata for netCDF data files, with some hands-on work available in Jupyter Notebooks using Python. Alongside an introduction to netCDF and CF, we will present the CF data model and discuss some netCDF implementation details to consider when deciding how to write data with CF and netCDF. We will cover gridded data as well as in situ data (stations, soundings, etc.) and touch on storing geometries data in CF.
This assumes a basic understanding of netCDF.
Outline
<a href="#gridded">Gridded Data</a>
<a href="#obs">Observation Data</a>
<a href="#exercises">Exercises</a>
<a href="#references">References</a>
<a name="gridded"></a>
Gridded Data
Let's say we're working with some numerical weather forecast model output. Let's walk through the steps necessary to store this data in netCDF, using the Climate and Forecasting metadata conventions to ensure that our data are available to as many tools as possible.
To start, let's assume the following about our data:
* It corresponds to forecast three dimensional temperature at several times
* The native coordinate system of the model is on a regular grid that represents the Earth on a Lambert conformal projection.
We'll also go ahead and generate some arrays of data below to get started:
End of explanation
"""
from netCDF4 import Dataset
nc = Dataset('forecast_model.nc', 'w', format='NETCDF4_CLASSIC', diskless=True)
"""
Explanation: Creating the file and dimensions
The first step is to create a new file and set up the shared dimensions we'll be using in the file. We'll be using the netCDF4-python library to do all of the requisite netCDF API calls.
End of explanation
"""
nc.Conventions = 'CF-1.7'
nc.title = 'Forecast model run'
nc.institution = 'Unidata'
nc.source = 'WRF-1.5'
nc.history = str(datetime.utcnow()) + ' Python'
nc.references = ''
nc.comment = ''
"""
Explanation: We're going to start by adding some global attribute metadata. These are recommendations from the standard (not required), but they're easy to add and help users keep the data straight, so let's go ahead and do it.
End of explanation
"""
nc.createDimension('forecast_time', None)
nc.createDimension('x', x.size)
nc.createDimension('y', y.size)
nc.createDimension('pressure', press.size)
nc
"""
Explanation: At this point, this is the CDL representation of this dataset:
netcdf forecast_model {
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
Next, before adding variables to the file to define each of the data fields in this file, we need to define the dimensions that exist in this data set. We set each of x, y, and pressure to the size of the corresponding array. We set forecast_time to be an "unlimited" dimension, which allows the dataset to grow along that dimension if we write additional data to it later.
End of explanation
"""
temps_var = nc.createVariable('Temperature', datatype=np.float32,
dimensions=('forecast_time', 'pressure', 'y', 'x'),
zlib=True)
"""
Explanation: The CDL representation now shows our dimensions:
netcdf forecast_model {
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
Creating and filling a variable
So far, all we've done is outlined basic information about our dataset: broad metadata and the dimensions of our dataset. Now we create a variable to hold one particular data field for our dataset, in this case the forecast air temperature. When defining this variable, we specify the datatype for the values being stored, the relevant dimensions, as well as enable optional compression.
End of explanation
"""
temps_var[:] = temps
temps_var
"""
Explanation: Now that we have the variable, we tell python to write our array of data to it.
End of explanation
"""
for next_slice, temp_slice in enumerate(temps):
    temps_var[next_slice] = temp_slice
"""
Explanation: If instead we wanted to write data sporadically, like once per time step, we could do that instead (though the for loop below might actually be at a higher level in the program):
End of explanation
"""
temps_var.units = 'Kelvin'
temps_var.standard_name = 'air_temperature'
temps_var.long_name = 'Forecast air temperature'
temps_var.missing_value = -9999
temps_var
"""
Explanation: At this point, this is the CDL representation of our dataset:
netcdf forecast_model {
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
variables:
float Temperature(forecast_time, pressure, y, x) ;
attributes:
:Conventions = "CF-1.7" ;
:title = "Forecast model run" ;
:institution = "Unidata" ;
:source = "WRF-1.5" ;
:history = "2019-07-16 02:21:52.005718 Python" ;
:references = "" ;
:comment = "" ;
}
We can also add attributes to this variable to define metadata. The CF conventions require a units attribute to be set for all variables that represent a dimensional quantity. The value of this attribute needs to be parsable by the UDUNITS library. Here we set it to a value of 'Kelvin'. We also set the standard (optional) attributes of long_name and standard_name. The former contains a longer description of the variable, while the latter comes from a controlled vocabulary in the CF conventions. This allows users of data to understand, in a standard fashion, what a variable represents. If we had missing values, we could also set the missing_value attribute to an appropriate value.
NASA Dataset Interoperability Recommendations:
Section 2.2 - Include Basic CF Attributes
Include where applicable: units, long_name, standard_name, valid_min / valid_max, scale_factor / add_offset and others.
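Of these, scale_factor and add_offset deserve a note: CF defines the unpacked value as packed_value * scale_factor + add_offset, which lets a float field be stored compactly as small integers. A small numpy sketch with made-up attribute values (not part of this dataset):

```python
import numpy as np

# Made-up packing attributes (not part of this dataset): temperatures
# stored as int16 and unpacked as packed * scale_factor + add_offset.
scale_factor, add_offset = 0.01, 273.15
packed = np.array([1500, -500], dtype=np.int16)
unpacked = packed * scale_factor + add_offset
print(unpacked)
```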
End of explanation
"""
x_var = nc.createVariable('x', np.float32, ('x',))
x_var[:] = x
x_var.units = 'km'
x_var.axis = 'X' # Optional
x_var.standard_name = 'projection_x_coordinate'
x_var.long_name = 'x-coordinate in projected coordinate system'
y_var = nc.createVariable('y', np.float32, ('y',))
y_var[:] = y
y_var.units = 'km'
y_var.axis = 'Y' # Optional
y_var.standard_name = 'projection_y_coordinate'
y_var.long_name = 'y-coordinate in projected coordinate system'
"""
Explanation: The resulting CDL (truncated to the variables only) looks like:
variables:
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Coordinate variables
To properly orient our data in time and space, we need to go beyond dimensions (which define common sizes and alignment) and include values along these dimensions, which are called "Coordinate Variables". Generally, these are defined by creating a one dimensional variable with the same name as the respective dimension.
To start, we define variables which define our x and y coordinate values. These variables include standard_names which allow associating them with projections (more on this later) as well as an optional axis attribute to make clear what standard direction this coordinate refers to.
End of explanation
"""
press_var = nc.createVariable('pressure', np.float32, ('pressure',))
press_var[:] = press
press_var.units = 'hPa'
press_var.axis = 'Z' # Optional
press_var.standard_name = 'air_pressure'
press_var.positive = 'down' # Optional
"""
Explanation: We also define a coordinate variable pressure to reference our data in the vertical dimension. The standard_name of 'air_pressure' is sufficient to identify this coordinate variable as the vertical axis, but let's go ahead and specify the axis as well. We also specify the attribute positive to indicate whether the variable increases when going up or down. In the case of pressure, this is technically optional.
End of explanation
"""
from cftime import date2num
time_units = 'hours since {:%Y-%m-%d 00:00}'.format(times[0])
time_vals = date2num(times, time_units)
time_vals
"""
Explanation: Time coordinates must contain a units attribute with a string value with a form similar to 'seconds since 2019-01-06 12:00:00.00'. 'seconds', 'minutes', 'hours', and 'days' are the most commonly used units for time. Due to the variable length of months and years, they are not recommended.
Before we can write data, we first need to convert our list of Python datetime instances to numeric values. We can use the cftime library to make this conversion easy, using the unit string defined above.
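For a standard calendar, the conversion is simple arithmetic; here is a stdlib-only sketch of what "hours since" a reference time means (cftime is used above because it also handles non-standard calendars):

```python
from datetime import datetime

# 'hours since 2019-07-16 00:00' for 22Z the same day; purely illustrative.
ref = datetime(2019, 7, 16, 0, 0)
t = datetime(2019, 7, 16, 22, 0)
hours_since = (t - ref).total_seconds() / 3600.0
print(hours_since)
```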
End of explanation
"""
time_var = nc.createVariable('forecast_time', np.int32, ('forecast_time',))
time_var[:] = time_vals
time_var.units = time_units
time_var.axis = 'T' # Optional
time_var.standard_name = 'time' # Optional
time_var.long_name = 'time'
"""
Explanation: Now we can create the forecast_time variable just as we did before for the other coordinate variables:
End of explanation
"""
from pyproj import Proj
X, Y = np.meshgrid(x, y)
lcc = Proj({'proj':'lcc', 'lon_0':-105, 'lat_0':40, 'a':6371000.,
'lat_1':25})
lon, lat = lcc(X * 1000, Y * 1000, inverse=True)
"""
Explanation: The CDL representation of the variables now contains much more information:
dimensions:
forecast_time = UNLIMITED (currently 13) ;
x = 101 ;
y = 67 ;
pressure = 7 ;
variables:
float x(x) ;
x:units = "km" ;
x:axis = "X" ;
x:standard_name = "projection_x_coordinate" ;
x:long_name = "x-coordinate in projected coordinate system" ;
float y(y) ;
y:units = "km" ;
y:axis = "Y" ;
y:standard_name = "projection_y_coordinate" ;
y:long_name = "y-coordinate in projected coordinate system" ;
float pressure(pressure) ;
pressure:units = "hPa" ;
pressure:axis = "Z" ;
pressure:standard_name = "air_pressure" ;
pressure:positive = "down" ;
float forecast_time(forecast_time) ;
forecast_time:units = "hours since 2019-07-16 00:00" ;
forecast_time:axis = "T" ;
forecast_time:standard_name = "time" ;
forecast_time:long_name = "time" ;
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Auxilliary Coordinates
Our data are still not CF-compliant because they do not contain latitude and longitude information, which is needed to properly locate the data. To solve this, we need to add variables with latitude and longitude. These are called "auxillary coordinate variables", not because they are extra, but because they are not simple one dimensional variables.
Below, we first generate longitude and latitude values from our projected coordinates using the pyproj library.
End of explanation
"""
lon_var = nc.createVariable('lon', np.float64, ('y', 'x'))
lon_var[:] = lon
lon_var.units = 'degrees_east'
lon_var.standard_name = 'longitude' # Optional
lon_var.long_name = 'longitude'
lat_var = nc.createVariable('lat', np.float64, ('y', 'x'))
lat_var[:] = lat
lat_var.units = 'degrees_north'
lat_var.standard_name = 'latitude' # Optional
lat_var.long_name = 'latitude'
"""
Explanation: Now we can create the needed variables. Both are dimensioned on y and x and are two-dimensional. The longitude variable is identified as actually containing such information by its required units of 'degrees_east', as well as the optional 'longitude' standard_name attribute. The case is the same for latitude, except the units are 'degrees_north' and the standard_name is 'latitude'.
End of explanation
"""
temps_var.coordinates = 'lon lat'
"""
Explanation: With the variables created, we identify these variables as containing coordinates for the Temperature variable by setting the coordinates value to a space-separated list of the names of the auxiliary coordinate variables:
End of explanation
"""
proj_var = nc.createVariable('lambert_projection', np.int32, ())
proj_var.grid_mapping_name = 'lambert_conformal_conic'
proj_var.standard_parallel = 25.
proj_var.latitude_of_projection_origin = 40.
proj_var.longitude_of_central_meridian = -105.
proj_var.semi_major_axis = 6371000.0
proj_var
"""
Explanation: This yields the following CDL:
double lon(y, x);
lon:units = "degrees_east";
lon:long_name = "longitude";
lon:standard_name = "longitude";
double lat(y, x);
lat:units = "degrees_north";
lat:long_name = "latitude";
lat:standard_name = "latitude";
float Temperature(forecast_time, pressure, y, x);
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Temperature:coordinates = "lon lat";
Coordinate System Information
With our data specified on a Lambert conformal projected grid, it would be good to include this information in our metadata. We can do this using a "grid mapping" variable. This uses a dummy scalar variable as a namespace for holding all of the required information. Relevant variables then reference the dummy variable with their grid_mapping attribute.
Below we create a variable and set it up for a Lambert conformal conic projection on a spherical earth. The grid_mapping_name attribute describes which of the CF-supported grid mappings we are specifying. The names of additional attributes vary between the mappings.
End of explanation
"""
temps_var.grid_mapping = 'lambert_projection' # or proj_var.name
"""
Explanation: Now that we created the variable, all that's left is to set the grid_mapping attribute on our Temperature variable to the name of our dummy variable:
End of explanation
"""
lons = np.array([-97.1, -105, -80])
lats = np.array([35.25, 40, 27])
heights = np.linspace(10, 1000, 10)
temps = np.random.randn(lats.size, heights.size)
stids = ['KBOU', 'KOUN', 'KJUP']
"""
Explanation: This yields the CDL:
variables:
int lambert_projection ;
lambert_projection:grid_mapping_name = "lambert_conformal_conic" ;
lambert_projection:standard_parallel = 25.0 ;
lambert_projection:latitude_of_projection_origin = 40.0 ;
lambert_projection:longitude_of_central_meridian = -105.0 ;
lambert_projection:semi_major_axis = 6371000.0 ;
float Temperature(forecast_time, pressure, y, x) ;
Temperature:units = "Kelvin" ;
Temperature:standard_name = "air_temperature" ;
Temperature:long_name = "Forecast air temperature" ;
Temperature:missing_value = -9999.0 ;
Temperature:coordinates = "lon lat" ;
Temperature:grid_mapping = "lambert_projection" ;
Cell Bounds
NASA Dataset Interoperability Recommendations:
Section 2.3 - Use CF “bounds” attributes
CF conventions state: “When gridded data does not represent the point values of a field but instead represents some characteristic of the field within cells of finite ‘volume,’ a complete description of the variable should include metadata that describes the domain or extent of each cell, and the characteristic of the field that the cell values represent.”
For example, if a rain gauge is read every 3 hours but its bucket is only emptied every 6 hours, the data might look like this
netcdf precip_bucket_bounds {
dimensions:
lat = 12 ;
lon = 19 ;
time = 8 ;
tbv = 2;
variables:
float lat(lat) ;
float lon(lon) ;
float time(time) ;
time:units = "hours since 2019-07-12 00:00:00.00";
time:bounds = "time_bounds" ;
float time_bounds(time, tbv) ;
float precip(time, lat, lon) ;
precip:units = "inches" ;
data:
time = 3, 6, 9, 12, 15, 18, 21, 24;
time_bounds = 0, 3, 0, 6, 6, 9, 6, 12, 12, 15, 12, 18, 18, 21, 18, 24;
}
So the time coordinate looks like
|---X
|-------X
        |---X
        |-------X
                |---X
                |-------X
                        |---X
                        |-------X
0   3   6   9   12  15  18  21  24
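A short Python sketch (hypothetical helper logic, not part of the original file) that reproduces the time_bounds pairs above from the 3-hourly reading times and the 6-hourly bucket dumps:

```python
# Readings every 3 hours; the bucket is emptied at 0, 6, 12, 18 hours,
# so each value accumulates from the start of its 6-hour window.
times = [3, 6, 9, 12, 15, 18, 21, 24]
time_bounds = [(((t - 1) // 6) * 6, t) for t in times]

# Flattened, this matches the time_bounds data in the CDL above
flat = [v for pair in time_bounds for v in pair]
```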
<a name="obs"></a>
Observational Data
So far we've focused on how to handle storing data that are arranged in a grid. What about observational data? The CF conventions describe this with conventions for Discrete Sampling Geometries (DSG).
For data that are regularly sampled (say, all at the same heights) this is straightforward. First, let's define some sample profile data, all at a few heights less than 1000m:
End of explanation
"""
nc.close()
nc = Dataset('obs_data.nc', 'w', format='NETCDF4_CLASSIC', diskless=True)
nc.createDimension('station', lats.size)
nc.createDimension('heights', heights.size)
nc.createDimension('str_len', 4)
nc.Conventions = 'CF-1.7'
nc.featureType = 'profile'
nc
"""
Explanation: Creation and basic setup
First we create a new file and define some dimensions. Since this is profile data, heights will be one dimension. We use station as our other dimension. We also set the global featureType attribute to 'profile' to indicate that this file holds "an ordered set of data points along a vertical line at a fixed horizontal position and fixed time". We also add a dimension to assist in storing our string station ids.
End of explanation
"""
lon_var = nc.createVariable('lon', np.float64, ('station',))
lon_var.units = 'degrees_east'
lon_var.standard_name = 'longitude'
lat_var = nc.createVariable('lat', np.float64, ('station',))
lat_var.units = 'degrees_north'
lat_var.standard_name = 'latitude'
"""
Explanation: Which gives this CDL:
netcdf obs_data {
dimensions:
station = 3 ;
heights = 10 ;
str_len = 4 ;
attributes:
:Conventions = "CF-1.7" ;
:featureType = "profile" ;
}
We can create our coordinates with:
End of explanation
"""
heights_var = nc.createVariable('heights', np.float32, ('heights',))
heights_var.units = 'meters'
heights_var.standard_name = 'altitude'
heights_var.positive = 'up'
heights_var[:] = heights
"""
Explanation: The standard refers to these as "instance variables" because each one refers to an instance of a feature. From here we can create our height coordinate variable:
End of explanation
"""
stid_var = nc.createVariable('stid', 'c', ('station', 'str_len'))
stid_var.cf_role = 'profile_id'
stid_var.long_name = 'Station identifier'
stid_var[:] = stids
"""
Explanation: Station IDs
Now we can also write our station IDs to a variable. This is a 2D variable, but one of the dimensions is simply there to facilitate treating strings as character arrays. We also assign this an attribute cf_role with a value of 'profile_id' to help software identify individual profiles:
End of explanation
"""
time_var = nc.createVariable('time', np.float32, ())
time_var.units = 'minutes since 2019-07-16 17:00'
time_var.standard_name = 'time'
time_var[:] = [5.]
temp_var = nc.createVariable('temperature', np.float32, ('station', 'heights'))
temp_var.units = 'celsius'
temp_var.standard_name = 'air_temperature'
temp_var.coordinates = 'lon lat heights time'
"""
Explanation: Now our CDL looks like:
netcdf obs_data {
dimensions:
station = 3 ;
heights = 10 ;
str_len = 4 ;
variables:
double lon(station) ;
lon:units = "degrees_east" ;
lon:standard_name = "longitude" ;
double lat(station) ;
lat:units = "degrees_north" ;
lat:standard_name = "latitude" ;
float heights(heights) ;
heights:units = "meters" ;
heights:standard_name = "altitude";
heights:positive = "up" ;
char stid(station, str_len) ;
stid:cf_role = "profile_id" ;
stid:long_name = "Station identifier" ;
attributes:
:Conventions = "CF-1.7" ;
:featureType = "profile" ;
}
Writing the field
Now all that's left is to write our profile data, which looks fairly standard. We also add a scalar variable for the time at which these profiles were captured:
End of explanation
"""
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/sandbox-1/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'sandbox-1', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the processes by which routed river water is re-evaporated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
tensorflow/tfx | tfx/examples/airflow_workshop/notebooks/step3.ipynb | apache-2.0 | from __future__ import print_function
!pip install -q papermill
!pip install -q matplotlib
!pip install -q networkx
import os
import tfx_utils
%matplotlib notebook
def _make_default_sqlite_uri(pipeline_name):
return os.path.join(os.environ['HOME'], 'airflow/tfx/metadata', pipeline_name, 'metadata.db')
def get_metadata_store(pipeline_name):
return tfx_utils.TFXReadonlyMetadataStore.from_sqlite_db(_make_default_sqlite_uri(pipeline_name))
pipeline_name = 'taxi'
pipeline_db_path = _make_default_sqlite_uri(pipeline_name)
print('Pipeline DB:\n{}'.format(pipeline_db_path))
store = get_metadata_store(pipeline_name)
"""
Explanation: Step 3: Data Validation
Use the code below to run TensorFlow Data Validation on your pipeline. Start by importing and opening the metadata store.
End of explanation
"""
# Visualize properties of example artifacts
store.get_artifacts_of_type_df(tfx_utils.TFXArtifactTypes.EXAMPLES)
"""
Explanation: Now print out the data artifacts:
End of explanation
"""
# Visualize stats for data
store.display_stats_for_examples(<insert ID here>)
"""
Explanation: Now visualize the dataset features.
Hint: try ID 2 or 3
End of explanation
"""
# Try different IDs here. Click stop in the plot when changing IDs.
%matplotlib notebook
store.plot_artifact_lineage(<insert ID here>)
"""
Explanation: Now plot the artifact lineage:
End of explanation
"""
|
dennys-bd/Coursera-Machine-Learning-Specialization | Course 2 - ML, Regression/week-3-polynomial-regression-assignment-blank.ipynb | mit | import graphlab
"""
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray raised to a power up to the given degree; e.g., if degree = 3 then column 1 is the SArray, column 2 is the SArray squared, and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
"""
tmp = graphlab.SArray([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x**3)
print tmp
print tmp_cubed
"""
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
"""
ex_sframe = graphlab.SFrame()
ex_sframe['power_1'] = tmp
print ex_sframe
"""
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example, we create an empty SFrame and set the column 'power_1' to the first power of tmp (i.e. tmp itself).
End of explanation
"""
def polynomial_sframe(feature, degree):
# assume that degree >= 1
# initialize the SFrame:
poly_sframe = graphlab.SFrame()
# and set poly_sframe['power_1'] equal to the passed feature
poly_sframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
# range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# then assign poly_sframe[name] to the appropriate power of feature
poly_sframe[name] = feature**power
return poly_sframe
"""
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
End of explanation
"""
print polynomial_sframe(tmp, 3)
"""
Explanation: To test your function, consider the small tmp variable and the outcome you would expect from the following call:
End of explanation
"""
sales = graphlab.SFrame('kc_house_data.gl/')
"""
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
"""
sales = sales.sort(['sqft_living', 'price'])
"""
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
"""
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
"""
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
"""
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
"""
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
"""
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
"""
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price; we then ask for these to be plotted as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation
"""
poly3_data = polynomial_sframe(sales['sqft_living'], 3)
my_features3 = poly3_data.column_names()
poly3_data['price'] = sales['price']
model3 = graphlab.linear_regression.create(poly3_data, target = 'price', features = my_features3, validation_set = None)
plt.plot(poly3_data['power_1'],poly3_data['price'],'.',
poly3_data['power_1'], model3.predict(poly3_data),'-')
"""
Explanation: The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:
End of explanation
"""
poly15_data = polynomial_sframe(sales['sqft_living'], 15)
my_features15 = poly15_data.column_names()
poly15_data['price'] = sales['price']
model15 = graphlab.linear_regression.create(poly15_data, target = 'price', features = my_features15, validation_set = None)
plt.plot(poly15_data['power_1'],poly15_data['price'],'.',
poly15_data['power_1'], model15.predict(poly15_data),'-')
"""
Explanation: Now try a 15th degree polynomial:
End of explanation
"""
set_11, set_12 = sales.random_split(.5,seed=0)
set_1, set_2 = set_11.random_split(.5,seed=0)
set_3, set_4 = set_12.random_split(.5,seed=0)
"""
Explanation: What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.
Changing the data and re-learning
We're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.
To split the sales data into four subsets, we perform the following steps:
* First split sales into 2 subsets with .random_split(0.5, seed=0).
* Next split the resulting subsets into 2 more subsets each. Use .random_split(0.5, seed=0).
We set seed=0 in these steps so that different users get consistent results.
You should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.
End of explanation
"""
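To see why fixing seed=0 makes the splits reproducible, here is a small pure-Python sketch of how a seeded random split behaves — an illustration of the idea only, not SFrame's actual implementation:

```python
import random

def random_split(rows, fraction, seed=0):
    """Assign each row to the first subset with probability `fraction`.

    With a fixed seed the random stream is identical on every run,
    so every user ends up with exactly the same split.
    """
    rng = random.Random(seed)
    first, second = [], []
    for row in rows:
        (first if rng.random() < fraction else second).append(row)
    return first, second

a1, b1 = random_split(list(range(100)), 0.5, seed=0)
a2, b2 = random_split(list(range(100)), 0.5, seed=0)
print(a1 == a2 and b1 == b2)  # True: same seed, same split
```

Applying this twice with fraction 0.5, as the steps above describe, yields four deterministic subsets of roughly equal size.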
set1_poly15_data = polynomial_sframe(set_1['sqft_living'], 15)
set1_poly15_data['price'] = set_1['price']
set1_model15 = graphlab.linear_regression.create(set1_poly15_data, target = 'price', features = my_features15, validation_set = None)
set1_model15.get("coefficients")
plt.plot(set1_poly15_data['power_1'],set1_poly15_data['price'],'.',
set1_poly15_data['power_1'], set1_model15.predict(set1_poly15_data),'-')
set2_poly15_data = polynomial_sframe(set_2['sqft_living'], 15)
set2_poly15_data['price'] = set_2['price']
set2_model15 = graphlab.linear_regression.create(set2_poly15_data, target = 'price', features = my_features15, validation_set = None)
set2_model15.get("coefficients")
plt.plot(set2_poly15_data['power_1'],set2_poly15_data['price'],'.',
set2_poly15_data['power_1'], set2_model15.predict(set2_poly15_data),'-')
set3_poly15_data = polynomial_sframe(set_3['sqft_living'], 15)
set3_poly15_data['price'] = set_3['price']
set3_model15 = graphlab.linear_regression.create(set3_poly15_data, target = 'price', features = my_features15, validation_set = None)
set3_model15.get("coefficients")
plt.plot(set3_poly15_data['power_1'],set3_poly15_data['price'],'.',
set3_poly15_data['power_1'], set3_model15.predict(set3_poly15_data),'-')
set4_poly15_data = polynomial_sframe(set_4['sqft_living'], 15)
set4_poly15_data['price'] = set_4['price']
set4_model15 = graphlab.linear_regression.create(set4_poly15_data, target = 'price', features = my_features15, validation_set = None)
set4_model15.get("coefficients")
"""
Explanation: Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.
End of explanation
"""
training_validation_set, test_data= sales.random_split(.9,seed=1)
train_data, validation_data = training_validation_set.random_split(.5,seed=1)
"""
Explanation: Some questions you will be asked on your quiz:
Quiz Question: Is the sign (positive or negative) for power_15 the same in all four models?
Quiz Question: (True/False) the plotted fitted lines look the same in all four plots
Selecting a Polynomial Degree
Whenever we have a "magic" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. (We will explore another approach in week 4).
We split the sales dataset 3-way into training set, test set, and validation set as follows:
Split our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).
Further split our training data into two sets: training and validation. Use random_split(0.5, seed=1).
Again, we set seed=1 to obtain consistent results for different users.
End of explanation
"""
RSS = {}
for degree in range(1, 16):
    poly_data = polynomial_sframe(train_data['sqft_living'], degree)
    my_features = poly_data.column_names()
    poly_data['price'] = train_data['price']
    # verbose=False silences the per-model training printout
    train_model = graphlab.linear_regression.create(poly_data, target = 'price', features = my_features, validation_set = None, verbose = False)
    poly_validation = polynomial_sframe(validation_data['sqft_living'], degree)
    # RSS is the sum of *squared* residuals, not the sum of predictions
    residuals = validation_data['price'] - train_model.predict(poly_validation)
    RSS[degree] = (residuals * residuals).sum()
print RSS
best_degree = min(RSS, key=RSS.get)
print best_degree, RSS[best_degree]
"""
Explanation: Next you should write a loop that does the following:
* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))
* Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree
* hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)
* Add train_data['price'] to the polynomial SFrame
* Learn a polynomial regression model to sqft vs price with that degree on TRAIN data
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynmial SFrame using validation data.
* Report which degree had the lowest RSS on validation data (remember python indexes from 0)
(Note you can turn off the print out of linear_regression.create() with verbose = False)
End of explanation
"""
poly_data = polynomial_sframe(train_data['sqft_living'], 5)
my_features = poly_data.column_names()
poly_data['price'] = train_data['price']
train_model = graphlab.linear_regression.create(poly_data, target = 'price', features = my_features, validation_set = None)
poly_test = polynomial_sframe(test_data['sqft_living'], 5)
# RSS on TEST data: sum of squared residuals
test_residuals = test_data['price'] - train_model.predict(poly_test)
(test_residuals * test_residuals).sum()
"""
Explanation: Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?
Now that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.
End of explanation
"""
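For reference, the RSS computed above is just the sum of squared residuals; a minimal pure-Python sketch of the formula, with made-up numbers and independent of GraphLab:

```python
def rss(predictions, actuals):
    """Residual sum of squares: sum over (actual - predicted)^2."""
    return sum((a - p) ** 2 for p, a in zip(predictions, actuals))

print(rss([2.0, 4.0], [3.0, 1.0]))  # (3 - 2)^2 + (1 - 4)^2 = 10.0
```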
|
y2ee201/Deep-Learning-Nanodegree | autoencoder/Convolutional_Autoencoder_Solution.ipynb | mit | %matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
"""
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
"""
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is what you'll find in the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al. claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
"""
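To make the nearest-neighbor upsampling step concrete, here is a rough NumPy sketch of what resizing a feature map does before the follow-up convolution smooths it (illustrative only; the network itself uses tf.image.resize_nearest_neighbor):

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbor upsampling: repeat each value along height and width."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

feature_map = np.array([[1., 2.],
                        [3., 4.]])
print(upsample_nearest(feature_map))
# Each input value becomes a 2x2 block; the convolution applied afterwards
# blends these blocks, avoiding the checkerboard overlap of a strided deconvolution.
```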
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
"""
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
"""
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
"""
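The noise-then-clip step described above can be sketched in isolation with NumPy (the noise factor of 0.5 matches the one used in the training loop; the batch here is just a random stand-in for real images):

```python
import numpy as np

rng = np.random.RandomState(0)
imgs = rng.rand(4, 28, 28, 1)                          # stand-in batch, values in [0, 1]
noise_factor = 0.5
noisy = imgs + noise_factor * rng.randn(*imgs.shape)   # add Gaussian noise
noisy = np.clip(noisy, 0.0, 1.0)                       # clip back to valid pixel range
print(noisy.shape, noisy.min() >= 0.0, noisy.max() <= 1.0)
```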
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
"""
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation
"""
|
opesci/devito | examples/performance/00_overview.ipynb | mit | from examples.performance import unidiff_output, print_kernel
"""
Explanation: Performance optimization overview
The purpose of this tutorial is twofold:
Illustrate the performance optimizations applied to the code generated by an Operator.
Describe the options Devito provides to users to steer the optimization process.
As we shall see, most optimizations are automatically applied as they're known to systematically improve performance. Others, whose impact varies across different Operator's, are instead to be enabled through specific flags.
An Operator has several preset optimization levels; the fundamental ones are noop and advanced. With noop, no performance optimizations are introduced. With advanced, several flop-reducing and data locality optimization passes are applied. Examples of flop-reducing optimization passes are common sub-expressions elimination and factorization, while examples of data locality optimization passes are loop fusion and cache blocking. Optimization levels in Devito are conceptually akin to the -O2, -O3, ... flags in classic C/C++/Fortran compilers.
An optimization pass may provide knobs, or options, for fine-grained tuning. As explained in the next sections, some of these options are given at compile-time, others at run-time.
** Remark **
Parallelism -- both shared-memory (e.g., OpenMP) and distributed-memory (MPI) -- is by default disabled and is not controlled via the optimization level. In this tutorial we will also show how to enable OpenMP parallelism (you'll see it's trivial!). Another mini-guide about parallelism in Devito and related aspects is available here.
****
Outline
API
Default values
Running example
OpenMP parallelism
The advanced mode
The advanced-fsg mode
API
The optimization level may be changed in various ways:
globally, through the DEVITO_OPT environment variable. For example, to disable all optimizations on all Operator's, one could run with
DEVITO_OPT=noop python ...
programmatically, adding the following lines to a program
from devito import configuration
configuration['opt'] = 'noop'
locally, as an Operator argument
Operator(..., opt='noop')
Local takes precedence over programmatic, and programmatic takes precedence over global.
The optimization options, instead, may only be changed locally. The syntax to specify an option is
Operator(..., opt=('advanced', {<optimization options>}))
A concrete example (you can ignore the meaning for now) is
Operator(..., opt=('advanced', {'blocklevels': 2}))
That is, options are to be specified together with the optimization level (advanced).
Default values
By default, all Operators are run with the optimization level set to advanced. So this
Operator(Eq(...))
is equivalent to
Operator(Eq(...), opt='advanced')
and obviously also to
Operator(Eq(...), opt=('advanced', {}))
In virtually all scenarios, regardless of application and underlying architecture, this ensures very good performance -- but not necessarily the very best.
Misc
The following functions will be used throughout the notebook for printing generated code.
End of explanation
"""
from devito import configuration
configuration['language'] = 'C'
configuration['platform'] = 'bdw' # Optimize for an Intel Broadwell
configuration['opt-options']['par-collapse-ncores'] = 1 # Maximize use of loop collapsing
"""
Explanation: The following cell is only needed for Continuous Integration. But actually it's also an example of how "programmatic takes precedence over global" (see API section).
End of explanation
"""
from devito import Eq, Grid, Operator, Function, TimeFunction, sin
grid = Grid(shape=(80, 80, 80))
f = Function(name='f', grid=grid)
u = TimeFunction(name='u', grid=grid, space_order=4)
eq = Eq(u.forward, f**2*sin(f)*u.dy.dy)
"""
Explanation: Running example
Throughout the notebook we will generate Operator's for the following time-marching Eq.
End of explanation
"""
op0 = Operator(eq, opt='noop')
op0_omp = Operator(eq, opt=('noop', {'openmp': True}))
# print(op0)
# print(unidiff_output(str(op0), str(op0_omp))) # Uncomment to print out the diff only
print_kernel(op0_omp)
"""
Explanation: Despite its simplicity, this Eq is all we need to showcase the key components of the Devito optimization engine.
OpenMP parallelism
There are several ways to enable OpenMP parallelism. The one we use here consists of supplying an option to an Operator. The next cell illustrates the difference between two Operators generated with the noop optimization level, with OpenMP enabled only in the latter one.
End of explanation
"""
op0_b0_omp = Operator(eq, opt=('noop', {'openmp': True, 'par-dynamic-work': 100}))
print_kernel(op0_b0_omp)
"""
Explanation: The OpenMP-ized op0_omp Operator includes:
the header file "omp.h"
a #pragma omp parallel num_threads(nthreads) directive
a #pragma omp for collapse(...) schedule(dynamic,1) directive
More complex Operators will have more directives, more types of directives, different iteration scheduling strategies based on heuristics and empirical tuning (e.g., static instead of dynamic), etc.
The reason for collapse(1), rather than collapse(3), boils down to using opt=('noop', ...); if the default advanced mode had been used instead, we would see the latter clause.
We note how the OpenMP pass introduces a new symbol, nthreads. This allows users to explicitly control the number of threads with which an Operator is run.
op0_omp.apply(time_M=0) # Picks up `nthreads` from the standard environment variable OMP_NUM_THREADS
op0_omp.apply(time_M=0, nthreads=2) # Runs with 2 threads per parallel loop
A few optimization options are available for this pass (but not on all platforms, see here), though in our experience the default values do a fine job:
par-collapse-ncores: use a collapse clause only if the number of available physical cores is greater than this value (default=4).
par-collapse-work: use a collapse clause only if the trip count of the collapsible loops is statically known to exceed this value (default=100).
par-chunk-nonaffine: a coefficient to adjust the chunk size in non-affine parallel loops. The larger the coefficient, the smaller the chunk size (default=3).
par-dynamic-work: use dynamic scheduling if the operation count per iteration exceeds this value. Otherwise, use static scheduling (default=10).
par-nested: use nested parallelism if the number of hyperthreads per core is greater than this value (default=2).
So, for instance, we could switch to static scheduling by constructing the following Operator
End of explanation
"""
op1_omp = Operator(eq, opt=('blocking', {'openmp': True}))
# print(op1_omp) # Uncomment to see the *whole* generated code
print_kernel(op1_omp)
"""
Explanation: The advanced mode
The default optimization level in Devito is advanced. This mode performs several compilation passes to optimize the Operator for computation (number of flops), working set size, and data locality. In the next paragraphs we'll dissect the advanced mode to analyze, one by one, some of its key passes.
Loop blocking
The next cell creates a new Operator that adds loop blocking to what we had in op0_omp.
End of explanation
"""
op1_omp_6D = Operator(eq, opt=('blocking', {'blockinner': True, 'blocklevels': 2, 'openmp': True}))
# print(op1_omp_6D) # Uncomment to see the *whole* generated code
print_kernel(op1_omp_6D)
"""
Explanation: ** Remark **
'blocking' is not an optimization level -- it rather identifies a specific compilation pass. In other words, the advanced mode defines an ordered sequence of passes, and blocking is one such pass.
****
The blocking pass creates additional loops over blocks. In this simple Operator there's just one loop nest, so only a pair of additional loops is created. In more complex Operators, several loop nests may individually be blocked, whereas others may be left unblocked -- this is decided by the Devito compiler according to certain heuristics. The size of a block is represented by the symbols x0_blk0_size and y0_blk0_size, which are runtime parameters akin to nthreads.
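The loop structure introduced by the blocking pass can be sketched in plain Python (an illustration of the idea only -- the generated code is C, and the kernel body below is made up):

```python
import numpy as np

def blocked_apply(a, xb=8, yb=8):
    """Visit a 2D array block-by-block, mimicking the x0_blk0/y0_blk0
    outer loops and the x/y inner loops created by the blocking pass."""
    nx, ny = a.shape
    out = np.empty_like(a)
    for x0_blk0 in range(0, nx, xb):          # loop over x blocks
        for y0_blk0 in range(0, ny, yb):      # loop over y blocks
            for x in range(x0_blk0, min(x0_blk0 + xb, nx)):
                for y in range(y0_blk0, min(y0_blk0 + yb, ny)):
                    out[x, y] = 2.0 * a[x, y]  # stand-in kernel body
    return out

a = np.arange(20.0 * 12).reshape(20, 12)
assert np.array_equal(blocked_apply(a), 2.0 * a)
```

Each (x0_blk0, y0_blk0) pair identifies one block; iterating within a block before moving to the next one improves cache reuse.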
By default, Devito applies 2D blocking and sets the default block shape to 8x8. There are two ways to set a different block shape:
passing an explicit value. For instance, below we run with a 24x8 block shape
op1_omp.apply(..., x0_blk0_size=24)
letting the autotuner pick up a better block shape for us. There are several autotuning modes. A short summary is available here
op1_omp.apply(..., autotune='aggressive')
Loop blocking also provides two optimization options:
blockinner={False, True} -- to enable 3D (or any nD, n>2) blocking
blocklevels={int} -- to enable hierarchical blocking, to exploit multiple levels of the cache hierarchy
In the example below, we construct an Operator with six-dimensional loop blocking: the first three loops represent outer blocks, whereas the second three loops represent inner blocks within an outer block.
End of explanation
"""
op2_omp = Operator(eq, opt=('blocking', 'simd', {'openmp': True}))
# print(op2_omp) # Uncomment to see the generated code
# print(unidiff_output(str(op1_omp), str(op2_omp))) # Uncomment to print out the diff only
"""
Explanation: SIMD vectorization
Devito enforces SIMD vectorization through OpenMP pragmas.
End of explanation
"""
op3_omp = Operator(eq, opt=('lift', {'openmp': True}))
print_kernel(op3_omp)
"""
Explanation: Code motion
The advanced mode has a code motion pass. In explicit PDE solvers, this is most commonly used to lift expensive time-invariant sub-expressions out of the inner loops. The pass is however quite general in that it is not restricted to the concept of time-invariance -- any sub-expression invariant with respect to a subset of Dimensions is a code motion candidate. In our running example, sin(f) gets hoisted out of the inner loops since it is determined to be an expensive invariant sub-expression. In other words, the compiler trades the redundant computation of sin(f) for additional storage (the r0[...] array).
End of explanation
"""
op4_omp = Operator(eq, opt=('cse', {'openmp': True}))
print(unidiff_output(str(op0_omp), str(op4_omp)))
"""
Explanation: Basic flop-reducing transformations
Among the simpler flop-reducing transformations applied by the advanced mode we find:
"classic" common sub-expressions elimination (CSE),
factorization,
optimization of powers
The cell below shows how the computation of u changes by incrementally applying these three passes. First of all, we observe how the symbolic spacing h_y gets assigned to a temporary, r0, as it appears in several sub-expressions. This is the effect of CSE.
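Numerically, the effect of CSE can be mimicked in plain Python (made-up values; the actual pass rewrites the symbolic expression tree):

```python
import math

h_y, u0, u1 = 0.5, 1.2, 3.4

# Before CSE: the sub-expression 1/h_y is effectively evaluated four times.
before = u0 / h_y + u1 / h_y + (u0 / h_y) * (u1 / h_y)

# After CSE: the shared sub-expression is computed once into a temporary r0.
r0 = 1.0 / h_y
after = u0 * r0 + u1 * r0 + (u0 * r0) * (u1 * r0)

assert math.isclose(before, after)
```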
End of explanation
"""
op5_omp = Operator(eq, opt=('cse', 'factorize', {'openmp': True}))
print(unidiff_output(str(op4_omp), str(op5_omp)))
"""
Explanation: The factorization pass makes sure r0 is collected to reduce the number of multiplications.
End of explanation
"""
op6_omp = Operator(eq, opt=('cse', 'factorize', 'opt-pows', {'openmp': True}))
print(unidiff_output(str(op5_omp), str(op6_omp)))
"""
Explanation: Finally, opt-pows turns costly pow calls into multiplications.
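The rewrite can be sketched in plain Python (a hypothetical helper, not the actual pass, which operates on the generated code):

```python
import math

def expand_pow(base, n):
    """Rewrite base**n (integer n >= 1) as n - 1 explicit multiplications."""
    r = base
    for _ in range(n - 1):
        r = r * base
    return r

assert math.isclose(expand_pow(3.7, 2), 3.7 ** 2)
assert math.isclose(expand_pow(3.7, 3), 3.7 ** 3)
```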
End of explanation
"""
op7_omp = Operator(eq, opt=('cire-sops', {'openmp': True}))
print_kernel(op7_omp)
# print(unidiff_output(str(op7_omp), str(op0_omp))) # Uncomment to print out the diff only
"""
Explanation: Cross-iteration redundancy elimination (CIRE)
This is perhaps the most advanced among the optimization passes in Devito. CIRE [1] searches for redundancies across consecutive loop iterations. These are often induced by a mixture of nested, high-order, partial derivatives. The underlying idea is very simple. Consider:
r0 = a[i-1] + a[i] + a[i+1]
at i=1, we have
r0 = a[0] + a[1] + a[2]
at i=2, we have
r0 = a[1] + a[2] + a[3]
So the sub-expression a[1] + a[2] is computed twice, by two consecutive iterations.
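In plain Python, the idea reads as follows (an illustrative sketch only -- the real pass works on the symbolic expression tree, across multiple dimensions, and in concert with blocking and vectorization):

```python
def naive(a):
    # each iteration recomputes the sub-expression a[i] + a[i+1]
    return [a[i-1] + a[i] + a[i+1] for i in range(1, len(a) - 1)]

def cire_like(a):
    out = []
    tail = a[0] + a[1]        # temporary carried across iterations
    for i in range(1, len(a) - 1):
        head = tail           # reuse a[i-1] + a[i] from the previous iteration
        tail = a[i] + a[i+1]  # computed once, reused at the next iteration
        out.append(head + a[i+1])
    return out

vals = [3, 1, 4, 1, 5, 9, 2, 6]
assert naive(vals) == cire_like(vals)
```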
What makes CIRE complicated is the generalization to arbitrary expressions, the presence of multiple dimensions, the scheduling strategy due to the trade-off between redundant compute and working set, and the co-existence with other optimizations (e.g., blocking, vectorization). All these aspects won't be treated here. What instead we will show is the effect of CIRE in our running example and the optimization options at our disposal to drive the detection and scheduling of the captured redundancies.
In our running example, some cross-iteration redundancies are induced by the nested first-order derivatives along y. As we see below, these redundancies are captured and assigned to the two-dimensional temporary r0.
Note: the name cire-sops means "Apply CIRE to sum-of-product expressions". A sum-of-product is what taking a derivative via finite differences produces.
End of explanation
"""
eq = Eq(u.forward, f**2*sin(f)*(u.dy.dy + u.dx.dx))
op7_b0_omp = Operator(eq, opt=('cire-sops', {'openmp': True}))
print_kernel(op7_b0_omp)
"""
Explanation: We note that since there are no redundancies along x, the compiler is smart to figure out that r0 and u can safely be computed within the same x loop. This is nice -- not only is the reuse distance decreased, but also a grid-sized temporary avoided.
The min-storage option
Let's now consider a variant of our running example
End of explanation
"""
op7_b1_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'min-storage': True}))
print_kernel(op7_b1_omp)
"""
Explanation: As expected, there are now two temporaries, one stemming from u.dy.dy and the other from u.dx.dx. A key difference with respect to op7_omp here is that both are grid-size temporaries. This might seem odd at first -- why should the u.dy.dy temporary, that is r1, now be a three-dimensional temporary when we know already it could be a two-dimensional temporary? This is merely a compiler heuristic: by adding an extra dimension to r1, both temporaries can be scheduled within the same loop nest, thus augmenting data reuse and potentially enabling further cross-expression optimizations. We can disable this heuristic through the min-storage option.
End of explanation
"""
eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example
op8_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-mincost-sops': 31}))
print_kernel(op8_omp)
"""
Explanation: The cire-mincost-sops option
So far so good -- we've seen that Devito can capture and schedule cross-iteration redundancies. But what if we actually do not want certain redundancies to be captured? There are a few reasons we might want that -- for example, we may be allocating too much extra memory for the tensor temporaries and would rather avoid it. For this, we can tell Devito what the minimum cost of a sub-expression should be for it to become a CIRE candidate. The cost is an integer based on the operation count and the type of operations:
A basic arithmetic operation such as + and * has a cost of 1.
A / whose divisor is a constant expression has a cost of 1.
A / whose divisor is not a constant expression has a cost of 25.
A power with non-integer exponent has a cost of 50.
A power with non-negative integer exponent n has a cost of n-1 (i.e., the number of * it will be converted into).
A transcendental function (sin, cos, etc.) has a cost of 100.
The cire-mincost-sops option can be used to control the minimum cost of CIRE candidates.
End of explanation
"""
op8_b1_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-mincost-sops': 30}))
print_kernel(op8_b1_omp)
"""
Explanation: We observe how setting cire-min-cost=31 makes the tensor temporary produced by op7_omp disappear. 30 is indeed the minimum cost such that the targeted sub-expression becomes an optimization candidate.
8.33333333e-2F*u[t0][x + 4][y + 2][z + 4]/h_y - 6.66666667e-1F*u[t0][x + 4][y + 3][z + 4]/h_y + 6.66666667e-1F*u[t0][x + 4][y + 5][z + 4]/h_y - 8.33333333e-2F*u[t0][x + 4][y + 6][z + 4]/h_y
The cost of integer arithmetic for array indexing is always zero => 0.
Three + (or -) => 3
Four / by h_y, a constant symbol => 4
Four two-way * with operands (1/h_y, u) => 8
So in total we have 15 operations. For a sub-expression to be optimized away by CIRE, twice its operation count must be at least the cire-mincost-sops value, OR there must be no increase in working set size. Here there is an increase in working set size -- the new tensor temporary -- and twice the operation count (30) falls short of the threshold (31), so the compiler decides not to optimize away the sub-expression. In short, the rationale is that the saving in operation count does not justify the introduction of a new tensor temporary.
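The arithmetic above can be checked with a toy encoding of the cost rules (a hypothetical helper, not Devito's actual cost model):

```python
# Toy encoding of the cost rules listed earlier (illustrative only).
COSTS = {'+': 1, '-': 1, '*': 1, '/const': 1, '/var': 25,
         'pow-non-integer': 50, 'transcendental': 100}

def int_pow_cost(n):
    return max(n - 1, 0)  # integer exponent n -> n - 1 multiplications

# 3 additions, 4 divisions by the constant h_y, 8 multiplications
total = 3 * COSTS['+'] + 4 * COSTS['/const'] + 8 * COSTS['*']
assert total == 15       # matches the operation count derived above
assert 2 * total == 30   # counted twice, it clears a threshold of 30 but not 31
```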
Next, we try again with a smaller cire-mincost-sops.
End of explanation
"""
eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example
op11_omp = Operator(eq, opt=('lift', {'openmp': True}))
# print_kernel(op11_omp) # Uncomment to see the generated code
"""
Explanation: The cire-mincost-inv option
Akin to sums-of-products, cross-iteration redundancies may also be searched for across dimension-invariant sub-expressions, typically time-invariants. So, analogously to what we've seen before:
End of explanation
"""
op12_omp = Operator(eq, opt=('lift', {'openmp': True, 'cire-mincost-inv': 51}))
print_kernel(op12_omp)
"""
Explanation: For convenience, the lift pass triggers CIRE for dimension-invariant sub-expressions. As seen before, this leads to producing one tensor temporary. By setting a larger value for cire-mincost-inv, we avoid a grid-size temporary to be allocated, in exchange for a trascendental function, sin, to be computed at each iteration
End of explanation
"""
op13_omp = Operator(eq, opt=('cire-sops', {'openmp': True, 'cire-maxpar': True}))
print_kernel(op13_omp)
"""
Explanation: The cire-maxpar option
Sometimes it's possible to trade storage for parallelism (i.e., for more parallel dimensions). For this, Devito provides the cire-maxpar option which is by default set to:
False on CPU backends
True on GPU backends
Let's see what happens when we switch it on
End of explanation
"""
op14_omp = Operator(eq, opt=('advanced', {'openmp': True}))
print(op14_omp)
# op14_b0_omp = Operator(eq, opt=('advanced', {'min-storage': True}))
"""
Explanation: The generated code uses a three-dimensional temporary that gets written and subsequently read in two separate x-y-z loop nests. Now, both loops can safely be openmp-collapsed, which is vital when running on GPUs.
Impact of CIRE in the advanced mode
The advanced mode triggers all of the passes we've seen so far... and in fact, many more! Some of them, however, aren't visible in our running example (e.g., all of the MPI-related optimizations). These will be treated in a future notebook.
Obviously, all of the optimization options (e.g., min-storage, cire-mincost-sops, blocklevels) are applicable and composable in any arbitrary way.
End of explanation
"""
op15_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-rotate': True}))
print_kernel(op15_omp)
"""
Explanation: A crucial observation here is that CIRE is applied on top of loop blocking -- the r1 temporary is computed within the same block as u, which in turn requires an additional iteration at the block edge along y to be performed (the first y loop starts at y0_blk0 - 2, while the second one at y0_blk0). Further, the size of r1 is now a function of the block shape. Altogether, this implements what in the literature is often referred to as "overlapped tiling" (or "overlapped blocking"): data reuse across consecutive loop nests is obtained by cross-loop blocking, which in turn requires a certain degree of redundant computation at the block edge. Clearly, there's a tension between the block shape and the amount of redundant computation. For example, a small block shape guarantees a small(er) working set, and thus better data reuse, but also requires more redundant computation.
The cire-rotate option
So far we've seen two ways to compute the tensor temporaries:
The temporary dimensions span the whole grid;
The temporary dimensions span a block created by the loop blocking pass.
There are a few other ways, and in particular there's a third way supported in Devito, enabled through the cire-rotate option:
The temporary outermost-dimension is a function of the stencil radius; all other temporary dimensions are a function of the loop blocking shape.
Let's jump straight into an example
End of explanation
"""
print(op15_omp.body[1].header[4])
"""
Explanation: There are two key things to notice here:
The r1 temporary is a pointer to a two-dimensional array of size [2][z_size]. It's obtained via casting of pr1[tid], which in turn is defined as
End of explanation
"""
eq = Eq(u.forward, (2*f*f*u.dy).dy)
op16_omp = Operator(eq, opt=('advanced', {'openmp': True}))
print_kernel(op16_omp)
"""
Explanation: Within the y loop there are several iteration variables, some of which (yr0, yr1, ...) employ modulo increment to cyclically produce the indices 0 and 1.
In essence, with cire-rotate, instead of computing an entire slice of y values, at each y iteration we only keep track of the values that are strictly necessary to evaluate u at y -- only two values in this case. This results in a working set reduction, at the price of turning one parallel loop (y) into a sequential one.
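A NumPy sketch of the rotation (illustrative names only; the generated code does the same via the yr0/yr1 modulo indices):

```python
import numpy as np

n = 6
src = np.arange(n, dtype=float)
reference = src[:-1] + src[1:]  # computed with a full-size temporary

buf = np.empty(2)               # r1 analogue: only two live entries
out = np.empty(n - 1)
buf[0] = src[0]
for y in range(1, n):
    buf[y % 2] = src[y]         # modulo indexing cycles 0, 1, 0, 1, ...
    out[y - 1] = buf[(y - 1) % 2] + buf[y % 2]

assert np.allclose(out, reference)
```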
The cire-maxalias option
Let's consider the following variation of our running example, in which the outer y derivative now comprises several terms, other than just u.dy.
End of explanation
"""
op16_b0_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-maxalias': True}))
print_kernel(op16_b0_omp)
"""
Explanation: By paying close attention to the generated code, we see that the r0 temporary only stores the u.dy component, while the 2*f*f term is left intact in the loop nest computing u. This is due to a heuristic applied by the Devito compiler: in a derivative, only the sub-expressions representing additions -- and therefore inner derivatives -- are used as CIRE candidates. The logic behind this heuristic is that of minimizing the number of required temporaries at the price of potentially leaving some redundancies on the table. One can disable this heuristic via the cire-maxalias option.
End of explanation
"""
eq = Eq(u.forward, (2*f*f*u.dy).dy + (3*f*u.dy).dy)
op16_b1_omp = Operator(eq, opt=('advanced', {'openmp': True}))
op16_b2_omp = Operator(eq, opt=('advanced', {'openmp': True, 'cire-maxalias': True}))
# print_kernel(op16_b1_omp) # Uncomment to see generated code with one temporary but more flops
# print_kernel(op16_b2_omp) # Uncomment to see generated code with two temporaries but fewer flops
"""
Explanation: Now we have a "fatter" temporary, and that's good -- but if, instead, we had had an Operator such as the one below, the gain from a reduced operation count might be outweighed by the presence of more temporaries, which means larger working set and increased memory traffic.
End of explanation
"""
eq = Eq(u.forward, f**2*sin(f)*u.dy.dy) # Back to original running example
op17_omp = Operator(eq, opt=('advanced-fsg', {'openmp': True}))
print(op17_omp)
"""
Explanation: The advanced-fsg mode
The alternative advanced-fsg optimization level applies the same passes as advanced, but in a different order. The key difference is that -fsg does not generate overlapped blocking code across CIRE-generated loop nests.
End of explanation
"""
eq = Eq(u.forward, f**2*sin(f)*(u.dy.dy + u.dx.dx))
op17_b0 = Operator(eq, opt=('advanced-fsg', {'openmp': True}))
print(op17_b0)
"""
Explanation: The x loop here is still shared by the two loop nests, but the y one isn't. Analogously, if we consider the alternative eq already used in op7_b0_omp, we get two completely separate, and therefore individually blocked, loop nests.
End of explanation
"""
|
buruzaemon/svd | PCA.ipynb | bsd-3-clause | corr = X_zscaled.corr()
tmp = pd.np.triu(corr) - np.eye(corr.shape[0])
tmp = tmp.flatten()
tmp = tmp[np.nonzero(tmp)]
tmp = pd.Series(np.abs(tmp))
print('Correlation matrix:\n\n{}\n\n'.format(corr.values))
print('Multicollinearity check using off-diagonal values:\n\n{}'.format(tmp.describe()))
"""
Explanation: Multicollinearity Check
Using $corr(X)$, check to see that we have some level of multicollinearity in the data, enough to warrant using PCA for visualization in reduced-rank principal component space. As a rule of thumb, the off-diagonal correlation values (either in the upper- or lower-triangle) should have absolute values of around 0.30 or so.
End of explanation
"""
eigenvalues, eigenvectors = np.linalg.eig(X_zscaled.cov())
eigenvalues_normalized = eigenvalues / eigenvalues.sum()
cumvar_explained = np.cumsum(eigenvalues_normalized)
"""
Explanation: PCA via Eigen-Decomposition
Obtain eigenvalues, eigenvectors of $cov(X)$ via eigen-decomposition
From the factorization of symmetric matrix $S$ into orthogonal matrix $Q$ of the eigenvectors and diagonal matrix $\Lambda$ of the eigenvalues, we can likewise decompose $cov(X)$ (or in our case $corr(X)$ since we standardized our data).
\begin{align}
S &= Q \Lambda Q^\intercal \\
\Rightarrow X^\intercal X &= V \Lambda V^\intercal
\end{align}
where $V$ are the orthonormal eigenvectors of $X^\intercal X$.
We can normalize the eigenvalues to see how much variance is captured per each respective principal component. We will also calculate the cumulative variance explained. This will help inform our decision of how many principal components to keep when reducing dimensions in visualizing the data.
End of explanation
"""
T = pd.DataFrame(X_zscaled.dot(eigenvectors))
# set column names
T.columns = ['pc1', 'pc2', 'pc3', 'pc4']
# also add the species label as a column
T = pd.concat([T, Y.species], axis=1)
"""
Explanation: Reduce dimensions and visualize
To project the original data into principal component space, we obtain score matrix $T$ by taking the dot product of $X$ and the eigenvectors $V$.
\begin{align}
T &= X V
\end{align}
End of explanation
"""
# let's try using the first 2 principal components
k = 2
# divide T by label
irises = [T[T.species=='setosa'],
T[T.species=='versicolor'],
T[T.species=='virginica']]
# define a color-blind friendly set of quantative colors
# for each species
colors = ['#1b9e77', '#d95f02', '#7570b3']
_, (ax1, ax2) = plt.subplots(1, 2, sharey=False)
# plot principal component vis-a-vis total variance in data
ax1.plot([1,2,3,4],
eigenvalues_normalized,
'-o',
color='#8da0cb',
label='eigenvalue (normalized)',
alpha=0.8,
zorder=1000)
ax1.plot([1,2,3,4],
cumvar_explained,
'-o',
color='#fc8d62',
label='cum. variance explained',
alpha=0.8,
zorder=1000)
ax1.set_xlim(0.8, 4.2)
ax1.set_xticks([1,2,3,4])
ax1.set_xlabel('Principal component')
ax1.set_ylabel('Variance')
ax1.legend(loc='center right', fontsize=7)
ax1.grid(color='#fdfefe')
ax1.set_facecolor('#f4f6f7')
# plot the reduced-rank score matrix representation
for group, color in zip(irises, colors):
ax2.scatter(group.pc1,
group.pc2,
marker='^',
color=color,
label=group.species,
alpha=0.5,
zorder=1000)
ax2.set_xlabel(r'PC 1')
ax2.set_ylabel(r'PC 2')
ax2.grid(color='#fdfefe')
ax2.set_facecolor('#f4f6f7')
ax2.legend(labels=iris.target_names, fontsize=7)
plt.suptitle(r'Fig. 1: PCA via eigen-decomposition, 2D visualization')
plt.tight_layout(pad=3.0)
plt.show()
"""
Explanation: We can visualize the original, 4D iris data of $X$ by using the first $k$ eigenvectors of $X$, projecting the original data into a reduced-rank $k$-dimensional principal component space.
\begin{align}
T_{rank=k} &= X V_{rank=k}
\end{align}
End of explanation
"""
feature_norms = np.linalg.norm(eigenvectors[:, 0:k], axis=1)
feature_weights = feature_norms / feature_norms.sum()
msg = ('Using {} principal components, '
'the original features are represented with the following weights:')
print(msg.format(k))
for feature, weight in zip(iris.feature_names, feature_weights):
print('- {}: {:0.3f}'.format(feature, weight))
"""
Explanation: Relative weights of the original features in principal component space
Each row of $V$ is a vector representing the relative weights for each of the original features for each principal component.
Given reduced-rank $k$, the columns of $V_k$ are the weights for principal components $1, \dots, k$.
Calculate the norm for each row of $V_k$, and normalize the results to obtain the relative weights for each original feature in principal component space.
End of explanation
"""
U, S, Vt = np.linalg.svd(X_zscaled)
print(eigenvectors)
print(Vt.T)
Vt.T.shape
"""
Explanation: PCA via Singular Value Decomposition
Singular value decomposition factors any matrix $A$ into left-singular vector matrix $U$; diagonal matrix of singular values $\Sigma$; and right-singular vector matrix $V$.
\begin{align}
A &= U \Sigma V^\intercal
\end{align}
If we start with
\begin{align}
X &= U \Sigma V^\intercal \\
\\
X^\intercal X &= (U \Sigma V^\intercal)^\intercal U \Sigma V^\intercal \\
&= V \Sigma^\intercal U^\intercal U \Sigma V^\intercal \\
&= V \Sigma^\intercal \Sigma V^\intercal \\
\\
\Rightarrow \Sigma^\intercal \Sigma &= \Lambda
\end{align}
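This relationship is easy to verify numerically on synthetic data (an illustrative check, independent of the iris data used here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 4))

_, s, _ = np.linalg.svd(A, full_matrices=False)
lam = np.linalg.eigvalsh(A.T @ A)  # eigenvalues, in ascending order

# Sigma^T Sigma = Lambda: the squared singular values match the eigenvalues
assert np.allclose(np.sort(s ** 2), lam)
```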
End of explanation
"""
|
google/data-pills | pills/CM/[DATA_PILL]_[CM]_Campaign_Overlap_(ADH)_v1.ipynb | apache-2.0 | # Install additional packages
!pip install -q matplotlib-venn
# Import all necessary libs
import json
import sys
import argparse
import pprint
import random
import datetime
import pandas as pd
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient import discovery
from oauthlib.oauth2.rfc6749.errors import InvalidGrantError
from google.auth.transport.requests import AuthorizedSession
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from googleapiclient.errors import HttpError
from matplotlib import pyplot as plt
from matplotlib_venn import venn3, venn3_circles
from IPython.display import display, HTML
from googleapiclient.errors import HttpError
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
"""
Explanation: Important
This content is intended for educational and informational purposes only.
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Target overlap analysis
Why?
In today’s digital ecosystem a consumer’s journey down the marketing funnel can be more complicated than ever. There are so many user touch points and ways to win (or lose) a customer along the way. Carefully moving consumers down the funnel is key to success. Therefore, we believe that brands that holistically think about the entire marketing funnel will generate the most demand and profitability.
By running a targeting overlap analysis, you can intentionally increase the overlap between your branding and direct-response media through coordinated targeting and creative.
What?
In this analysis we will be looking into the cookie-size overlap across up to three different media strategies (DV360 Line Items or Insertion Orders), and (optionally) how this overlap influences the floodlight conversions/cookies rate.
Key notes / assumptions
* If user_id is 0 for a user, we will not consider it (potentially because the user is in a Google Demographic or Affinity Segment, is direct traffic from Facebook, or for publisher policy reasons).
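For illustration only, the rule looks like this on a local DataFrame (the column names below are assumptions for the sketch; the actual filtering happens inside the ADH SQL query):

```python
import pandas as pd

df = pd.DataFrame({'user_id': ['0', 'a1b2', '0', 'c3d4'],
                   'line_item': [1, 1, 2, 2]})

considered = df[df.user_id != '0']  # drop zeroed-out user ids
assert len(considered) == 2
```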
ADH APIs Configuration Steps
1. Go to the Google Developers Console and verify that you have access to your Google Cloud project via the drop-down menu at the top of the page. If you don't see the right Google Cloud project, you should reach out to your Ads Data Hub team to get access.
2. From the project drop-down menu, select your BigQuery project.
3. Click on the hamburger button in the top-left corner of the page and click APIs & services > Credentials.
4. If you have not done so already, create an API key by clicking the Create credentials drop-down menu and selecting API key. This will create an API key that you will need in a later step.
5. If you have not done so already, create a new OAuth 2.0 client ID by clicking the Create credentials button and selecting OAuth client ID. For the Application type select Other and optionally enter a name to be associated with the client ID. Click Create to create the new client ID, and a dialog will appear showing you your client ID and secret. On the Credentials page for your project, find your new client ID listed under OAuth 2.0 client IDs, and click the corresponding download icon. The downloaded file will contain your credentials, which will be needed to step through the OAuth 2.0 installed-application flow.
6. Update the DEVELOPER_KEY field to match the API key you retrieved earlier.
7. Rename the credentials file you downloaded earlier to adh-key.json and upload it in this colab (on the left menu, click the "Files" tab and then click the "Upload" button).
Set Up - Install all dependencies and authorize BigQuery access
End of explanation
"""
# TODO: Update the value of these variables with your own values
DEVELOPER_KEY = 'INSERT_DEVELOPER_KEY_HERE' # TODO: replace with your own API key
CLIENT_SECRETS_FILE = 'adh-key.json' #'Make sure you have correctly renamed this file and you have uploaded it in this colab'
# Other configuration variables
_APPLICATION_NAME = 'ADH Campaign Overlap'
_CREDENTIALS_FILE = 'fcq-credentials.json'
_SCOPES = 'https://www.googleapis.com/auth/adsdatahub'
_DISCOVERY_URL_TEMPLATE = 'https://%s/$discovery/rest?version=%s&key=%s'
_FCQ_DISCOVERY_FILE = 'fcq-discovery.json'
_FCQ_SERVICE = 'adsdatahub.googleapis.com'
_FCQ_VERSION = 'v1'
_REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob'
_SCOPE = ['https://www.googleapis.com/auth/adsdatahub']
_TOKEN_URI = 'https://accounts.google.com/o/oauth2/token'
MAX_PAGE_SIZE = 50
"""
Explanation: API Configuration
The Developer Key is used to retrieve a discovery document, which is used to build the service that makes API calls.
The Client secret is used to authenticate to the API; it can be downloaded from the Google Cloud Console (make sure you have correctly renamed the JSON file to 'adh-key.json').
End of explanation
"""
#!/usr/bin/python
#
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def _GetCredentialsFromInstalledApplicationFlow():
"""Get new credentials using the installed application flow."""
flow = InstalledAppFlow.from_client_secrets_file(
CLIENT_SECRETS_FILE, scopes=_SCOPE)
flow.redirect_uri = _REDIRECT_URI # Set the redirect URI used for the flow.
auth_url, _ = flow.authorization_url(prompt='consent')
print ('Log into the Google Account you use to access the Full Circle Query '
'v2 API and go to the following URL:\n%s\n' % auth_url)
print('After approving the token, enter the verification code (if specified).')
code = input('Code: ')
try:
flow.fetch_token(code=code)
except InvalidGrantError as ex:
print('Authentication has failed: %s' % ex)
sys.exit(1)
credentials = flow.credentials
_SaveCredentials(credentials)
return credentials
def _LoadCredentials():
"""Loads and instantiates Credentials from JSON credentials file."""
with open(_CREDENTIALS_FILE, 'rb') as handler:
stored_creds = json.loads(handler.read())
creds = Credentials(client_id=stored_creds['client_id'],
client_secret=stored_creds['client_secret'],
token=None,
refresh_token=stored_creds['refresh_token'],
token_uri=_TOKEN_URI)
return creds
def _SaveCredentials(creds):
"""Save credentials to JSON file."""
stored_creds = {
'client_id': getattr(creds, '_client_id'),
'client_secret': getattr(creds, '_client_secret'),
'refresh_token': getattr(creds, '_refresh_token')
}
with open(_CREDENTIALS_FILE, 'w') as handler:
handler.write(json.dumps(stored_creds))
def GetCredentials():
"""Get stored credentials if they exist, otherwise return new credentials.
If no stored credentials are found, new credentials will be produced by
stepping through the Installed Application OAuth 2.0 flow with the specified
client secrets file. The credentials will then be saved for future use.
Returns:
A configured google.oauth2.credentials.Credentials instance.
"""
try:
creds = _LoadCredentials()
creds.refresh(Request())
except IOError:
creds = _GetCredentialsFromInstalledApplicationFlow()
return creds
def GetDiscoveryDocument():
"""Downloads the Full Circle Query v2 discovery document.
Downloads the Full Circle Query v2 discovery document to fcq-discovery.json
if it is accessible. If the file already exists, it will be overwritten.
Raises:
ValueError: raised if the discovery document is inaccessible for any reason.
"""
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
auth_session = AuthorizedSession(credentials)
discovery_response = auth_session.get(discovery_url)
if discovery_response.status_code == 200:
with open(_FCQ_DISCOVERY_FILE, 'w') as handler:
handler.write(discovery_response.text)
else:
raise ValueError('Unable to retrieve discovery document for api name "%s" '
'and version "%s" via discovery URL: %s'
% (_FCQ_SERVICE, _FCQ_VERSION, discovery_url))
def GetService():
"""Builds a configured Full Circle Query v2 API service.
Returns:
A googleapiclient.discovery.Resource instance configured for the Full Circle
Query v2 service.
"""
credentials = GetCredentials()
discovery_url = _DISCOVERY_URL_TEMPLATE % (
_FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY)
service = discovery.build(
'adsdatahub', _FCQ_VERSION, credentials=credentials,
discoveryServiceUrl=discovery_url)
return service
def GetServiceFromDiscoveryDocument():
"""Builds a configured Full Circle Query v2 API service via discovery file.
Returns:
A googleapiclient.discovery.Resource instance configured for the Full Circle
Query API v2 service.
"""
credentials = GetCredentials()
with open(_FCQ_DISCOVERY_FILE, 'rb') as handler:
discovery_doc = handler.read()
service = discovery.build_from_document(
service=discovery_doc, credentials=credentials)
return service
"""
Explanation: API Authentication - OAuth2.0 Flow
Utility functions to execute the OAuth2.0 flow
End of explanation
"""
try:
full_circle_query = GetService()
except IOError as ex:
print ('Unable to create ads data hub service - %s' % ex)
print ('Did you specify the client secrets file?')
sys.exit(1)
try:
# Execute the request.
response = full_circle_query.customers().list().execute()
except HttpError as e:
print (e)
sys.exit(1)
if 'customers' in response:
print ('ADH API Returned {} Ads Data Hub customers for the current user!'.format(len(response['customers'])))
for customer in response['customers']:
print(json.dumps(customer))
else:
print ('No customers found for current user.')
"""
Explanation: Actual Request to the Ads Data Hub Service API
End of explanation
"""
#@title Define the data source in BigQuery:
customer_id = 1 #@param
dataset_id = 1 #@param
big_query_project = 'bq_project_id' #@param Destination Project ID {type:"string"}
big_query_dataset = 'dataset_name' #@param Destination Dataset {type:"string"}
big_query_destination_table = "table_name" #@param {type:"string"}
query_name = "query_name" #@param {type:"string"}
"""
Explanation: Analysis 1: Impression/click campaign overlap
Step 1: Define analysis parameters
We will be looking at the overlap between users in the funnel for this example, but this methodology could be used for any groups with potential overlap (e.g. flight sales and hotel sales)
1.1 Data source:
First we will define which customer the data is coming from and where this data will be saved in BigQuery
We will also give the ADH query a name. This name must be unique and must not already exist in ADH.
You must have a BigQuery dataset and table set up in order to save results.
If you don't already have them, you can create them in the BigQuery UI
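Later cells concatenate these values into the fully-qualified destination table `project.dataset.table`. A small hypothetical helper (not part of the notebook) can assemble and lightly validate that path up front:

```python
def build_table_path(project, dataset, table):
    """Assemble a fully-qualified BigQuery table path: 'project.dataset.table'."""
    parts = (project, dataset, table)
    if not all(parts):
        raise ValueError('project, dataset and table must all be non-empty')
    return '.'.join(parts)

# build_table_path('bq_project_id', 'dataset_name', 'table_name')
# -> 'bq_project_id.dataset_name.table_name'
```

Failing fast on an empty component here is cheaper than debugging a rejected ADH query start request later.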
End of explanation
"""
#@title Define basic analysis parameters
group_type = 'dv360_insertion_order_id'#@param ["dv360_insertion_order_id", "dv360_line_item_id", "advertiser_id"]
overlap_type = 'impressions'#@param ["clicks", "impressions"]
start_date = '2019-08-01' #@param {type:"string"}
end_date = '2019-09-18' #@param {type:"string"}
"""
Explanation: 1.2 Analysis Parameters:
Next we will set up the basic analysis parameters.
You can run the analysis at an IO or LI level based on either impressions or clicks
Select the dates for which you want to run the analysis
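The dates must be ISO-formatted (YYYY-MM-DD) with the start not after the end. A hedged validation sketch, using the same `datetime.strptime` parsing the notebook applies later when building the query-start request:

```python
import datetime

def validate_date_range(start_date, end_date, fmt='%Y-%m-%d'):
    """Parse both dates and check that start_date <= end_date."""
    start = datetime.datetime.strptime(start_date, fmt)
    end = datetime.datetime.strptime(end_date, fmt)
    if start > end:
        raise ValueError('start_date %s is after end_date %s' % (start_date, end_date))
    return start, end

# validate_date_range('2019-08-01', '2019-09-18')  # parses cleanly
```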
End of explanation
"""
#@title Define the IO/LI ids (comma separated) that will be used to select each of the 3 groups. Leave a group as -1 to exclude it.
group_1_ids = '10048874, 9939957'#@param {type:"string"}
group_2_ids = '10048146, 9956341'#@param {type:"string"}
group_3_ids = '10048875, 9939959'#@param {type:"string"}
"""
Explanation: 1.3 Funnel groups:
For each of the groups list out the IO or LI ids you would like to include for the analysis
End of explanation
"""
#@title Define friendly names (labels) for each group
group_1_lb = 'upper funnel'#@param {type:"string"}
group_2_lb = 'mid funnel'#@param {type:"string"}
group_3_lb = 'lower funnel'#@param {type:"string"}
if group_1_ids == '-1':
group_1_lb = ''
if group_2_ids == '-1':
group_2_lb = ''
if group_3_ids == '-1':
group_3_lb = ''
"""
Explanation: 1.4 Friendly Names:
Set the friendly name for each of the groups (e.g. upper funnel). These will be used in the visualisation
End of explanation
"""
#assemble dynamic content dict
dc = {}
dc['group_type'] = group_type
dc['overlap_type'] = overlap_type
dc['start_date'] = start_date
dc['end_date'] = end_date
"""
Explanation: Step 2: Assemble and run Query
2.1 Assemble Query
Set all the variables for the query
End of explanation
"""
q1_1 = '''
### Step 1: Create a label for different targeted audiences and impressions
CREATE TABLE interactions_by_user_id_and_g AS (
SELECT
user_id,
SUM(IF(event.{group_type} IN UNNEST(@group_1_ids),1,0)) AS imp_g_1,
SUM(IF(event.{group_type} IN UNNEST(@group_2_ids),1,0)) AS imp_g_2,
SUM(IF(event.{group_type} IN UNNEST(@group_3_ids),1,0)) AS imp_g_3
FROM adh.cm_dt_{overlap_type} as imp
WHERE
event.{group_type} IN UNNEST(ARRAY_CONCAT(@group_1_ids,@group_2_ids,@group_3_ids))
AND
user_id != '0'
GROUP BY 1
);'''
"""
Explanation: Create the Query
For each user id:
Part 1 - find interactions:
* Identify whether there has been an interaction (impression or click) for any of the listed IOs or LIs in each of the groups, by comparing the IO or LI id (from the field defined in 'group_type') to the predefined list
* Filter for the configured IDs: check whether the IO or LI ID column value exists in the defined list
* Filter out zeroed-out user IDs
End of explanation
"""
q1_2 = '''
#Part 2 - calculate metrics
SELECT
COUNT(interactions.user_id) AS Unique_Cookies,
SUM(IF(interactions.imp_g_1 > 0 AND interactions.imp_g_2 + interactions.imp_g_3 = 0,1,0)) AS cookies_exclusive_g_1,
SUM(IF(interactions.imp_g_2 > 0 AND interactions.imp_g_1 + interactions.imp_g_3 = 0,1,0)) AS cookies_exclusive_g_2,
SUM(IF(interactions.imp_g_3 > 0 AND interactions.imp_g_1 + interactions.imp_g_2 = 0,1,0)) AS cookies_exclusive_g_3,
SUM(IF(interactions.imp_g_1 > 0 AND interactions.imp_g_2 > 0 AND interactions.imp_g_3 = 0 ,1,0)) AS cookies_g_1_2,
SUM(IF(interactions.imp_g_1 > 0 AND interactions.imp_g_3 > 0 AND interactions.imp_g_2 = 0,1,0)) AS cookies_g_1_3,
SUM(IF(interactions.imp_g_3 > 0 AND interactions.imp_g_2 > 0 AND interactions.imp_g_1 = 0,1,0)) AS cookies_g_2_3,
SUM(IF(interactions.imp_g_1 > 0 AND interactions.imp_g_2 > 0 AND interactions.imp_g_3 > 0 ,1,0)) AS cookies_g_1_2_3,
#3 count total impressions
SUM(interactions.imp_g_1 + interactions.imp_g_2 + interactions.imp_g_3) AS all_impressions,
#4 count total users
SUM(1) AS total_cookies
FROM
tmp.interactions_by_user_id_and_g AS interactions
'''
"""
Explanation: Part 2 - calculate metrics:
* For each of the groups and group combinations, identify whether any impressions have been logged
* Identify the number of impressions from zeroed-out users, total impressions, total users and the % of zeroed-out users
End of explanation
"""
query_text = (q1_1 + q1_2).format(**dc)
print('Final BigQuery SQL:')
print(query_text)
"""
Explanation: Put the query together
Join the 3 parts of the query and use the python format function to pass through the parameters set in step 1
End of explanation
"""
parameters_type = {
"group_1_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
},
"group_2_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
},
"group_3_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
}
}
"""
Explanation: Set up group parameters
End of explanation
"""
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_create_body = {
'name': query_name,
'title': query_name,
'queryText': query_text,
'parameterTypes': parameters_type
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute()
new_query_name = new_query["name"]
except HttpError as e:
print(e)
sys.exit(1)
print('New query %s created for customer ID "%s":' % (new_query_name, customer_id))
print(json.dumps(new_query))
"""
Explanation: Create the Query in ADH
End of explanation
"""
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table
CUSTOMER_ID = customer_id
DATASET_ID = dataset_id
QUERY_NAME = query_name
DEST_TABLE = destination_table_full_path
#Dates
format_str = '%Y-%m-%d' # The format
start_date_obj = datetime.datetime.strptime(start_date, format_str)
end_date_obj = datetime.datetime.strptime(end_date, format_str)
START_DATE = {
"year": start_date_obj.year,
"month": start_date_obj.month,
"day": start_date_obj.day
}
END_DATE = {
"year": end_date_obj.year,
"month": end_date_obj.month,
"day": end_date_obj.day
}
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_start_body = {
'spec': {
'adsDataCustomerId': DATASET_ID,
'startDate': START_DATE,
'endDate': END_DATE,
'parameterValues':
{"group_1_ids":
{"value": group_1_ids},
"group_2_ids":
{"value": group_2_ids},
"group_3_ids":
{"value": group_3_ids},
}
},
'destTable': DEST_TABLE,
'customerId': CUSTOMER_ID
}
try:
# Execute the request.
operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=new_query_name).execute()
except HttpError as e:
print(e)
sys.exit(1)
print('Running query with name "%s" via the following operation:' % query_name)
print(json.dumps(operation))
"""
Explanation: Check the query exists in ADH
Full Query
2.2 Execute the query
End of explanation
"""
import time
statusDone = False
while statusDone is False:
print("waiting for the job to complete...")
updatedOperation = full_circle_query.operations().get(name=operation['name']).execute()
if updatedOperation.get('done'):
statusDone = True
if(statusDone == False):
time.sleep(5)
print("Job completed... Getting results")
#run bigQuery query
dc = {}
dc['table'] = big_query_dataset + '.' + big_query_destination_table
q1 = '''
select * from {table}
'''.format(**dc)
df1 = pd.io.gbq.read_gbq(q1, project_id=big_query_project, dialect='standard')
print('Total Cookies: ' + str(df1.total_cookies[0]))
"""
Explanation: 2.3 Retrieve the result from BigQuery
Pass the query (q1) and billing project id in order to run the query
We are using a direct pandas integration with BigQuery in order to run this
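The polling loop in the code cell above runs indefinitely until the operation reports done. A bounded variant (a hypothetical helper, not part of the notebook) avoids waiting forever if the job stalls; `fetch_status` stands in for the ADH operations call:

```python
import time

def poll_until_done(fetch_status, interval=5, timeout=600):
    """Call fetch_status() until it returns a dict with done=True, or time out.

    fetch_status is any zero-argument callable returning an operation dict,
    e.g. lambda: service.operations().get(name=operation['name']).execute().
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        op = fetch_status()
        if op.get('done'):
            return op
        time.sleep(interval)
    raise TimeoutError('operation did not complete within %s seconds' % timeout)
```

Using `.get('done')` also sidesteps the Python 2-only `has_key` call used in the loop above.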
End of explanation
"""
from __future__ import division
# define main variables
cookies = {}
#Total
cookies['all'] = df1.total_cookies[0]
cookies['g1'] = df1.cookies_exclusive_g_1[0]
cookies['g2'] = df1.cookies_exclusive_g_2[0]
cookies['g3'] = df1.cookies_exclusive_g_3[0]
cookies['g12'] = df1.cookies_g_1_2[0]
cookies['g13'] = df1.cookies_g_1_3[0]
cookies['g23'] = df1.cookies_g_2_3[0]
cookies['g123'] = df1.cookies_g_1_2_3[0]
#percentage
cookies_p = {}
cookies_p['all'] = 1
cookies_p['g1'] = cookies['g1']/cookies['all']
cookies_p['g2'] = cookies['g2']/cookies['all']
cookies_p['g3'] = cookies['g3']/cookies['all']
cookies_p['g12'] = cookies['g12']/cookies['all']
cookies_p['g13'] = cookies['g13']/cookies['all']
cookies_p['g23'] = cookies['g23']/cookies['all']
cookies_p['g123'] = cookies['g123']/cookies['all']
# define table labels from variables at the start
table_labels = {
'g1':'1 - ' + group_1_lb,
'g2':'2 - ' + group_2_lb,
'g3':'3 - ' + group_3_lb,
'g12':'4 - ' + group_1_lb + ' and ' + group_2_lb,
'g13':'5 - ' + group_1_lb + ' and ' + group_3_lb,
'g23':'6 - ' + group_2_lb + ' and ' + group_3_lb,
'g123':'7 - ' + group_1_lb + ', '+ group_2_lb + ' and ' + group_3_lb,
'all': 'total'
}
#display results in table
def create_df_series(data_dict,labels):
retVal = {}
for key in data_dict:
data = data_dict[key]
label = labels[key]
retVal[label] = data
return retVal
col_cookies_percent = pd.Series(create_df_series(cookies_p,table_labels))
col_cookies = pd.Series(create_df_series(cookies,table_labels))
df_1_summary = pd.DataFrame({'Cookies':col_cookies,'cookies (%)':col_cookies_percent})
df_1_summary
"""
Explanation: Step 3: Calculate auxiliary metrics
For each group and combination of groups calculate the total number of cookies and the percentage of cookies
Label each group with a friendly name and display the output in a table
End of explanation
"""
#create diagram image
plt.figure(figsize=(20,15))
plt.title("Cookie Overlap across %s, %s and %s"%(group_1_lb, group_2_lb, group_3_lb))
venn_data_subset = [
cookies_p['g1'],cookies_p['g2'],cookies_p['g12'],
cookies_p['g3'],cookies_p['g13'],cookies_p['g23'],cookies_p['g123']
]
v = venn3(
subsets = venn_data_subset,
set_labels = (group_1_lb, group_2_lb, group_3_lb)
)
#replace diagram labels
def replace_diagram_labels(v):
for i, sl in enumerate(v.subset_labels):
if sl is not None:
sl.set_text(str(round(float(sl.get_text())*100,1))+'%\nof all cookies')
#plot diagram
replace_diagram_labels(v)
plt.show()
"""
Explanation: Step 4 - Display the output
End of explanation
"""
#@title Define floodlight ids (comma separated) that will be used as conversion
floodlight_activity_ids = '3716682,3714314,3716571,3714314,1399226'#@param
dc['activity_ids'] = floodlight_activity_ids
query_name_2 = 'query_name'#@param
big_query_destination_table_2 = 'table_name'#@param
"""
Explanation: What is the overlap between your groups?
Are your upper funnel users moving through the funnel?
Analysis 2: Conversion Rate Impact on overlap
Step 1: Define additional analysis parameters
Set the list of floodlight activities to use for attributing conversions
All other parameters will be fetched from previous analysis
If overlap_type is set to clicks, the query will perform post-click attribution; if set to impressions, it will perform post-impression attribution
We will need to create a new unique query name and set a new BigQuery destination table
End of explanation
"""
q2_1 = '''
#Part 1: Get interactions
with interactions AS (
SELECT
user_id,
IF(event.{group_type} IN UNNEST(@group_1_ids),1,0) AS imp_g_1,
IF(event.{group_type} IN UNNEST(@group_2_ids),1,0) AS imp_g_2,
IF(event.{group_type} IN UNNEST(@group_3_ids),1,0) AS imp_g_3,
event.event_time AS interaction_event_time
FROM adh.cm_dt_{overlap_type}
WHERE event.{group_type} IN UNNEST(ARRAY_CONCAT(@group_1_ids,@group_2_ids,@group_3_ids))
AND user_id <> '0' #remove zeroed out ids
),'''
"""
Explanation: Step 2: Assemble and Run the Query
2.1: Assemble Query
Part 1 - Get interactions: If the IO/LI is in the defined list of IOs/LIs for each of your groups - count as 1 from the click or impression data (overlap type)
Part 2 - Filter date data
Part 3 - Get conversions:
Create a unique id for user-id + event time so that each event can be counted as a distinct event
Find all the conversion events in your defined list
Part 4 - Find the interactions that lead to conversions:
Join all the interactions (clicks or impressions) to all the conversions with a left join to see which interactions have a conversion using user id as the join key and only considering interactions that happened before the conversion event
Part 5 - For each user find the number of conversions and impressions
Count the distinct number of conversions per user and the number of impressions in each of the 3 funnel groups (IO / LIs)
Part 6 - For each combination of groups:
For each group find the number of cookies and find the number of conversions
Part 1 - Get interactions
Looking at the impressions or clicks table
Create a column for each of the groups
If the IO/LI id is in the defined list of IOs/LIs for each of your groups - count as 1 from the click or impression data (overlap type)
Data is grouped by user ID
Filter for the configured IDs: check whether the IO or LI ID column value exists in the defined list
Filter out zeroed-out user IDs
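The attribution join described above can be illustrated with a toy pure-Python sketch (made-up user ids and integer timestamps, not the actual ADH execution): interactions are left-joined to conversions on user_id, and an interaction keeps a conversion timestamp only if it happened first.

```python
# Toy interaction and conversion records (hypothetical data).
interactions = [
    {'user_id': '001', 'interaction_event_time': 1},
    {'user_id': '001', 'interaction_event_time': 5},
    {'user_id': '002', 'interaction_event_time': 2},
]
conversions = [{'user_id': '001', 'conversion_event_time': 3}]

# Index conversions by user, mimicking the join key.
conv_by_user = {}
for c in conversions:
    conv_by_user.setdefault(c['user_id'], []).append(c['conversion_event_time'])

attributed = []
for imp in interactions:
    times = conv_by_user.get(imp['user_id'])
    if times is None:
        # LEFT JOIN keeps interactions with no conversion (t1.user_id IS NULL).
        attributed.append(dict(imp, conversion_event_time=None))
    else:
        # Keep only interactions that happened before the conversion event.
        for t in times:
            if imp['interaction_event_time'] < t:
                attributed.append(dict(imp, conversion_event_time=t))
```

Here user 001's interaction at time 1 is attributed to the conversion at time 3, the interaction at time 5 is dropped, and user 002's interaction survives with no conversion, matching the SQL WHERE clause.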
End of explanation
"""
q2_2 = '''
conversions AS (
SELECT
user_id,
event.event_time AS conversion_event_time
FROM adh.cm_dt_activities
WHERE CAST(event.activity_id AS INT64) IN UNNEST(@activity_ids)
AND user_id <> '0'
#GROUP BY 1
),
'''
"""
Explanation: output example:
<table>
<tr>
<th>user_id</th>
<th>imp_g_1</th>
<th>imp_g_2</th>
<th>imp_g_3</th>
<th>interaction_event_time</th>
</tr>
<tr>
<td>001</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>timestamp</td>
</tr>
<tr>
<td>002</td>
<td>0</td>
<td>0</td>
<td>1</td>
<td>timestamp</td>
</tr>
<tr>
<td>003</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>timestamp</td>
</tr>
<tr>
<td>001</td>
<td>0</td>
<td>1</td>
<td>0</td>
<td>timestamp</td>
</tr>
<tr>
<td>001</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>timestamp</td>
</tr>
</table>
Part 2: Get conversions
Looking at the activity table
Filter the data to find the conversion events in your defined list of floodlight activity ids
End of explanation
"""
q2_3 = '''
#define which of these interactions led to a conversion
impressions_and_conversions AS(
SELECT
t0.user_id AS user_id,
t1.conversion_event_time,
t0.imp_g_1 AS imp_g_1,
t0.imp_g_2 AS imp_g_2,
t0.imp_g_3 AS imp_g_3
FROM interactions As t0
LEFT JOIN conversions t1 ON t0.user_id = t1.user_id
WHERE
t1.user_id IS NULL OR
interaction_event_time < conversion_event_time
),
'''
"""
Explanation: output example:
<table>
<tr>
<th>user_id</th>
<th>conversion_event_time</th>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
</tr>
<tr>
<td>003</td>
<td>timestamp</td>
</tr>
<tr>
<td>004</td>
<td>timestamp</td>
</tr>
</table>
Part 3 - Find the interactions that lead to conversions:
Join all the interactions (clicks or impressions) to all the conversions with a left join to see which interactions have a conversion using user id as the join key and only considering interactions that happened before the conversion event
End of explanation
"""
q2_4 = '''
#aggregate interactions per user
results_by_user_id AS (
SELECT
user_id,
COUNT(DISTINCT conversion_event_time) AS conversions,
SUM(imp_g_1) AS imp_g_1,
SUM(imp_g_2) AS imp_g_2,
SUM(imp_g_3) AS imp_g_3
FROM impressions_and_conversions
GROUP BY 1
)
'''
"""
Explanation: example output:
<table>
<tr>
<th>user_id</th>
<th>conversion_event_tim</th>
<th>imp_g_1</th>
<th>imp_g_2</th>
<th>imp_g_3</th>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>001</td>
<td>timestamp</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>002</td>
<td>timestamp</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>003</td>
<td>timestamp</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</table>
Part 4 - For each user find the number of conversions and impressions
Count the distinct number of conversions per user and the number of impressions in each of the 3 funnel groups (IO / LIs)
End of explanation
"""
q2_5 = '''
#calculate group overlaps per user
SELECT
#cookie count
SUM(IF(imp_g_1 > 0 AND imp_g_2 + imp_g_3 = 0,1,0)) AS cookies_exclusive_g_1,
SUM(IF(imp_g_2 > 0 AND imp_g_1 + imp_g_3 = 0,1,0)) AS cookies_exclusive_g_2,
SUM(IF(imp_g_3 > 0 AND imp_g_1 + imp_g_2 = 0,1,0)) AS cookies_exclusive_g_3,
SUM(IF(imp_g_1 > 0 AND imp_g_2 > 0 AND imp_g_3 = 0 ,1,0)) AS cookies_g_1_2,
SUM(IF(imp_g_1 > 0 AND imp_g_3 > 0 AND imp_g_2 = 0,1,0)) AS cookies_g_1_3,
SUM(IF(imp_g_3 > 0 AND imp_g_2 > 0 AND imp_g_1 = 0,1,0)) AS cookies_g_2_3,
SUM(IF(imp_g_1 > 0 AND imp_g_2 > 0 AND imp_g_3 > 0 ,1,0)) AS cookies_g_1_2_3,
#conversion count
SUM(IF(imp_g_1 > 0 AND imp_g_2 + imp_g_3 = 0,conversions,0)) AS conversions_exclusive_g_1,
SUM(IF(imp_g_2 > 0 AND imp_g_1 + imp_g_3 = 0,conversions,0)) AS conversions_exclusive_g_2,
SUM(IF(imp_g_3 > 0 AND imp_g_1 + imp_g_2 = 0,conversions,0)) AS conversions_exclusive_g_3,
SUM(IF(imp_g_1 > 0 AND imp_g_2 > 0 AND imp_g_3 = 0 ,conversions,0)) AS conversions_g_1_2,
SUM(IF(imp_g_1 > 0 AND imp_g_3 > 0 AND imp_g_2 = 0,conversions,0)) AS conversions_g_1_3,
SUM(IF(imp_g_3 > 0 AND imp_g_2 > 0 AND imp_g_1 = 0,conversions,0)) AS conversions_g_2_3,
SUM(IF(imp_g_1 > 0 AND imp_g_2 > 0 AND imp_g_3 > 0 ,conversions,0)) AS conversions_g_1_2_3,
#total metrics count
SUM(conversions) AS total_conversions,
COUNT(1) AS total_cookies,
SUM(conversions) / COUNT(1) As total_conversions_per_cookie
FROM results_by_user_id
'''
"""
Explanation: example output:
<table>
<tr>
<th>user_id</th>
<th>conversions</th>
<th>imp_g_1</th>
<th>imp_g_2</th>
<th>imp_g_3</th>
</tr>
<tr>
<td>001</td>
<td>3</td>
<td>1</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>002</td>
<td>1</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>003</td>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
</table>
Part 5 - For each combination of groups:
For each group find the number of cookies and find the number of conversions
End of explanation
"""
dc = {}
dc['group_type'] = group_type
dc['overlap_type'] = overlap_type
dc['activity_ids'] = floodlight_activity_ids
q2 = (q2_1 + q2_2 + q2_3 + q2_4 + q2_5).format(**dc)
print('Final BigQuery SQL:')
print(q2)
"""
Explanation: example output:
<table>
<tr>
<th>cookies_exclusive_g_1</th>
<th>cookies_exclusive_g_2</th>
<th>cookies_exclusive_g_3</th>
<th>cookies_g_1_2</th>
<th>cookies_g_1_3</th>
<th>cookies_g_2_3</th>
<th>cookies_g_1_2_3</th>
<th>conversions_exclusive_g_1</th>
<th>conversions_exclusive_g_2</th>
<th>conversions_exclusive_g_3</th>
<th>conversions_g_1_2</th>
<th>conversions_g_1_3</th>
<th>conversions_g_2_3</th>
<th>conversions_g_1_2_3</th>
<th>total_conversions</th>
<th>total_cookies</th>
<th>total_conversions_per_cookie</th>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</table>
Assemble the Query and display the output
End of explanation
"""
parameters_type = {
"group_1_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
},
"group_2_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
},
"group_3_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
},
"activity_ids": {
"defaultValue": {
"value": ""
},
"type": {
"arrayType": {
"type": "INT64"
}
}
}
}
"""
Explanation: Set the parameter types
ADH allows you to parameterize queries with typed variables
We will pass these parameter types in the request when we create the query, so we need to define them here
End of explanation
"""
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_create_body = {
'name': query_name_2,
'title': query_name_2,
'queryText': q2,
'parameterTypes': parameters_type
}
try:
# Execute the request.
new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute()
new_query_name = new_query["name"]
except HttpError as e:
print(e)
sys.exit(1)
print('New query created for customer ID "%s":' % customer_id)
print(json.dumps(new_query))
"""
Explanation: Create the query in ADH
End of explanation
"""
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_2
CUSTOMER_ID = customer_id
DATASET_ID = dataset_id
QUERY_NAME = query_name
DEST_TABLE = destination_table_full_path
#Dates
format_str = '%Y-%m-%d' # The format
start_date_obj = datetime.datetime.strptime(start_date, format_str)
end_date_obj = datetime.datetime.strptime(end_date, format_str)
START_DATE = {
"year": start_date_obj.year,
"month": start_date_obj.month,
"day": start_date_obj.day
}
END_DATE = {
"year": end_date_obj.year,
"month": end_date_obj.month,
"day": end_date_obj.day
}
try:
full_circle_query = GetService()
except IOError as ex:
print('Unable to create ads data hub service - %s' % ex)
print('Did you specify the client secrets file?')
sys.exit(1)
query_start_body = {
'spec': {
'adsDataCustomerId': DATASET_ID,
'startDate': START_DATE,
'endDate': END_DATE,
'parameterValues':
{"group_1_ids":
{"value": group_1_ids},
"group_2_ids":
{"value": group_2_ids},
"group_3_ids":
{"value": group_3_ids},
"activity_ids":
{"value": floodlight_activity_ids}
}
},
'destTable': DEST_TABLE,
'customerId': CUSTOMER_ID
}
try:
# Execute the request.
operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=new_query_name).execute()
except HttpError as e:
print(e)
sys.exit(1)
print('Running query with name "%s" via the following operation:' % query_name)
print(json.dumps(operation))
"""
Explanation: 2.2: Run the query
End of explanation
"""
statusDone = False
while statusDone is False:
print("waiting for the job to complete...")
updatedOperation = full_circle_query.operations().get(name=operation['name']).execute()
if updatedOperation.get('done'):
statusDone = True
if(statusDone == False):
time.sleep(5)
print("Job completed... Getting results")
#run bigQuery query
dc = {}
dc['table'] = big_query_dataset + '.' + big_query_destination_table_2
q1 = '''
select * from {table}
'''.format(**dc)
df2 = pd.io.gbq.read_gbq(q1, project_id=big_query_project, dialect='standard')
print(dc['table'])
print('Total Cookies: ' + str(df2.total_cookies[0]))
print('Total Conversions: ' + str(df2.total_conversions[0]))
"""
Explanation: 2.3 Get the results from BigQuery
End of explanation
"""
# define main variables
cookies = {}
cookies['all'] = df2.total_cookies[0]
cookies['g1'] = df2.cookies_exclusive_g_1[0]
cookies['g2'] = df2.cookies_exclusive_g_2[0]
cookies['g3'] = df2.cookies_exclusive_g_3[0]
cookies['g12'] = df2.cookies_g_1_2[0]
cookies['g13'] = df2.cookies_g_1_3[0]
cookies['g23'] = df2.cookies_g_2_3[0]
cookies['g123'] = df2.cookies_g_1_2_3[0]
cookies_p = {}
cookies_p['all'] = 1
cookies_p['g1'] = cookies['g1']/cookies['all']
cookies_p['g2'] = cookies['g2']/cookies['all']
cookies_p['g3'] = cookies['g3']/cookies['all']
cookies_p['g12'] = cookies['g12']/cookies['all']
cookies_p['g13'] = cookies['g13']/cookies['all']
cookies_p['g23'] = cookies['g23']/cookies['all']
cookies_p['g123'] = cookies['g123']/cookies['all']
# define table labels
table_labels = {
'g1':'1 - ' + group_1_lb,
'g2':'2 - ' + group_2_lb,
'g3':'3 - ' + group_3_lb,
'g12':'4 - ' + group_1_lb + ' and ' + group_2_lb,
'g13':'5 - ' + group_1_lb + ' and ' + group_3_lb,
'g23':'6 - ' + group_2_lb + ' and ' + group_3_lb,
'g123':'7 - ' + group_1_lb + ', '+ group_2_lb + ' and ' + group_3_lb,
'all': 'total'
}
#display results in table
def create_df_series(data_dict,labels):
retVal = {}
for key in data_dict:
data = data_dict[key]
label = labels[key]
retVal[label] = data
return retVal
col_cookies_percent = pd.Series(create_df_series(cookies_p,table_labels))
col_cookies = pd.Series(create_df_series(cookies,table_labels))
df_2_summary = pd.DataFrame({'Cookies':col_cookies,'cookies (%)':col_cookies_percent})
df_2_summary
"""
Explanation: Step 3: Set up the data to display
3.1 Define the variables for displaying in the chart
Use the response from the query to set all the variables for the chart
End of explanation
"""
#calculate cookie overlap across groups
all_cookies = df2.total_cookies[0]
g1 = round((df2.cookies_exclusive_g_1[0]/all_cookies*100),2)
g2 = round((df2.cookies_exclusive_g_2[0]/all_cookies)*100, 2)
g3 = round((df2.cookies_exclusive_g_3[0]/all_cookies)*100, 2)
g12 = round((df2.cookies_g_1_2[0]/all_cookies)*100, 2)
g13 = round((df2.cookies_g_1_3[0]/all_cookies)*100, 2)
g23 = round((df2.cookies_g_2_3[0]/all_cookies)*100, 2)
g123 = round((df2.cookies_g_1_2_3[0]/all_cookies)*100, 2)
#calculate conversions per cookie metric
all_conversions = df2.total_conversions[0]
g1_conv_user = round((df2.conversions_exclusive_g_1[0]/df2.cookies_exclusive_g_1[0]*100),3)
g2_conv_user = round((df2.conversions_exclusive_g_2[0]/df2.cookies_exclusive_g_2[0]*100),3)
g3_conv_user = round((df2.conversions_exclusive_g_3[0]/df2.cookies_exclusive_g_3[0]*100),3)
g12_conv_user = round((df2.conversions_g_1_2[0]/df2.cookies_g_1_2[0]*100),3)
g13_conv_user = round((df2.conversions_g_1_3[0]/df2.cookies_g_1_3[0]*100),3)
g23_conv_user = round((df2.conversions_g_2_3[0]/df2.cookies_g_2_3[0]*100),3)
g123_conv_user = round((df2.conversions_g_1_2_3[0]/df2.cookies_g_1_2_3[0]*100),3)
"""
Explanation: 3.2 Calculate the percentage cookie overlap and conversion rate and format the values
Calculate the percentage of cookie overlap and the percentage of conversions for each group in order to display on the chart
End of explanation
"""
conv_per_cookie = [g1_conv_user, g2_conv_user, g12_conv_user, g3_conv_user,g13_conv_user,g23_conv_user,g123_conv_user]
subsets = ['g1', 'g2', 'g12', 'g3', 'g13', 'g23', 'g123']
plt.figure(figsize=(25,15))
plt.title("Cookie Overlap across %s, %s and %s"%(group_1_lb, group_2_lb, group_3_lb))
v = venn3(subsets = (g1, g2, g12, g3, g13, g23, g123), set_labels = (group_1_lb, group_2_lb, group_3_lb))
def replace_diagram_labels(v):
for i, sl in enumerate(v.subset_labels):
if sl is not None:
print(table_labels[subsets[i]] +': '+ sl.get_text()+'% of all cookies. '+str(conv_per_cookie[i])+'% cvr')
sl.set_text(sl.get_text()+'% of all cookies. \n'+str(conv_per_cookie[i])+'% cvr')
replace_diagram_labels(v)
plt.show()
"""
Explanation: Step 4: Display the output
Plot the chart
End of explanation
"""
|
tcstewar/testing_notebooks | Data extraction from Nengo.ipynb | gpl-2.0 | model = nengo.Network()
with model:
def stim_a_func(t):
return np.sin(t*2*np.pi)
stim_a = nengo.Node(stim_a_func)
a = nengo.Ensemble(n_neurons=50, dimensions=1)
nengo.Connection(stim_a, a)
def stim_b_func(t):
return np.cos(t*np.pi)
stim_b = nengo.Node(stim_b_func)
b = nengo.Ensemble(n_neurons=50, dimensions=1)
nengo.Connection(stim_b, b)
c = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
nengo.Connection(a, c[0])
nengo.Connection(b, c[1])
d = nengo.Ensemble(n_neurons=50, dimensions=1)
def multiply(x):
return x[0] * x[1]
nengo.Connection(c, d, function=multiply)
data = []
def output_function(t, x):
data.append(x)
output = nengo.Node(output_function, size_in=1)
nengo.Connection(d, output, synapse=0.03)
"""
Explanation: Using Nengo to define a model to be run with a different simulator
This is meant as an example script to take a standard Nengo model and extract out the information needed to simulate it outside of Nengo. When developing a new backend, we often use this as a first step to figure out the process that would need to be automated.
Step 1: Define the model
End of explanation
"""
sim = nengo.Simulator(model)
sim.run(2)
pylab.plot(data)
"""
Explanation: Step 2: Run it in normal Nengo
End of explanation
"""
for ens in model.all_ensembles:
print('Ensemble %d' % id(ens))
print(' number of neurons: %d' % ens.n_neurons)
print(' tau_rc: %g' % ens.neuron_type.tau_rc)
print(' tau_ref: %g' % ens.neuron_type.tau_ref)
print(' bias: %s' % sim.data[ens].bias)
"""
Explanation: Step 3: Extract data about groups of neurons
End of explanation
"""
for node in model.all_nodes:
if node.size_out > 0:
print('Input node %d' % id(node))
print('Function to call: %s' % node.output)
"""
Explanation: Step 4: Extract data about inputs
End of explanation
"""
for node in model.all_nodes:
if node.size_in > 0:
print('Output node %d' % id(node))
print('Function to call: %s' % node.output)
"""
Explanation: Step 5: Extract data about outputs
End of explanation
"""
for conn in model.all_connections:
if isinstance(conn.pre_obj, nengo.Ensemble) and isinstance(conn.post_obj, nengo.Ensemble):
print('Connection from %d to %d' % (id(conn.pre_obj), id(conn.post_obj)))
print(' synapse time constant: %g' % conn.synapse.tau)
decoder = sim.data[conn].weights
transform = nengo.utils.builder.full_transform(conn, allow_scalars=False)
encoder = sim.data[conn.post_obj].scaled_encoders
print(' decoder: %s' % decoder)
print(' transform: %s' % transform)
print(' encoder: %s' % encoder)
"""
Explanation: Step 6: Extract data about connections between Ensembles
End of explanation
"""
for conn in model.all_connections:
if isinstance(conn.pre_obj, nengo.Node) and isinstance(conn.post_obj, nengo.Ensemble):
print('Connection from input %d to %d' % (id(conn.pre_obj), id(conn.post_obj)))
print(' synapse time constant: %g' % conn.synapse.tau)
transform = nengo.utils.builder.full_transform(conn, allow_scalars=False)
encoder = sim.data[conn.post_obj].scaled_encoders
print(' transform: %s' % transform)
print(' encoder: %s' % encoder)
"""
Explanation: Step 7: Extract data about connections from inputs
End of explanation
"""
for conn in model.all_connections:
if isinstance(conn.pre_obj, nengo.Ensemble) and isinstance(conn.post_obj, nengo.Node):
print('Connection from %d to output %d' % (id(conn.pre_obj), id(conn.post_obj)))
print(' synapse time constant: %g' % conn.synapse.tau)
decoder = sim.data[conn].weights
transform = nengo.utils.builder.full_transform(conn, allow_scalars=False)
print(' decoder: %s' % decoder)
print(' transform: %s' % transform)
"""
Explanation: Step 8: Extract data about connections to outputs
End of explanation
"""
|
GPflow/GPflowOpt | doc/source/notebooks/hyperopt.ipynb | apache-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
# Loading airline data
import numpy as np
data = np.load('airline.npz')
X_train, Y_train = data['X_train'], data['Y_train']
D = Y_train.shape[1];
"""
Explanation: Bayesian Optimization of Hyperparameters
Vincent Dutordoir, Joachim van der Herten
Introduction
The paper Practical Bayesian Optimization of Machine Learning algorithms by Snoek et al. 2012 describes the
use of Bayesian optimization for hyperparameter optimization. In this paper, the (at the time) state-of-the-art test error for convolutional neural networks on the CIFAR-10 dataset was improved significantly by optimizing the parameters of the training process with Bayesian optimization.
In this notebook we demonstrate the principle by optimizing the starting point of the maximum likelihood estimation of a GP. Note that we use a GP to optimize the initial hyperparameter values of another GP.
End of explanation
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('Time (years)')
ax.set_ylabel('Airline passengers ($10^3$)')
ax.plot(X_train.flatten(),Y_train.flatten(), c='b')
ax.set_xticklabels([1949, 1952, 1955, 1958, 1961, 1964])
plt.tight_layout()
"""
Explanation: Data set
The data to illustrate hyperparameter optimization is the well-known airline passenger volume data. It is a one-dimensional time series of the passenger volumes of airlines over time. We wish to use it to make forecasts. Plotting the data below, it is clear that the data contains a pattern.
End of explanation
"""
from gpflow.kernels import RBF, Cosine, Linear, Bias, Matern52
from gpflow import transforms
from gpflow.gpr import GPR
Q = 10 # nr of terms in the sum
max_iters = 1000
# Trains a model with a spectral mixture kernel, given an ndarray of 2Q frequencies and lengthscales
def create_model(hypers):
f = np.clip(hypers[:Q], 0, 5)
weights = np.ones(Q) / Q
lengths = hypers[Q:]
kterms = []
for i in range(Q):
rbf = RBF(D, lengthscales=lengths[i], variance=1./Q)
rbf.lengthscales.transform = transforms.Exp()
cos = Cosine(D, lengthscales=f[i])
kterms.append(rbf * cos)
k = np.sum(kterms) + Linear(D) + Bias(D)
m = GPR(X_train, Y_train, kern=k)
return m
X_test, X_complete = data['X_test'], data['X_complete']
def plotprediction(m):
# Perform prediction
mu, var = m.predict_f(X_complete)
# Plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('Time (years)')
ax.set_ylabel('Airline passengers ($10^3$)')
ax.set_xticklabels([1949, 1952, 1955, 1958, 1961, 1964, 1967, 1970, 1973])
ax.plot(X_train.flatten(),Y_train.flatten(), c='b')
ax.plot(X_complete.flatten(), mu.flatten(), c='g')
lower = mu - 2*np.sqrt(var)
upper = mu + 2*np.sqrt(var)
ax.plot(X_complete, upper, 'g--', X_complete, lower, 'g--', lw=1.2)
ax.fill_between(X_complete.flatten(), lower.flatten(), upper.flatten(),
color='g', alpha=.1)
plt.tight_layout()
m = create_model(np.ones((2*Q,)))
m.optimize(maxiter=max_iters)
plotprediction(m)
"""
Explanation: Modeling
To forecast this timeseries, we will pick up its pattern using a Spectral Mixture kernel (Wilson et al, 2013). Essentially, this kernel is a sum of $Q$ products of Cosine and RBF kernels. For this one-dimensional problem each term of the sum has 4 hyperparameters. To account for the upward trend, we also add a Linear and a Bias kernel.
End of explanation
"""
from gpflowopt.domain import ContinuousParameter
from gpflowopt.objective import batch_apply
# Objective function for our optimization
# Input: N x 2Q ndarray, output: N x 1.
# returns the negative log likelihood obtained by training with given frequencies and rbf lengthscales
# Applies some tricks for stability similar to GPy's jitchol
@batch_apply
def objectivefx(freq):
m = create_model(freq)
for i in [0] + [10**exponent for exponent in range(6,1,-1)]:
try:
mean_diag = np.mean(np.diag(m.kern.compute_K_symm(X_train)))
m.likelihood.variance = 1 + mean_diag * i
m.optimize(maxiter=max_iters)
return -m.compute_log_likelihood()
except:
pass
    raise RuntimeError("Frequency combination failed indefinitely.")
# Setting up optimization domain.
lower = [0.]*Q
upper = [5.]*int(Q)
df = np.sum([ContinuousParameter('freq{0}'.format(i), l, u) for i, l, u in zip(range(Q), lower, upper)])
lower = [1e-5]*Q
upper = [300]*int(Q)
dl = np.sum([ContinuousParameter('l{0}'.format(i), l, u) for i, l, u in zip(range(Q), lower, upper)])
domain = df + dl
domain
"""
Explanation: In total, a lot of hyperparameters must be optimized. Furthermore, the optimization surface of the spectral mixture is highly multimodal. Starting from the default hyperparameter values the optimized GP is able to pick up the linear trend, and the RBF kernels perform local interpolation. However, the kernel is not able to extrapolate away from the data. In sum, with this starting point, the likelihood optimization ends in a local minimum.
Hyperparameter optimization
This issue is a known problem of the spectral mixture kernel, and several recommendations exist on how to improve the starting point. Here, we will use GPflowOpt to optimize the initial values for the lengthscales of the RBF and the Cosine kernel (i.e., the frequencies of the latter kernel). The other hyperparameters (RBF and cosine variances, likelihood variances and the linear and bias terms) are kept at their defaults and will be optimized by the standard likelihood optimization.
First, we setup the objective function accepting proposed starting points. The objective function returns the negative log likelihood, obtained by optimizing the hyperparameters from the given starting point. Then, we setup the 20D optimization domain for the frequencies and lengthscales.
End of explanation
"""
from gpflowopt.design import LatinHyperCube
from gpflowopt.acquisition import ExpectedImprovement
from gpflowopt import optim, BayesianOptimizer
design = LatinHyperCube(6, domain)
X = design.generate()
Y = objectivefx(X)
m = GPR(X, Y, kern=Matern52(domain.size, ARD=False))
ei = ExpectedImprovement(m)
opt = optim.StagedOptimizer([optim.MCOptimizer(domain, 5000), optim.SciPyOptimizer(domain)])
optimizer = BayesianOptimizer(domain, ei, optimizer=opt)
with optimizer.silent():
result = optimizer.optimize(objectivefx, n_iter=24)
m = create_model(result.x[0,:])
m.optimize()
plotprediction(m)
"""
Explanation: High-dimensional Bayesian optimization is tricky, although the complexity of the problem is significantly reduced due to symmetry in the optimization domain (interchanging frequencies does not make a difference) and because we still optimize the likelihood given the starting point. Therefore, getting near a mode is sufficient. Furthermore we disable ARD of the kernel of the model approximating the objective function to avoid optimizing a lot of lengthscales with little data. We then use EI to pick new candidate starting points and evaluate our objective.
End of explanation
"""
f, axes = plt.subplots(1, 1, figsize=(7, 5))
f = ei.data[1][:,0]
axes.plot(np.arange(0, ei.data[0].shape[0]), np.minimum.accumulate(f))
axes.set_ylabel('fmin')
axes.set_xlabel('Number of evaluated points');
"""
Explanation: Clearly, the optimization point identified with BO is a lot better than the default values. We now obtain a proper forecast. By inspecting the evolution of the best likelihood value obtained so far, we see the solution is identified quickly.
End of explanation
"""
|
colonelzentor/occmodel | examples/OCCT_Bottle_Example.ipynb | gpl-2.0 | height = 70.
width = 50.
thickness = 30.
pnt1 = [-width/2., 0., 0.]
pnt2 = [-width/2., -thickness/4., 0.]
pnt3 = [0., -thickness/2., 0.]
pnt4 = [width/2., -thickness/4., 0.]
pnt5 = [width/2., 0., 0.]
edge1 = Edge().createLine(start=pnt1, end=pnt2)
edge2 = Edge().createArc3P(start=pnt2, end=pnt4, pnt=pnt3)
edge3 = Edge().createLine(start=pnt4, end=pnt5)
halfProfile = Wire([edge1, edge2, edge3])
mirrorPlane = Plane(origin=[0,0,0], xaxis=[1,0,0], yaxis=[0,0,1])
mirrorProfile = halfProfile.mirror(mirrorPlane, copy=True)
allEdges = list(EdgeIterator(halfProfile)) + list(EdgeIterator(mirrorProfile))
fullProfile = Wire().createWire(allEdges)
bottomFace = Face().createFace(fullProfile)
body = Solid().extrude(bottomFace, (0, 0, 0), (0, 0, height))
body.fillet(thickness/12.)
neckHeight = height/10
neckRadius = thickness/4
neck = Solid().createCylinder([0,0,0], [0,0,neckHeight], radius=neckRadius)
neck.translate([0, 0, height])
body.fuse(neck)
zMax = -1
neckTopFace = None
for f in FaceIterator(body):
[x, y , z] = f.centreOfMass()
if z >= zMax:
neckTopFace = f
zMax = z
body.shell(thickness/50., [neckTopFace], tolerance=1E-3)
t_thick = neckHeight/5
t_height = neckHeight - t_thick
t_radius = neckRadius + t_thick/4
t_pitch = t_height/2
t_angle = 0
# Note the following thread geometry is not correct. The profile
# is wrong and there is a twist added to the profile. But it's
# kind of close and good enough for this example.
threadHelix = Edge().createHelix(pitch=t_pitch,
height=t_height,
radius=t_radius,
angle = t_angle)
threadFace = Face().createPolygonal([[0, 0, t_thick/2],
[t_thick, .0, 0],
[0, 0, -t_thick/2]])
threadFace.translate([t_radius, 0, 0])
thread = Solid().pipe(threadFace, threadHelix)
thread.translate([0, 0, height])
body.fuse(thread)
actor = body.toVtkActor()
"""
Explanation: OCCT Bottle Tutorial
End of explanation
"""
try:
a = get_QApplication([])
except:
pass
vtkWin = SimpleVtkViewer()
vtkWin.add_actor(actor)
# If the VTK window is blank/white, click on the window and hit 'r' to zoom to fit.
"""
Explanation: VTK Viewer
The following summarizes the mouse and keyboard commands for interacting with shapes rendered in the viewer.
Keypress j / Keypress t: toggle between joystick (position sensitive) and trackball (motion sensitive) styles. In joystick style, motion occurs continuously as long as a mouse button is pressed. In trackball style, motion occurs when the mouse button is pressed and the mouse pointer moves.
Keypress c / Keypress a: toggle between camera and actor modes. In camera mode, mouse events affect the camera position and focal point. In actor mode, mouse events affect the actor that is under the mouse pointer.
Button 1: rotate the camera around its focal point (if camera mode) or rotate the actor around its origin (if actor mode). The rotation is in the direction defined from the center of the renderer's viewport towards the mouse position. In joystick mode, the magnitude of the rotation is determined by the distance the mouse is from the center of the render window.
Button 2: pan the camera (if camera mode) or translate the actor (if object mode). In joystick mode, the direction of pan or translation is from the center of the viewport towards the mouse position. In trackball mode, the direction of motion is the direction the mouse moves. (Note: with 2-button mice, pan is defined as <Shift>Button 1.)
Button 3: zoom the camera (if camera mode) or scale the actor (if object mode). Zoom in/increase scale if the mouse position is in the top half of the viewport; zoom out/decrease scale if the mouse position is in the bottom half. In joystick mode, the amount of zoom is controlled by the distance of the mouse pointer from the horizontal centerline of the window.
Keypress 3: toggle the render window into and out of stereo mode. By default, red-blue stereo pairs are created. Some systems support Crystal Eyes LCD stereo glasses; you have to invoke SetStereoTypeToCrystalEyes() on the rendering window object. Note: to use stereo you also need to pass a stereo=1 keyword argument to the render window object constructor.
Keypress e: exit the application.
Keypress f: fly to the picked point
Keypress p: perform a pick operation. The render window interactor has an internal instance of vtkCellPicker that it uses to pick.
Keypress r: reset the camera view along the current view direction. Centers the actors and moves the camera so that all actors are visible.
Keypress s: modify the representation of all actors so that they are surfaces.
Keypress u: invoke the user-defined function. Typically, this keypress will bring up an interactor that you can type commands in.
Keypress w: modify the representation of all actors so that they are wireframe.
End of explanation
"""
|
QInfer/qinfer-examples | scoremixin_example.ipynb | agpl-3.0 | from __future__ import division, print_function
%matplotlib inline
from qinfer import ScoreMixin, SimplePrecessionModel, RandomizedBenchmarkingModel
import numpy as np
import matplotlib.pyplot as plt
try:
plt.style.use('ggplot')
except:
pass
"""
Explanation: Fisher Score Mixin Example
This notebook demonstrates how to use the qinfer.ScoreMixin class to develop models that use numerical differentiation to calculate the Fisher information. We test the mixin class with two examples where the Fisher information is known analytically.
Preamble
End of explanation
"""
class NumericalSimplePrecessionModel(ScoreMixin, SimplePrecessionModel):
pass
analytic_model = SimplePrecessionModel()
numerical_model = NumericalSimplePrecessionModel()
expparams = np.linspace(1, 10, 50)
modelparams = np.linspace(.1,1,50)[:, np.newaxis]
"""
Explanation: Simple Precession Model Test
Create two models, one that uses ScoreMixin's numerical score method, and one that uses SimplePrecessionModel's analytic score method. To make the first model, we declare a class that does nothing but inherits from both the ScoreMixin class and SimplePrecessionModel; note that ScoreMixin is first, such that its implementation of Model.score() overrides that of SimplePrecessionModel.
End of explanation
"""
analytic_score = analytic_model.score(np.array([0],dtype=int),modelparams, expparams)[0,0,...]
print(analytic_score.shape)
numerical_score = numerical_model.score(np.array([0],dtype=int),modelparams, expparams)[0,0,...]
print(numerical_score.shape)
plt.subplot(1,2,1)
plt.imshow(analytic_score)
plt.subplot(1,2,2)
plt.imshow(numerical_score)
"""
Explanation: We verify that both models compute the same score by plotting the score for a range of experiment and model parameters. Since this is a single-parameter model, the score is a scalar.
End of explanation
"""
analytic_fisher_info = analytic_model.fisher_information(modelparams, expparams)[0,0,...]
numerical_fisher_info = numerical_model.fisher_information(modelparams, expparams)[0,0,...]
plt.subplot(1,2,1)
plt.imshow(analytic_fisher_info)
plt.subplot(1,2,2)
plt.imshow(numerical_fisher_info)
"""
Explanation: Next, we verify that both models give the same Fisher information.
End of explanation
"""
class NumericalRandomizedBenchmarkingModel(ScoreMixin, RandomizedBenchmarkingModel):
pass
analytic_model = RandomizedBenchmarkingModel()
numerical_model = NumericalRandomizedBenchmarkingModel()
"""
Explanation: Randomized Benchmarking Model
To test that we get multiparameter Fisher information calculations correct as well, we compare to the zeroth-order non-interlaced randomized benchmarking model.
End of explanation
"""
expparams = np.empty((150,), dtype=analytic_model.expparams_dtype)
expparams['m'] = np.arange(1, 151)
modelparams = np.empty((500, 3))
modelparams[:, 0] = np.linspace(0.1, 0.999, 500)
modelparams[:, 1] = 0.5
modelparams[:, 2] = 0.5
"""
Explanation: We now make experiment and parameters to test with.
End of explanation
"""
afi = analytic_model.fisher_information(modelparams, expparams)
assert afi.shape == (3, 3, modelparams.shape[0], expparams.shape[0])
nfi = numerical_model.fisher_information(modelparams, expparams)
assert nfi.shape == (3, 3, modelparams.shape[0], expparams.shape[0])
"""
Explanation: Let's make sure that the returned Fisher information has the right shape. Note that the Fisher information is a four-index tensor here, with the two indices for the information matrix itself, plus two indices that vary over the input model parameters and experiment parameters.
End of explanation
"""
np.linalg.norm(afi - nfi) / np.linalg.norm(afi)
"""
Explanation: We check that each Fisher information matrix has errors that are small compared to the analytic FI alone.
End of explanation
"""
def tr_inv(arr):
    try:
        return np.trace(np.linalg.inv(arr.reshape(3, 3)))
    except np.linalg.LinAlgError:
        return float('inf')
def crb(fi):
return np.apply_along_axis(tr_inv, 0, np.sum(fi.reshape((9, modelparams.shape[0], expparams.shape[0])), axis=-1))
plt.figure(figsize=(15, 6))
for idx, fi in enumerate([afi, nfi]):
plt.subplot(1,2, 1 + idx)
plt.semilogy(modelparams[:, 0], crb(fi))
plt.ylabel(r'$\operatorname{Tr}\left(\left(\sum_m F(p, m)\right)^{-1}\right)$')
plt.xlabel('$p$')
"""
Explanation: Next, we plot the trace-inverse of each to check that we get the same Cramer-Rao bounds.
End of explanation
"""
%timeit analytic_model.fisher_information(modelparams, expparams)
%timeit numerical_model.fisher_information(modelparams, expparams)
"""
Explanation: Finally, we note that the numerical FI calculations are not much slower than the analytic calculations.
End of explanation
"""
|
hobgreenson/chicago_employees | ChicagoEmployeeGenderSalary.ipynb | mit | workers = pd.read_csv('Current_Employee_Names__Salaries__and_Position_Titles.csv')
"""
Explanation: Introduction
I downloaded a CSV of City of Chicago employee salary data,
which includes the names, titles, departments and salaries of Chicago employees. I was
interested to see whether men and women earn similar salaries for similar roles.
City data don't report the gender of employees, so I used an employee's first name
as a proxy, which is explained in more detail below.
End of explanation
"""
workers = workers[(workers['Salary or Hourly']=='Salary') & (workers['Full or Part-Time']=='F')]
"""
Explanation: To simplify the analysis, I restricted my attention to full-time employees with a salary.
End of explanation
"""
workers['Name'] = workers['Name'].apply(lambda s: s.split(',')[1].split()[0].lower())
"""
Explanation: To make grouping and matching first names easier, I extracted the first name from each
employee record and lower-cased it:
End of explanation
"""
workers['Annual Salary'] = workers['Annual Salary'].apply(lambda s: float(s.strip('$')))
"""
Explanation: Annual salary is represented as a string with a leading $ sign, which I converted to floats.
End of explanation
"""
workers.head(10)
"""
Explanation: The first ten rows of the data set look like this now:
End of explanation
"""
# Data are in seperate CSV files per year, and are concatenated here
name_data = []
for yob in range(1940, 2017):
df = pd.read_csv('names/yob' + str(yob) + '.txt',
header=0, names=['Name', 'Gender', 'Count'])
name_data.append(df)
names = pd.concat(name_data, axis=0)
# Lower-case first name so that it can be joined with the workers dataframe
names['Name'] = names['Name'].str.lower()
names.head(5)
# Count how often a name is given to boys and girls
gender_frequency = names.groupby(['Name', 'Gender']).sum().reset_index()
gender_frequency.sample(5)
def predict_gender(df):
max_idx = df['Count'].idxmax()
return df.loc[max_idx]
# Select the more frequent gender for each name
gender_guess = gender_frequency.groupby('Name').agg(predict_gender).reset_index()
gender_guess.sample(10)
"""
Explanation: Gender prediction
To estimate the gender of an employee based on his or her first name, I used a data set of
baby names. For each unique name, I counted
how many times, from years 1940 to 2016, that name had been given to a boy versus a girl. If
the name was more frequently given to boys, then I predicted the gender associated with the
name to be male, and vice-versa for female.
End of explanation
"""
workers = pd.merge(workers, gender_guess, on='Name', how='inner')
workers[['Name', 'Job Titles', 'Department', 'Gender', 'Annual Salary']].sample(10)
"""
Explanation: The above list of names and associated genders can be combined with the worker data to
predict the gender of each Chicago employee using a join:
End of explanation
"""
# Focus on these columns
workers = workers[['Job Titles', 'Department', 'Gender', 'Annual Salary']]
# Remove jobs for which only men or only women are employed
job_groups = workers[['Job Titles', 'Gender', 'Annual Salary']].groupby(['Job Titles'])
def male_and_female(grp):
return np.any(grp['Gender']=='M') and np.any(grp['Gender']=='F')
job_groups = job_groups.filter(male_and_female)
# Look at the maximum salary of each gender for each job title
job_group_maximums = job_groups.groupby(['Job Titles', 'Gender']).agg(np.max)
job_group_maximums.head(30)
higher_max_male_salary_count = 0
total_jobs = 0
for job_title, df in job_group_maximums.groupby(level=0):
assert len(df) == 2
if df.loc[(job_title, 'M')][0] > df.loc[(job_title, 'F')][0]:
higher_max_male_salary_count += 1
total_jobs += 1
higher_max_male_salary_percentage = 100 * higher_max_male_salary_count / total_jobs
higher_max_male_salary_percentage
ax = sns.stripplot(x="Gender", y="Annual Salary", data=job_group_maximums.reset_index(), jitter=True)
plt.show()
"""
Explanation: Analysis
I wanted to know whether men and women were paid equally if they shared the same job title and
department. To answer this, I specifically looked at full-time, salaried employees, and jobs
for which both men and women were employed under the same title and department.
For example, given the job title POLICE OFFICER in the POLICE department, a position for which
both men and women are employed, do male and female officers have similar salaries? More
generally, are men and women paid equally across all job titles and departments?
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/ec-earth-consortium/cmip6/models/ec-earth3/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux, i.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process-oriented metrics/diagnostics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently, or at the component coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently, or at the component coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
mayank-johri/LearnSeleniumUsingPython | Section 3 - Machine Learning/ThirdParty-scikit-learn-videos-master/07_cross_validation.ipynb | gpl-3.0 | from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
# read in the iris data
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
# use train/test split with different random_state values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
# check classification accuracy of KNN with K=5
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
"""
Explanation: Cross-validation for parameter tuning, model selection, and feature selection
From the video series: Introduction to machine learning with scikit-learn
Agenda
What is the drawback of using the train/test split procedure for model evaluation?
How does K-fold cross-validation overcome this limitation?
How can cross-validation be used for selecting tuning parameters, choosing between models, and selecting features?
What are some possible improvements to cross-validation?
Review of model evaluation procedures
Motivation: Need a way to choose between machine learning models
Goal is to estimate likely performance of a model on out-of-sample data
Initial idea: Train and test on the same data
But, maximizing training accuracy rewards overly complex models which overfit the training data
Alternative idea: Train/test split
Split the dataset into two pieces, so that the model can be trained and tested on different data
Testing accuracy is a better estimate than training accuracy of out-of-sample performance
But, it provides a high variance estimate since changing which observations happen to be in the testing set can significantly change testing accuracy
End of explanation
"""
# simulate splitting a dataset of 25 observations into 5 folds
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=False)
# print the contents of each training and testing set
print('{} {:^61} {}'.format('Iteration', 'Training set observations', 'Testing set observations'))
for iteration, (train, test) in enumerate(kf.split(list(range(25))), start=1):
    print('{:^9} {} {:^25}'.format(iteration, str(train), str(test)))
"""
Explanation: Question: What if we created a bunch of train/test splits, calculated the testing accuracy for each, and averaged the results together?
Answer: That's the essence of cross-validation!
Steps for K-fold cross-validation
Split the dataset into K equal partitions (or "folds").
Use fold 1 as the testing set and the union of the other folds as the training set.
Calculate testing accuracy.
Repeat steps 2 and 3 K times, using a different fold as the testing set each time.
Use the average testing accuracy as the estimate of out-of-sample accuracy.
Diagram of 5-fold cross-validation:
End of explanation
"""
from sklearn.model_selection import cross_val_score
# 10-fold cross-validation with K=5 for KNN (the n_neighbors parameter)
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
# search for an optimal value of K for KNN
k_range = list(range(1, 31))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X, y, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
print(k_scores)
import matplotlib.pyplot as plt
%matplotlib inline
# plot the value of K for KNN (x-axis) versus the cross-validated accuracy (y-axis)
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy')
"""
Explanation: Dataset contains 25 observations (numbered 0 through 24)
5-fold cross-validation, thus it runs for 5 iterations
For each iteration, every observation is either in the training set or the testing set, but not both
Every observation is in the testing set exactly once
Comparing cross-validation to train/test split
Advantages of cross-validation:
More accurate estimate of out-of-sample accuracy
More "efficient" use of data (every observation is used for both training and testing)
Advantages of train/test split:
Runs K times faster than K-fold cross-validation
Simpler to examine the detailed results of the testing process
Cross-validation recommendations
K can be any number, but K=10 is generally recommended
For classification problems, stratified sampling is recommended for creating the folds
Each response class should be represented with equal proportions in each of the K folds
scikit-learn's cross_val_score function does this by default
Cross-validation example: parameter tuning
Goal: Select the best tuning parameters (aka "hyperparameters") for KNN on the iris dataset
End of explanation
"""
# 10-fold cross-validation with the best KNN model
knn = KNeighborsClassifier(n_neighbors=20)
print(cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean())
# 10-fold cross-validation with logistic regression
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
print(cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean())
"""
Explanation: Cross-validation example: model selection
Goal: Compare the best KNN model with logistic regression on the iris dataset
End of explanation
"""
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
# read in the advertising dataset
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# create a Python list of three feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the DataFrame (X)
X = data[feature_cols]
# select the Sales column as the response (y)
y = data.Sales
# 10-fold cross-validation with all three features
lm = LinearRegression()
scores = cross_val_score(lm, X, y, cv=10, scoring='neg_mean_squared_error')
print(scores)
# fix the sign of MSE scores
mse_scores = -scores
print(mse_scores)
# convert from MSE to RMSE
rmse_scores = np.sqrt(mse_scores)
print(rmse_scores)
# calculate the average RMSE
print(rmse_scores.mean())
# 10-fold cross-validation with two features (excluding Newspaper)
feature_cols = ['TV', 'Radio']
X = data[feature_cols]
print(np.sqrt(-cross_val_score(lm, X, y, cv=10, scoring='neg_mean_squared_error')).mean())
"""
Explanation: Cross-validation example: feature selection
Goal: Select whether the Newspaper feature should be included in the linear regression model on the advertising dataset
End of explanation
"""
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
"""
Explanation: Improvements to cross-validation
Repeated cross-validation
Repeat cross-validation multiple times (with different random splits of the data) and average the results
More reliable estimate of out-of-sample performance by reducing the variance associated with a single trial of cross-validation
Creating a hold-out set
"Hold out" a portion of the data before beginning the model building process
Locate the best model using cross-validation on the remaining data, and test it using the hold-out set
More reliable estimate of out-of-sample performance since hold-out set is truly out-of-sample
Feature engineering and selection within cross-validation iterations
Normally, feature engineering and selection occurs before cross-validation
Instead, perform all feature engineering and selection within each cross-validation iteration
More reliable estimate of out-of-sample performance since it better mimics the application of the model to out-of-sample data
Resources
scikit-learn documentation: Cross-validation, Model evaluation
scikit-learn issue on GitHub: MSE is negative when returned by cross_val_score
Section 5.1 of An Introduction to Statistical Learning (11 pages) and related videos: K-fold and leave-one-out cross-validation (14 minutes), Cross-validation the right and wrong ways (10 minutes)
Scott Fortmann-Roe: Accurately Measuring Model Prediction Error
Machine Learning Mastery: An Introduction to Feature Selection
Harvard CS109: Cross-Validation: The Right and Wrong Way
Journal of Cheminformatics: Cross-validation pitfalls when selecting and assessing regression and classification models
Comments or Questions?
Email: kevin@dataschool.io
Website: http://dataschool.io
Twitter: @justmarkham
End of explanation
"""
|
luciansmith/sedml-test-suite | archives/sbml-test-suite/convert-to-combine-arch.ipynb | bsd-3-clause | import pprint, tellurium as te
# level and version of SBML to use
lv_string = 'l3v1'
# run all supported test cases
# cases = te.getSupportedTestCases()
# run just a subset of the supported cases
cases = te.getSupportedTestCases(981)
print('Using the following {} cases:'.format(len(cases)))
# pprint.PrettyPrinter().pprint(cases)
# maximum cutoff for passing a test case (per-variable)
max_threshold = 1e-3
# change to the directory of this notebook
%cd ~/devel/src/tellurium-combine-archive-test-cases/sbml-test-suite
"""
Explanation: <a id="begin"></a>
Generating COMBINE Archives from the SBML Test Suite
The SBML test suite is intended to test the accuracy SBML ODE simulators by providing a normative standard to compare simulation results to. The test suite predates SED-ML and specifies the simulation parameters in a custom format. However, recent versions include the SED-ML necessary to reproduce the simulations. Here, we provide an automated script to convert these test cases into COMBINE archives, which can be imported in Tellurium.
This notebook is adapted from Stanley Gu's original here.
This notebook contains two parts: converting the SBML test cases to COMBINE archives and running the simulations in the archives. To skip to running the simulations, click here.
Tellurium's current pass/fail status for these test cases is here.
Tip: Double-click this cell to edit it.
Outstanding issues: cannot cd to notebook dir
End of explanation
"""
import os, tellurium as te
lv_archive_path = os.path.join('archives', lv_string)
n_failures = 0
n_successes = 0
for case in cases:
archive_name = case+'.omex'
print('Running {}'.format(archive_name))
case_path = os.path.join(lv_archive_path, archive_name)
te.convertAndExecuteCombineArchive(case_path)
# compare results
csv = te.extractFileFromCombineArchive(case_path, case+'-results.csv')
from io import StringIO
import pandas as pd
df = pd.read_csv(StringIO(csv))
report = te.getLastReport()
# drop the last row of the simulated report before comparing
report = report.drop(report.shape[0]-1)
df.columns = report.columns
# difference between simulation and expected results
diff = report.subtract(df)
max_val = (diff**2).mean().max()
if max_val > max_threshold:
n_failures += 1
else:
n_successes += 1
print('Finished running tests: {} PASS, {} FAIL'.format(n_successes, n_failures))
"""
Explanation: <a id="simulations"></a>
Step 6: Run all simulations in the COMBINE archives in Tellurium.
This runs the simulations in all COMBINE archives specified by the cases variable. By default, this will run all tests that Tellurium supports, which will take a long time. You can set the cases variable to a subset of tests at the beginning of this notebook.
End of explanation
"""
import os.path
import platform
# name of the downloaded test case archive (defined up front so the
# final print works on all platforms)
test_cases_filename = 'sbml-test-cases.zip'
if not platform.system() == 'Linux':
    import urllib.request
    # url for the test case archive
    url = 'http://sourceforge.net/projects/sbml/files/test-suite/3.1.1/cases-archive/sbml-test-cases-2014-10-22.zip'
    # download the test case archive
    with urllib.request.urlopen(url) as response, open(test_cases_filename, 'wb') as out_file:
        out_file.write(response.read())
else:
    !wget http://sourceforge.net/projects/sbml/files/test-suite/3.1.1/cases-archive/sbml-test-cases-2014-10-22.zip
    %mv sbml-test-cases-2014-10-22.zip sbml-test-cases.zip
print('Downloaded test case archive to {}'.format(os.path.abspath(test_cases_filename)))
"""
Explanation: Step 2: Download the SBML test cases
End of explanation
"""
import os, errno, zipfile
# extract to 'archives' directory
with zipfile.ZipFile(test_cases_filename) as z:
z.extractall('.')
print('Extracted test cases to {}'.format(os.path.abspath('.')))
"""
Explanation: Step 3: Extract the Test Case Archive
End of explanation
"""
# create a function to make a new directory if it doesn't already exist
def mkdirp(path):
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
# make a new directory for this level and version if it doesn't already exist
lv_archive_path = os.path.join('archives', lv_string)
mkdirp(lv_archive_path)
print('Created directory {}'.format(lv_archive_path))
"""
Explanation: Step 5: Create COMBINE Archives
Here, we create COMBINE archives for SBML Level 3 Version 1. We could package all levels and versions in the same COMBINE archive, but this would drastically increase simulation time.
First, we will set the level and version used to pick out only L3V1 SBML test cases (all SED-ML in this example is L1V1):
Next, we create the directory structure for the archives
End of explanation
"""
import re
from xml.dom import minidom
import xml.etree.ElementTree as ET
n_archives_written = 0
for case in cases:
test_case_path = os.path.join('cases', 'semantic', case)
ls = os.listdir(test_case_path)
# Only L3V1:
regex_sbml = re.compile(case + r'-sbml-{}\.xml'.format(lv_string), re.IGNORECASE)
regex_sedml = re.compile(case + r'-sbml-{}-sedml\.xml'.format(lv_string), re.IGNORECASE)
regex_csv = re.compile(case + r'-results\.csv$', re.IGNORECASE)
# All levels/versions:
# regex_sbml = re.compile(case + '-sbml-l\dv\d\.xml', re.IGNORECASE)
# regex_sedml = re.compile(case + '-sbml-l\dv\d\-sedml.xml', re.IGNORECASE)
sbmlfiles = sorted([file for file in ls if regex_sbml.search(file)])
sedmlfiles = sorted([file for file in ls if regex_sedml.search(file)])
csvfiles = sorted([file for file in ls if regex_csv.search(file)])
plot_file = [file for file in ls if 'plot.jpg' in file][0]
ET.register_namespace('', 'http://identifiers.org/combine.specifications/omex-manifest')
manifest_template = '''<?xml version="1.0" encoding="UTF-8"?>
<omexManifest
xmlns="http://identifiers.org/combine.specifications/omex-manifest"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://identifiers.org/combine.specifications/omex-manifest combine.xsd "></omexManifest>
'''
doc = ET.fromstring(manifest_template)
manifest = ET.SubElement(doc, 'content')
manifest.attrib['format'] = 'http://identifiers.org/combine.specifications/omex-manifest'
manifest.attrib['location'] = './manifest.xml'
for sbmlfile in sbmlfiles:
model = ET.SubElement(doc, 'content')
model.attrib['format'] = 'http://identifiers.org/combine.specifications/sbml'
model.attrib['location'] = './' + sbmlfile
for sedmlfile in sedmlfiles:
sedml = ET.SubElement(doc, 'content')
sedml.attrib['format'] = 'http://identifiers.org/combine.specifications/sed-ml.level-1.version-1'
sedml.attrib['location'] = './' + sedmlfile
sedml.attrib['master'] = 'true'
for csvfile in csvfiles:
csv = ET.SubElement(doc, 'content')
csv.attrib['format'] = 'http://identifiers.org/combine.specifications/csv'
csv.attrib['location'] = './' + csvfile
archive_files = sbmlfiles + sedmlfiles + csvfiles
xml_str = ET.tostring(doc, encoding='UTF-8')
# reparse the xml string to pretty print it
reparsed = minidom.parseString(xml_str)
pretty_xml_str = reparsed.toprettyxml(indent=" ")
# use zipfile to create Combine archive containing
from zipfile import ZipFile
archive_name = case + '.omex'
archive_path = os.path.join('archives', lv_string, archive_name)
initial_wd = os.getcwd()
ls = os.listdir(test_case_path)
with ZipFile(archive_path, 'w') as archive:
# write the manifest
archive.writestr('manifest.xml', pretty_xml_str.encode('utf-8'))
os.chdir(test_case_path)
for f in archive_files:
archive.write(f)
os.chdir(initial_wd)
# print the number of contained files (add +1 for the manifest)
print('Created {} (containing {} files)'.format(archive_path, 1+len(archive_files)))
n_archives_written += 1
print('Finished writing archives ({} total archives written)'.format(n_archives_written))
"""
Explanation: Step 6: Create the archives by including SBML, SED-ML, and results files.
End of explanation
"""
|
stephank16/enes_graph_use_case | prov_templates/old/PROV-Templates-python-work.ipynb | gpl-3.0 | # Define the variable parts in the template as dictionary keys
# dictionary values are the prov template variable bindings in one case
# and correspond to the variable instance settings in the other case
import prov.model as prov
template_dict = {
'var_author':'var:author',
'var_value':'var:value',
'var_name':'var:name',
'var_quote':'var:quote'
}
instance_dict = {
'var:author':'orcid:0000-0002-3494-120X',
'var:value':'A Little Provenance Goes a Long Way',
'var:name':'Luc Moreau',
'var:quote':'ex:quote1'
}
# test for use of default namespace
document = prov.ProvDocument()
document.set_default_namespace('http://example.org/0/')
quote = document.entity('tst')
def new_provdoc():
document = prov.ProvDocument()
# ----- namspace settings --------------------------------------------------
document.add_namespace('prov','http://www.w3.org/ns/prov#')
document.add_namespace('var','http://openprovenance.org/var#>')
document.add_namespace('vargen','http://openprovenance.org/vargen#')
document.add_namespace('tmpl','http://openprovenance.org/tmpl#')
document.add_namespace('foaf','http://xmlns.com/foaf/0.1/')
document.add_namespace('ex', 'http://example.org/')
document.add_namespace('orcid','http://orcid.org/')
document.set_default_namespace('http://example.org/0/')
#document.add_namespace('rdf','http://www.w3.org/1999/02/22-rdf-syntax-ns#')
#document.add_namespace('rdfs','http://www.w3.org/2000/01/rdf-schema#')
#document.add_namespace('xsd','http://www.w3.org/2001/XMLSchema#')
#document.add_namespace('ex1', 'http://example.org/1/')
#document.add_namespace('ex2', 'http://example.org/2/')
# ----------------------------------------------------------------------------
return document
def make_prov(var_value,var_name,var_quote,var_author):
# for enes data ingest use case: use information from dkrz_forms/config/workflow_steps.py
document = prov.ProvDocument()
# ----- namspace settings --------------------------------------------------
document.add_namespace('prov','http://www.w3.org/ns/prov#')
document.add_namespace('var','http://openprovenance.org/var#>')
document.add_namespace('vargen','http://openprovenance.org/vargen#')
document.add_namespace('tmpl','http://openprovenance.org/tmpl#')
document.add_namespace('foaf','http://xmlns.com/foaf/0.1/')
document.add_namespace('ex', 'http://example.org/')
document.add_namespace('orcid','http://orcid.org/')
#document.set_default_namespace('http://example.org/0/')
#document.add_namespace('rdf','http://www.w3.org/1999/02/22-rdf-syntax-ns#')
#document.add_namespace('rdfs','http://www.w3.org/2000/01/rdf-schema#')
#document.add_namespace('xsd','http://www.w3.org/2001/XMLSchema#')
#document.add_namespace('ex1', 'http://example.org/1/')
#document.add_namespace('ex2', 'http://example.org/2/')
# ----------------------------------------------------------------------------
bundle = document.bundle('vargen:bundleid')
#bundle.set_default_namespace('http://example.org/0/')
quote = bundle.entity(var_quote,(
('prov:value',var_value),
))
author = bundle.entity(var_author,(
(prov.PROV_TYPE, "prov:Person"),
('foaf:name',var_name)
))
bundle.wasAttributedTo(var_quote,var_author)
return document
def save_and_show(filename):
doc1 = make_prov(**template_dict)
print(doc1.get_provn())
with open(filename, 'w') as provn_file:
provn_file.write(doc1.get_provn())
print("------")
print("saved in file:",filename)
return doc1
doc1 = save_and_show('/home/stephan/test/xxxx.provn')
bundles = doc1.bundles
for k in bundles:
bundle = k
print(k)
tst = {"'prov:type'": 'prov:Person', "'foaf:name'": 'Luc Moreau'}
for key,val in tst.items():
print(key)
print(key.replace("'",""))
k.identifier
doc1.bundle(k.identifier)
"""
Explanation: PROV Templates in Python
Author: Stephan Kindermann
Affiliation: DKRZ
Community: ENES (Earth System Sciences)
Version: 0.1 (July 2018)
Motivation:
* PROV template expansion is currently only supported by the provconvert Java tool.
* Initial generation and integration of PROV descriptions as part of ENES community efforts
is often done using interactive languages like Python.
* Sharing PROV adoption narratives is well supported by Jupyter notebooks and Python.
* Core infrastructure services in ENES are implemented in Python.
* provconvert output can not be imported with the python prov package (which only supports the PROV-JSON serialization format for import)
* The Java based provconvert tool is thus difficult to exploit in ENES use cases
* What is needed in the short term are simple Python based wrappers integrated in our
community workflow, generating PROV descriptions based on PROV template instantiations.
* The idea of using PROV templates is helpful; the only problem is to enable tool based template generation and instantiation in an interactive setting, without requiring provconvert and Java.
Approach taken
Thus a simple approach is taken which on the one hand allows the use of PROV templates and which
on the other hand allows for pure Python based template instantiations.
The drawback is that the elaborated PROV expansion algorithm can not be used - yet making the expansion explicit in Python can also be seen as an advantage, as community PROV adopters don't need to dig into the expansion algorithm implemented as part of provconvert (and eventual errors therein).
The approach taken is illustrated in the following:
* PROV templates (being standard PROV documents) are generated in Python based on the prov library alongside PROV template instances.
* A very simple instantiation algorithm is used to instantiate templates based on dictionaries containing the variable settings .. this instantiation algorithm is stepwise expanded.
Generate a PROV template
PROV templates are generated in functions with all PROV variables as parameters.
Such a function is called with PROV template variable names to generate a template. When called with instances, the result is a prov document corresponding to the instantiated template.
In the following the approach is illustrated based on a concrete example (corresponding to the first example in the provconvert tutorial).
End of explanation
"""
# %load Downloads/ProvToolbox-Tutorial4-0.7.0/src/main/resources/template1.provn
document
prefix var <http://openprovenance.org/var#>
prefix vargen <http://openprovenance.org/vargen#>
prefix tmpl <http://openprovenance.org/tmpl#>
prefix foaf <http://xmlns.com/foaf/0.1/>
bundle vargen:bundleId
entity(var:quote, [prov:value='var:value'])
entity(var:author, [prov:type='prov:Person', foaf:name='var:name'])
wasAttributedTo(var:quote,var:author)
endBundle
endDocument
"""
Explanation: Compare generated PROV template with original example:
End of explanation
"""
doc2 = make_prov(**instance_dict)
print(doc2.get_provn())
%matplotlib inline
doc1.plot()
doc2.plot()
# take same instantiation as in the tutorial:
# %load Downloads/ProvToolbox-Tutorial4-0.7.0/src/main/resources/binding1.ttl
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix tmpl: <http://openprovenance.org/tmpl#> .
@prefix var: <http://openprovenance.org/var#> .
@prefix ex: <http://example.com/#> .
var:author a prov:Entity;
tmpl:value_0 <http://orcid.org/0000-0002-3494-120X>.
var:name a prov:Entity;
tmpl:2dvalue_0_0 "Luc Moreau".
var:quote a prov:Entity;
tmpl:value_0 ex:quote1.
var:value a prov:Entity;
tmpl:2dvalue_0_0 "A Little Provenance Goes a Long Way".
"""
Explanation: Instantiate PROV template
End of explanation
"""
!provconvert -infile test/template1.provn -bindings test/binding1.ttl -outfile test/doc1.provn
# %load test/doc1.provn
document
bundle uuid:b07bc92f-f16a-443c-9e0a-3bda4063fc10
prefix foaf <http://xmlns.com/foaf/0.1/>
prefix pre_0 <http://orcid.org/>
prefix ex <http://example.com/#>
prefix uuid <urn:uuid:>
entity(ex:quote1,[prov:value = "A Little Provenance Goes a Long Way" %% xsd:string])
entity(pre_0:0000-0002-3494-120X,[prov:type = 'prov:Person', foaf:name = "Luc Moreau" %% xsd:string])
wasAttributedTo(ex:quote1, pre_0:0000-0002-3494-120X)
endBundle
endDocument
!provconvert -infile test/doc1.provn -outfile test/doc1.png
!provconvert -infile test/template1.provn -outfile test/template1.png
# %load Downloads/ProvToolbox-Tutorial4-0.7.0/target/doc1.provn
document
bundle uuid:4c7236d5-6420-4a88-b192-6089e27aa88e
prefix foaf <http://xmlns.com/foaf/0.1/>
prefix pre_0 <http://orcid.org/>
prefix ex <http://example.com/#>
prefix uuid <urn:uuid:>
entity(ex:quote1,[prov:value = "A Little Provenance Goes a Long Way" %% xsd:string])
entity(pre_0:0000-0002-3494-120X,[prov:type = 'prov:Person', foaf:name = "Luc Moreau" %% xsd:string])
wasAttributedTo(ex:quote1, pre_0:0000-0002-3494-120X)
endBundle
endDocument
#------------------------to be removed ------------------------------------
%matplotlib inline
document.plot()
!cat Downloads/ProvToolbox-Tutorial4-0.7.0/Makefile
"""
Explanation: Instantiate PROV template using provconvert and compare results
End of explanation
"""
#def set_template_vars(var_dict):
# for var, value in var_dict.items():
# globals()[var] = value
"""
Explanation: Show provconvert result to compare
End of explanation
"""
# idea: input: template prov doc
# param: instantiation settings (ttl)
# output: instantiated prov doc
# impl: loop over prov entities and parameters and replace (initial version)
# later: more complex instantiation rules implementation
import abc

class PROVT(object):
    ''' Form object with attributes defined by a configurable project dictionary
    '''
    __metaclass__ = abc.ABCMeta

    def __init__(self, adict):
        """Convert a dictionary to a Form Object
        :param adict: a (hierarchical) python dictionary
        :returns Form object: a hierarchical Form object with attributes set based on the input dictionary
        """
        self.__dict__.update(adict)
        ## self.__dict__[key] = AttributeDict(**mydict) ??
        for k, v in adict.items():
            if isinstance(v, dict):
                self.__dict__[k] = PROVT(v)

    # TODO: make_prov(...) still to be implemented

    def __repr__(self):
        """
        """
        return "PROVT object "

    def __str__(self):
        return "PROVT object: %s" % self.__dict__
def test(*par):
print(par)
par =(1,2,3)
test(*par)
from prov.model import ProvDocument
from prov.dot import prov_to_dot
from IPython.display import Image
from prov.model import (
PROV_ACTIVITY, PROV_AGENT, PROV_ALTERNATE, PROV_ASSOCIATION,
PROV_ATTRIBUTION, PROV_BUNDLE, PROV_COMMUNICATION, PROV_DERIVATION,
PROV_DELEGATION, PROV_ENTITY, PROV_GENERATION, PROV_INFLUENCE,
PROV_INVALIDATION, PROV_END, PROV_MEMBERSHIP, PROV_MENTION,
PROV_SPECIALIZATION, PROV_START, PROV_USAGE, Identifier,
PROV_ATTRIBUTE_QNAMES, sorted_attributes, ProvException
)
import six
r_list = []
records = doc1.get_records()
r_list.append(records)
blist = list(doc1.bundles)
for bundle in blist:
r_list.append(bundle.records)
print(r_list)
for r in r_list:
if r:
for pp in r:
print(pp)
if pp.is_element():
print("Element")
print(pp.attributes)
for (qn,val) in pp.attributes:
print(qn,val)
pp.add_attributes({qn:'ex:tst'})
print("added attribute to ",qn,val)
if pp.is_relation():
print("Relation")
#print(pp.identifier.localpart)
#print(pp.value)
for r in r_list:
if r:
for pp in r:
print(pp)
print(doc1.get_provn())
#record = records[0]
record = records[-1]
#record.value
record.args
record.attributes
record.extra_attributes
print(record.attributes)
eid = record.identifier
print(eid)
eid.localpart
eid.namespace
print(doc1.records)
from prov.model import ProvDocument
from prov.dot import prov_to_dot
from IPython.display import Image
from prov.model import (
PROV_ACTIVITY, PROV_AGENT, PROV_ALTERNATE, PROV_ASSOCIATION,
PROV_ATTRIBUTION, PROV_BUNDLE, PROV_COMMUNICATION, PROV_DERIVATION,
PROV_DELEGATION, PROV_ENTITY, PROV_GENERATION, PROV_INFLUENCE,
PROV_INVALIDATION, PROV_END, PROV_MEMBERSHIP, PROV_MENTION,
PROV_SPECIALIZATION, PROV_START, PROV_USAGE, Identifier,
PROV_ATTRIBUTE_QNAMES, sorted_attributes, ProvException
)
import six
import itertools
def match(eid,mdict):
if eid in mdict:
return mdict[eid]
else:
print("Warning: matching of key not successful: ",eid)
return eid
def gen_graph_model(prov_doc):
new_doc = new_provdoc()
relations = []
nodes = []
new_nodes=[]
r_list = []
records = prov_doc.get_records()
r_list.append(records)
blist = list(prov_doc.bundles)
for bundle in blist:
r_list.append(bundle.records)
#print(bundle.records)
fr_list = list(itertools.chain(*r_list))
for rec in fr_list:
if rec.is_element():
nodes.append(rec)
#print(rec)
else:
relations.append(rec)
for rec in nodes:
eid = rec.identifier
attr = rec.attributes
args = rec.args
#print(eid)
#print(attr)
#print(args)
neid = match(eid._str,instance_dict)
print("match: ",neid)
new_node = new_doc.entity(Identifier(neid))
new_nodes.append(new_node)
return new_doc
new = gen_graph_model(doc1)
print(doc1.get_provn())
print(new.get_provn())
match("var:quote",instance_dict)
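The match() helper is a plain dictionary lookup with a fallback; here is a standalone sketch of the substitution idea (the instance_dict contents below are hypothetical):

```python
def match(eid, mdict):
    """Return the instantiated identifier for eid, or eid itself if unmapped."""
    if eid in mdict:
        return mdict[eid]
    print("Warning: matching of key not successful: ", eid)
    return eid

instance_dict = {'var:quote': 'ex:quote1',
                 'var:author': 'orcid:0000-0002-3494-120X'}
print(match('var:quote', instance_dict))    # mapped to ex:quote1
print(match('var:unknown', instance_dict))  # falls back to the original key
```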
def split_relation(rec):
    """Split a PROV relation record into its element nodes and other attributes."""
    args = rec.args
    # skip empty records
    if not args:
        return None
    # pick element nodes
    nodes = [
        value for attr_name, value in rec.formal_attributes
        if attr_name in PROV_ATTRIBUTE_QNAMES
    ]
    other_attributes = [
        (attr_name, value) for attr_name, value in rec.attributes
        if attr_name not in PROV_ATTRIBUTE_QNAMES
    ]
    return (nodes, other_attributes)
"""
Explanation: Test: PROV Template Instantiation class implementation
End of explanation
"""
|
shanghai-machine-learning-meetup/presentations | peek_into_keras_backend/Peek into Keras backend.ipynb | apache-2.0 | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.layers.embeddings import Embedding
from keras.layers.core import Dense, Dropout, Lambda
from keras.layers import Input, GlobalAveragePooling1D
from keras.layers.convolutional import Conv1D
from keras.layers.merge import concatenate
from keras import backend as K
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.optimizers import Adam
# Keras model graph
from IPython.display import SVG, Image
from keras.utils.vis_utils import model_to_dot
# ... and drawing helper
def draw_model_graph(model):
return SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
"""
Explanation: Presentation
This notebook is a modified version (with added comments) of a presentation given by Bart Grasza on 2017.07.15
Goals of the notebook
This notebook has multiple purposes:
what is and how to use Keras Lambda() layer
how to use it to implement e.g. math operations from ML scientific papers
how Keras is using underlying backends
what is the difference between Keras model and Theano/TensorFlow graph
The bigger idea behind all of this is:
TensorFlow/Theano is powerful but much more complex than Keras. You can easily end up writing 10 times more code. You should stick to Keras, but what if at some point you feel limited by what can be achieved using Keras? What if instead of switching to pure TensorFlow/Theano, you could use them from within Keras? I will try to explain this concept with theory and code examples.
Note: This notebook focuses on TensorFlow being used as Keras backend.
End of explanation
"""
a = [1, 2, 3, 4]
map(lambda x: x*2, a)
"""
Explanation: Python lambda - anonymous function
(you can skip this section if you know python's lambda function)
Before we dive into the Keras Lambda layer, you should get familiar with python's lambda function. You can read more about it e.g. here: http://www.secnetix.de/olli/Python/lambda_functions.hawk
Below I show how lambda works together with python's map() function, which is very similar to how it's used in the Keras Lambda layer
End of explanation
"""
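For reference, list(map(...)) and a list comprehension give the same result; the explicit list() call is needed in Python 3, where map returns an iterator:

```python
a = [1, 2, 3, 4]

doubled_map = list(map(lambda x: x * 2, a))  # works in both Python 2 and 3
doubled_comp = [x * 2 for x in a]            # equivalent list comprehension

print(doubled_map)   # [2, 4, 6, 8]
print(doubled_comp)  # [2, 4, 6, 8]
```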
a = [[1, 1],
[2, 2],
[3, 3],
[4, 4]]
map(lambda x: x[0]*x[1], a)
"""
Explanation: If the argument passed is a list of lists, you can refer to its elements in the standard fashion:
End of explanation
"""
a = [1, 2, 3, 4]
# Syntax:
# map(lambda x: x*2, a)
# is similar to:
def times_two(x):
return x*2
map(times_two, a)
"""
Explanation: The lambda function can be "un-anonymized"
End of explanation
"""
X1 = np.random.rand(1000, 50)
X2 = np.random.rand(1000, 50)
Y = np.random.rand(1000,)
input = Input(shape=(50,))
dense_layer = Dense(30)(input)
##################
merge_layer = Lambda(lambda x: x*2)(dense_layer)
##################
output = Dense(1)(merge_layer)
model = Model(inputs=input, outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=X1, y=Y)
draw_model_graph(model)
"""
Explanation: Keras Lambda layer - custom operations on model
Below is a simple Keras model (fed with randomly generated data) showing how you can perform custom operations on data from the previous layer. Here we take the data from dense_layer and multiply it by 2
End of explanation
"""
Image(filename='images/cos_distance.png')
"""
Explanation: Model using custom operation (cosine similarity) on "output"
All the examples in the Keras online documentation and in the http://fast.ai course show how to build a model whose output is a fully connected ( Dense() ) layer.
Here I show an example of a model which, given 2 vectors on the input, outputs the cosine similarity score between these vectors. Cosine similarity takes values in < -1, 1 >, where values close to 1 mean the 2 vectors are similar and values close to -1 mean they are very different.
The exact equation is:
End of explanation
"""
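Before the Keras version, the cosine similarity formula can be sanity-checked in plain Python (a standalone sketch, independent of the model code below):

```python
import math

def cosine_similarity(p, q):
    """cos(p, q) = (p . q) / (||p||_2 * ||q||_2)"""
    dot = sum(pi * qi for pi, qi in zip(p, q))
    norm_p = math.sqrt(sum(pi * pi for pi in p))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    return dot / (norm_p * norm_q)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))   # 1.0  (same direction)
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # -1.0 (opposite direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   # 0.0  (orthogonal)
```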
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
x1_l2_norm = K.l2_normalize(layer_input[0], axis=1)
print(x1_l2_norm)
print(type(x1_l2_norm))
x2_l2_norm = K.l2_normalize(layer_input[1], axis=1)
mat_mul = K.dot(x1_l2_norm,
K.transpose(x2_l2_norm))
return K.sum(mat_mul, axis=1)
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=[X1, X2], y=Y)
draw_model_graph(model)
"""
Explanation: I skip the details of the Keras implementation of the above equation and instead focus on the Lambda layer.
Notes:
- our output layer isn't Dense() but Lambda().
- the print() statement shows that an example Tensor from the Lambda calculation is of type tensorflow.python.framework.ops.Tensor
- the Keras documentation for the Lambda layer says that for TensorFlow you don't need to specify output_shape, but that is not always true. In the case of the model below, which accepts 2 inputs, Keras couldn't figure out the correct output shape.
End of explanation
"""
# copied from Keras, file: tensorflow_backend.py
def l2_normalize(x, axis):
"""Normalizes a tensor wrt the L2 norm alongside the specified axis.
# Arguments
x: Tensor or variable.
axis: axis along which to perform normalization.
# Returns
A tensor.
"""
if axis < 0:
axis %= len(x.get_shape())
return tf.nn.l2_normalize(x, dim=axis)
"""
Explanation: Let's have a look at the backend implementation.
If you git clone the Keras repository (https://github.com/fchollet/keras) and open the file keras/backend/tensorflow_backend.py,
then look for the l2_normalize(x, axis) method, you will see this:
End of explanation
"""
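The effect of L2 normalization is easy to illustrate with NumPy (a standalone sketch mirroring the backend function, not the actual Keras code): every row is scaled to unit Euclidean length.

```python
import numpy as np

def l2_normalize_np(x, axis):
    """NumPy analogue of K.l2_normalize / tf.nn.l2_normalize."""
    norm = np.sqrt(np.sum(np.square(x), axis=axis, keepdims=True))
    return x / norm

x = np.array([[3.0, 4.0],
              [0.0, 2.0]])
normalized = l2_normalize_np(x, axis=1)
print(normalized)                          # rows scaled to unit length
print(np.linalg.norm(normalized, axis=1))  # [1. 1.]
```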
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
x1_l2_norm = tf.nn.l2_normalize(layer_input[0], dim=1) # notice axis -> dim
x2_l2_norm = tf.nn.l2_normalize(layer_input[1], dim=1)
mat_mul = tf.matmul(x1_l2_norm,
tf.transpose(x2_l2_norm))
return tf.reduce_sum(mat_mul, axis=1)
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
model.fit(x=[X1, X2], y=Y)
"""
Explanation: As you can see, the Keras l2_normalize is a very thin layer on top of the TensorFlow tf.nn.l2_normalize function.
What would happen if we just replaced all Keras functions with their TensorFlow equivalents?
Below you can see exactly the same model as above, but using TensorFlow methods directly
End of explanation
"""
Image(filename='images/vec_sim.png', width=400)
"""
Explanation: It still works!
What are the benefits of doing that?
TensorFlow has a very rich library of functions, some of which are not available in Keras, so this way you can still use all the goodies from TensorFlow!
What are the drawbacks?
The moment you start using TensorFlow directly, your code stops working on Theano. That is not a big problem if you have made the choice of always using a single Keras backend.
Example: how to use what we've learned so far to implement a more complex NN
In this section we will digress slightly from the main topic and show how you can implement part of a real scientific paper.
The paper we're going to use is: http://www.mit.edu/~jonasm/info/MuellerThyagarajan_AAAI16.pdf
The goal is to train a NN to measure the similarity between 2 sentences. You can read the paper for details of how the sentences are transformed into vectors; here we only focus on the last part, where each sentence is already in vector format.
The idea is that the closer the score of the compared sentences is to 1, the more similar the sentences are; a score close to 0 means the sentences are different.
In the PDF, If you scroll to the bottom of page 3, you can see following equation:
End of explanation
"""
x = np.linspace(-5, 5, 50)
y = np.exp(x)
plt.figure(figsize=(10, 5))
plt.plot(x, y)
"""
Explanation: Let's try to break down this equation.
First, notice that $h_{T_a}^{(a)}$ and $h_{T_b}^{(b)}$ represent the sentences transformed into their vector representations.
exp() is simply $e^x$, and if you plot it, it looks like this:
End of explanation
"""
x = np.linspace(-1, 1, 50)
y = np.exp(x)
plt.figure(figsize=(10,10))
plt.plot(x, y)
"""
Explanation: If we zoom in on the $x$ axis to $<-1, 1>$, we can see that for $x = 0$ the value is 1
End of explanation
"""
x = np.linspace(0, 6, 50)
y = np.exp(-x)
plt.figure(figsize=(10,5))
plt.plot(x, y)
"""
Explanation: Now we can flip it horizontally about the $f(x)$ axis by just adding a minus sign to $x$: $e^{-x}$
The plot changes to:
End of explanation
"""
input_1 = Input(shape=(50,))
d1 = Dense(30)(input_1)
input_2 = Input(shape=(50,))
d2 = Dense(30)(input_2)
def merge_layer(layer_input):
v1 = layer_input[0]
v2 = layer_input[1]
# L1 distance operations
sub_op = tf.subtract(v1, v2)
print("tensor: %s" % sub_op)
print("sub_op.shape: %s" % sub_op.shape)
abs_op = tf.abs(sub_op)
print("abs_op.shape: %s" % abs_op.shape)
sum_op = tf.reduce_sum(abs_op, axis=-1)
print("sum_op.shape: %s" % sum_op.shape)
# ... followed by exp(-x) part
reverse_op = -sum_op
out = tf.exp(reverse_op)
return out
# The same but in single line
# pred = tf.exp(-tf.reduce_sum(tf.abs(tf.subtract(v1, v2)), axis=-1))
output = Lambda(merge_layer, output_shape=(1,))([d1, d2])
model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(loss='mse', optimizer='adam')
m = model.fit(x=[X1, X2], y=Y)
"""
Explanation: The higher the value of $x$, the closer $f(x)$ is to 0!
Now we just need to find a way to combine 2 vectors so that:
- when they are similar, they give an $x$ close to 0, and
- the more different they are, the higher the value of $x$.
To achieve that, the authors used the L1 distance (https://en.wikipedia.org/wiki/Taxicab_geometry), which is simply the sum of the absolute differences (method tf.abs()) between the values in each dimension. It is represented by $\lVert \mathbf{p - q} \rVert_1$ in our original equation.
All of the above is implemented by the code in merge_layer(layer_input):
End of explanation
"""
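The full similarity $g(h_a, h_b) = \exp(-\lVert h_a - h_b \rVert_1)$ can also be checked in plain Python (a standalone sketch): identical vectors score exactly 1, and the score decays toward 0 as the vectors diverge.

```python
import math

def l1_exp_similarity(p, q):
    """g(p, q) = exp(-||p - q||_1), the similarity function from the paper."""
    l1 = sum(abs(pi - qi) for pi, qi in zip(p, q))
    return math.exp(-l1)

print(l1_exp_similarity([1.0, 2.0], [1.0, 2.0]))  # 1.0 (identical vectors)
print(l1_exp_similarity([1.0, 2.0], [1.5, 2.5]))  # exp(-1), roughly 0.368
print(l1_exp_similarity([0.0, 0.0], [5.0, 5.0]))  # close to 0
```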
from keras.datasets import mnist
import keras
num_classes = 10
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print(y_train.shape)
input = Input(shape=(784,))
y_input = Input(shape=(10,)) # <--
dense1 = Dense(512, activation='relu')(input)
x = Dropout(0.2)(dense1)
dense2 = Dense(512, activation='relu')(x)
x = Dropout(0.2)(dense2)
output = Dense(10, activation='softmax')(x)
def calc_accuracy(x):
y = tf.argmax(x[0], axis=-1)
predictions = tf.argmax(x[1], axis=-1)
comparison = tf.cast(tf.equal(predictions, y), dtype=tf.float32)
return tf.reduce_sum(comparison) * 1. / len(x_test)
accuracy = Lambda(calc_accuracy)([y_input, output]) # <--
model = Model(inputs=input, outputs=output)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=2)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
"""
Explanation: Writing custom layer
When should you use Lambda, and when do you need to write a custom layer?
Quote from:
https://keras.io/layers/writing-your-own-keras-layers/ :
"For simple, stateless custom operations, you are probably better off using layers.core.Lambda layers. But for any custom operation that has trainable weights, you should implement your own layer."
-----------------------
Full list of Keras backend functions with types
The list below contains all functions from the Keras repository file tensorflow_backend.py.
I'm showing them here so that you can quickly see how many useful methods it has; notice the many similar methods implemented in NumPy.
INTERNAL UTILS
get_uid(prefix='')
reset_uids()
clear_session()
manual_variable_initialization(value)
learning_phase()
set_learning_phase(value)
get_session()
set_session(session)
VARIABLE MANIPULATION
_convert_string_dtype(dtype)
_to_tensor(x, dtype)
is_sparse(tensor)
to_dense(tensor)
variable(value, dtype=None, name=None)
_initialize_variables()
constant(value, dtype=None, shape=None, name=None)
is_keras_tensor(x)
placeholder(shape=None, ndim=None, dtype=None, sparse=False, name=None)
shape(x)
int_shape(x)
ndim(x)
dtype(x)
eval(x)
zeros(shape, dtype=None, name=None)
ones(shape, dtype=None, name=None)
eye(size, dtype=None, name=None)
zeros_like(x, dtype=None, name=None)
ones_like(x, dtype=None, name=None)
identity(x)
random_uniform_variable(shape, low, high, dtype=None, name=None, seed=None)
random_normal_variable(shape, mean, scale, dtype=None, name=None, seed=None)
count_params(x)
cast(x, dtype)
UPDATES OPS
update(x, new_x)
update_add(x, increment)
update_sub(x, decrement)
moving_average_update(x, value, momentum)
LINEAR ALGEBRA
dot(x, y)
batch_dot(x, y, axes=None)
transpose(x)
gather(reference, indices)
ELEMENT-WISE OPERATIONS
_normalize_axis(axis, ndim)
max(x, axis=None, keepdims=False)
min(x, axis=None, keepdims=False)
sum(x, axis=None, keepdims=False)
prod(x, axis=None, keepdims=False)
cumsum(x, axis=0)
cumprod(x, axis=0)
var(x, axis=None, keepdims=False)
std(x, axis=None, keepdims=False)
mean(x, axis=None, keepdims=False)
any(x, axis=None, keepdims=False)
all(x, axis=None, keepdims=False)
argmax(x, axis=-1)
argmin(x, axis=-1)
square(x)
abs(x)
sqrt(x)
exp(x)
log(x)
logsumexp(x, axis=None, keepdims=False)
round(x)
sign(x)
pow(x, a)
clip(x, min_value, max_value)
equal(x, y)
not_equal(x, y)
greater(x, y)
greater_equal(x, y)
less(x, y)
less_equal(x, y)
maximum(x, y)
minimum(x, y)
sin(x)
cos(x)
normalize_batch_in_training(x, gamma, beta,
batch_normalization(x, mean, var, beta, gamma, epsilon=1e-3)
SHAPE OPERATIONS
concatenate(tensors, axis=-1)
reshape(x, shape)
permute_dimensions(x, pattern)
resize_images(x, height_factor, width_factor, data_format)
resize_volumes(x, depth_factor, height_factor, width_factor, data_format)
repeat_elements(x, rep, axis)
repeat(x, n)
arange(start, stop=None, step=1, dtype='int32')
tile(x, n)
flatten(x)
batch_flatten(x)
expand_dims(x, axis=-1)
squeeze(x, axis)
temporal_padding(x, padding=(1, 1))
spatial_2d_padding(x, padding=((1, 1), (1, 1)), data_format=None)
spatial_3d_padding(x, padding=((1, 1), (1, 1), (1, 1)), data_format=None)
stack(x, axis=0)
one_hot(indices, num_classes)
reverse(x, axes)
VALUE MANIPULATION
get_value(x)
batch_get_value(ops)
set_value(x, value)
batch_set_value(tuples)
get_variable_shape(x)
print_tensor(x, message='')
GRAPH MANIPULATION
(class) class Function(object)
function(inputs, outputs, updates=None, **kwargs)
gradients(loss, variables)
stop_gradient(variables)
CONTROL FLOW
rnn(step_function, inputs, initial_states,
_step(time, output_ta_t, *states)
_step(time, output_ta_t, *states)
switch(condition, then_expression, else_expression)
then_expression_fn()
else_expression_fn()
in_train_phase(x, alt, training=None)
in_test_phase(x, alt, training=None)
NN OPERATIONS
relu(x, alpha=0., max_value=None)
elu(x, alpha=1.)
softmax(x)
softplus(x)
softsign(x)
categorical_crossentropy(output, target, from_logits=False)
sparse_categorical_crossentropy(output, target, from_logits=False)
binary_crossentropy(output, target, from_logits=False)
sigmoid(x)
hard_sigmoid(x)
tanh(x)
dropout(x, level, noise_shape=None, seed=None)
l2_normalize(x, axis)
in_top_k(predictions, targets, k)
CONVOLUTIONS
_preprocess_deconv3d_output_shape(x, shape, data_format)
_preprocess_deconv_output_shape(x, shape, data_format)
_preprocess_conv2d_input(x, data_format)
_preprocess_conv3d_input(x, data_format)
_preprocess_conv2d_kernel(kernel, data_format)
_preprocess_conv3d_kernel(kernel, data_format)
_preprocess_padding(padding)
_postprocess_conv2d_output(x, data_format)
_postprocess_conv3d_output(x, data_format)
conv1d(x, kernel, strides=1, padding='valid', data_format=None, dilation_rate=1)
conv2d(x, kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=1)
conv2d_transpose(x, kernel, output_shape, strides=(1, 1), padding='valid', data_format=None)
separable_conv2d(x, depthwise_kernel, pointwise_kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1))
depthwise_conv2d(x, depthwise_kernel, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1))
conv3d(x, kernel, strides=(1, 1, 1), padding='valid', data_format=None, dilation_rate=(1, 1, 1))
conv3d_transpose(x, kernel, output_shape, strides=(1, 1, 1), padding='valid', data_format=None)
pool2d(x, pool_size, strides=(1, 1), padding='valid', data_format=None, pool_mode='max')
pool3d(x, pool_size, strides=(1, 1, 1), padding='valid', data_format=None, pool_mode='max')
bias_add(x, bias, data_format=None)
RANDOMNESS
random_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None)
random_uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None)
random_binomial(shape, p=0.0, dtype=None, seed=None)
truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None)
CTC
TensorFlow has a native implementation, but it uses sparse tensors
and therefore requires a wrapper for Keras. The functions below convert
dense to sparse tensors and also wrap up the beam search code that is
in TensorFlow's CTC implementation
ctc_label_dense_to_sparse(labels, label_lengths)
range_less_than(_, current_input)
ctc_batch_cost(y_true, y_pred, input_length, label_length)
ctc_decode(y_pred, input_length, greedy=True, beam_width=100, top_paths=1)
HIGH ORDER FUNCTIONS
map_fn(fn, elems, name=None, dtype=None)
foldl(fn, elems, initializer=None, name=None)
foldr(fn, elems, initializer=None, name=None)
local_conv1d(inputs, kernel, kernel_size, strides, data_format=None)
local_conv2d(inputs, kernel, kernel_size, strides, output_shape, data_format=None)
-----------------------
Computational graph
So far all our Keras models have been based on the Model() class. But with our new knowledge I encourage you to stop thinking of your artificial neural network implemented in Keras as only a Model() being a stack of layers, one after another.
To fully understand what I mean by that (and the code below), first read the introduction to TensorFlow: https://www.tensorflow.org/get_started/get_started .
Now, armed with this new knowledge, it will be easier to understand why and how the code below works.
The original code (MNIST from the Keras repository) has been extended to show how you can write Keras code without being restricted to the Keras Model() class, and instead build a graph in a way similar to how you would using pure TensorFlow.
Why would you do that? Because Keras is much simpler and less verbose than TensorFlow, it allows you to move faster. Whenever you feel restricted by the standard Model() approach, you can get away with extending your graph in a way similar to what I present below.
In this example we extend the graph to calculate accuracy (I'm aware that accuracy is built into the Keras model.fit() and model.evaluate() methods, but the purpose of this example is to show how you can add more operations to the graph).
Having said that, the original MNIST graph has been extended with:
another Input: y_input (in TensorFlow called a Placeholder) to feed the true Y values you will check your model's output against
a Lambda layer accuracy to perform the actual calculation
End of explanation
"""
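The argmax/equal/reduce_sum chain inside calc_accuracy mirrors the following plain NumPy computation (a standalone sketch on toy data, not part of the model):

```python
import numpy as np

def accuracy_np(y_true_onehot, y_pred_probs):
    """NumPy analogue of the calc_accuracy Lambda above."""
    y = np.argmax(y_true_onehot, axis=-1)
    predictions = np.argmax(y_pred_probs, axis=-1)
    comparison = (predictions == y).astype(np.float32)
    return comparison.sum() / len(y_true_onehot)

y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
y_pred = np.array([[0.9, 0.05, 0.05],  # correct
                   [0.1, 0.8, 0.1],    # correct
                   [0.3, 0.5, 0.2],    # wrong (predicts class 1)
                   [0.6, 0.2, 0.2]])   # correct
print(accuracy_np(y_true, y_pred))  # 0.75
```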
draw_model_graph(model)
"""
Explanation: And while our draw_model_graph helper function shows this
End of explanation
"""
Image(filename='images/graph_with_acc.png', width=500)
"""
Explanation: It's actually closer to this:
(blue color represents our Model())
End of explanation
"""
Image(filename='images/tensorboard_graph_1.png', height=500)
"""
Explanation: The same graph represented as screenshot from Tensorboard (with hidden Dropout layers)
First, the "main" model:
End of explanation
"""
Image(filename='images/tensorboard_graph_2.png', height=500)
"""
Explanation: and the same model but with accuracy
( Compare this graph elements with our calc_accuracy method used by Lambda layer )
End of explanation
"""
# Note: This code will return error, read comment below to understand why.
accuracy_fn = K.function(inputs=[input, y_input],
outputs=[accuracy])
accuracy_fn([x_test, y_test])
"""
Explanation: How to execute our new accuracy operation?
Simply use K.function().
Keras documentation says:
def function(inputs, outputs, updates=None, **kwargs):
Instantiates a Keras function.
# Arguments
inputs: List of placeholder tensors.
outputs: List of output tensors.
updates: List of update ops.
**kwargs: Passed to `tf.Session.run`.
# Returns
Output values as Numpy arrays.
If you check the original implementation, you will see that it's a TensorFlow wrapper of session.run(), with your inputs used as the feed_dict argument:
session.run([outputs], feed_dict=inputs_dictionary)
Refer to the introduction to TensorFlow if you don't understand what I'm talking about.
So, to calculate accuracy using K.function() on our test set, we need to do the following:
End of explanation
"""
accuracy_fn = K.function(inputs=[input, y_input, K.learning_phase()],
outputs=[accuracy])
accuracy_fn([x_test, y_test, 0])
"""
Explanation: In order to make it work, we should tell Keras/TensorFlow in which phase (train or test) we want to execute our K.function().
Why? Because some layers have to know which phase they are in, e.g. Dropout() should be skipped in the test phase.
Here is our code extended with an additional input (in TensorFlow called a Placeholder) with the phase K.learning_phase() set to 0, which means the test phase.
End of explanation
"""
# input_values is an single 25x25 flattened "image" with random values
input_values = np.random.rand(1, 784)
second_layer = K.function(inputs=[input],
outputs=[dense1])
dense1_output = second_layer([input_values, 0])[0]
print("Our dense1 layer output shape is")
dense1_output.shape
"""
Explanation: You can compare this output with the original model.evaluate() output to see that they are similar.
In other words, K.function() (similarly to TensorFlow's session.run()) allows you to run custom operations on sub-parts of your graph.
The example below shows how to get the output of our model's dense1 layer
End of explanation
"""
|
FordyceLab/AcqPack | notebooks/Test20170524.ipynb | mit | import time
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
"""
Explanation: SETUP
End of explanation
"""
# config directory must have "__init__.py" file
# from the 'config' directory, import the following classes:
from config import Motor, ASI_Controller, Autosipper
from config import utils as ut
autosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))
autosipper.coord_frames
from config import gui
gui.stage_control(autosipper.XY, autosipper.Z)
# add/determine deck info
autosipper.coord_frames.deck.position_table = ut.read_delim_pd('config/position_tables/deck')
# check deck alignment
# CLEAR DECK OF OBSTRUCTIONS!!
autosipper.go_to('deck', ['name'],'align')
# add plate
from config import utils as ut
platemap = ut.generate_position_table((8,8),(9,9),93.5)
platemap
ut.lookup(platemap)
"""
Explanation: Autosipper
End of explanation
"""
from config import Manifold
manifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)
manifold.valvemap[manifold.valvemap.name>0]
def valve_states():
tmp = []
for i in [2,0,14,8]:
status = 'x'
if manifold.read_valve(i):
status = 'o'
tmp.append([status, manifold.valvemap.name.iloc[i]])
return pd.DataFrame(tmp)
tmp = []
for i in range(16):
status = 'x'
if manifold.read_valve(i):
status = 'o'
name = manifold.valvemap.name.iloc[i]
tmp.append([status, name])
pd.DataFrame(tmp).replace(np.nan, '')
name = 'inlet_in'
v = manifold.valvemap['valve'][manifold.valvemap.name==name]
v=14
manifold.depressurize(v)
manifold.pressurize(v)
manifold.exit()
"""
Explanation: Manifold
End of explanation
"""
# !!!! Also must have MM folder on system PATH
# mm_version = 'C:\Micro-Manager-1.4'
# cfg = 'C:\Micro-Manager-1.4\SetupNumber2_05102016.cfg'
mm_version = 'C:\Program Files\Micro-Manager-2.0beta'
cfg = 'C:\Program Files\Micro-Manager-2.0beta\Setup2_20170413.cfg'
import sys
sys.path.insert(0, mm_version) # make it so python can find MMCorePy
import MMCorePy
from PIL import Image
core = MMCorePy.CMMCore()
core.loadSystemConfiguration(cfg)
core.setProperty("Spectra", "White_Enable", "1")
core.waitForDevice("Spectra")
core.setProperty("Cam Andor_Zyla4.2", "Sensitivity/DynamicRange", "16-bit (low noise & high well capacity)") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)
core.setProperty("Spectra", "White_Enable", "0")
"""
Explanation: Micromanager
End of explanation
"""
log = []
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence observed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'fill'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
#closed all
autosipper.Z.move(93.5)
manifold.depressurize(2)
manifold.depressurize(0)
log.append([time.ctime(time.time()), 'open inlet_in, inlet_out'])
valve_states()
text = 'fluorescence removed'
log.append([time.ctime(time.time()), text])
text = 'CLOSE inlet_out'
manifold.pressurize(0)
log.append([time.ctime(time.time()), text])
text = 'OPEN chip_in, chip_out'
manifold.depressurize(14)
manifold.depressurize(8)
log.append([time.ctime(time.time()), text])
valve_states()
text = 'flush'
log.append([time.ctime(time.time()), text])
manifold.pressurize(8)
for i in [2,0,14,8]:
manifold.pressurize(i)
"""
Explanation: Preset: 1_PBP
ConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP
Preset: 2_BF
ConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF
Preset: 3_DAPI
ConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI
Preset: 4_eGFP
ConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP
Preset: 5_Cy5
ConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5
Preset: 6_AttoPhos
ConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos
TEST
4.5 psi, 25 psi valves
End of explanation
"""
log
core.setConfig('Channel','2_BF')
core.setProperty(core.getCameraDevice(), "Exposure", 20)
core.snapImage()
img = core.getImage()
plt.imshow(img,cmap='gray')
image = Image.fromarray(img)
# image.save('TESTIMAGE.tif')
import config.utils as ut
position_list = ut.load_mm_positionlist("C:/Users/fordycelab/Desktop/D1_cjm.pos")
position_list
def acquire():
for i in xrange(len(position_list)):
si = str(i)
x,y = position_list[['x','y']].iloc[i]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(core, log, 'moved '+si)
core.snapImage()
# core.waitForDevice(core.getCameraDevice())
logadd(core, log, 'snapped '+si)
img = core.getImage()
logadd(core, log, 'got image '+si)
image = Image.fromarray(img)
image.save('images/images_{}.tif'.format(i))
logadd(core, log, 'saved image '+si)
x,y = position_list[['x','y']].iloc[0]
core.setXYPosition(x,y)
core.waitForDevice(core.getXYStageDevice())
logadd(core, log, 'moved '+ str(0))
def logadd(core,log,st):
log.append([time.ctime(time.time()), st])
core.logMessage(st)
print log[-1]
# Trial 1: returning stage to home at end of acquire
log = []
for i in xrange(15):
sleep = (10*i)*60
    logadd(core, log, 'STRT SLEEP '+ str(sleep/60) + ' min')
    time.sleep(sleep)
    logadd(core, log, 'ACQ STARTED '+str(i))
acquire()
core.stopSecondaryLogFile(l2)
# Trial 2: returning stage to home at end of acquire
# added mm logs
# core.setPrimaryLogFile('20170524_log_prim2.txt')
# core.enableDebugLog(True)
# core.enableStderrLog(True)
l2 = core.startSecondaryLogFile('20170524_log_sec3.txt', True, False, True)
log = []
for i in xrange(15):
sleep = (10*i)*60
logadd(core, log, 'STRT SLEEP '+ str(sleep/60) + ' min')
time.sleep(sleep)
logadd(core, log, 'ACQ STARTED '+str(i))
acquire()
core.stopSecondaryLogFile(l2)
# Trial 3: returning stage to home at end of acquire
# added mm logs
# core.setPrimaryLogFile('20170524_log_prim2.txt')
# core.enableDebugLog(True)
# core.enableStderrLog(True)
l2 = core.startSecondaryLogFile('20170524_log_sec3.txt', True, False, True)
log = []
for i in xrange(15):
sleep = (10*i)*60
logadd(core, log, 'STRT SLEEP '+ str(sleep/60) + ' min')
time.sleep(sleep)
logadd(core, log, 'ACQ STARTED '+str(i))
acquire()
core.stopSecondaryLogFile(l2)
# Trial 4: returning stage to home at end of acquire
# added mm logs
# after USB fix
# core.setPrimaryLogFile('20170524_log_prim2.txt')
# core.enableDebugLog(True)
# core.enableStderrLog(True)
l2 = core.startSecondaryLogFile('20170526_log_sec4.txt', True, False, True)
log = []
for i in xrange(15):
sleep = (5*i)*60
logadd(core, log, 'STRT SLEEP '+ str(sleep/60) + ' min')
time.sleep(sleep)
logadd(core, log, 'ACQ STARTED '+str(i))
acquire()
core.stopSecondaryLogFile(l2)
# Trial 5: returning stage to home at end of acquire
# second one after USB fix
# core.setPrimaryLogFile('20170524_log_prim2.txt')
# core.enableDebugLog(True)
# core.enableStderrLog(True)
l2 = core.startSecondaryLogFile('20170526_log_sec5.txt', True, False, True)
log = []
for i in xrange(15):
sleep = (10*i)*60
logadd(core, log, 'STRT SLEEP '+ str(sleep/60) + ' min')
time.sleep(sleep)
logadd(core, log, 'ACQ STARTED '+str(i))
acquire()
core.stopSecondaryLogFile(l2)
# Auto
core.setAutoShutter(True) # default
core.snapImage()
# Manual
core.setAutoShutter(False) # disable auto shutter
core.setProperty("Shutter", "State", "1")
core.waitForDevice("Shutter")
core.snapImage()
core.setProperty("Shutter", "State", "0")
"""
Explanation: ACQUISITION
End of explanation
"""
core.getFocusDevice()
core.getCameraDevice()
core.getXYStageDevice()
core.getDevicePropertyNames(core.getCameraDevice())
"""
Explanation: MM Get info
End of explanation
"""
import cv2
from IPython import display
import numpy as np
from ipywidgets import widgets
import time
# core.initializeCircularBuffer()
# core.setCircularBufferMemoryFootprint(4096) # MiB
# video with button (CV2)
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
core.startContinuousSequenceAcquisition(1000) # time overridden by exposure
time.sleep(.2)
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
time.sleep(.015)
if core.getRemainingImageCount() > 0:
img = core.getLastImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
core.stopSequenceAcquisition()
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
# video with button (CV2)
# serial snap image
live = widgets.Button(description='Live')
close = widgets.Button(description='Close')
display.display(widgets.HBox([live, close]))
def on_live_clicked(b):
display.clear_output(wait=True)
print 'LIVE'
cv2.namedWindow('Video', cv2.WINDOW_NORMAL)
cv2.setWindowProperty('Video', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
cv2.resizeWindow('Video', 500,500)
img = np.zeros((500,500))
print 'To stop, click window + press ESC'
while(1):
core.snapImage()
time.sleep(.05)
img = core.getImage()
cv2.imshow('Video',img)
k = cv2.waitKey(30)
if k==27: # ESC key; may need 255 mask?
break
print 'STOPPED'
def on_close_clicked(b):
if core.isSequenceRunning():
core.stopSequenceAcquisition()
cv2.destroyWindow('Video')
live.on_click(on_live_clicked)
close.on_click(on_close_clicked)
cv2.destroyAllWindows()
"""
Explanation: Video
End of explanation
"""
# snap (CV2)
snap = widgets.Button(description='Snap')
close2 = widgets.Button(description='Close')
display.display(widgets.HBox([snap, close2]))
def on_snap_clicked(b):
cv2.destroyWindow('Snap')
cv2.namedWindow('Snap',cv2.WINDOW_NORMAL)
cv2.resizeWindow('Snap', 500,500)
cv2.setWindowProperty('Snap', cv2.WND_PROP_ASPECT_RATIO, cv2.WINDOW_KEEPRATIO)
core.snapImage()
time.sleep(.1)
img = core.getImage()
cv2.imshow('Snap',img)
k = cv2.waitKey(30)
def on_close2_clicked(b):
cv2.destroyWindow('Snap')
snap.on_click(on_snap_clicked)
close2.on_click(on_close2_clicked)
"""
Explanation: SNAP CV2
End of explanation
"""
autosipper.exit()
manifold.exit()
core.unloadAllDevices()
core.reset()
print 'closed'
import config.gui as gui
gui.video(core)
gui.snap(core, 'mpl')
gui.manifold_control(manifold)
"""
Explanation: EXIT
End of explanation
"""
|
kunaltyagi/SDES | notes/python/p_norvig/logic/Mean Misanthrope Density.ipynb | gpl-3.0 | from statistics import mean
def occ(n):
"The expected occupancy for a row of n houses (under misanthrope rules)."
return (0 if n == 0 else
1 if n == 1 else
mean(occ(L) + 1 + occ(R)
for (L, R) in runs(n)))
def runs(n):
"""A list [(L, R), ...] where the i-th tuple contains the lengths of the runs
of acceptable houses to the left and right of house i."""
return [(max(0, i - 1), max(0, n - i - 2))
for i in range(n)]
def density(n): return occ(n) / n
"""
Explanation: The Puzzle of the Misanthropic Neighbors
Consider this puzzle from The Riddler:
The misanthropes are coming. Suppose there is a row of some number, N, of houses in a new, initially empty development. Misanthropes are moving into the development one at a time and selecting a house at random from those that have nobody in them and nobody living next door. They keep on coming until no acceptable houses remain. At most, one out of two houses will be occupied; at least one out of three houses will be. But what’s the expected fraction of occupied houses as the development gets larger, that is, as N goes to infinity?
Consider N=4 Houses
To make sure we understand the problem, let's try a simple example, with N=4 houses. We will represent the originally empty row of four houses by four dots:
....
Now the first person chooses one of the four houses (which are all acceptable). We'll indicate an occupied house by a 1, so the four equiprobable choices are:
1...
.1..
..1.
...1
When a house is occupied, any adjacent houses become unacceptable. We'll indicate that with a 0:
10..
010.
.010
..01
In all four cases, a second occupant has a place to move in, but then there is no place for a third occupant.
The occupancy is 2 in all cases, and thus the expected occupancy is 2. The occupancy fraction, or density, is 2/N = 2/4 = 1/2.
Consider N=7 Houses
With N=7 houses, there are 7 equiprobable choices for the first occupant:
10.....
010....
.010...
..010..
...010.
....010
.....01
Now we'll add something new: the lengths of the runs of consecutive acceptable houses to the left and right of the chosen house:
10..... runs = (0, 5)
010.... runs = (0, 4)
.010... runs = (1, 3)
..010.. runs = (2, 2)
...010. runs = (3, 1)
....010 runs = (4, 0)
.....01 runs = (5, 0)
Defining occ(n)
This gives me a key hint as to how to analyze the problem. I'll define occ(n) to be the expected number of occupied houses in a row of n houses. So:
10..... runs = (0, 5); occupancy = occ(0) + 1 + occ(5)
010.... runs = (0, 4); occupancy = occ(0) + 1 + occ(4)
.010... runs = (1, 3); occupancy = occ(1) + 1 + occ(3)
..010.. runs = (2, 2); occupancy = occ(2) + 1 + occ(2)
...010. runs = (3, 1); occupancy = occ(3) + 1 + occ(1)
....010 runs = (4, 0); occupancy = occ(4) + 1 + occ(0)
.....01 runs = (5, 0); occupancy = occ(5) + 1 + occ(0)
So we can say that occ(n) is:
0 when n is 0 (because no houses means no occupants),
1 when n is 1 (because one isolated acceptable house has one occupant),
else the mean, over the n choices for the first occupied house,
of the sum of that house plus the occupancy of the runs to the left and right.
We can implement that:
End of explanation
"""
occ(4)
"""
Explanation: Let's check that occ(4) is 2, as we computed it should be:
End of explanation
"""
runs(7)
"""
Explanation: And that runs(7) is what we described above:
End of explanation
"""
occ(7)
density(7)
"""
Explanation: Let's check on n = 7:
End of explanation
"""
def occ(n, cache=[0, 1]):
"The expected occupancy for a row of n houses (under misanthrope rules)."
# Store occ(i) in cache[i] for all as-yet-uncomputed values of i up to n:
for i in range(len(cache), n+1):
cache.append(mean(cache[L] + 1 + cache[R]
for (L, R) in runs(i)))
return cache[n]
"""
Explanation: That seems reasonable, but I don't know for sure that it is correct.
Dynamic Programming Version of occ
The computation of occ(n) makes multiple calls to occ(n-1), occ(n-2), etc. To avoid re-computing the same calls over and over, we will modify occ to save previous results using dynamic programming. I could implement that in one line with the decorator @functools.lru_cache, but instead I will explicitly manage a list, cache, such that cache[n] holds occ(n):
End of explanation
"""
occ(4) == 2
density(7)
"""
Explanation: Let's make sure this new version gets the same results as the old version:
End of explanation
"""
%time occ(2000)
%time occ(2000)
"""
Explanation: Let's make sure the caching makes computation faster the second time:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def plot_density(ns):
"Plot density(n) for each n in the list of numbers ns."
plt.xlabel('n houses'); plt.ylabel('density(n)')
plt.plot(ns, [density(n) for n in ns], 's-')
return density(ns[-1])
plot_density(range(1, 100))
"""
Explanation: Plotting density(n)
To get a feel for the limit of density(n), start by drawing a plot over some small values of n:
End of explanation
"""
plot_density(range(1, 11))
"""
Explanation: There is something funny going on with the first few values of n. Let's separately look at the first few:
End of explanation
"""
plot_density(range(100, 4000, 50))
"""
Explanation: And at a wider range:
End of explanation
"""
def diff(n, m): return density(n) - density(m)
diff(100, 200)
"""
Explanation: The density is going down, and the curve is almost but not quite flat.
lim <sub>n → ∞</sub> density(n)
The puzzle
is to figure out the limit of density(n) as n goes to infinity. The plot above makes it look like 0.432+something, but we can't answer the question just by plotting; we'll need to switch modes from computational thinking to mathematical thinking.
At this point I started playing around with density(n), looking at various values, differences of values, ratios of values, and ratios of differences of values, hoping to achieve some mathematical insight. Mostly I got dead ends.
But eventually I hit on something promising. I looked at the difference between density values (using the function diff), and particularly the difference as you double n:
End of explanation
"""
diff(200, 400)
"""
Explanation: And compared that to the difference when you double n again:
End of explanation
"""
diff(100, 200) / diff(200, 400)
"""
Explanation: Hmm—I noticed that the first difference is just about twice as much as the second. Let's check:
End of explanation
"""
n = 500; diff(n, 2*n) / diff(2*n, 4*n)
n = 1000; diff(n, 2*n) / diff(2*n, 4*n)
"""
Explanation: Wow—not only is it close to twice as much, it is exactly twice as much (to the precision of floating point numbers). Let's try other starting values for n:
End of explanation
"""
from scipy.optimize import curve_fit
Ns = list(range(100, 10001, 100))
def f(n, A, B): return A + B / n
((A, B), covariance) = curve_fit(f=f, xdata=Ns, ydata=[density(n) for n in Ns])
covariance
"""
Explanation: OK, I'm convinced this is real!
Now, what mathematical function behaves like this? I figured out that f(n) = (1 / n) does. The ratio of the differences would be:
$$\frac{f(n) - f(2n)}{f(2n) - f(4n)} = \frac{1/n - 1/(2n)}{1/(2n) - 1 / (4n)}\;\;$$
Multiplying top and bottom by n you get:
$$\frac{1 - 1/2}{1/2 - 1 /4} = \frac{1/2}{1/4} = 2\;\;$$
If the function (1 / n) fits the pattern, then so does any affine transformation of (1 / n), because we are taking the ratio of differences. So that means a density function of the form
density(n) = A + B / n
would fit the pattern. I can try a curve_fit to estimate the parameters A and B:
End of explanation
"""
A, B
"""
Explanation: The curve_fit function returns a sequence of parameter values, and a covariance matrix. The fact that all the numbers in the covariance matrix are really small indicates that the parameters are a really good fit. Here are the parameters, A and B:
End of explanation
"""
def estimated_density(n): return A + B / n
"""
Explanation: We can plug them into a function that estimates the density:
End of explanation
"""
max(abs(density(n) - estimated_density(n))
for n in range(200, 4000))
"""
Explanation: And we can test how close this function is to the true density function:
End of explanation
"""
from math import sinh, exp, e
S = sinh(1) / exp(1)
E = 0.5 * (1 - e ** (-2))
assert S == E
S, E, A
"""
Explanation: That says that, for all values of n from 200 to 4,000, density(n) and estimated_density(n) agree at least through the first 15 decimal places!
We now have a plausible answer to the puzzle:
lim<sub style="font-size:large"> <tt>n</tt>→∞</sub> <tt>density(n) ≅ A = 0.43233235838169343</tt>
Why?
This answer is empirically strong (15 decimal places of accuracy) but theoretically weak: we don't have a proof, and we don't have an explanation for why the density function has this form. We need some more mathematical thinking.
I didn't have any ideas, so I looked to see if anyone else had written something about the number 0.43233235838169343. I tried several searches and found two interesting formulas:
Search: [0.4323323] Formula: sinh(1) / exp(1) (Page)
Search: [0.432332358] Formula: 0.5(1-e^(-2)) (Page)
I can verify that the two formulas are equivalent, and that they are indeed equal to A to at least 15 decimal places:
End of explanation
"""
import random
def simulate(n):
"Simulate moving in to houses, and return a sorted tuple of occupied houses."
occupied = set()
for house in random.sample(range(n), n):
if (house - 1) not in occupied and (house + 1) not in occupied:
occupied.add(house)
return sorted(occupied)
def simulated_density(n, repeat=10000):
"Estimate density by simulation, repeated `repeat` times."
return mean(len(simulate(n)) / n
for _ in range(repeat))
"""
Explanation: So I now have a suspicion that
lim <sub>n → ∞</sub> density(n) = (1 - e<sup>-2</sup>) / 2
but I still have no proof, nor any intuition as to why this is so.
John Lamping to the Rescue
I reported my results to Anne Paulson and John Lamping, who had originally related the problem to me, and the next day John wrote back with the following:
<blockquote>
I got a derivation of the formula!
<p>Suppose that each house has a different "attractiveness", and that when it is a misanthrope's turn to pick a house, they consider the houses in order, from most attractive to least, and pick the first house not next to any other house. If the attractiveness are chosen independently, this process gives the same probabilities as each misanthrope picking from the available houses randomly.
<p>To be more concrete, let the attractivenesses range from 0 to 1 with uniform probability, with lower numbers being considered earlier, and hence more attractive. (This makes the math come out easier.)
<p>Given the attractiveness, a, of a house, we can compute the chance that it will end up getting picked. It will get picked if and only if neither house on either side gets considered earlier and gets picked. Lets consider one side, and assume the houses, and their attractivenesses, are labeled a, b, c, d, e, f, g, ... with the house we are considering being a. There are an infinite number of cases where b won't get considered before a and picked. Here are the first few:
<p> a < b (a is considered before b)
<p> a > b > c < d (Someone considered c first among these 4 and picked it. A later person considered b before a, but rejected it because c had already been chosen.)
<p> a > b > c > d > e < f (Someone considered e first among these 6, and picked it. A later person considered d, but rejected it because e was chosen, then considered c, which they picked. Still later, someone considered b, which they rejected because c had been chosen.)
<p>We can write down the probabilities of these cases as a function of the attractiveness of a:
<p> a < b: The chance that b is greater than a: 1 - a.
<p> a > b > c < d: Let y = a - c, so y is between 0 and a. The probability is the integral over the possible y of the chance that b is between a and c, times the chance that d is greater than c, or integral from 0 to a of y * (1 - a + y) dy.
<p> a > b > c > d > e < f: Let y = a - e. The probability is the integral of (the chance that b, c, and d are all between a and e, and ordered right, which is y^3 / 3!, times the chance that f is greater than e, which is (1 - a + y))
<p>If you work out the definite integrals, you get
<p> a < b: 1 - a
<p> a > b > c < d: a^2 / 2 - a^3 / 3!
<p> a > b > c > d > e < f: a^4 / 4! - a^5 / 5!
<p>Add them all up, and you have 1 - a + a^2 / 2 - a^3 / 3! + a ^4 / 4! ... the Taylor expansion for e^-a.
<p>Now, there will be a house at a if both adjacent houses are not picked earlier, so the chance is the square of the chance for one side: e^(-2a). Integrate that from 0 to 1, and you get 1/2 (1 - e^-2).
</blockquote>
You can compare John Lamping's solution to the solutions by Jim Ferry and Andrew Mascioli.
Styles of Thinking
It is clear that different write-ups of this problem display different styles of thinking. I'll attempt to name and describe them:
Mathematical Publication Style: This style uses sophisticated mathematics (e.g. generating functions, differentiation, asymptotic analysis, and manipulation of summations). It defines new abstractions without
necessarily trying to motivate them first, it is terse and formal, and it gets us to the conclusion in a way that is
clearly correct, but does not describe all the steps of how the author came up with the ideas. (Ferry and Mascioli)
Mathematical Exploration Style: Like the Mathematical Publication Style, but with more explanatory prose. (Lamping)
Computational Thinking: A more concrete style; tends to use programs with specific values of n rather than
creating a proof for all values of n; produces tables or plots to help guide intuition; verifies results
with tests or alternative implementations. (Norvig)
In this specific puzzle, my steps were:
- Analyze the problem and implement code to solve it for small values of n.
- Plot results and examine them for insight.
- Play with the results and get a guess at a partial solution <tt>density(n) ≅ A + B / n</tt>.
- Solve numerically for A and B.
- Do a search with the numeric value of A to find a formula with that value.
- Given that formula, let Lamping figure out how it corresponds to the problem.
This is mostly computational thinking, with a little mathematical thrown in.
Validation by Anticipating Objections
Is our implementation of occ(n) correct? I think it is, but I can anticipate some objections and answer them:
In occ(n), is it OK to start from all empty houses, rather than considering layouts of partially-occupied houses? Yes, I think it is OK, because the problem states that initially all houses are empty, and each choice of a house breaks the street up into runs of acceptable houses, flanked by unacceptable houses. If we get the computation right for a run of n acceptable houses, then we can get the whole answer right. A key point is that the chosen first house breaks the row of houses into 2 runs of acceptable houses, not 2 runs of unoccupied houses. If it were unoccupied houses, then we would have to also keep track of whether there were occupied houses to the right and/or left of the runs. By considering runs of acceptable houses, everything is clean and simple.
In occ(7), if the first house chosen is 2, that breaks the street up into runs of 1 and 3 acceptable houses. There is only one way to occupy the 1 house, but there are several ways to occupy the 3 houses. Shouldn't the average give more weight to the 3 houses, since there are more possibilities there? No, I don't think so. We are calculating occupancy, and there is a specific number (5/3) which is the expected occupancy of 3 houses; it doesn't matter if there is one combination or a million combinations that contribute to that expected value, all that matters is what the expected value is.
Validation by Simulation
A simulation can add credence to our implementation of density(n), for two reasons:
- The code for the simulation can have a more direct correspondence to the problem statement.
- When two very different implementations get the same result, that is evidence supporting both of them.
The simulation will start with an empty set of occupied houses. Following Lamping's suggestion, we sample <i>n</i> houses (to get a random ordering) and go through them, occupying just the ones that have no occupied neighbor:
End of explanation
"""
print(' n simul density estimated')
for n in (25, 50, 100, 200, 400):
print('{:3} {:.3} {:.3} {:.3}'
.format(n, simulated_density(n), density(n), estimated_density(n)))
"""
Explanation: Let's see if the simulation returns results that match the actual density function and the estimated_density function:
End of explanation
"""
simulate(7)
"""
Explanation: We got perfect agreement (at least to 3 decimal places), suggesting that either our three implementations are all correct, or we've made mistakes in all three.
The simulate function can also give us insights when we look at the results it produces:
End of explanation
"""
from collections import Counter
Counter(tuple(simulate(7)) for _ in range(10000))
"""
Explanation: Let's repeat that multiple times, and store the results in a Counter, which tracks how many times it has seen each result:
End of explanation
"""
def test():
assert occ(0) == 0
assert occ(1) == occ(2) == 1
assert occ(3) == 5/3
assert density(3) == occ(3) / 3
assert density(100) == occ(100) / 100
assert runs(3) == [(0, 1), (0, 0), (1, 0)]
assert runs(7) == [(0, 5), (0, 4), (1, 3), (2, 2), (3, 1), (4, 0), (5, 0)]
for n in (3, 7, 10, 20, 100, 101, 200, 201):
for repeat in range(500):
assert_valid(simulate(n), n)
return 'ok'
def assert_valid(occupied, n):
"""Assert that, in this collection of occupied houses, no house is adjacent to an
occupied house, and every unoccupied position is adjacent to an occupied house."""
occupied = set(occupied) # coerce to set
for house in range(n):
if house in occupied:
assert (house - 1) not in occupied and (house + 1) not in occupied
else:
assert (house - 1) in occupied or (house + 1) in occupied
test()
"""
Explanation: That says that about 1/3 of the time, things work out so that the 4 even-numbered houses are occupied. But if anybody ever chooses an odd-numbered house, then we are destined to have 3 houses occupied (in one of 6 different ways, of which (1, 3 5) is the most common, probably because it is the only one that has three chances of getting started with an odd-numbered house).
Verification by Test
Another way to gain more confidence in the code is to run a test suite:
End of explanation
"""
|
feststelltaste/software-analytics | notebooks/Developers' Habits (Linux Edition).ipynb | gpl-3.0 | import pandas as pd
raw = pd.read_csv(
r'../../linux/git_timestamp_author_email.log',
sep="\t",
encoding="latin-1",
header=None,
names=['unix_timestamp', 'author', 'email'])
# create separate columns for time data
raw[['timestamp', 'timezone']] = raw['unix_timestamp'].str.split(" ", expand=True)
# convert timestamp data
raw['timestamp'] = pd.to_datetime(raw['timestamp'], unit="s")
# add hourly offset data
raw['timezone_offset'] = pd.to_numeric(raw['timezone']) / 100.0
# calculate the local time
raw["timestamp_local"] = raw['timestamp'] + pd.to_timedelta(raw['timezone_offset'], unit='h')
# filter out wrong timestamps
raw = raw[
(raw['timestamp'] >= raw.iloc[-1]['timestamp']) &
(raw['timestamp'] <= pd.to_datetime('today'))]
git_authors = raw[['timestamp_local', 'timezone', 'author']].copy()
git_authors.head()
"""
Explanation: Introduction
The nice thing about reproducible data analysis (like I'm trying to do it here on my blog) is, well, that you can quickly reproduce or even replicate an analysis.
So, in this blog post/notebook, I transfer the analysis of "Developers' Habits (IntelliJ Edition)" to another project: The famous open-source operating system Linux. Again, we want to take a look at how much information you can extract from a simple Git log output. This time we want to know
where the developers come from
on which weekdays the developers work
what the normal working hours are and
if there is any sight of overtime periods.
Because we use an open approach for our analysis, we are able to respond to newly created insights. Again, we use Pandas as our data analysis toolkit to accomplish these tasks and execute our code in a Jupyter notebook (find the original on GitHub). We also see some refactorings that leverage Pandas' date functionality a little bit more.
So let's start!
Gaining the data
I've already described the details on how to get the necessary data in my previous blog post. What we have at hand is a nice file with the following contents:
1514531161 -0800 Linus Torvalds torvalds@linux-foundation.org
1514489303 -0500 David S. Miller davem@davemloft.net
1514487644 -0800 Tom Herbert tom@quantonium.net
1514487643 -0800 Tom Herbert tom@quantonium.net
1514482693 -0500 Willem de Bruijn willemb@google.com
...
It includes the UNIX timestamp (in seconds since epoch), a whitespace, the time zone (where the author lives), a tab separator, the name of the author, a tab and the email address of the author. The whole log covers 13 years of Linux development, taken from the GitHub repository mirror.
Wrangling the raw data
We import the data by using Pandas' read_csv function and the appropriate parameters. We copy only the needed data from the raw dataset into the new DataFrame git_authors.
End of explanation
"""
import calendar
git_authors['weekday'] = git_authors["timestamp_local"].dt.weekday_name
git_authors['weekday'] = pd.Categorical(
git_authors['weekday'],
categories=calendar.day_name,
ordered=True)
git_authors.head()
"""
Explanation: Refining the dataset
In this section, we add some additional time-based information to the DataFrame to accomplish our tasks.
Adding weekdays
First, we add the information about the weekdays based on the weekday_name information of the timestamp_local column. Because we want to preserve the order of the weekdays, we convert the weekday entries to a Categorial data type, too. The order of the weekdays is taken from the calendar module.
Note: We can do this so easily because we have such a large amount of data where every weekday occurs. If we can't be sure to have a continuous sequence of weekdays, we have to use something like the pd.Grouper method to fill in missing weekdays.
End of explanation
"""
git_authors['hour'] = git_authors['timestamp_local'].dt.hour
git_authors.head()
"""
Explanation: Adding working hours
For the working hour analysis, we extract the hour information from the timestamp_local column.
Note: Again, we assume that every hour is in the dataset.
End of explanation
"""
%matplotlib inline
timezones = git_authors['timezone'].value_counts()
timezones.plot(
kind='pie',
figsize=(7,7),
title="Developers' timezones",
label="")
"""
Explanation: Analyzing the data
With the prepared git_authors DataFrame, we are now able to deliver insights into the past years of development.
Developers' timezones
First, we want to know where the developers roughly live. For this, we plot the values of the timezone columns as a pie chart.
End of explanation
"""
ax = git_authors['weekday'].\
value_counts(sort=False).\
plot(
kind='bar',
title="Commits per weekday")
ax.set_xlabel('weekday')
ax.set_ylabel('# commits')
"""
Explanation: Result
The majority of the developers' commits come from the time zones +0100, +0200 and -0700. With most commits probably coming from the West Coast of the USA, this might just be an indicator that Linus Torvalds lives there ;-). But there are also many commits from developers within Western Europe.
Weekdays with the most commits
Next, we want to know on which days the developers are working during the week. We count by the weekdays but avoid sorting the results to keep the order along with our categories. We plot the result as a standard bar chart.
End of explanation
"""
ax = git_authors\
.groupby(['hour'])['author']\
.count().plot(kind='bar')
ax.set_title("Distribution of working hours")
ax.yaxis.set_label_text("# commits")
ax.xaxis.set_label_text("hour")
"""
Explanation: Result
Most of the commits occur during normal working days with a slight peak on Wednesday. There are relatively few commits happening on weekends.
Working behavior of the main contributor
It would be very interesting and easy to see when Linus Torvalds (the main contributor to Linux) is working. But we won't do that, because the yet-unwritten codex of Software Analytics tells us that it's not OK to analyze a single person's behavior – especially when such an analysis is based on an uncleaned dataset like the one we have here.
Usual working hours
To find out about the working habits of the contributors, we group the commits by hour and count the entries (in this case we choose author) to see if there are any irregularities. Again, we plot the results with a standard bar chart.
End of explanation
"""
latest_hour_per_week = git_authors.groupby(
[
pd.Grouper( key='timestamp_local', freq='1w'),
'author'
]
)[['hour']].max()
latest_hour_per_week.head()
"""
Explanation: Result
The distribution of the working hours is interesting:
- First, we can clearly see that there is a dent around 12:00. So this might be an indicator that developers have lunch at regular times (which is a good thing IMHO).
- Another not so typical result is the slight rise after 20:00. This could be interpreted as the development activity of free-time developers who code for Linux after their day jobs.
- Nevertheless, most of the developers seem to get a decent amount of sleep indicated by low commit activity from 1:00 to 7:00.
Signs of overtime
At last, we have a look at possible overtime periods by creating a simple model. We first group all commits on a weekly basis per authors. As grouping function, we choose max() to get the hour where each author committed at latest per week.
End of explanation
"""
mean_latest_hours_per_week = \
latest_hour_per_week \
.reset_index().groupby('timestamp_local').mean()
mean_latest_hours_per_week.head()
"""
Explanation: Next, we want to know if there were any stressful time periods that forced the developers to work overtime over a longer period of time. We calculate the mean of all late stays of all authors for each week.
End of explanation
"""
import numpy as np
numeric_index = range(0, len(mean_latest_hours_per_week))
coefficients = np.polyfit(numeric_index, mean_latest_hours_per_week.hour, 3)
polynomial = np.poly1d(coefficients)
ys = polynomial(numeric_index)
mean_latest_hours_per_week['trend'] = ys
mean_latest_hours_per_week.head()
"""
Explanation: We also create a trend line that shows how the contributors are working over the span of the past years. We use the polyfit function from numpy for this which needs a numeric index to calculate the polynomial coefficients later on. We then calculate the coefficients with a three-dimensional polynomial based on the hours of the mean_latest_hours_per_week DataFrame. For visualization, we decrease the number of degrees and calculate the y-coordinates for all weeks that are encoded in numeric_index. We store the result in the mean_latest_hours_per_week DataFrame.
End of explanation
"""
ax = mean_latest_hours_per_week[['hour', 'trend']].plot(
figsize=(10, 6),
color=['grey','blue'],
title="Late hours per weeks")
ax.set_xlabel("time")
ax.set_ylabel("hour")
"""
Explanation: At last, we plot the hour results of the mean_latest_hours_per_week DataFrame as well as the trend data in one line plot.
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/addons/tutorials/optimizers_conditionalgradient.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
"""
Explanation: TensorFlow Addons Optimizers: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/addons/tutorials/optimizers_conditionalgradient"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This notebook will demonstrate how to use the Conditional Gradient Optimizer from the Addons package.
ConditionalGradient
Constraining the parameters of a neural network has been shown to be beneficial in training because of the underlying regularization effects. Often, parameters are constrained via a soft penalty (which never guarantees constraint satisfaction) or via a projection operation (which is computationally expensive). The conditional gradient (CG) optimizer, on the other hand, enforces the constraints strictly without the need for an expensive projection step. It works by minimizing a linear approximation of the objective within the constraint set. In this notebook, you demonstrate the application of the Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a TensorFlow API. More details of the optimizer are available at https://arxiv.org/pdf/1803.06453.pdf
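Schematically, each CG (Frank-Wolfe) step minimizes a linear approximation of the loss over the constraint set $\mathcal{C}$ and then moves by a convex combination; the following is a sketch of the generic update (see the linked paper for the exact variant implemented by the optimizer):

```latex
s_t = \arg\min_{s \in \mathcal{C}} \langle s, \nabla f(w_t) \rangle, \qquad
w_{t+1} = (1 - \gamma_t)\, w_t + \gamma_t\, s_t
```

For the Frobenius-norm ball $\mathcal{C} = \{ w : \lVert w \rVert_F \le \lambda \}$, the linear subproblem has the closed form $s_t = -\lambda\, \nabla f(w_t) / \lVert \nabla f(w_t) \rVert_F$, which is why no projection step is needed.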
Setup
End of explanation
"""
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
"""
Explanation: Build the Model
End of explanation
"""
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
"""
Explanation: Prep the Data
End of explanation
"""
def frobenius_norm(m):
"""This function is to calculate the frobenius norm of the matrix of all
layer's weight.
Args:
m: is a list of weights param for each layers.
"""
total_reduce_sum = 0
for i in range(len(m)):
total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
norm = total_reduce_sum**0.5
return norm
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
frobenius_norm(model_1.trainable_weights).numpy()))
"""
Explanation: Define a Custom Callback Function
End of explanation
"""
# Compile the model
model_1.compile(
optimizer=tfa.optimizers.ConditionalGradient(
learning_rate=0.99949, lambda_=203), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_cg = model_1.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[CG_get_weight_norm])
"""
Explanation: Train and Evaluate: Using CG as Optimizer
Simply replace the typical Keras optimizer with the new TFA optimizer.
End of explanation
"""
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
optimizer=tf.keras.optimizers.SGD(0.01), # Utilize SGD optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_sgd = model_2.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[SGD_get_weight_norm])
"""
Explanation: Train and Evaluate: Using SGD as Optimizer
End of explanation
"""
plt.plot(
CG_frobenius_norm_of_weight,
color='r',
label='CG_frobenius_norm_of_weights')
plt.plot(
SGD_frobenius_norm_of_weight,
color='b',
label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
"""
Explanation: Frobenius Norm of Weights: CG vs SGD
The current implementation of the CG optimizer is based on the Frobenius norm, treating the Frobenius norm as a regularizer in the target function. Therefore, we compare CG's regularization effect with that of the SGD optimizer, which imposes no Frobenius-norm regularizer.
End of explanation
"""
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
"""
Explanation: Train and Validation Accuracy: CG vs SGD
End of explanation
"""
|
scotthuang1989/Python-3-Module-of-the-Week | developer_tools/Python2ToPython3.ipynb | apache-2.0 | # %load example.py
def greet(name):
print "Hello, {0}!".format(name)
print "What's your name?"
name = raw_input()
greet(name)
# we can convert this file to python3-compliant
!2to3 example.py
"""
Explanation: Automated Python 2 to 3 code translation
2to3 is a Python program that reads Python 2.x source code and applies a series of fixers to transform it into valid Python 3.x code. The standard library contains a rich set of fixers that will handle almost all code.
Let's see some code in action, then I will list the fixers.
Code In Action
End of explanation
"""
!2to3 -w example.py
!ls example*
# %load example.py
def greet(name):
print("Hello, {0}!".format(name))
print("What's your name?")
name = input()
greet(name)
"""
Explanation: The above output is the diff between the Python 2 version and the Python 3 version.
We can also write the changes back into the source file example.py:
End of explanation
"""
!2to3 -l
"""
Explanation: You can see that example.py has been converted to the Python 3 version, and a backup file named example.py.bak has been created (if you add the -n option, no backup file is kept).
Customize running actions
By default, 2to3 runs a set of predefined fixers; you can customize this behavior.
Specify fixers to run or not run
An explicit set of fixers to run can be given with -f:
$ 2to3 -f imports -f has_key example.py
You can also disable certain fixers with -x:
$ 2to3 -x apply example.py
List all fixers
You can list all available fixers with the following command.
You can find detailed documentation in the 2to3 docs.
End of explanation
"""
|
hanleilei/note | training/submit/PythonExercises3rdAnd4th.ipynb | cc0-1.0 | import os
class Dog(object):
def __init__(self):
self.name = "Dog"
def bark(self):
return "woof!"
class Cat(object):
def __init__(self):
self.name = "Cat"
def meow(self):
return "meow!"
class Human(object):
def __init__(self):
self.name = "Human"
def speak(self):
return "'hello'"
class Car(object):
def __init__(self):
self.name = "Car"
def make_noise(self, octane_level):
return "vroom%s" % ("!" * octane_level)
"""
Explanation: The Adapter pattern is a structural design pattern that helps us make two incompatible interfaces compatible.
First, let's explain what incompatible interfaces really means. If we want to use an old component in a new system, or a new component in an old system, it is rare that the two can communicate without any code changes. Yet modifying the code is not always possible, either because we cannot access it (for example, the component is provided as an external library) or because modifying it is simply impractical. In such cases, we can write an extra layer of code containing all the modifications needed for the two interfaces to communicate. This layer is called an adapter.
We now have four classes:
End of explanation
"""
def add(x, y):
print('add')
return x + y
def sub(x, y):
print('sub')
return x - y
def mul(x, y):
print('mul')
return x * y
def div(x, y):
print('div')
return x / y
"""
Explanation: You may feel that design patterns are a bit complicated; don't be discouraged. Unless you want to make software development your career, you may never need them, or you may use them without even realizing it. But this material is a good opportunity to review the concepts of classes.
The four classes have different interfaces; now we need to implement an adapter that unifies these different interfaces:
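One possible sketch of such an adapter (the `Adapter` class name and the uniform `make_noise` target are choices of this sketch, not part of the exercise statement; simplified copies of `Dog` and `Cat` are included so the snippet runs on its own):

```python
# A possible adapter sketch: wrap each object and map its own method
# onto a uniform make_noise() interface.
class Dog(object):
    def bark(self):
        return "woof!"

class Cat(object):
    def meow(self):
        return "meow!"

class Adapter(object):
    def __init__(self, obj, **adapted_methods):
        self.obj = obj
        # Expose each adapted method under its new, uniform name
        self.__dict__.update(adapted_methods)

    def __getattr__(self, attr):
        # Delegate any attribute we did not adapt to the wrapped object
        return getattr(self.obj, attr)

dog, cat = Dog(), Cat()
animals = [Adapter(dog, make_noise=dog.bark),
           Adapter(cat, make_noise=cat.meow)]
for animal in animals:
    print(animal.make_noise())
```

The same pattern extends to `Human` and `Car` by adapting `speak` and `make_noise(octane_level)` respectively.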
Let's review what we know about decorators. We have the following functions:
End of explanation
"""
@debug
def add(x, y):
return x + y
@debug
def sub(x, y):
return x - y
@debug
def mul(x, y):
return x * y
@debug
def div(x, y):
return x / y
add(3,4)
"""
Explanation: Having every function execute a print statement to tell the user which function is currently running is tedious. Can we implement a decorator that automatically prints the name of the current function?
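One possible solution sketch (one of several ways to write it):

```python
import functools

def debug(func):
    """Print the wrapped function's name each time it is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(func.__name__)
        return func(*args, **kwargs)
    return wrapper

@debug
def add(x, y):
    return x + y

print(add(3, 4))  # prints 'add', then 7
```

Using `functools.wraps` preserves the wrapped function's name and docstring.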
End of explanation
"""
@debug(prefix='++++++')
def add(x, y):
return x + y
@debug(prefix='------')
def sub(x, y):
return x - y
@debug(prefix='******')
def mul(x, y):
return x * y
@debug(prefix='//////')
def div(x, y):
return x / y
add(3,4)
sub(3,2)
"""
Explanation: Now we want to enhance the decorator above by passing some necessary arguments to the decorator function, to increase its flexibility. For example, for the add function we can pass in '++++++' as an argument, and correspondingly pass '------' for sub, '******' for mul, and '//////' for div.
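A possible sketch of the parameterized version, written as a decorator factory: the outer call takes the prefix, and the inner layers wrap the function as before.

```python
import functools

def debug(prefix=''):
    """Decorator factory: debug(prefix=...) returns the actual decorator."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print('{0} {1}'.format(prefix, func.__name__))
            return func(*args, **kwargs)
        return wrapper
    return decorate

@debug(prefix='++++++')
def add(x, y):
    return x + y

print(add(3, 4))  # prints '++++++ add', then 7
```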
End of explanation
"""
import os
import sqlite3
db_filename = 'todo.db'
schema_filename = 'todo_schema.sql'
db_is_new = not os.path.exists(db_filename)
with sqlite3.connect(db_filename) as conn:
if db_is_new:
print('Creating schema')
with open(schema_filename, 'rt') as f:
schema = f.read()
conn.executescript(schema)
print('Inserting initial data')
conn.executescript("""
insert into project (name, description, deadline)
values ('pymotw', 'Python Module of the Week',
'2016-11-01');
insert into task (details, status, deadline, project)
values ('write about select', 'done', '2016-04-25',
'pymotw');
insert into task (details, status, deadline, project)
values ('write about random', 'waiting', '2016-08-22',
'pymotw');
insert into task (details, status, deadline, project)
values ('write about sqlite3', 'active', '2017-07-31',
'pymotw');
""")
else:
print('Database exists, assume schema does, too.')
# Next, try to retrieve all of the data in the database created above (use fetchall)
"""
Explanation: Decorators are quite formulaic function operations; in general they are used to simplify repeated code, i.e. "don't repeat yourself".
SQLite is a common database that ships in Python's standard library. It is widely used and well suited to scenarios with a small amount of data that nevertheless call for a database; SQLite connections and CRUD operations are covered in the standard library documentation.
Now create two tables in the following way, then insert some data into them:
-- todo_schema.sql
-- Schema for to-do application examples.
-- Projects are high-level activities made up of tasks
create table project (
name text primary key,
description text,
deadline date
);
-- Tasks are steps that can be taken to complete a project
create table task (
id integer primary key autoincrement not null,
priority integer default 1,
details text,
status text,
deadline date,
completed_on date,
project text not null references project(name)
);
Note: the SQL code above needs to be placed in a file named 'todo_schema.sql'.
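A possible retrieval sketch for the fetchall exercise (the `fetch_tasks` helper name and the selected columns are choices of this sketch; it assumes the todo.db file created by the cell above):

```python
import os
import sqlite3

def fetch_tasks(db_filename='todo.db'):
    """Return every row of the task table via cursor.fetchall()."""
    with sqlite3.connect(db_filename) as conn:
        cursor = conn.cursor()
        cursor.execute(
            "select id, priority, details, status, deadline from task")
        return cursor.fetchall()

if os.path.exists('todo.db'):
    for row in fetch_tasks():
        print(row)
```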
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/306dcf0b43a155a02804528d597e4e81/plot_roi_erpimage_by_rt.ipynb | bsd-3-clause | # Authors: Jona Sassenhagen <jona.sassenhagen@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.event import define_target_events
from mne.channels import make_1020_channel_selections
print(__doc__)
"""
Explanation: Plot single trial activity, grouped by ROI and sorted by RT
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
The EEGLAB example file, which contains an experiment with button press
responses to simple visual stimuli, is read in and response times are
calculated.
Regions of Interest are determined by the channel types (in 10/20 channel
notation, even channels are right, odd are left, and 'z' are central). The
median and the Global Field Power within each channel group is calculated,
and the trials are plotted, sorting by response time.
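Global Field Power is simply the standard deviation across channels at each time sample; the idea can be sketched in isolation (random numbers stand in for real EEG data here):

```python
import numpy as np

# data shaped (n_channels, n_times); GFP is the spatial standard deviation per sample
rng = np.random.RandomState(42)
data = rng.randn(32, 100)
gfp = data.std(axis=0)
print(gfp.shape)  # (100,)
```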
End of explanation
"""
data_path = mne.datasets.testing.data_path()
fname = data_path + "/EEGLAB/test_raw.set"
event_id = {"rt": 1, "square": 2} # must be specified for str events
raw = mne.io.read_raw_eeglab(fname)
mapping = {
'EEG 000': 'Fpz', 'EEG 001': 'EOG1', 'EEG 002': 'F3', 'EEG 003': 'Fz',
'EEG 004': 'F4', 'EEG 005': 'EOG2', 'EEG 006': 'FC5', 'EEG 007': 'FC1',
'EEG 008': 'FC2', 'EEG 009': 'FC6', 'EEG 010': 'T7', 'EEG 011': 'C3',
'EEG 012': 'C4', 'EEG 013': 'Cz', 'EEG 014': 'T8', 'EEG 015': 'CP5',
'EEG 016': 'CP1', 'EEG 017': 'CP2', 'EEG 018': 'CP6', 'EEG 019': 'P7',
'EEG 020': 'P3', 'EEG 021': 'Pz', 'EEG 022': 'P4', 'EEG 023': 'P8',
'EEG 024': 'PO7', 'EEG 025': 'PO3', 'EEG 026': 'POz', 'EEG 027': 'PO4',
'EEG 028': 'PO8', 'EEG 029': 'O1', 'EEG 030': 'Oz', 'EEG 031': 'O2'
}
raw.rename_channels(mapping)
raw.set_channel_types({"EOG1": 'eog', "EOG2": 'eog'})
raw.set_montage('standard_1020')
events = mne.events_from_annotations(raw, event_id)[0]
"""
Explanation: Load EEGLAB example data (a small EEG dataset)
End of explanation
"""
# define target events:
# 1. find response times: distance between "square" and "rt" events
# 2. extract A. "square" events B. followed by a button press within 700 msec
tmax = .7
sfreq = raw.info["sfreq"]
reference_id, target_id = 2, 1
new_events, rts = define_target_events(events, reference_id, target_id, sfreq,
tmin=0., tmax=tmax, new_id=2)
epochs = mne.Epochs(raw, events=new_events, tmax=tmax + .1,
event_id={"square": 2})
"""
Explanation: Create Epochs
End of explanation
"""
# Parameters for plotting
order = rts.argsort() # sorting from fast to slow trials
selections = make_1020_channel_selections(epochs.info, midline="12z")
# The actual plots (GFP)
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='gfp',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
"""
Explanation: Plot using :term:Global Field Power <GFP>
End of explanation
"""
epochs.plot_image(group_by=selections, order=order, sigma=1.5,
overlay_times=rts / 1000., combine='median',
ts_args=dict(vlines=[0, rts.mean() / 1000.]))
"""
Explanation: Plot using median
End of explanation
"""
|
autumn-lake/Facebook-V-Predicting-Check-Ins | timeIsMin.ipynb | mit | Now, to confirm, let us run a few simple tests.
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib import cm as cm
import numpy as np
%matplotlib inline
train=pd.read_csv('../train.csv')
train.describe()
# first take a look at the whole picture of time data:
train['time'].plot(kind='hist', bins=500)
"""
Explanation: A few people have done very nice Fourier transform analyses on the time data, and have convincingly shown that the time is in minutes. Here, my purpose is just to show that, by basic reasoning and simpler analysis, we can still pretty convincingly reach the same conclusion.
I present several pieces of evidence here.
A little bit of basic reasoning tells us that the time is most likely minutes.
Let's estimate how long it took for Facebook to collect this many check-in events:
A typical large US city has a population of ~1M. The largest population is found in NY, which is 8.5M (complete list found here: https://en.wikipedia.org/wiki/List_of_United_States_cities_by_population). Let's say we are looking at a large city with 3M people (slightly larger than Chicago), and Facebook has collected these 30M check-ins. Let us further suppose that, throughout that time, on an average day about 1% of the population checked in once. This results in 0.03M check-ins per day.
To get 30M check-ins, it takes 1000 days. Our assumptions may not be accurate, so let's give it a range: say it takes ~500 to 2000 days to collect this many check-ins. Now, look at the 'time' column in the data. This ranges from 1 to 800,000. So each unit is roughly 1/800 day (assuming 1000 days), or 1/1600 to 1/400 days. We notice that 1 min is exactly 1/1440 day, sitting nicely within the range of our estimate.
If the unit is seconds, we would have to change our assumptions so drastically to make it fit that they become unrealistic. So, most likely, the time unit is minutes, unless Facebook is using an arbitrary unit here.
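The back-of-envelope numbers above can be checked in a couple of lines (all inputs are the assumptions stated, not data):

```python
total_checkins = 30e6
checkins_per_day = 3e6 * 0.01      # 3M people, ~1% checking in once a day
days = total_checkins / checkins_per_day
print(days)                        # 1000.0 days
print(1 / 800.0, 1 / 1440.0)       # one time unit vs. one minute, as fractions of a day
```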
End of explanation
"""
# Hypothesis 1, time in sec:
hourly=(train['time'].divide(3600)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
"""
Explanation: Nice little peaks on top of the picture show a periodic component of ~10000, very close to the number of minutes in a week: 10080
Now, let us take a look at what a compound week data look like. This will require us to assume either the time is in minutes or seconds, and the resulting graphs will serve as a test.
Analysis of the time values.
Hypothesis 1: time in sec
Hypothesis 2: time in min
Simple method:
For each hypothesis, compute each time data point's corresponding hour in a week. For each of the 168 hours in a week, get a total count of check-ins during that hour. Plot the compound hourly activity over the week range. The correct hypothesis should:
1. exhibit some degree of 24-hour cycle
2. show some degree of weekend-weekday variation
3. because weeks are periodic, have start-of-week and end-of-week activity that are somewhat close to each other
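For reference, the constants behind the bin counts used in these plots are simple arithmetic:

```python
hours_per_week = 7 * 24                   # 168 bins in the compound-week plots
minutes_per_week = hours_per_week * 60    # 10080, matching the ~10,000 periodicity above
print(hours_per_week, minutes_per_week)   # 168 10080
```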
End of explanation
"""
# Hypothesis 2, time in min:
hourly=(train['time'].divide(60)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
"""
Explanation: This has no 24-hour cycle, for sure.
There might be weekend-weekday variation.
The data at the start and the end of the week are far apart.
This hypothesis does not look very likely.
End of explanation
"""
# random control 1:
hourly=(train['time'].divide(1485)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 2:
hourly=(train['time'].divide(18)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 3:
hourly=(train['time'].divide(83)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 4:
hourly=(train['time'].divide(634)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 5:
hourly=(train['time'].divide(20059)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
"""
Explanation: There might be a 24-hour-cycle component, but it is not clear.
There might be weekend-weekday variation.
The data at the start and the end of the week are very close to each other.
This hypothesis looks more likely, although not conclusively so.
Below I randomly picked some divisors to serve as controls, to get a sense of what random graphs look like. Most of them look like Hypothesis 1 (seconds). This means the graphs produced by Hypothesis 1 are not distinguishable from random graphs.
Thus, this serves as additional support for Hypothesis 2 (minutes).
End of explanation
"""
|
jrbourbeau/cr-composition | notebooks/legacy/lightheavy/parameter-tuning/RF-parameter-tuning.ipynb | mit | import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
from __future__ import division, print_function
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
import scipy.stats as stats
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import validation_curve, GridSearchCV, cross_val_score, ParameterGrid, KFold, ShuffleSplit
import composition as comp
# Plotting-related
sns.set_palette('muted')
sns.set_color_codes()
color_dict = defaultdict()
for i, composition in enumerate(['light', 'heavy', 'total']):
color_dict[composition] = sns.color_palette('muted').as_hex()[i]
%matplotlib inline
"""
Explanation: Random forest parameter-tuning
Table of contents
Data preprocessing
Validation curves
KS-test tuning
End of explanation
"""
X_train_sim, X_test_sim, y_train_sim, y_test_sim, le, energy_train_sim, energy_test_sim = comp.preprocess_sim(return_energy=True)
X_test_data, energy_test_data = comp.preprocess_data(return_energy=True)
# Later cells refer to the simulation training/testing sets simply as X_train/X_test
X_train, X_test = X_train_sim, X_test_sim
y_train, y_test = y_train_sim, y_test_sim
"""
Explanation: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
End of explanation
"""
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 16)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train_sim,
y=y_train_sim,
param_name='classifier__max_depth',
param_range=param_range,
cv=5,
scoring='accuracy',
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
plt.legend(loc='lower right')
plt.xlabel('Maximum depth')
plt.ylabel('Accuracy')
# plt.ylim([0.7, 0.8])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
"""
Explanation: Validation curves
(10-fold CV)
Maximum depth
End of explanation
"""
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, X_train.shape[1])
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__max_features',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Maximum features')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
"""
Explanation: Max features
End of explanation
"""
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 400, 25)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__min_samples_leaf',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend()
plt.xlabel('Minimum samples in leaf node')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
"""
Explanation: Minimum samples in leaf node
End of explanation
"""
comp_list = ['light', 'heavy']
max_depth_list = np.arange(1, 16)
pval_comp = defaultdict(list)
ks_stat = defaultdict(list)
kf = KFold(n_splits=10)
fold_num = 0
for train_index, test_index in kf.split(X_train):
fold_num += 1
print('\r')
print('Fold {}: '.format(fold_num), end='')
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
pval_maxdepth = defaultdict(list)
print('max_depth = ', end='')
for max_depth in max_depth_list:
print('{}...'.format(max_depth), end='')
pipeline = comp.get_pipeline('RF')
pipeline.named_steps['classifier'].set_params(max_depth=max_depth)
pipeline.fit(X_train_fold, y_train_fold)
test_probs = pipeline.predict_proba(X_test_fold)
train_probs = pipeline.predict_proba(X_train_fold)
for class_ in pipeline.classes_:
pval_maxdepth[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in comp_list:
pval_comp[composition].append(pval_maxdepth[composition])
pval_sys_err = {key: np.std(pval_comp[key], axis=0) for key in pval_comp}
pval = {key: np.mean(pval_comp[key], axis=0) for key in pval_comp}
comp_list = ['light']
fig, ax = plt.subplots()
for composition in comp_list:
upper_err = np.copy(pval_sys_err[composition])
upper_err = [val if ((pval[composition][i] + val) < 1) else 1-pval[composition][i] for i, val in enumerate(upper_err)]
lower_err = np.copy(pval_sys_err[composition])
lower_err = [val if ((pval[composition][i] - val) > 0) else pval[composition][i] for i, val in enumerate(lower_err)]
if composition == 'light':
ax.errorbar(max_depth_list -0.25/2, pval[composition],
yerr=[lower_err, upper_err],
marker='.', linestyle=':',
label=composition, alpha=0.75)
if composition == 'heavy':
ax.errorbar(max_depth_list + 0.25/2, pval[composition],
yerr=[lower_err, upper_err],
marker='.', linestyle=':',
label=composition, alpha=0.75)
plt.ylabel('KS-test p-value')
plt.xlabel('Maximum depth')
plt.ylim([-0.1, 1.1])
# plt.legend()
plt.grid()
plt.show()
pval
"""
Explanation: KS-test tuning
Maximum depth
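As a reminder of what each loop iteration computes, the two-sample KS test compares the classifier-score distributions on held-out and training folds; a minimal, self-contained sketch (with random stand-ins for the predicted probabilities):

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
train_scores = rng.uniform(size=1000)   # stand-ins for training-set class probabilities
test_scores = rng.uniform(size=500)     # stand-ins for test-set class probabilities

ks_statistic, p_value = stats.ks_2samp(test_scores, train_scores)
print(ks_statistic, p_value)  # a p-value near 1 would indicate similar distributions
```

A small p-value would suggest the train and test score distributions differ, i.e. possible overtraining.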
End of explanation
"""
comp_list = np.unique(df['MC_comp_class'])
min_samples_list = np.arange(1, 400, 25)
pval = defaultdict(list)
ks_stat = defaultdict(list)
print('min_samples_leaf = ', end='')
for min_samples_leaf in min_samples_list:
print('{}...'.format(min_samples_leaf), end='')
pipeline = comp.get_pipeline('RF')
params = {'max_depth': 4, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
fig, ax = plt.subplots()
for composition in pval:
ax.plot(min_samples_list, pval[composition], linestyle='-.', label=composition)
plt.ylabel('KS-test p-value')
plt.xlabel('Minimum samples leaf node')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: Minimum samples in leaf node
End of explanation
"""
# comp_list = np.unique(df['MC_comp_class'])
comp_list = ['light']
min_samples_list = [1, 25, 50, 75]
min_samples_list = [1, 100, 200, 300]
fig, axarr = plt.subplots(2, 2, sharex=True, sharey=True)
print('min_samples_leaf = ', end='')
for min_samples_leaf, ax in zip(min_samples_list, axarr.flatten()):
print('{}...'.format(min_samples_leaf), end='')
max_depth_list = np.arange(1, 16)
pval = defaultdict(list)
ks_stat = defaultdict(list)
for max_depth in max_depth_list:
pipeline = comp.get_pipeline('RF')
params = {'max_depth': max_depth, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in pval:
ax.plot(max_depth_list, pval[composition], linestyle='-.', label=composition)
ax.set_ylabel('KS-test p-value')
ax.set_xlabel('Maximum depth')
ax.set_title('min samples = {}'.format(min_samples_leaf))
ax.set_ylim([0, 0.5])
ax.legend()
ax.grid()
plt.tight_layout()
plt.show()
"""
Explanation: Maximum depth for various minimum samples in leaf node
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment04/MatplotlibExercises.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
"""
data = np.random.randn(2, 100)
plt.scatter(data[0], data[1])
plt.xlabel("Random Number 1", fontsize=12, color="#666666")
plt.ylabel("Random Number 2", fontsize=12, color="#666666")
ax = plt.gca()
ax.spines['right'].set_color("None")
ax.spines['top'].set_color("None")
ax.spines['bottom'].set_color("#666666")
ax.spines['left'].set_color("#666666")
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.title("2D Random Number Scatter Plot", color="#383838")
"""
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
"""
data = np.random.randn(50)
plt.hist(data, 20, facecolor='green')
plt.xlabel("Random Number", fontsize=12, color="#666666")
plt.ylabel("Probability", fontsize=12, color="#666666")
ax = plt.gca()
ax.spines['right'].set_color("None")
ax.spines['top'].set_color("None")
ax.spines['bottom'].set_color("#666666")
ax.spines['left'].set_color("#666666")
ax.tick_params(axis='x', colors='#666666')
ax.tick_params(axis='y', colors='#666666')
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.title("Random Number Histogram", color="#383838")
# used "Joe Kington"'s (from stackoverflow) method for setting axis tick and label colors
# used "timday"'s (from stackoverflow) method for hiding the top and right axis and ticks
"""
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation
"""
|
macks22/gensim | docs/notebooks/doc2vec-IMDB.ipynb | lgpl-2.1 | import locale
import glob
import os.path
import requests
import tarfile
import sys
import codecs
import smart_open
dirname = 'aclImdb'
filename = 'aclImdb_v1.tar.gz'
locale.setlocale(locale.LC_ALL, 'C')
if sys.version > '3':
control_chars = [chr(0x85)]
else:
control_chars = [unichr(0x85)]
# Convert text to lower-case and strip punctuation/symbols from words
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
import time
start = time.clock()
if not os.path.isfile('aclImdb/alldata-id.txt'):
if not os.path.isdir(dirname):
if not os.path.isfile(filename):
# Download IMDB archive
print("Downloading IMDB archive...")
url = u'http://ai.stanford.edu/~amaas/data/sentiment/' + filename
r = requests.get(url)
with open(filename, 'wb') as f:
f.write(r.content)
tar = tarfile.open(filename, mode='r')
tar.extractall()
tar.close()
# Concatenate and normalize test/train data
print("Cleaning up dataset...")
folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
alldata = u''
for fol in folders:
temp = u''
output = fol.replace('/', '-') + '.txt'
# Is there a better pattern to use?
txt_files = glob.glob(os.path.join(dirname, fol, '*.txt'))
for txt in txt_files:
with smart_open.smart_open(txt, "rb") as t:
t_clean = t.read().decode("utf-8")
for c in control_chars:
t_clean = t_clean.replace(c, ' ')
temp += t_clean
temp += "\n"
temp_norm = normalize_text(temp)
with smart_open.smart_open(os.path.join(dirname, output), "wb") as n:
n.write(temp_norm.encode("utf-8"))
alldata += temp_norm
with smart_open.smart_open(os.path.join(dirname, 'alldata-id.txt'), 'wb') as f:
for idx, line in enumerate(alldata.splitlines()):
num_line = u"_*{0} {1}\n".format(idx, line)
f.write(num_line.encode("utf-8"))
end = time.clock()
print ("Total running time: ", end-start)
import os.path
assert os.path.isfile("aclImdb/alldata-id.txt"), "alldata-id.txt unavailable"
"""
Explanation: Gensim Doc2vec Tutorial on the IMDB Sentiment Dataset
Introduction
In this tutorial, we will learn how to apply Doc2vec using gensim by recreating the results of <a href="https://arxiv.org/pdf/1405.4053.pdf">Le and Mikolov 2014</a>.
Bag-of-words Model
Previous state-of-the-art document representations were based on the <a href="https://en.wikipedia.org/wiki/Bag-of-words_model">bag-of-words model</a>, which represents each input document as a fixed-length vector. For example, borrowing from the Wikipedia article, the two documents
(1) John likes to watch movies. Mary likes movies too.
(2) John also likes to watch football games.
are used to construct a length 10 list of words
["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games"]
so then we can represent the two documents as fixed length vectors whose elements are the frequencies of the corresponding words in our list
(1) [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
(2) [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
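These two count vectors can be reproduced in a few lines of plain Python (a minimal sketch with deliberately crude tokenization; a real pipeline would use a proper tokenizer, e.g. scikit-learn's CountVectorizer):

```python
from collections import Counter

def bow_vector(doc, vocabulary):
    """Count how often each vocabulary word occurs in a document."""
    counts = Counter(doc.lower().replace(".", "").split())
    return [counts[word.lower()] for word in vocabulary]

vocabulary = ["John", "likes", "to", "watch", "movies",
              "Mary", "too", "also", "football", "games"]
doc1 = "John likes to watch movies. Mary likes movies too."
doc2 = "John also likes to watch football games."

print(bow_vector(doc1, vocabulary))  # [1, 2, 1, 1, 2, 1, 1, 0, 0, 0]
print(bow_vector(doc2, vocabulary))  # [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]
```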
Bag-of-words models are surprisingly effective but still lose information about word order. Bag of <a href="https://en.wikipedia.org/wiki/N-gram">n-grams</a> models consider word phrases of length n to represent documents as fixed-length vectors to capture local word order but suffer from data sparsity and high dimensionality.
Word2vec Model
Word2vec is a more recent model that embeds words in a high-dimensional vector space using a shallow neural network. The result is a set of word vectors where vectors close together in vector space have similar meanings based on context, and word vectors distant to each other have differing meanings. For example, strong and powerful would be close together and strong and Paris would be relatively far. There are two versions of this model based on skip-grams and continuous bag of words.
Word2vec - Skip-gram Model
The skip-gram <a href="http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/">word2vec</a> model, for example, takes in pairs (word1, word2) generated by moving a window across text data, and trains a 1-hidden-layer neural network based on the fake task of given an input word, giving us a predicted probability distribution of nearby words to the input. The hidden-to-output weights in the neural network give us the word embeddings. So if the hidden layer has 300 neurons, this network will give us 300-dimensional word embeddings. We use <a href="https://en.wikipedia.org/wiki/One-hot">one-hot</a> encoding for the words.
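A minimal sketch of the sliding-window pair generation described above (the function name and window size here are illustrative, not part of any library):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (input_word, nearby_word) training pairs for skip-gram."""
    pairs = []
    for i, center in enumerate(tokens):
        # Look at every position within +/- window of the center word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = "the quick brown fox".split()
print(skipgram_pairs(tokens, window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```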
Word2vec - Continuous-bag-of-words Model
Continuous-bag-of-words Word2vec is very similar to the skip-gram model. It is also a 1-hidden-layer neural network. The fake task is based on the input context words in a window around a center word, predict the center word. Again, the hidden-to-output weights give us the word embeddings and we use one-hot encoding.
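The CBOW training samples run in the opposite direction to skip-gram: the context words around a position are the input, and the center word is the prediction target. An illustrative sketch (helper name and window size are assumptions, not library internals):

```python
def cbow_samples(tokens, window=2):
    """Generate (context_words, center_word) samples for CBOW."""
    samples = []
    for i, center in enumerate(tokens):
        # Context = up to `window` words on each side of the center word
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + window + 1]
        samples.append((context, center))
    return samples

print(cbow_samples("the quick brown fox".split(), window=1))
# [(['quick'], 'the'), (['the', 'brown'], 'quick'),
#  (['quick', 'fox'], 'brown'), (['brown'], 'fox')]
```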
Paragraph Vector
Le and Mikolov 2014 introduces the <i>Paragraph Vector</i>, which outperforms more naïve representations of documents such as averaging the Word2vec word vectors of a document. The idea is straightforward: we act as if a paragraph (or document) is just another vector like a word vector, but we will call it a paragraph vector. We determine the embedding of the paragraph in vector space in the same way as words. Our paragraph vector model considers local word order like bag of n-grams, but gives us a denser representation in vector space compared to a sparse, high-dimensional representation.
Paragraph Vector - Distributed Memory (PV-DM)
This is the Paragraph Vector model analogous to Continuous-bag-of-words Word2vec. The paragraph vectors are obtained by training a neural network on the fake task of inferring a center word based on context words and a context paragraph. A paragraph is a context for all words in the paragraph, and a word in a paragraph can have that paragraph as a context.
Paragraph Vector - Distributed Bag of Words (PV-DBOW)
This is the Paragraph Vector model analogous to Skip-gram Word2vec. The paragraph vectors are obtained by training a neural network on the fake task of predicting a probability distribution of words in a paragraph given a randomly-sampled word from the paragraph.
Requirements
The following python modules are dependencies for this tutorial:
* testfixtures ( pip install testfixtures )
* statsmodels ( pip install statsmodels )
Load corpus
Let's download the IMDB archive if it is not already downloaded (84 MB). This will be our text data for this tutorial.
The data can be found here: http://ai.stanford.edu/~amaas/data/sentiment/
End of explanation
"""
import gensim
from gensim.models.doc2vec import TaggedDocument
from collections import namedtuple
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # Will hold all docs in original order
with open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # 'tags = [tokens[0]]' would also work at extra memory cost
split = ['train', 'test', 'extra', 'extra'][line_no//25000]  # 25k train, 25k test, 50k extra (unsupervised)
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # For reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
"""
Explanation: The text data is small enough to be read into memory.
End of explanation
"""
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing
cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "This will be painfully slow otherwise"
simple_models = [
# PV-DM w/ concatenation - window=5 (both sides) approximates paper's 10-word total window size
Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
# PV-DBOW
Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
# PV-DM w/ average
Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]
# Speed up setup by sharing results of the 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs) # PV-DM w/ concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
model.reset_from(simple_models[0])
print(model)
models_by_name = OrderedDict((str(model), model) for model in simple_models)
"""
Explanation: Set-up Doc2Vec Training & Evaluation Models
We approximate the experiment of Le & Mikolov "Distributed Representations of Sentences and Documents" with guidance from Mikolov's example go.sh:
./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1
We vary the following parameter choices:
* 100-dimensional vectors, as the 400-d vectors of the paper don't seem to offer much benefit on this task
* Similarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
* cbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
* Added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
* A min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)
End of explanation
"""
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
"""
Explanation: Le and Mikolov notes that combining a paragraph vector from Distributed Bag of Words (DBOW) and Distributed Memory (DM) improves performance. We will follow, pairing the models together for evaluation. Here, we concatenate the paragraph vectors obtained from each model.
End of explanation
"""
import numpy as np
import statsmodels.api as sm
from random import sample
# For timing
from contextlib import contextmanager
from timeit import default_timer
import time
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: default_timer() - start
yield lambda: elapser()
end = default_timer()
elapser = lambda: end-start
def logistic_predictor_from_data(train_targets, train_regressors):
logit = sm.Logit(train_targets, train_regressors)
predictor = logit.fit(disp=0)
# print(predictor.summary())
return predictor
def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
"""Report error rate on test_doc sentiments, using supplied model and train_docs"""
train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
train_regressors = sm.add_constant(train_regressors)
predictor = logistic_predictor_from_data(train_targets, train_regressors)
test_data = test_set
if infer:
if infer_subsample < 1.0:
test_data = sample(test_data, int(infer_subsample * len(test_data)))
test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
else:
test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_data]
test_regressors = sm.add_constant(test_regressors)
# Predict & evaluate
test_predictions = predictor.predict(test_regressors)
corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
errors = len(test_predictions) - corrects
error_rate = float(errors) / len(test_predictions)
return (error_rate, errors, len(test_predictions), predictor)
"""
Explanation: Predictive Evaluation Methods
Let's define some helper methods for evaluating the performance of our Doc2vec using paragraph vectors. We will classify document sentiments using a logistic regression model based on our paragraph embeddings. We will compare the error rates based on word embeddings from our various Doc2vec models.
End of explanation
"""
from collections import defaultdict
best_error = defaultdict(lambda: 1.0) # To selectively print only best errors achieved
from random import shuffle
import datetime
alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes
print("START %s" % datetime.datetime.now())
for epoch in range(passes):
shuffle(doc_list) # Shuffling gets best results
for name, train_model in models_by_name.items():
# Train
duration = 'na'
train_model.alpha, train_model.min_alpha = alpha, alpha
with elapsed_timer() as elapsed:
train_model.train(doc_list, total_examples=len(doc_list), epochs=1)
duration = '%.1f' % elapsed()
# Evaluate
eval_duration = ''
with elapsed_timer() as eval_elapsed:
err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if err <= best_error[name]:
best_error[name] = err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))
if ((epoch + 1) % 5) == 0 or epoch == 0:
eval_duration = ''
with elapsed_timer() as eval_elapsed:
infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if infer_err < best_error[name + '_inferred']:
best_error[name + '_inferred'] = infer_err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))
print('Completed pass %i at alpha %f' % (epoch + 1, alpha))
alpha -= alpha_delta
print("END %s" % str(datetime.datetime.now()))
"""
Explanation: Bulk Training
We use an explicit multiple-pass, alpha-reduction approach as sketched in this gensim doc2vec blog post with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
We evaluate each model's sentiment predictive power based on error rate, and the evaluation is repeated after each pass so we can see the rates of relative improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6GHz Intel Core i7, these 20 passes of training and evaluating the 3 main models take about an hour.)
End of explanation
"""
# Print best error rates achieved
print("Err rate Model")
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
print("%f %s" % (rate, name))
"""
Explanation: Achieved Sentiment-Prediction Accuracy
End of explanation
"""
doc_id = np.random.randint(simple_models[0].docvecs.count) # Pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
inferred_docvec = model.infer_vector(alldocs[doc_id].words)
print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
"""
Explanation: In our testing, contrary to the results of the paper, PV-DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement over averaging vectors. There best results reproduced are just under 10% error rate, still a long way from the paper's reported 7.42% error rate.
Examining Results
Are inferred vectors close to the precalculated ones?
End of explanation
"""
import random
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples
model = random.choice(simple_models) # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
"""
Explanation: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
End of explanation
"""
word_models = simple_models[:]
import random
from IPython.display import HTML
# pick a random word with a suitable number of occurrences
while True:
word = random.choice(word_models[0].wv.index2word)
if word_models[0].wv.vocab[word].count > 10:
break
# or uncomment below line, to just pick a word from the relevant domain:
#word = 'comedy/drama'
similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" +
"</th><th>".join([str(model) for model in word_models]) +
"</th></tr><tr><td>" +
"</td><td>".join(similars_per_model) +
"</td></tr></table>")
print("most similar words for '%s' (%d occurrences)" % (word, simple_models[0].wv.vocab[word].count))
HTML(similar_table)
"""
Explanation: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
End of explanation
"""
# Download this file: https://github.com/nicholas-leonard/word2vec/blob/master/questions-words.txt
# and place it in the local directory
# Note: this takes many minutes
if os.path.isfile('questions-words.txt'):
for model in word_models:
sections = model.accuracy('questions-words.txt')
correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
"""
Explanation: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
End of explanation
"""
This cell left intentionally erroneous.
"""
Explanation: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
End of explanation
"""
from gensim.models import KeyedVectors
w2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
"""
Explanation: To mix the Google dataset (if locally available) into the word tests...
End of explanation
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
"""
Explanation: To get copious logging output from above steps...
End of explanation
"""
%load_ext autoreload
%autoreload 2
"""
Explanation: To auto-reload python code while developing...
End of explanation
"""
|
llclave/Springboard-Mini-Projects | data_wrangling_json/json_exercise.ipynb | mit | # Import required packages
import pandas as pd
import json
from pandas.io.json import json_normalize
# Read JSON file as Pandas DataFrame object
world_bank_df = pd.read_json('data/world_bank_projects.json')
world_bank_df
# Check DataFrame info
world_bank_df.info()
# List top 10 countries with the most projects
world_bank_df.groupby('countryname').size().sort_values(ascending=False)[:10]
"""
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json'
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
1) Find the 10 countries with most projects
End of explanation
"""
# Load nested JSON column ('mjtheme_namecode') as Pandas DataFrame
data = json.load((open('data/world_bank_projects.json')))
project_themes_df = json_normalize(data, 'mjtheme_namecode')
project_themes_df
# Check DataFrame info
project_themes_df.info()
# List top 10 most occurring major project themes by name
project_themes_df.groupby('name').size().sort_values(ascending=False)[:10]
# Some entries have an empty name (shown as a blank above), so this count is not accurate
# Convert code column to integers
# errors='raise' will raise an exception if any value cannot be converted
project_themes_df.code = pd.to_numeric(project_themes_df.code, errors='raise')
print(project_themes_df.code.dtype)
# List top 10 most occurring major project themes by code
project_themes_df.groupby('code').size().sort_values(ascending=False)[:10]
# There are no missing values so this output is more accurate than above
"""
Explanation: 2) Find the top 10 major project themes (using column 'mjtheme_namecode')
End of explanation
"""
# Change empty strings in the name column to nulls
project_themes_df.loc[project_themes_df['name'] == "", 'name'] = None
project_themes_df.info()
# Create dictionary that maps project codes to names
themes_by_code = {}
for i in range(project_themes_df['code'].min(), project_themes_df['code'].max() + 1):
code_filter = (project_themes_df.name.notnull()) & (project_themes_df.code == i)
themes_by_code[i] = project_themes_df.name[code_filter].iloc[0]
themes_by_code
# Fill in missing name values in DataFrame using code values
project_themes_df.name = project_themes_df.code.map(themes_by_code)
project_themes_df.info()
# Rerun code above from number 2 to list top 10 most occurring major project themes by name
project_themes_df.groupby('name').size().sort_values(ascending=False)[:10]
"""
Explanation: 3) In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
"""
|
dataDogma/Computer-Science | Courses/DAT-208x/DAT208x - Week 3 - Section 2 - Methods.ipynb | gpl-3.0 | """
Instructions:
+ Use the upper() method on room and store the result in room_up.
Use the dot notation.
+ Print out room and room_up. Did both change?
+ Print out the number of o's in the variable room by calling count()
on room and passing the letter "o" as an input to the method.
- We're talking about the variable room, not the word "room"!
"""
# string to experiment with: room
room = "poolhouse"
# Use upper() on room: room_up
room_up = room.upper()
# Print out room and room_up
print(room)
print( "\n" + room_up )
# Print out the number of o's in room
print("\n" + str( room.count("o") ) )
"""
Explanation: Lecture: Methods
Following are some of the functions we have used previously in this course:
max()
len()
round()
sorted()
Let's learn a few more:
Get index in a list: ?
Reversing a list: ?
Note: in Python, all data structures (indeed, all values) are objects.
Python has built-in methods, which informally are:
Functions that belong to python objects, e.g. A python object of type string has methods, such as:
capitalize and
replace
Further, objects of type float have "specific methods" depending on the type.
Syntax:
object.method_name( <arguments> )
The . (dot) is the attribute-access operator: it looks up the named method on the object itself (through the object's type), rather than in a global list of functions.
To sum things up:
In Python, everything is an object, and each object has a specific method associated with it, depending on the type of object.
Note:
Some methods can also change the objects they are called on.
e.g. The .append() method!
Others don't, so some caution is needed when using such methods!
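For example (strings are immutable, so their methods return new objects, while list methods like append() change the list in place and return None):

```python
room = "poolhouse"
room_up = room.upper()       # strings are immutable: upper() returns a NEW string
print(room)                  # poolhouse -- the original is unchanged
print(room_up)               # POOLHOUSE

areas = [11.25, 18.0]
result = areas.append(20.0)  # lists are mutable: append() changes areas in place
print(areas)                 # [11.25, 18.0, 20.0]
print(result)                # None -- append() returns nothing
```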
Lab: Methods
Objective:
Get to know different kinds of methods.
Understand the nuances that come packaged with methods.
by practising them on data types such as string and list.
String Methods -- 100xp, status: Earned
List Methods -- 100xp, status: Earned
List Methods II -- 100xp, status: Earned
1. String Methods -- 100xp, status: earned
End of explanation
"""
"""
Instructions:
+ Use the index() method to get the index of the element
in areas that is equal to 20.0. Print out this index.
+ Call count() on areas to find out how many times 14.5
appears in the list. Again, simply print out this number.
"""
# first let's look more about these methods
help(str.count)
print(2*"\n===================================================")
help(str.index)
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Print out the index of the element 20.0
print( "\nThe index of the element 20.0 is: " + str( areas.index( 20 ) ) )
# Print out how often 14.5 appears in areas
print("\nThe number of times 14.5 occurs is: " + str( areas.count( 14.5 ) ) )
"""
Explanation: 2. List Methods -- 100xp, status: earned
Other Python data types also have many common methods associated with them; some of these methods are exclusive to certain data types.
We will experiment with a few of them:
index(), to get the index of the first element of a list that matches its input.
count(), to get the number of times an element appears in a list.
End of explanation
"""
"""
Instructions:
+ Use the append method twice to add the size of the
poolhouse and the garage again:
- 24.5 and 15.45, respectively.
- Add them in order
+ Print out the areas.
+ Use the reverse() method to reverse the order of the
elements in areas.
+ Print out the areas once more.
"""
# Let's look at the help on these methods
help( list.append )
print("=====================================================")
help( list.remove )
print("=====================================================")
help( list.reverse )
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Use append twice to add poolhouse and garage size
areas.append( 24.5 )
areas.append( 15.45 )
# Print out areas
print("\nThe new list contains two new items: " + str( areas ) )
# Reverse the order of the elements in areas
areas.reverse()
# Print out areas
print("\nThe new list has been reversed: " + str( areas ) )
"""
Explanation: 3. List Methods II -- 100xp, status: earned
Most list methods will change the list they're called on. E.g.
append() : adds an element to the list it is called on.
remove() : removes the first element of a list that matches the input.
reverse() : reverses the order of the elements in the list it is called on.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/aerosol.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: MPIESM-1-2-HAM
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
cathalmccabe/PYNQ | boards/Pynq-Z1/base/notebooks/arduino/arduino_lcd18.ipynb | bsd-3-clause | from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
"""
Explanation: Arduino LCD Example using AdaFruit 1.8" LCD Shield
This notebook shows a demo on Adafruit 1.8" LCD shield.
End of explanation
"""
from pynq.lib.arduino import Arduino_LCD18
lcd = Arduino_LCD18(base.ARDUINO)
"""
Explanation: 1. Instantiate AdaFruit LCD controller
In this example, make sure that the 1.8" LCD shield from Adafruit is placed on the Arduino interface.
After instantiation, users should expect to see a PYNQ logo with pink background shown on the screen.
End of explanation
"""
lcd.clear()
"""
Explanation: 2. Clear the LCD screen
Clear the LCD screen so users can display other pictures.
End of explanation
"""
lcd.display('data/board_small.jpg',x_pos=0,y_pos=127,
orientation=3,background=[255,255,255])
"""
Explanation: 3. Display a picture
The screen is 160 pixels by 128 pixels. So the largest picture that can fit in the screen is 160 by 128. To resize a picture to a desired size, users can do:
python
from PIL import Image
img = Image.open('data/large.jpg')
w_new = 160
h_new = 128
new_img = img.resize((w_new,h_new),Image.ANTIALIAS)
new_img.save('data/small.jpg','JPEG')
img.close()
The format of the picture can be PNG, JPEG, BMP, or any other format that can be opened using the Image library. In the API, the picture will be compressed into a binary format having (per pixel) 5 bits for blue, 6 bits for green, and 5 bits for red. All the pixels (of 16 bits each) will be stored in DDR memory and then transferred to the IO processor for display.
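As an illustration of that per-pixel compression, the 24-bit to 16-bit packing can be sketched as below (the bit positions chosen here are an assumption for illustration; the exact layout expected by the IO processor may differ):

```python
def rgb888_to_rgb565(r, g, b):
    # Keep the 5 most significant bits of red, 6 of green and 5 of
    # blue, then pack them into a single 16-bit word.
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# 160x128 pixels at 16 bits each = 40960 bytes per frame in DDR memory.
```

For example, pure white (255, 255, 255) packs to 0xFFFF and black to 0x0000.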
The orientation of the picture is as shown below. Currently, only orientations 1 and 3 are supported: orientation 3 displays the picture normally, while orientation 1 displays it upside-down.
<img src="data/adafruit_lcd18.jpg" width="400px"/>
To display the picture at a desired location, the position has to be calculated. For example, to display a 76-by-25 picture centered with orientation 3, x_pos has to be (160-76)/2=42, and y_pos has to be (128/2)+(25/2)≈76.
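That centering arithmetic can be wrapped in a small helper (a sketch for orientation 3 only; the function name is hypothetical and not part of the API):

```python
def center_position(img_w, img_h, screen_w=160, screen_h=128):
    # x_pos is the left edge of the picture; y_pos is its bottom
    # edge, following the positioning convention described above.
    x_pos = (screen_w - img_w) // 2
    y_pos = (screen_h + img_h) // 2
    return x_pos, y_pos
```

For the 76-by-25 example this returns (42, 76), matching the values computed above.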
The parameter background is a list of 3 components: [R,G,B], where each component consists of 8 bits. If it is not defined, it will be defaulted to [0,0,0] (black).
End of explanation
"""
lcd.display('data/logo_small.png',x_pos=0,y_pos=127,
orientation=1,background=[255,255,255],frames=100)
"""
Explanation: 4. Animate the picture
We can provide the number of frames to the method display(); this will move the picture around with a desired background color.
End of explanation
"""
lcd.clear()
lcd.draw_line(x_start_pos=151,y_start_pos=98,x_end_pos=19,y_end_pos=13)
"""
Explanation: 5. Draw a line
Draw a white line from upper left corner towards lower right corner.
The parameter background is a list of 3 components: [R,G,B], where each component consists of 8 bits. If it is not defined, it will be defaulted to [0,0,0] (black).
Similarly, the parameter color defines the color of the line, with a default value of [255,255,255] (white).
All three draw_line() calls use the default orientation 3.
Note that if the background is changed, the screen will also be cleared. Otherwise the old lines will still stay on the screen.
End of explanation
"""
lcd.draw_line(50,50,150,50,color=[255,0,0],background=[255,255,0])
"""
Explanation: Draw a 100-pixel wide red horizontal line, on a yellow background. Since the background is changed, the screen will be cleared automatically.
End of explanation
"""
lcd.draw_line(50,20,50,120,[0,0,255],[255,255,0])
"""
Explanation: Draw a 100-pixel tall blue vertical line, on the same yellow background.
End of explanation
"""
text = 'Hello, PYNQ!'
lcd.print_string(1,1,text,[255,255,255],[0,0,255])
import time
text = time.strftime("%d/%m/%Y")
lcd.print_string(5,10,text,[255,255,0],[0,0,255])
"""
Explanation: 6. Print a scaled character
Users can print a scaled string at a desired position with a desired text color and background color.
The first print_string() prints "Hello, PYNQ!" at 1st row, 1st column, with white text color and blue background.
The second print_string() prints today's date at 5th row, 10th column, with yellow text color and blue background.
Note that if the background is changed, the screen will also be cleared. Otherwise the old strings will still stay on the screen.
End of explanation
"""
lcd.draw_filled_rectangle(x_start_pos=10,y_start_pos=10,
width=60,height=80,color=[64,255,0])
lcd.draw_filled_rectangle(x_start_pos=20,y_start_pos=30,
width=80,height=30,color=[255,128,0])
lcd.draw_filled_rectangle(x_start_pos=90,y_start_pos=40,
width=70,height=120,color=[64,0,255])
"""
Explanation: 7. Draw a filled rectangle
The next 3 cells will draw 3 rectangles of different colors, respectively. All of them use the default black background and orientation 3.
End of explanation
"""
button=lcd.read_joystick()
if button == 1:
print('Left')
elif button == 2:
print('Down')
elif button==3:
print('Center')
elif button==4:
print('Right')
elif button==5:
print('Up')
else:
print('Not pressed')
"""
Explanation: 8. Read joystick button
End of explanation
"""
|
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/prod/n10_dyna_q_with_predictor_full_training.ipynb | mit | # Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
NUM_THREADS = 1
LOOKBACK = -1
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Crop the final days of the test set as a workaround to make dyna work
# (the env, only has the market calendar up to a certain time)
data_test_df = data_test_df.iloc[:-DYNA]
total_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
"""
Explanation: In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
End of explanation
"""
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
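As a sanity check on this persistence step, pickled objects round-trip through `pickle.load`; a tiny standalone illustration (using an in-memory buffer and a plain dict as a stand-in for the agent):

```python
import io
import pickle

# Round-trip a stand-in object through pickle, the same mechanism used to save the agent.
buf = io.BytesIO()
pickle.dump({'name': 'Agent_0'}, buf)
buf.seek(0)
restored = pickle.load(buf)
print(restored['name'])  # Agent_0
```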
"""
Explanation: Let's look at the symbol's data to see how good the recommender has to be.
End of explanation
"""
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: Let's run the trained agent with the test set.
First, a non-learning test: this scenario would be worse than what is possible (in fact, the Q-learner can learn from past samples in the test set without compromising causality).
End of explanation
"""
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
"""
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
"""
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
"""
Explanation: What are the metrics for "holding the position"?
End of explanation
"""
data_file = 'results/usALEX-5samples-PR-raw-dir_ex_aa-fit-AexAem.csv'
"""
Explanation: Direct excitation coefficient fit
This notebook extracts the direct excitation coefficient from the set of 5 us-ALEX smFRET measurements.
What it does?
This notebook performs a weighted average of the direct excitation coefficients fitted from each measurement.
Dependencies
This notebook reads the file:
End of explanation
"""
from __future__ import division
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
sns.set_style('whitegrid')
palette = ('Paired', 10)
sns.palplot(sns.color_palette(*palette))
sns.set_palette(*palette)
data = pd.read_csv(data_file).set_index('sample')
data
data.columns
d = data[[c for c in data.columns if c.startswith('dir')]]
d.plot(kind='line', lw=3, title='Direct Excitation Coefficient')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., frameon=False);
dir_ex_aa = np.average(data.dir_ex_S_kde_w5, weights=data.n_bursts_aa)
'%.5f' % dir_ex_aa
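The `np.average` call above is the burst-count-weighted mean, sum(w·x)/sum(w); a standalone check with made-up numbers (not the fitted coefficients from the measurements):

```python
import numpy as np

# Illustrative values only -- not the real fitted coefficients or burst counts.
coeffs = np.array([0.04, 0.05, 0.06])
n_bursts = np.array([100, 300, 600])
manual = (coeffs * n_bursts).sum() / n_bursts.sum()
print(np.isclose(np.average(coeffs, weights=n_bursts), manual))  # True
```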
"""
Explanation: The data has been generated by running the template notebook
usALEX-5samples-PR-raw-dir_ex_aa-fit-AexAem
for each sample.
To recompute the PR data used by this notebook run the
8-spots paper analysis notebook.
Computation
End of explanation
"""
with open('results/usALEX - direct excitation coefficient dir_ex_aa.csv', 'w') as f:
f.write('%.5f' % dir_ex_aa)
"""
Explanation: Save coefficient to disk
End of explanation
"""
import requests
import sys # system module
import pandas as pd # data package
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for pandas
import seaborn as sns # seaborn graphic tools
from plotly.offline import iplot, iplot_mpl # plotly plotting functions
import plotly.graph_objs as go
import plotly
import statsmodels.formula.api as sm #linear regression package
import statsmodels.api as sm
from pandas.stats.api import ols #linear regression; note: pandas.stats was removed in pandas 0.20+, so this needs an older pandas
%matplotlib inline
plotly.offline.init_notebook_mode(connected=True)
"""
Explanation: Stock Pitch and Trading Simulation
Final Project
Name: Kehang Zhou (kz653@nyu.edu)
Date: May 8, 2017
Introduction:
Stock pitching has long been a popular topic among investors, whether speculative or long-term. People always want to invest in stocks that can bring them large and stable wealth. Fundamental analysis and technical analysis are the two main branches of analysis in the equity market. Technical analysis approaches a security through its chart and historical performance, while fundamental analysis approaches a security through the statistics in the company's financial statements.
This project is an exploration that aims to apply both kinds of analysis to pitching stocks. I tried to explore the relationships between stock returns and the statistics I gathered (both company statistics and stock-performance statistics). I also approach the analysis from multiple dimensions besides those statistics (sectors, trading volumes). In the final part, I run two trading simulations (back tests) of specific strategies to see whether the analysis can be turned into profitable trading ideas.
Outline:
Process of Deriving Data and Simple Data Cleaning (skippable and Unstable)
Data Analysis on Statistics
Trading Simulation and Analysis
Note that the first part is skippable. Also, please don't run all the cells at once. The main part of this notebook is based on the customized dataset I derived on April 15th; please load the csv file in the Data Analysis part.
End of explanation
"""
#read the data from github and use the second file
url = 'https://raw.githubusercontent.com/datasets/s-and-p-500-companies/master/data/constituents-financials.csv'
url2 = 'https://raw.githubusercontent.com/datasets/s-and-p-500-companies/master/data/constituents.csv'
sp = pd.read_csv(url2)
# The table looks like:
sp.head(8)
sp.shape #general shape of the dataset
sp.dtypes #datatypes of the dataset
"""
Explanation: Main Dataset
I found the list of S&P 500 companies in this website's dataset. The file constituents-financials.csv contains the symbol, name, and sector of each company, plus a bunch of financial statistics such as dividend yield, price-to-book value, and EBITDA. Those could be useful to me, but I'm afraid they are not the most up-to-date statistics. So I use the other file, constituents.csv, which only contains symbols, names, and sectors.
End of explanation
"""
from yahoo_finance import Share #load the Share function
#Example: by passing a valid stock symbol to the Share function, we can get the statistics we want.
#Below is a demo that gets the 50-day moving average of the BRK-B share.
Share('BRK-B').get_50day_moving_avg()
stocklist = list(sp['Symbol'].astype(str)) #get the symbol list with symbols as string
print(len(stocklist))
stocklist[0:5] # how the first few symbol strings look
"""
Explanation: Yahoo Finance Data Package
Here I install the yahoo-finance package in order to get access to the latest stock data:
Please run pip install yahoo-finance in a terminal to install the package.
Following the package's instructions on the Python Package Index, I used several of its methods to get data by passing in the symbol of a stock.
Basically, this package gives you the latest company and stock statistics shown on the Yahoo Finance summary page of a stock. Here is an example of the page
End of explanation
"""
stocklist_a = [] # create a adjusted list of symbol
for stock in stocklist:
slist = list(stock) # use list function on a string
slist2 = ['-' if x=='.' else x for x in slist] # replace the '.' in the list with '-'
stock = "".join(slist2) # combine the list as a string
stocklist_a.append(stock) # put the new symbol in the new list
sp['Symbol'] = stocklist_a #set the adjusted list as symbols in data
print(stocklist_a[0:10]) # how the adjusted list looks
sp.head(7)
"""
Explanation: I discovered that some symbols in our stock list, such as BRK.B, contain a dot ('.'). The yahoo-finance package only accepts symbols with a '-', like BRK-B, so I cleaned the data by replacing '.' with '-'.
End of explanation
"""
list1 = [] # create an empty list to fill in the statistics.
for s in stocklist_a: # use the for loop to pass every symbol through
a = Share(s).get_50day_moving_avg()
list1.append(a)
list1[0:5] # How the list would look like if no error
sp['50 Days Moving Average'] = list1 # create a new columns and put the list
sp.head(5)
#The following codes did the similar things.
list2 = []
for s1 in stocklist_a:
list2.append(Share(s1).get_200day_moving_avg())
list3 = []
for s2 in stocklist_a:
list3.append(Share(s2).get_dividend_yield())
list4 = []
for s3 in stocklist_a:
list4.append(Share(s3).get_short_ratio())
list5 = []
for s4 in stocklist_a:
list5.append(Share(s4).get_price_earnings_growth_ratio())
sp['200 Days Moving Average'] = list2
sp['Dividend Yield'] = list3
sp['Short Ratio'] = list4
sp['Peg Ratio'] = list5
"""
Explanation: Then I used the for loop to get the specific statistics for each stock. Put all data in a list and then put the list in the dataframe
get_50day_moving_avg() - get the 50 days moving average of the stock price
get_200day_moving_avg() - get the 200 days moving average of the stock price
get_dividend_yield() - get the dividend yield of the stock
get_short_ratio() - get the short ratio of the stock
get_price_earnings_growth_ratio() -get the price to earning growth ratio(Peg) of the stock
I passed each one separately. Because every time I put the code together in one, it takes a super long time and it will have a error message. It's better if we run each one separately.
Warning: When I use the code to derive my dataset on April 15, 2016, everything was fine. However, when I execute the code in May, I always get error message saying HTTP error and the for loop interrupt. So if you run the cell below and get a error message, please don't executes the rest cells in the first part and just read the first part (Otherwise you'll get a lot error message). That's also why I clear the output of the dataset
End of explanation
"""
# create two list for the stock price on the first day and the stock price on the last day
startlist=[]
closelist=[]
for stock in stocklist_a:
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
f = web.DataReader(stock, 'yahoo', start, end)
flist = f['Close'].tolist() #put the close price in the list
startlist.append(flist[0]) # add the close price on the first day to the startlist
closelist.append(flist[-1]) # add the close price on the last day to the startlist
sp['Stock price on 2016-11-1'] = startlist # set new columns to put two lists in as data
sp['Stock price on 2017-4-15'] = closelist
sp['Holding Period Return'] = sp['Stock price on 2017-4-15']/sp['Stock price on 2016-11-1'] -1
# Calculate Holding period return data in this period and put it in to the dataset as a new column
sp.head()
"""
Explanation: Get the holding-period return data from November 1, 2016 to April 15, 2017 using the pandas data reader.
The reasons why I picked the time range from November 1, 2016 to April 15, 2017 are:
1. It is roughly 6 months, half a year.
2. Some company statistics, like the peg ratio, don't change frequently; they change seasonally or semi-annually with the companies' reports. So the statistics we get should have had an effect on the stock price during this period.
3. The US stock market experienced a huge rally after the US presidential election at the start of November 2016.
End of explanation
"""
#sp.to_csv('sp500_April_15.csv') # save the snapshot to disk; please don't re-run this line, or it will overwrite the April 15 data
"""
Explanation: Since every run of the code above fetches the latest data from Yahoo Finance, I saved the data derived on April 15 to a csv file and load it again below.
End of explanation
"""
spd = pd.read_csv('sp500_April_15.csv', #the data derived on April 15th.
                usecols = range(1,12)) # the first, unnamed column is just the saved row index, so we skip it
spd.head() # general look of the data set
"""
Explanation: Data Analysis on Statistics
The main part starts from here.
Please load the dataset sp500_April_15.csv from your computer. You can change the code in the cell below to read the csv file from a different path; I put the csv in the same folder as this Jupyter notebook.
End of explanation
"""
# spdi is our main dataset, but I also created some other dataset.
spdi = spd.set_index(['Symbol','Name']) # set the multi-level index
spdi.head()
spdi['Sector'].value_counts() # count how many companies in each sector
sector = list(spdi['Sector'].unique()) # find all the sectors name by using the unique function and put into list
sector
dic = dict(spdi['Sector'].value_counts()) # use the dict function to make the value_counts a dictionary
sector_name = [key for key in dic.keys()] # the key of the dictionary is name of each sector, put into a list
sector_count = [v for v in dic.values()] # the value of the dictionary is numbers of companies in each sector, put in a list
# Use the plotly to visualize the composition of S&P 500 stocks, use pie plot
fig = {
'data': [{'labels': sector_name,
'values': sector_count,
'type': 'pie',
'hoverinfo':'label+percent+value',
'hole' : .4
}],
'layout': {'title': 'S&P500 Decomposition by Sector Pie Chart',
'annotations':[{
'font': {'size': 10},
'showarrow': False,
'text': 'Sector + # of Companies + Percentage',
'x': 0.5,
'y': 0.5
}]
}
}
iplot(fig)
"""
Explanation: Stock pitch by sectors
Stock data in a specific sector
End of explanation
"""
sns.set_style("whitegrid") # set the seaborn plotting style
# first try, simple bar plot example
spd1 = spdi[spdi['Sector']== sector[0]] # a small subset spd1 by slicing
fig, ax = plt.subplots(figsize=(8, 10))
spd1['Holding Period Return'].plot.barh(ax=ax)
ax.set_title('Bar Chart of Holding Period Return of Industrial Sector')
ax.set_xlabel('Holding Period Return')
"""
Explanation: Comment:
The S&P 500 contains companies from 11 sectors, but some sectors, like Telecom and Materials, contain only a relatively small number of companies.
Below is a simple bar plot of the holding-period return of the companies inside the 'Industrials' sector.
End of explanation
"""
sns.set() # reset the seaborn plotting style to default
# The following paragraph shows us how to get the holding period return of S&P 500 index.
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
sp500 = web.DataReader('^GSPC', 'yahoo', start, end) # sp500 is a dataset used to calculate the return of the index
sp500_price = sp500['Close'].tolist()
sp500_return = sp500_price[-1] / sp500_price[0] -1
print('The Holding Period Return for S&P500 index is',sp500_return)
# The following code shows how to plot
ave = spdi['Holding Period Return'].groupby(spdi['Sector']).mean() # Use the groupby method to get the mean HPR
ave = ave.reset_index()
ave = ave.set_index('Sector') # reset and then set the 'Sector' as index
# plot
fig, ax = plt.subplots(figsize = (10,6))
ave.plot(ax = ax, kind='bar',alpha = 0.3)
ax.axhline(sp500_return, linestyle = 'dashed', color = 'red')
ax.axhline(0,linestyle = '--', color ='black')
ax.set_title('Holding Period Return Sector Average', fontsize=15)
ax.text(0,sp500_return,'S&P500 Holding Period Return', fontsize=9)
"""
Explanation: Below we plot the average holding-period return for each sector, adding a horizontal line for the holding-period return of the S&P 500 index so we can see which sectors outperform the aggregate index.
Note that the symbol for the S&P 500 index in Yahoo Finance is '^GSPC'.
End of explanation
"""
# seaborn boxplot and swarmplot
fig,ax = plt.subplots(figsize=(18,18))
sns.boxplot(ax=ax,data=spdi, x='Sector', y='Dividend Yield')
sns.swarmplot(ax=ax,data=spdi, x='Sector', y='Dividend Yield',size =8, linewidth=1, alpha=0.5)
ax.set_ylim([0,10])
ax.set_title('Dividend Yield by Sector, Boxplot Plus Swarmplot', fontsize = 15)
"""
Explanation: Comment:
From the plot, we can see that five sectors outperformed the index (Financials, Health Care, Industrials, Information Technology, Materials) over the last half year. The only sector that suffered losses is the Telecom sector, the one with the fewest companies; the Consumer Staples and Energy sectors also performed badly. So we might avoid the bad sectors and pick the good ones in the following pitch.
Dividend Yield Pattern of Sectors
Here is a graph combining a boxplot and a swarmplot of companies' dividend yields, classified by sector, on one canvas.
Before the plot, some intuition about dividend yield. Dividend yield, by definition, is the dividend expressed as a percentage of the current share price.
Companies with high profitability should have more disposable cash flow, so their dividend yield should be higher. Companies (such as tech and health care companies) that need a lot of capital should be capital intensive and pay little dividend. But how large the yield is also depends on the company's operating strategy.
If a company pays a lot of dividend, shareholders would of course be happy about the payout. It might attract a lot of new shareholders, so the share price could go up, and the return of the stock should go up as well. On the other hand, paying dividends unwisely reduces the company's disposable cash flow, so profitability should go down, which would have a negative effect on the stock price.
Let's see if we can capture the pattern of dividend yield of each sector.
End of explanation
"""
# create subset of outperforming companies
spdi['Sector Average Return'] = spdi['Holding Period Return'].groupby(spdi['Sector']).transform('mean')
sp_outperform = spdi[spdi['Holding Period Return'] > spdi['Sector Average Return']]
sp_outperform.head()
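A standalone illustration of the groupby/transform trick used above: `transform` broadcasts each group's mean back to every row, so each row can be compared against its own sector average (toy numbers):

```python
import pandas as pd

demo = pd.DataFrame({'Sector': ['A', 'A', 'B'], 'Ret': [10, 30, 20]})
demo['SectorAvg'] = demo.groupby('Sector')['Ret'].transform('mean')
print(demo['SectorAvg'].tolist())  # [20.0, 20.0, 20.0]
```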
#Use the seaborn pairplot to give an overview of the statistics and the holding period return
sns.set_style("whitegrid")
sp_outperform = sp_outperform.dropna(axis=0, how='any') # remove stocks that have NaN values in their statistics
sns.pairplot(sp_outperform[['Dividend Yield', 'Short Ratio', 'Peg Ratio','Holding Period Return']], size = 3)
"""
Explanation: Comment:
There are some notable facts in this graph.
1. Inside each sector, companies can have very different dividend yields even though they are in the same sector. The Energy group is a good example of this point: the range of its dividend yields is wide.
2. Some sectors have a high dividend yield on average. Utilities and Real Estate are two sectors with relatively high dividend yields. The only 5 companies in the Telecom sector all pay very high dividends, and they have a really bad average holding-period return. If we relate this to the sector average holding-period returns above, the high dividend yield could be the reason why these sectors have underperformed the index and the other sectors.
3. The sectors (Financials, Health Care, Industrials, Information Technology) that have outperformed the index all have relatively low dividend yields on average. Take the Health Care sector as an example: it has the lowest average dividend yield (some health care companies pay no dividend at all!), but it has the highest average holding-period return.
Data Analysis on all Statistics - Explore the correlation
I've already introduced the dividend yield. There are still some other statistics in the chart.
Definitions:
PEG Ratio
Short Ratio
Moving Average
The PEG ratio is a valuation metric for determining the relative trade-off between the price of a stock, the earnings generated per share (EPS), and the company's expected growth. It is derived by dividing the price-to-earnings ratio by the growth rate of earnings per share. A high PEG ratio indicates that the company is overvalued relative to its growth, while a very low PEG ratio might indicate that the company is undervalued. It can also be negative, because earnings and earnings growth can be negative. A fair PEG ratio should be around 1.
The short ratio is the ratio of tradable shares being shorted to shares in the market. Shorting a stock means the investor is not optimistic about it, so I think a stock with a relatively high short ratio should have a lower return.
The moving average is a technical-analysis tool applied to the stock price. I use the simple moving average of the stock price over the last several days. Usually, when the short-term moving average is above the long-term moving average, the stock should perform well.
Below, I add a 'Sector Average Return' column to the main dataset spdi. I also create a subset named sp_outperform by slicing out the companies that outperform their sector average return. I use this dataset for the analysis, which gives clearer and more visible correlations below.
End of explanation
"""
# Use regplot to zoom in on the graphs; it also fits a linear regression with a confidence interval
fig, ax = plt.subplots(2,2,figsize = (12,12))
sns.regplot(x='Dividend Yield',y='Holding Period Return',data=sp_outperform,ax=ax[0,0], ci=95)
sns.regplot(x='Short Ratio',y='Holding Period Return',data=sp_outperform,ax=ax[0,1], ci=95)
sns.regplot(x='Peg Ratio',y='Holding Period Return',data=sp_outperform,ax=ax[1,0],ci=95)
ax[1,0].set_ylim([-1,1])
ax[1,0].set_xlim([-5,5])
sns.regplot(x='Dividend Yield',y='Short Ratio',data=sp_outperform,ax=ax[1,1], ci=95)
"""
Explanation: Comment:
The pairplot is not so satisfying, but we get some conclusions.
1. The PEG ratio might be a bad statistic for analyzing stocks. In the pairplot, because of some extreme PEG values, the pattern of the PEG ratio looks like a straight line that tells us nothing. I'll zoom in on the detail of the plot below.
2. The short ratio has a weak positive correlation with dividend yield. This could be reasonable: when companies pay high dividends, investors may expect the stock price to go down because dividends hurt the companies' disposable cash flow and profitability, so investors short more of those stocks.
3. The short ratio also has a weak negative correlation with holding-period return. When the short ratio gets very high (>8%), the average holding-period return is lower than that of companies with a lower short ratio.
4. Dividend yield seems to be uncorrelated with holding-period return, but we observe some very high returns among companies with low dividend yields. This might suggest a weak negative correlation.
End of explanation
"""
# Load sklearn.linear_model to fit the linear regression and get the coefficients
from sklearn.linear_model import LinearRegression
splm = sp_outperform[['Holding Period Return','Dividend Yield' ]]
npmatrix = np.matrix(splm)
Y, X = npmatrix[:,0], npmatrix[:,1]
lreg = LinearRegression().fit(X,Y)
m = lreg.coef_[0]
c = lreg.intercept_
print('The formula of Model is: Holding Period Return = {0}*Dividend Yield + {1}'.format(m,c))
"""
Explanation: These four plots confirm my thoughts above. The PEG ratio is not a good measure, and there is a negative correlation between dividend yield and return. None of the correlations are strong, so this is not satisfying.
Below I fit a linear regression to model the relationship between dividend yield and holding-period return, just to show the negative correlation coefficient.
End of explanation
"""
spdi_MA = spdi[['Sector','50 Days Moving Average', '200 Days Moving Average', 'Holding Period Return']].copy() #copy the subset of spdi
spdi_MA['MA Signal'] = (spdi['50 Days Moving Average'] > spdi['200 Days Moving Average']) # boolean selection as new column
spdi_MA.head()
# Use the violinplot to plot the holding period return for each sector.
# Note that I use the hue function on 'MA Signal', so we get two patterns for the stocks in each sector.
sns.set(style="whitegrid", palette="pastel", color_codes=True)
fig, ax = plt.subplots(figsize=(17,16))
sns.violinplot(ax=ax, x= 'Sector', y ='Holding Period Return',hue='MA Signal', data = spdi_MA, edgecolor='gray', inner='quart', split=True)
ax.set_ylim([-1,1])
ax.set_title('Holding Period Return vs. Moving Average 50-200 Order', fontsize = 18)
fig.tight_layout()
"""
Explanation: Exploring Moving Average
Here, to identify whether the 50-day moving average was higher than the 200-day moving average on April 15, 2017, I create a subset of spdi called spdi_MA, with a new column of booleans signaling the moving-average relationship: True means the 50-day moving average is higher than the 200-day one, and False means the opposite.
End of explanation
"""
# I selected the top 10 stocks.
sp_top10 = spdi[spdi['50 Days Moving Average'] > spdi['200 Days Moving Average']].sort_values(['Dividend Yield', 'Short Ratio'],
ascending = True,
na_position='last').head(10)
sp_top10
#plot their performance and their sector average performance
fig, ax = plt.subplots(figsize=(8, 8))
sp_top10.plot.barh(ax=ax, y="Holding Period Return", color="Blue")
sp_top10.plot.barh(ax=ax, y="Sector Average Return", color="Pink", alpha=0.6)
ax.vlines(0,-10,10, lw=3)
ax.set_title('Top10 S&P500 Stocks Selected, Holding Period Return vs. Sector Average HPR')
ax.set_xlabel('Returns')
"""
Explanation: Comment:
Interestingly, whatever the sector is, stocks whose 50-day moving average is higher than their 200-day moving average always have a higher average holding-period return. So this is a good benchmark when picking stocks.
The final thing I did in this part was to combine all the ideas above to see whether I could get a good portfolio of stocks that performed well over the last half year. According to the correlations, we should pick stocks with a lower dividend yield, a lower short ratio, and a 50-day moving average above the 200-day moving average.
I do this with the sort_values function, which gives me the stocks with the lowest dividend yield as well as the lowest short ratio, plus a boolean selection on the spdi dataset to find stocks whose 50-day MA is above their 200-day MA.
End of explanation
"""
# Get another time series of stock data; I use the Apple stock ('AAPL') as an example.
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock1 = web.DataReader('AAPL', 'yahoo', start, end)[['Close', 'Volume']] #I only need Closing price and volume data
stock1.head()
stock1['Previous Close'] = stock1.Close.shift(1) # Use the shift(1) method to shift 1 space of the volume and price
stock1['Previous Volume'] = stock1.Volume.shift(1)
stock1['Volume Change'] = np.log(stock1['Volume']/ stock1['Previous Volume']) # Calculate the daily return and the daily volume change
stock1['Daily Return'] = np.log(stock1['Close']/ stock1['Previous Close'])
# I express the daily return and volume change as log return, which is more convenient when I sum the return
stock1.head()
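A quick standalone sanity check of why log returns are used here: they are additive, so summing the 'Daily Return' column gives the holding-period log return (toy prices):

```python
import numpy as np

prices = np.array([100.0, 105.0, 101.0, 110.0])
daily_log = np.log(prices[1:] / prices[:-1])   # per-day log returns
total = np.log(prices[-1] / prices[0])         # whole-period log return
print(np.isclose(daily_log.sum(), total))  # True
```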
# The scatter plot of volume change vs. Daily return, using plotly.
trace = go.Scatter(x = stock1['Volume Change'],
y = stock1['Daily Return'],
mode = 'markers'
)
layout = dict( title = 'Daily Volume Change vs. Price Change(Daily Return)',
height = 600,
width = 800,
yaxis = dict(zeroline = True,
title = 'Daily Return'),
xaxis = dict(zeroline=True,
title = 'Daily Volume Change')
)
iplot(go.Figure(data=[trace], layout=layout))
"""
Explanation: Conclusion: It seems the portfolio is good. All stocks gained a positive return; 7 out of 10 stocks outperform their own sector, and 8 out of 10 are in hot sectors (Financials, Health Care, Industrials, Information Technology, Materials). I also got a stock with a super return (57.5%), Micron Technology. But the portfolio is not perfect, as it contains underperformers like Mylan N.V.
Trading Idea simulation
Below we examine two simple dynamic trading strategies.
The first trading strategy is a brand-new idea, aiming to explore the relationship between the daily trading-volume change and the daily return of a stock. Before the simulation, my hypothesis is that the price of a stock should increase with increasing trading volume in an up market because there is more demand. It turned out to be a failed attempt.
The second trading strategy is inspired by the idea above. I go long the stock when the short-term moving average is higher than the long-term moving average and collect the daily return; in the opposite situation, I don't hold any position (I don't consider shorting here because shorting is costly). Also, if we treat the closing price as a moving average over a very short period, we can use the relation between the closing price and a moving average as a decision factor as well.
-First Trading Strategy - Volume Change Based - Failed Attempt
Start by examining the relationship between daily volume change and daily return.
End of explanation
"""
res = ols(y=stock1['Daily Return'], x=stock1['Volume Change'] )
res
"""
Explanation: The graph looks bad. I didn't discover any obvious correlations between the volume change and the return of stocks.
Below we do a statistical linear regression to see whether the two are correlated.
End of explanation
"""
print('The strategy return is' ,stock1[stock1['Volume Change']>0]['Daily Return'].sum())
print('The market return is' , stock1['Daily Return'].sum()) #
"""
Explanation: The t-stat of the coefficient and the R-squared are not big enough. Though we get a positive correlation, it is weak and unreliable.
Still, I want to examine the return of the strategy, that is, the return from going long the stock when volume increases and holding no position when volume decreases. Below is the test on the example stock.
End of explanation
"""
# define functions which could get the strategy return and the market return of a stock by inputting the symbol
# The function basically use the same procedure I did for Apple stocks.
def strategy1_return(stock):
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock1 = web.DataReader(stock, 'yahoo', start, end)
stock1 = stock1[['Close', 'Volume']]
stock1['Previous Close'] = stock1.Close.shift(1)
stock1['Previous Volume'] = stock1.Volume.shift(1)
stock1['Volume Change'] = np.log(stock1['Volume']/ stock1['Previous Volume'])
stock1['Daily Return'] = np.log(stock1['Close']/ stock1['Previous Close'])
return stock1[stock1['Volume Change']>0]['Daily Return'].sum()
def market_return(stock):
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock1 = web.DataReader(stock, 'yahoo', start, end)
stock1 = stock1[['Close', 'Volume']]
stock1['Previous Close'] = stock1.Close.shift(1)
stock1['Previous Volume'] = stock1.Volume.shift(1)
stock1['Volume Change'] = np.log(stock1['Volume']/ stock1['Previous Volume'])
stock1['Daily Return'] = np.log(stock1['Close']/ stock1['Previous Close'])
return stock1['Daily Return'].sum()
# To make sure the functions are right, print out the returns for the Apple stock.
print('The strategy return is', strategy1_return('AAPL'))
print('The market return is', market_return('AAPL'))
# get the symbol_list from the spdi dataset
symbol_list = list(spdi.reset_index()['Symbol'].astype(str))
# Build two lists to hold the strategy returns and the market returns. This might take a while to run since there are 20 stocks.
lists1 = []
listm1 = []
for symbol in symbol_list[0:20]:
lists1.append(strategy1_return(symbol)) # for each stock symbol get the strategy return and add it into the list
listm1.append(market_return(symbol)) # for each stock symbol get the market return and add it into the list
# create new data frame for this strategy in order to plot
strategy1 = pd.DataFrame({'Strategy Return': lists1,
'Market Return': listm1},
index = symbol_list[0:20])
strategy1.head(3)
# use plotly to plot the bar chart of market return vs. strategy return
mr = dict(type = 'bar',
name = 'Market Return',
y = strategy1['Market Return'],
x = strategy1.index,
marker = dict(color = 'rgb(200,200,200)'),
opacity=0.4
)
sr = dict(type = 'bar',
name = 'Strategy Return',
y = strategy1['Strategy Return'],
x = strategy1.index,
marker = dict(color = 'rgb(100,10,100)'),
opacity = 0.4
)
layout = dict(width=850, height=500,
yaxis={'title': 'Return'},
title='Market Return vs. Strategy Return',
xaxis={'title': 'Sample Company Symbol'}
)
iplot(go.Figure(data=[mr,sr], layout=layout))
"""
Explanation: I get a smaller return than the market return, which means the strategy failed on this sample. Below, I use the first 20 stocks as a sample to test my first strategy again.
End of explanation
"""
# Get another time series of stock data, still using Apple ('AAPL') as the example.
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock2 = web.DataReader('AAPL', 'yahoo', start, end)[['Close']] #This time I only need closing price
stock2.head()
# Calculate the 5/20 days moving average using the rolling method.
stock2['5 Days Moving Average'] = stock2['Close'].rolling(window=5).mean()
stock2['20 Days Moving Average'] = stock2['Close'].rolling(window=20).mean()
stock2['Previous Close'] = stock2.Close.shift(1)
stock2['Daily Return'] = np.log(stock2['Close']/ stock2['Previous Close']) #Similar to the first strategy, calculate daily log return
stock3 = stock2.dropna(axis = 0, how='any') # drop the leading NaN rows produced by the rolling windows, creating a new dataset stock3
stock3.head() # The new time series begins on November 29, 2016
# time series plot of closing price and 5/20 days moving average, plotly
close = go.Scatter(
x=stock3.index,
y=stock3['Close'] ,
name = 'Closing Price',
line = dict(color = 'black'),
opacity = 0.8)
MA5 = go.Scatter(
x=stock3.index,
y=stock3['5 Days Moving Average'] ,
name = '5 Days Moving Average',
line = dict(color = 'rgb(0,125,255)'),
opacity = 0.8)
MA20 = go.Scatter(
x=stock3.index,
y=stock3['20 Days Moving Average'],
name = '20 Days Moving Average',
line = dict(color = 'rgb(255,0,0)'),
opacity = 0.8)
layout = dict(
title = 'Time Series Plot of Stock Price and 5/20 Days Moving Average',
yaxis={'title': 'Price'},
xaxis={'title': 'Date'}
)
iplot(dict(data=[close,MA5, MA20], layout=layout))
# Here we define our benchmark for longing the stock, and similarly we calculate our new strategy return and market return
benchmark = (stock3['5 Days Moving Average'] > stock3['20 Days Moving Average']) | (stock3['Close']> stock3['5 Days Moving Average'])
print('The strategy return is', stock3[benchmark]['Daily Return'].sum())
print('The market return is', stock3['Daily Return'].sum())
# Similarly, we define another two functions for the second strategy
def strategy2_return(stock):
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock2 = web.DataReader(stock, 'yahoo', start, end)[['Close']]
stock2['5 Days Moving Average'] = stock2['Close'].rolling(window=5).mean()
stock2['20 Days Moving Average'] = stock2['Close'].rolling(window=20).mean()
stock2['Previous Close'] = stock2.Close.shift(1)
stock2['Daily Return'] = np.log(stock2['Close']/ stock2['Previous Close'])
stock3 = stock2.dropna(axis = 0, how='any')
benchmark = (stock3['5 Days Moving Average'] > stock3['20 Days Moving Average']) | (stock3['Close']> stock3['5 Days Moving Average'])
return stock3[benchmark]['Daily Return'].sum()
def market2_return(stock):
start = datetime.datetime(2016, 11, 1)
end = datetime.datetime(2017, 4, 15)
stock2 = web.DataReader(stock, 'yahoo', start, end)[['Close']]
stock2['5 Days Moving Average'] = stock2['Close'].rolling(window=5).mean()
stock2['20 Days Moving Average'] = stock2['Close'].rolling(window=20).mean()
stock2['Previous Close'] = stock2.Close.shift(1)
stock2['Daily Return'] = np.log(stock2['Close']/ stock2['Previous Close'])
stock3 = stock2.dropna(axis = 0, how='any')
benchmark = (stock3['5 Days Moving Average'] > stock3['20 Days Moving Average']) | (stock3['Close']> stock3['5 Days Moving Average'])
return stock3['Daily Return'].sum()
# To make sure the functions are right, print out the returns for the Apple stock.
print('The strategy return is', strategy2_return('AAPL'))
print('The market return is', market2_return('AAPL'))
"""
Explanation: Comment:
Obviously, the first strategy failed: it cannot ensure a better return than the market, and many stocks in the chart underperform the market.
-Second Trading Strategy - Moving Average Based - Success
To restate the idea, the trader longs the stock when the short-term moving average is higher than the long-term moving average. Specifically, I long the stock when the stock price (an extremely short-term moving average) is greater than the 5-day moving average, or when the 5-day moving average is greater than the 20-day moving average. I pick the 5-day and 20-day windows simply because the testing period is only half a year; one can test the 50-day and 200-day averages over a longer period.
End of explanation
"""
lists2=[]
listm2=[]
# Build two lists to hold the strategy returns and the market returns. This might take a while to run since there are 20 stocks.
for symbol in symbol_list[0:20]:
lists2.append(strategy2_return(symbol))
listm2.append(market2_return(symbol))
# create new data frame for this strategy in order to plot
strategy2 = pd.DataFrame({'Strategy Return': lists2,
'Market Return': listm2},
index = symbol_list[0:20])
strategy2.head(3)
mr2 = dict(type = 'bar',
name = 'Market Return',
y = strategy2['Market Return'],
x = strategy2.index,
marker = dict(color = 'rgb(200,200,200)'),
opacity=0.4
)
sr2 = dict(type = 'bar',
name = 'Moving Average Strategy Return',
y = strategy2['Strategy Return'],
x = strategy2.index,
marker = dict(color = 'rgb(100,10,100)'),
opacity = 0.4
)
layout = dict(width=850, height=500,
yaxis={'title': 'Return'},
title='Market Return vs. Moving Average Strategy Return',
xaxis={'title': 'Sample Company Symbol'}
)
iplot(go.Figure(data=[mr2,sr2], layout=layout))
"""
Explanation: The steps below are quite similar to the first strategy; I reuse the symbol_list derived in the first trading-strategy simulation.
End of explanation
"""
|
chris1610/pbpython | notebooks/Selecting_Columns_in_DataFrame.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
df = pd.read_csv(
'https://data.cityofnewyork.us/api/views/vfnx-vebw/rows.csv?accessType=DOWNLOAD&bom=true&format=true'
)
"""
Explanation: Tips for Selecting Columns in a DataFrame
Notebook to accompany this post.
End of explanation
"""
col_mapping = [f"{c[0]}:{c[1]}" for c in enumerate(df.columns)]
col_mapping
"""
Explanation: Build a mapping list so we can see the index of all the columns
End of explanation
"""
col_mapping_dict = {c[0]:c[1] for c in enumerate(df.columns)}
col_mapping_dict
"""
Explanation: We can also build a dictionary
End of explanation
"""
df.iloc[:, 2]
"""
Explanation: Use iloc to select just the column at index 2 (Unique Squirrel ID)
End of explanation
"""
df.iloc[:, [0,1,2]]
"""
Explanation: Pass a list of integers to select multiple columns by index
End of explanation
"""
df.iloc[:, 0:3]
"""
Explanation: We can also pass a slice object to select a range of columns
End of explanation
"""
np.r_[0:3,15:19,24,25]
"""
Explanation: If we want to combine list and slice notation, we need to use numpy.r_ to process the data into an appropriate format.
End of explanation
"""
df.iloc[:, np.r_[0:3,15:19,24,25]]
"""
Explanation: We can pass the output of np.r_ to .iloc to use multiple selection approaches
End of explanation
"""
df_2 = pd.read_csv(
'https://data.cityofnewyork.us/api/views/vfnx-vebw/rows.csv?accessType=DOWNLOAD&bom=true&format=true',
usecols=np.r_[1,2,5:8,15:25],
)
df_2.head()
"""
Explanation: We can use the same notation when reading in a csv as well
End of explanation
"""
run_cols = df.columns.str.contains('run', case=False)
run_cols
df.iloc[:, run_cols].head()
"""
Explanation: We can also select columns using a boolean array
End of explanation
"""
df.iloc[:, lambda df:df.columns.str.contains('run', case=False)].head()
"""
Explanation: A lambda function can be useful for combining into 1 line.
End of explanation
"""
df.iloc[:, lambda df: df.columns.str.contains('district|precinct|boundaries',
case=False)].head()
"""
Explanation: A more complex example
End of explanation
"""
location_cols = df.columns.str.contains('district|precinct|boundaries',
case=False)
location_cols
location_indices = [i for i, col in enumerate(location_cols) if col]
location_indices
df.iloc[:, np.r_[0:3,location_indices]].head()
"""
Explanation: Combining index and boolean arrays
End of explanation
"""
|
akutuzov/gensim | docs/notebooks/topic_coherence_tutorial.ipynb | lgpl-2.1 | import numpy as np
import logging
try:
import pyLDAvis.gensim
except ImportError:
    raise ValueError("SKIP: please install pyLDAvis")
import json
import warnings
warnings.filterwarnings('ignore') # To ignore all warnings that arise here to enhance clarity
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
from gensim.models.hdpmodel import HdpModel
from gensim.models.wrappers import LdaVowpalWabbit, LdaMallet
from gensim.corpora.dictionary import Dictionary
from numpy import array
"""
Explanation: Demonstration of the topic coherence pipeline in Gensim
Introduction
Explanation: Demonstration of the topic coherence pipeline in Gensim
Introduction
We will be using the u_mass and c_v coherence for two different LDA models: a "good" and a "bad" LDA model. The good LDA model will be trained over 50 iterations and the bad one for 1 iteration. Hence, in theory, the good LDA model should come up with better, more human-understandable topics, and the coherence measure for the good LDA model should be higher (better) than that for the bad LDA model.
End of explanation
"""
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
"""
Explanation: Set up logging
End of explanation
"""
texts = [['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
"""
Explanation: Set up corpus
As stated in table 2 from this paper, this corpus essentially has two classes of documents. First five are about human-computer interaction and the other four are about graphs. We will be setting up two LDA models. One with 50 iterations of training and the other with just 1. Hence the one with 50 iterations ("better" model) should be able to capture this underlying pattern of the corpus better than the "bad" LDA model. Therefore, in theory, our topic coherence for the good LDA model should be greater than the one for the bad LDA model.
End of explanation
"""
goodLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=50, num_topics=2)
badLdaModel = LdaModel(corpus=corpus, id2word=dictionary, iterations=1, num_topics=2)
"""
Explanation: Set up two topic models
We'll be setting up two different LDA Topic models. A good one and bad one. To build a "good" topic model, we'll simply train it using more iterations than the bad one. Therefore the u_mass coherence should in theory be better for the good model than the bad one since it would be producing more "human-interpretable" topics.
End of explanation
"""
goodcm = CoherenceModel(model=goodLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
badcm = CoherenceModel(model=badLdaModel, corpus=corpus, dictionary=dictionary, coherence='u_mass')
"""
Explanation: Using U_Mass Coherence
End of explanation
"""
print(goodcm)
"""
Explanation: View the pipeline parameters for one coherence model
Following are the pipeline parameters for u_mass coherence. By pipeline parameters, we mean the functions being used to calculate segmentation, probability estimation, confirmation measure and aggregation as shown in figure 1 in this paper.
End of explanation
"""
pyLDAvis.enable_notebook()
pyLDAvis.gensim.prepare(goodLdaModel, corpus, dictionary)
pyLDAvis.gensim.prepare(badLdaModel, corpus, dictionary)
print(goodcm.get_coherence())
print(badcm.get_coherence())
"""
Explanation: Interpreting the topics
As we will see below using LDA visualization, the better model comes up with two topics composed of the following words:
1. goodLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "eps", "interface" etc which captures the first set of documents.
- Topic 2: More weightage assigned to words such as "graph", "trees", "survey" which captures the topic in the second set of documents.
2. badLdaModel:
- Topic 1: More weightage assigned to words such as "system", "user", "trees", "graph" which doesn't make the topic clear enough.
- Topic 2: More weightage assigned to words such as "system", "trees", "graph", "user" which is similar to the first topic. Hence both topics are not human-interpretable.
Therefore, the topic coherence for the goodLdaModel should be greater for this than the badLdaModel since the topics it comes up with are more human-interpretable. We will see this using u_mass and c_v topic coherence measures.
Visualize topic models
End of explanation
"""
goodcm = CoherenceModel(model=goodLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
badcm = CoherenceModel(model=badLdaModel, texts=texts, dictionary=dictionary, coherence='c_v')
"""
Explanation: Using C_V coherence
End of explanation
"""
print(goodcm)
"""
Explanation: Pipeline parameters for C_V coherence
End of explanation
"""
print(goodcm.get_coherence())
print(badcm.get_coherence())
"""
Explanation: Print coherence values
End of explanation
"""
model1 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=50)
model2 = LdaVowpalWabbit('/home/devashish/vw-8', corpus=corpus, num_topics=2, id2word=dictionary, passes=1)
cm1 = CoherenceModel(model=model1, corpus=corpus, coherence='u_mass')
cm2 = CoherenceModel(model=model2, corpus=corpus, coherence='u_mass')
print(cm1.get_coherence())
print(cm2.get_coherence())
model1 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=50)
model2 = LdaMallet('/home/devashish/mallet-2.0.8RC3/bin/mallet',corpus=corpus , num_topics=2, id2word=dictionary, iterations=1)
cm1 = CoherenceModel(model=model1, texts=texts, coherence='c_v')
cm2 = CoherenceModel(model=model2, texts=texts, coherence='c_v')
print(cm1.get_coherence())
print(cm2.get_coherence())
"""
Explanation: Support for wrappers
This API supports gensim's LdaVowpalWabbit and LdaMallet wrappers as the model input parameter.
End of explanation
"""
hm = HdpModel(corpus=corpus, id2word=dictionary)
# To get the topic words from the model
topics = []
for topic_id, topic in hm.show_topics(num_topics=10, formatted=False):
topic = [word for word, _ in topic]
topics.append(topic)
topics[:2]
# Initialize CoherenceModel using `topics` parameter
cm = CoherenceModel(topics=topics, corpus=corpus, dictionary=dictionary, coherence='u_mass')
cm.get_coherence()
"""
Explanation: Support for other topic models
The gensim topic coherence pipeline can be used with other topic models too. Only the tokenized topics need to be made available to the pipeline, e.g. with the gensim HDP model
End of explanation
"""
|
kazzz24/deep-learning | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, [None, real_dim], name='inputs_real')
inputs_z = tf.placeholder(tf.float32, [None, z_dim], name='inputs_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('generator', reuse=reuse): # finish this
# Hidden layer
#h1 = tf.contrib.layers.fully_connected(z, n_units, activation_fn=None)
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size, g_hidden_size, reuse=False)
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse=True, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
labels_real = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=labels_real))
labels_fake = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels_fake))
d_loss = d_loss_real + d_loss_fake
labels_fake_fooled = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=labels_fake_fooled))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
#print([v.name for v in t_vars])
g_vars = [v for v in t_vars if v.name.startswith('generator')]
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise. It looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
peterdalle/mij | 2 Web scraping and APIs/Web scraping and Exercise.ipynb | gpl-3.0 | !pip install lxml
!pip install BeautifulSoup4
import urllib.request
from lxml import html
from bs4 import BeautifulSoup
"""
Explanation: Web scraping
We will scrape data from:
Internet Movie Database
Washington Post
Wikipedia
Ethics:
Scraping can be done much faster than any human could browse, so pause ~1 second before requesting the same site again.
Don't republish other people's data without consent.
1. Import libraries
If you don't have the library installed already, install them with pip. We can do that via Jupyter Notebooks as well!
End of explanation
"""
# Scrape all HTML from webpage.
def scrapewebpage(url):
# Open URL and get HTML.
web = urllib.request.urlopen(url)
# Make sure there wasn't any errors opening the URL.
if (web.getcode() == 200):
html = web.read()
return(html)
else:
print("Error %s reading %s" % str(web.getcode()), url)
# Helper function that scrape the webpage and turn it into soup.
def makesoup(url):
html = scrapewebpage(url)
return(BeautifulSoup(html, "lxml"))
"""
Explanation: 2. Define functions for scraping
End of explanation
"""
# Scrape Interstellar (2014) by using our own function "makesoup" we defined above.
movie_soup = makesoup('http://www.imdb.com/title/tt0816692/')
# Get movie title.
title = movie_soup.find(itemprop="name").get_text()
title = title.strip() # Remove whitespace before and after text
# Get movie year.
year = movie_soup.find(id="titleYear").get_text()
year = year[1:5] # Remove parentheses, make (2014) into 2014.
# Get movie duration.
duration = movie_soup.find(itemprop="duration").get_text()
duration = duration.strip() # Remove whitespace before and after text
# Get director.
director = movie_soup.find(itemprop="director").get_text()
director = director.strip() # Remove whitespace before and after text
# Get movie rating.
rating = movie_soup.find(itemprop="ratingValue").get_text()
# Get cast list.
actors = []
for castlist in movie_soup.find_all("table", "cast_list"):
for actor in castlist.find_all(itemprop="actor"):
actors.append(actor.get_text().strip())
# Present the results.
print("Movie: " + title)
print("Year: " + year)
print("Director: " + director)
print("Duration: " + duration)
print("Rating: " + rating)
# Present list of actors.
print()
print("Main actors:")
for actor in actors:
print("- " + actor)
"""
Explanation: 3. Scrape Internet Movie Database
Get some info about a movie.
End of explanation
"""
wpost_soup = makesoup("http://www.washingtonpost.com/")
# Get headlines.
headlines = wpost_soup.find_all("div", "headline")
print("Found " + str(len(headlines)) + " headlines")
# Print headlines.
for headline in headlines:
print(headline.get_text().strip())
# Print headlines and links.
for links in headlines:
for link in links.find_all("a"):
print(link.get_text())
print(link.get("href"))
print()
# Get all the links on the page.
for link in wpost_soup.find_all("a"):
href = link.get("href")
if href is not None:
if href[:4] == "http":
print(href)
"""
Explanation: 4. Scrape Washington Post
Get the latest headlines and links.
End of explanation
"""
wiki_soup = makesoup("https://en.wikipedia.org/wiki/Parliamentary_Assembly_of_the_Council_of_Europe")
# Lets find the table "Composition by parliamentary delegation".
# The table doesn't have a unique name, which makes it difficult to scrape.
# However, it's the first table. So we can use find, which returns the first match.
table = wiki_soup.find("table")
# Go through all rows in the table.
for row in table.find_all("tr"):
# Go through all cells in each row.
cell = row.find_all("td")
if len(cell) == 3:
# Extract the text from the three cells.
country = cell[0].get_text()
seats = cell[1].get_text()
accessiondate = cell[2].get_text()
print(country + ": " + seats + " seats (" + accessiondate + ")")
"""
Explanation: 5. Scrape Wikipedia
How many seats does each country have in this council? Scrape a table.
End of explanation
"""
# Modify this to your favorite movie.
soup = makesoup('http://www.imdb.com/title/tt0816692/')
# Get rating count instead of name.
title = soup.find(itemprop="name").get_text()
title = title.strip() # Remove whitespace before and after text
"""
Explanation: Exercise
Go to http://www.imdb.com/ and find your favorite movie.
Try to scrape the rating count (under the rating).
End of explanation
"""
|
VandyAstroML/Vanderbilt_Computational_Bootcamp | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | mit | import numpy as np
"""
Explanation: Week 5 - Numpy & Matplotlib
Today's Agenda
Numpy
Matplotlib
Numpy - Numerical Python
From their website (http://www.numpy.org/):
NumPy is the fundamental package for scientific computing with Python.
* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities
You can import "numpy" as
End of explanation
"""
# We first create an array `x`
start = 1
stop = 11
step = 1
x = np.arange(start, stop, step)
print(x)
"""
Explanation: Numpy arrays
In standard Python, data is stored as lists, and multidimensional data as lists of lists. In numpy, however, we can now work with arrays. To get these arrays, we can use np.asarray to convert a list into an array. Below we take a quick look at how a list behaves differently from an array.
End of explanation
"""
x * 2
"""
Explanation: We can also manipulate the array. For example, we can:
Multiply by two:
End of explanation
"""
x ** 2
"""
Explanation: Take the square of all the values in the array:
End of explanation
"""
(x**2) + (5*x) + (x / 3)
"""
Explanation: Or even do some math on it:
End of explanation
"""
print(np.arange(10))
print(np.linspace(1,10,10))
"""
Explanation: If we want to set up an array in numpy, we can use range to make a list and then convert it to an array, but we can also just create an array directly in numpy. np.arange will do this with integers, and np.linspace will do this with floats, and allows for non-integer steps.
End of explanation
"""
x=np.arange(10)
print(x)
print(x**2)
"""
Explanation: Last week we had to use a function or a loop to carry out math on a list. However with numpy we can do this a lot simpler by making sure we're working with an array, and carrying out the mathematical operations on that array.
End of explanation
"""
# Defining starting and ending values of the array, as well as the number of elements in the array.
start = 0
stop = 100
n_elements = 201
x = np.linspace(start, stop, n_elements)
print(x)
"""
Explanation: In numpy, we also have more options for quickly (and without much code) examining the contents of an array. One of the most helpful tools for this is np.where. np.where uses a conditional statement on the array and returns an array that contains indices of all the values that were true for the conditional statement. We can then call the original array and use the new array to get all the values that were true for the conditional statement.
There are also functions like max and min that will give the maximum and minimum, respectively.
End of explanation
"""
# This function returns the indices that match the criteria of `x % 5 == 0`:
x_5 = np.where(x%5 == 0)
print(x_5)
# And one can use those indices to *only* select those values:
print(x[x_5])
"""
Explanation: And we can select only those values that are divisible by 5:
End of explanation
"""
x[x%5 == 0]
"""
Explanation: Or similarly:
End of explanation
"""
print('The minimum of `x` is `{0}`'.format(x.min()))
print('The maximum of `x` is `{0}`'.format(x.max()))
"""
Explanation: And you can find the max and min values of the array:
End of explanation
"""
start = 0
stop = 100
n_elem = 501
x = np.linspace(start, stop, n_elem)
# We can now create another array from `x`:
y = (.1*x)**2 - (5*x) + 3
# And finally, we can dump `x` and `y` to a file:
np.savetxt('myfile.txt', np.transpose([x,y]))
# We can also load the data from `myfile.txt` and display it:
data = np.loadtxt('myfile.txt')
print('2D-array from file `myfile.txt`:\n\n', data, '\n')
# You can also select certain elements of the 2D-array
print('Selecting certain elements from `data`:\n\n', data[:3,:], '\n')
"""
Explanation: Numpy also provides some tools for loading and saving data, loadtxt and savetxt. Here I'm using a function called transpose so that instead of each array being a row, they each get treated as a column.
When we load the information again, it's now a 2D array. We can select parts of those arrays just as we could for 1D arrays.
End of explanation
"""
## Importing modules
%matplotlib inline
# Importing LaTeX
from matplotlib import rc
rc('text', usetex=True)
# Importing matplotlib and other modules
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Resources
Scientific Lectures on Python - Numpy: iPython Notebook
Data Science iPython Notebooks - Numpy: iPython Notebook
Matplotlib
Matplotlib is a Python 2D plotting library which
* produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms
* Quick way to visualize data from Python
* Main plotting utility in Python
From their website (http://matplotlib.org/):
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell, the jupyter notebook, web application servers, and four graphical user interface toolkits.
A great starting point to figuring out how to make a particular figure is to start from the Matplotlib gallery and look for what you want to make.
End of explanation
"""
data = np.loadtxt('myfile.txt')
"""
Explanation: We can now load in the data from myfile.txt
End of explanation
"""
plt.figure(1, figsize=(8,8))
plt.plot(data[:,0],data[:,1])
plt.show()
"""
Explanation: The simplest figure is to simply make a plot. We can have multiple figures, but for now, just one. The plt.plot function will connect the points, but if we want a scatter plot, then plt.scatter will work.
End of explanation
"""
plt.figure(1, figsize=(8,8))
plt.plot(*data.T)
plt.show()
"""
Explanation: You can also pass the *data.T value instead:
End of explanation
"""
# Creating figure
plt.figure(figsize=(8,8))
plt.plot(*data.T)
plt.title(r'$y = 0.2x^{2} - 5x + 3$', fontsize=20)
plt.xlabel('x value', fontsize=20)
plt.ylabel('y value', fontsize=20)
plt.show()
"""
Explanation: We can take that same figure and add on the needed labels and titles.
End of explanation
"""
plt.figure(figsize=(8,8))
plt.plot(data[:,0],data[:,1])
plt.title(r'$y = 0.2x^{2} - 5x + 3$', fontsize=20)
plt.xlabel('x value', fontsize=20)
plt.ylabel('y value', fontsize=20)
plt.show()
"""
Explanation: There's a large number of options available for plotting, so try using the initial code, combined with the information in the Matplotlib documentation, to experiment with a few things: changing the line width, changing the line color, and so on.
End of explanation
"""
|
ML4DS/ML4all | P3.Python_datos/Intro3_Working_with_Data_student.ipynb | mit | # Let's import some libraries
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Working with data in Python
Notebook version:
* 1.0 (Sep 3, 2018) - First TMDE version
* 1.1 (Sep 14, 2018) - Minor fixes
Authors: Vanessa Gómez Verdejo (vanessa@tsc.uc3m.es), Óscar García Hinde (oghinde@tsc.uc3m.es),
Simón Roca Sotelo (sroca@tsc.uc3m.es), Carlos Sevilla Salcedo (sevisal@tsc.uc3m.es)
Throughout this course we're going to work with data consisting of noisy signals, samples from probability distributions, etc. For example, we might need to apply different transformations to our data in order to compute a good predictor.
In this notebook we will learn to use specific tools that will let us load, generate, transform and visualise data. We will expand on the mathematical tools that numpy offers and we will introduce a new library, matplotlib, that will allow us to plot all sorts of graphs from our data.
End of explanation
"""
# Random sampling examples
n = 1000 # number of samples
# Sampling from a standard uniform distribution:
x_unif = np.random.rand(n)
fig1 = plt.figure()
plt.hist(x_unif, bins=100)
plt.title('Samples from a uniform distribution between 0 and 1')
plt.show()
# Sampling from a normal distribution:
x_norm = np.random.randn(n)
fig2 = plt.figure()
plt.hist(x_norm, bins=100)
plt.title('Samples from a normal distribution with 0 mean and unity variance')
plt.show()
# Adding Gaussian noise to a linear function:
n = 30
x = np.linspace(-5, 5, n)
noise = np.random.randn(n)
y = 3*x
y_noise = y + noise
fig3 = plt.figure()
plt.plot(x, y, color='black', linestyle='--', label='Clean signal')
plt.plot(x, y_noise, color='red', label='Noisy signal')
plt.legend(loc=4, fontsize='large')
plt.title('Visualization of a noisy data-set')
plt.show()
"""
Explanation: 1. Data generation
One of the first things we need to learn is to generate random samples from a given distribution. Most things in life come muddled with random noise. A fundamental part of Detection and Estimation is finding out what the properties of this noise are in order to make better predictions. We assume that this noise can be modeled according to a specific probability distribution (e.g., noise generated by a Gaussian distribution), which in turn allows us to make precise estimations of said distribution's parameters.
In Python, random samples can be easily generated with the numpy.random package. Inside it we can find many useful tools to sample from the most important probability distributions.
We have common number generator functions:
* rand(): uniformily generates random samples.
* randn(): returns samples from the “standard normal” distribution.
Or more specific ones:
* exponential([scale, size]): draw samples from an exponential distribution with a given scale parameter.
* normal([loc, scale, size]): draw random samples from a normal (Gaussian) distribution with parameters: loc (mean) and scale (standard deviation).
* uniform([low, high, size]): draw samples from a uniform distribution in the range low-high.
In the following examples we will look at different random generation methods and we will visualize the results. For the time being, you can ignore the visualization code. Later on we will learn how these visualization tools work.
End of explanation
"""
# Fixing the random number generator seed:
print("If we don't fix the seed, the sequence will be different each time:\n")
for i in range(3):
print('Iteration ', str(i))
print(np.random.rand(3), '\n')
print("\nHowever, if we fix the seed, we will always obtain the same sequence:\n")
for i in range(3):
print('Iteration ', str(i))
np.random.seed(0)
print(np.random.rand(3), '\n')
"""
Explanation: Note that the random vectors that are being generated will be different every time we execute the code. However, we often need to make sure we obtain the exact same sequence of random numbers in order to recreate specific experimental results. There are essentially two ways of doing this:
Store the random sequence in a variable and reuse it whenever the need arises.
Fix the seed of the random number generator with numpy.random.seed(int).
See for yourselves:
End of explanation
"""
print('Exercise 1:\n')
# n = <FILL IN>
# x_unif = <FILL IN>
# print('Sample mean = ', <FILL IN>)
plt.hist(x_unif, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Uniform distribution between 2 and 5')
plt.show()
"""
Explanation: Exercise 1:
Generate 1000 samples from a uniform distribution that spans from 2 to 5. Print the sample mean and check that it approximates its expected value.
Hint: check out the random.uniform() function
End of explanation
"""
print('\nExercise 2:\n')
# n = <FILL IN>
# x_gauss = <FILL IN>
# print('Sample mean = ', <FILL IN>)
# print('Sample variance = ', <FILL IN>)
plt.hist(x_gauss, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Gaussian distribution with mean = 3 and variance = 2')
plt.show()
"""
Explanation: Exercise 2.: Generate 1000 samples from a Gaussian distribution with mean 3 and variance 2. Print the sample mean and variance and check that they approximate their expected values.
Hint: check out the random.normal() function. Also, think about the changes you need to apply to a standard normal distribution to modify its mean and variance and try to obtain the same results using the random.randn() function.
End of explanation
"""
print('\nExercise 3:\n')
# n = <FILL IN>
# x = <FILL IN>
# y = <FILL IN>
# noise = <FILL IN>
# y_noise = <FILL IN>
plt.plot(x, y_noise, color='green', label='Noisy signal')
plt.plot(x, y, color='black', linestyle='--', label='Clean signal')
plt.legend(loc=3, fontsize='large')
plt.title('Sine signal with added uniform noise')
plt.show()
"""
Explanation: Exercise 3.: Generate 100 samples of a sine signal between -5 and 5 and add uniform noise with mean 0 and amplitude 1.
End of explanation
"""
print('\nExercise 4:\n')
# n = <FILL IN>
# mean = <FILL IN>
# cov = <FILL IN>
# x_2d_gauss = <FILL IN>
plt.scatter(x_2d_gauss[:, 0], x_2d_gauss[:, 1], )
plt.title('2d Gaussian Scatter Plot')
plt.show()
"""
Explanation: Exercise 4.: Generate 1000 samples from a 2 dimensional Gaussian distribution with mean [2, 3] and covariance matrix [[2, 0], [0, 2]].
Hint: check out the random.multivariate_normal() function.
End of explanation
"""
t = np.arange(0.0, 1.0, 0.05) # Time vector, from 0s up to (but not including) 1s, in steps of 0.05s.
a1 = np.sin(2*np.pi*t) # Samples of the first signal.
a2 = np.sin(4*np.pi*t) # Samples of the second signal.
# Visualization
# We can create a figure that will contain our plots.
plt.figure()
# We can plot the two signals in different subplots, as in Matlab.
# First signal
ax1 = plt.subplot(211)
ax1.plot(t,a1)
plt.title('First sinusoid.')
plt.xlabel('t (seconds)')
plt.ylabel('a_1(t)')
# Second signal
ax2 = plt.subplot(212)
ax2.plot(t,a2, 'r.')
plt.title('Second sinusoid.')
plt.xlabel('t (seconds)')
plt.ylabel('a_2(t)')
# We ensure the two plots won't overlap, and finally we show the results on the
# screen.
plt.tight_layout()
plt.show()
"""
Explanation: 2. Data representation: Matplotlib
When we work with real data, or even if we generate data following a certain function or random distribution, we often acquire a better understanding by plotting the content of a vector, instead of just looking at a bunch of real numbers. In a plot, we assign each axis a meaning (e.g., y-axis could be a probability, kilograms, euros, etc; and x-axis could be time, index of samples, etc.). It should be clear by now how important data visualization is for us and the people who receive our data. Data analysis wouldn't be Data analysis without a nice visualization.
In Python the simplest plotting library is matplotlib and its sintax is similar to Matlab plotting library. As in Matlab, we can plot any set of samples making use of a lot of features. For instance, we can model the sampling of a continuous signal, we can deal with discrete samples of a signal, or we can even plot the histogram of random samples.
Take a look at the following code we use to plot two different sinusoids:
End of explanation
"""
t = np.arange(0.0, 3, 0.05)
a1 = np.sin(2*np.pi*t)+t
a2 = np.ones(a1.shape)*t
plt.figure()
# We are going to plot two signals in the same figure. For each one we can
# specify colors, symbols, width, and the label to be displayed in a legend.
# Use the Matplotlib docs if you want to know all the things you can do.
plt.plot(t,a1,'r--',linewidth=2, label='Sinusoidal')
plt.plot(t,a2, 'k:', label='Straight Line')
plt.title('Playing with different parameters')
plt.ylabel('Amplitude')
plt.xlabel('Time (seconds)')
# By default, axis limits will coincide with the highest/lowest values in our
# vectors. However, we can specify ranges for x and y.
plt.xlim((-0.5, 3))
plt.ylim((-0.5, 4))
# When plotting more than one curve in a single figure, having a legend is a
# good practice. You can ask Matplotlib to place it in the "best" position
# (trying not to overlap the lines), or you can specify positions like
# "upper left", "lower right"... check the docs!
plt.legend(loc='best')
# We can draw the origin lines, to separate the bidimensional space in four
# quadrants.
plt.axhline(0,color='black')
plt.axvline(0, color='black')
# We can also set a grid with different styles...
plt.grid(color='grey', linestyle='--', linewidth=0.8)
# And specify the "ticks", i.e., the values which are going to be specified in
# the axis, where the grid method is placing lines.
plt.xticks(np.arange(-0.5, 3, 0.5)) # In x, put a value each 0.5.
plt.yticks(np.arange(-0.5, 4, 1)) # In y, put a value each 1.
# Finally, plot all the previous elements.
plt.show()
"""
Explanation: Let's analyse how we have created the previous figures and how they differ:
A crucial aspect to consider is that both curves represent a set of discrete samples (the samples we've generated). While the second plot uses red dots to represent the data (specified through 'r.'), the first one draws the points using the standard blue line. As in Matlab, using lines to plot samples will interpolate them by default. If we don't want Matplotlib to do so, we can specify a different symbol, like dots, squares, etc...
We can label the axes and set titles, enhancing the way in which our data is presented. Moreover, we can improve the clarity of a figure by including or modifying the line width, colours, symbols, legends, and much more.
Look at the following figure and try to work out which argument and/or piece of code is related to each feature. It's intuitive! You can modify the parameters and see what the new outcome is.
End of explanation
"""
# x = <FILL IN>
# Create a weights vector w, in which w[0] = 2.4, w[1] = -0.8 and w[2] = 1.
# w = <FILL IN>
print('x shape:\n',x.shape)
print('\nw:\n', w)
print('w shape:\n', w.shape)
"""
Explanation: In just a single example we have seen a lot of Matplotlib functionality that can be easily tuned. You have all you need to draw decent figures. However, those of you who want to learn more about Matplotlib can take a look at AnatomyOfMatplotlib, a collection of notebooks in which you will explore Matplotlib in more depth.
Now, try to solve the following exercises:
Exercise 5: Generate a random vector x, taking 200 samples of a uniform distribution, defined in the [-2,2] interval.
End of explanation
"""
# y = <FILL IN>
print('y shape:\n',y.shape)
"""
Explanation: Exercise 6: Obtain the vector y whose samples are obtained by the polynomial $w_2 x^2 + w_1 x + w_0$
End of explanation
"""
# X = <FILL IN>
# y2 = <FILL IN>
print('y shape:\n',y.shape)
print('y2 shape:\n',y2.shape)
if(np.sum(np.abs(y-y2))<1e-10):
print('\ny and y2 are the same, well done!')
else:
print('\nOops, something went wrong, try again!')
"""
Explanation: Exercise 7: You probably obtained the previous vector as a sum of different terms. If so, try to obtain y again (and name it y2) as a product of a matrix X and a vector w. Then, check that both methods lead to the same result (be careful with shapes).
Hint: w will remain the same, but now X has to be constructed in a way that the dot product of X and w is consistent.
End of explanation
"""
# x2 = <FILL IN>
# y3 = <FILL IN>
# Plot
# <SOL>
# </SOL>
"""
Explanation: Exercise 8: Define x2 as a range vector, going from -1 to 2, in steps of 0.05. Then, obtain y3 as the output of polynomial $w_2 x^2 + w_1 x + w_0$ for input x2 and plot the result using a red dashed line (--).
End of explanation
"""
# We take samples from a normalized gaussian distribution, and we change
# mean and variance with an operation.
sigma = 4
mn = 5
x_norm = mn + np.sqrt(sigma)*np.random.randn(5000)
# Let's obtain a histogram with high resolution, that is, a lot of bins.
fig1 = plt.figure()
plt.hist(x_norm, bins=100,label='Samples')
plt.title('Histogram with 100 bins')
# With vertical lines, we plot the mean and the interval obtained by adding and
# subtracting one standard deviation to the mean.
plt.axvline(x=np.mean(x_norm),color='k',linestyle='--',label='Mean')
plt.axvline(x=np.mean(x_norm)+np.std(x_norm),color='grey',linestyle='--',label='Mean +/- std')
plt.axvline(x=np.mean(x_norm)-np.std(x_norm),color='grey',linestyle='--')
plt.legend(loc='best')
plt.show()
# We check that the mean and variance of the samples are approximately the original ones.
print('Sample mean = ', x_norm.mean())
print('Sample variance = ', x_norm.var())
# Now let's plot a low resolution histogram, with just a few bins.
fig2 = plt.figure()
# Density=True normalizes the histogram.
plt.hist(x_norm, bins=10,label='Samples',density=True)
plt.title('Histogram with 10 bins')
plt.axvline(x=np.mean(x_norm),color='k',linestyle='--',label='Mean')
plt.axvline(x=np.mean(x_norm)+np.std(x_norm),color='grey',linestyle='--',label='Mean +/- std')
plt.axvline(x=np.mean(x_norm)-np.std(x_norm),color='grey',linestyle='--')
plt.legend(loc='best')
plt.show()
# A different resolution leads to different representations, but don't forget
# that we are plotting the same samples.
print('Sample mean = ', x_norm.mean())
print('Sample variance = ', x_norm.var())
"""
Explanation: In the above exercises you have plotted a few vectors which are deterministic, that is, a range or a function applied to a range. Let's now consider the case of representing random samples and distributions.
If we have an expression to obtain the density of a given distribution, we can plot it in the same way we plotted functions before.
In a more general case, when we only have access to a limited number of samples, or we are interested in sampling them randomly, we usually make use of a histogram.
Consider x a vector containing samples coming from a 1-dimensional random variable. A histogram is a figure in which we represent the observed frequencies of different ranges of the x domain. We can express them as relative frequencies (summing up to 1) or absolute frequencies (counting events).
We can adapt the number and size of intervals (called bins) to directly affect the resolution of the plot.
When we have a sufficiently high number of random samples coming from the same distribution, its histogram is expected to have a similar shape to the theoretical expression corresponding to the density of this distribution.
In Matplotlib, we have already plotted histograms, with plt.hist(samples,bins=).
Let's see some examples:
End of explanation
"""
# x_exp = <FILL IN>
# plt.hist(<FILL IN>)
plt.legend(loc='best')
plt.show()
"""
Explanation: Now it's your turn!
Exercise 9: Obtain x_exp as 1000 samples of an exponential distribution with scale parameter of 10. Then, plot the corresponding histogram for the previous set of samples, using 50 bins. Obtain the empirical mean and make it appear in the histogram legend. Does it coincide with the theoretical one?
End of explanation
"""
np.random.seed(4) # Keep the same result
x_exp = np.random.exponential(10,10000) # exponential samples
x = np.arange(np.min(x_exp),np.max(x_exp),0.05)
# density = <FILL IN>
w_n = np.zeros_like(x_exp) + 1. / x_exp.size
plt.hist(x_exp, weights=w_n,label='Histogram.',bins=75)
plt.plot(x,density,'r--',label='Theoretical density.')
plt.legend()
plt.show()
"""
Explanation: Exercise 10: Taking into account that the exponential density can be expressed as:
$f(x;\beta) = \frac{1}{\beta} e^{-\frac{x}{\beta}}, \quad x \geq 0$.
where $\beta$ is the scale factor, fill in the variable density by applying the theoretical exponential density to the vector x. Then, take a look at the plot. Do the histogram and the density look alike? How does the number of samples affect the final result?
End of explanation
"""
# Creating a dictionary with different keys. These keys can be either an
# integer or a string. To separate elements you have to use ','.
my_dict = {'Is she a witch?': 'If... she... weights the same as a duck... she`s made of wood!', 42: 'Can you repeat the question?'}
print (my_dict)
"""
Explanation: 3. Data storage: Saving and loading files
Once we have learned to generate and plot data, the next thing we need to know is how we can store those results for future usage and, subsequently, how to load them.
Python is a programming language commonly used in the context of data analysis. This implies there is a vast number of libraries and functions to work with data. In our case, we will study how to save your data into mat or csv files, although there are some other methods we encourage you to take a look at (pickle, pandas, npz,...).
3.1. Dictionaries
All of these are usually combined with dictionaries. Dictionaries are a useful data structure implemented in Python which lets you index its elements with keys instead of with a range of numbers. This way you can access the different elements using either a number or a string.
End of explanation
"""
# We can add a new key to a dictionary and fill in its value.
# Let's add a list of things that float in water:
my_dict['What floats in water?'] = ['Bread','Apples','Very small rocks','Cider','Gravy','Cherries','Mud']
# Now we can access to the key of things that float in water and add some other
# elements to the array in the dictionary:
my_dict['What floats in water?'].append('A duck')
print (my_dict['What floats in water?'])
# Print line by line the keys and elements on the dictionary
print('\nThese are the keys and elements on my list:\n')
for key in my_dict:
print (key,':',my_dict[key])
"""
Explanation: It works in a similar way to lists, being capable of storing arrays, numbers and strings of different sizes. In the case of dictionaries, to access a certain value you just have to use its key.
End of explanation
"""
# alumnos = <FILL IN>
clothes = ['Shirt','Dress','Glasses','Shoes']
for alumno in alumnos:
print(alumno)
# <SOL>
# </SOL>
"""
Explanation: Let's now try to apply this knowledge about dictionaries with the following exercise:
Exercise 11: Create a dictionary with your name and a colleague's, and create a dictionary for each of you with what you are wearing. Then print the whole dictionary to see what each of you is wearing.
End of explanation
"""
# Saving the previous dictionary in a mat file:
import scipy.io as sio
sio.savemat('dictionaries_rule.mat', alumnos)
# Load the previously stored mat file:
data = sio.loadmat('dictionaries_rule.mat')
print (data.keys())
"""
Explanation: 3.2. Saving and Loading
Now that we know how to create and work with dictionaries we can start to save these dictionaries into different file types. In order to work with .mat files, we need to work with the scipy.io library, which provides us with the functions we need:
* scipy.io.savemat(file_name, mdict): stores the given dictionary in a mat file with the given file name.
* scipy.io.loadmat(file_name, mdict=None): loads the mat file with the given file name. If a dictionary is given, it loads the data into it.
End of explanation
"""
# Saving a csv file with some text in it separated by spaces:
import csv
with open('eggs.csv', 'w') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=' ')
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
# Loading the csv file and join the elements with commas instead of spaces:
with open('eggs.csv', 'r') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ')
for row in spamreader:
print (', '.join(row))
"""
Explanation: The csv (Comma-Separated Values) files are among the most common when working with databases. As stated in its name, this format separates the elements in the file with a delimiter, typically the comma. Nevertheless, as these files can be defined using any delimiter, it is recommended to specify which one you would like to use to avoid errors.
In particular, we are going to work with the functions which allow us to save and load data:
csv.writer(csvfile, delimiter): creates a writer object over the open file csvfile, separating row elements with the given delimiter.
csv.reader(csvfile, delimiter): creates a reader object over the open file csvfile, splitting each row on the given delimiter.
End of explanation
"""
|
ledeprogram/algorithms | class7/donow/benzaquen_mercy_donow_7.ipynb | gpl-3.0 | import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
"""
Explanation: Apply logistic regression to categorize whether a county had a high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
"""
df = pd.read_csv("hanford.csv")
df.head()
"""
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
"""
df.mean()
df.median()
#range
df["Exposure"].max() - df["Exposure"].min()
#range
df["Mortality"].max() - df["Mortality"].min()
df.std()
df.corr()
"""
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
"""
#IQR
IQR= df['Exposure'].quantile(q=0.75)- df['Exposure'].quantile(q=0.25)
"""
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
End of explanation
"""
Q1= df['Exposure'].quantile(q=0.25) #1st Quartile
Q1
Q2= df['Exposure'].quantile(q=0.5) #2nd Quartile (Median)
Q3= df['Exposure'].quantile(q=0.75) #3rd Quartile
UAL= (IQR * 1.5) +Q3
UAL
LAL= Q1- (IQR * 1.5)
LAL
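One way to recode the data, sketched here on a small synthetic series standing in for the Exposure column, is to flag observations whose value exceeds a chosen threshold such as the median:

```python
import pandas as pd

# Synthetic stand-in for df['Exposure']; the real threshold would come
# from the quartiles computed above.
exposure = pd.Series([1.2, 2.5, 3.4, 6.4, 8.3, 11.6])
threshold = exposure.median()

# 1 = high exposure, 0 = low exposure
high_exposure = (exposure > threshold).astype(int)
print(list(high_exposure))
```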
"""
Explanation: UAL= (IQR * 1.5) +Q3
LAL= Q1- (IQR * 1.5)
Anything outside of UAL and LAL is an outlier
End of explanation
"""
|
Aggieyixin/cjc2016 | code/04.PythonCrawler_beautifulsoup.ipynb | mit | import urllib2
from bs4 import BeautifulSoup
"""
Explanation: Data crawling:
An introduction to BeautifulSoup
Wang Chengjun (王成军)
wangchengjun@nju.edu.cn
Computational Communication (计算传播网) http://computational-communication.com
Problems we need to solve:
Page parsing
Retrieving source data hidden behind JavaScript
Automatic pagination
Automatic login
Connecting to API endpoints
"""
url = 'file:///Users/chengjun/GitHub/cjc2016/data/test.html'
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content, 'html.parser')
soup
"""
Explanation: For general data crawling, combining urllib2 with BeautifulSoup is enough.
This is especially true for pages whose URL changes in a regular pattern when paginating: you only need to handle the patterned URLs.
A simple example is crawling posts about a given keyword on the Tianya forum.
On Tianya, the first page of posts about smog (雾霾) is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
The second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
Beautiful Soup
Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:
Beautiful Soup provides a few simple methods. It doesn't take much code to write an application
Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. You don't have to think about encodings unless the document doesn't specify one; then you just have to specify the original encoding.
Beautiful Soup sits on top of popular Python parsers like lxml and html5lib.
Install beautifulsoup4
open your terminal/cmd
$ pip install beautifulsoup4
第一个爬虫
Beautifulsoup Quick Start
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
End of explanation
"""
print(soup.prettify())
"""
Explanation: html.parser
Beautiful Soup supports the html.parser included in Python’s standard library
lxml
but it also supports a number of third-party Python parsers. One is the lxml parser lxml. Depending on your setup, you might install lxml with one of these commands:
$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml
html5lib
Another alternative is the pure-Python html5lib parser html5lib, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:
$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib
End of explanation
"""
for tag in soup.find_all(True):
print(tag.name)
soup('head') # or soup.head
soup('body') # or soup.body
soup('title') # or soup.title
soup('p')
soup.p
soup.title.name
soup.title.string
soup.title.text
soup.title.parent.name
soup.p
soup.p['class']
soup.find_all('p', {'class': 'title'})
soup.find_all('p', class_= 'title')
soup.find_all('p', {'class': 'story'})
soup.find_all('p', {'class': 'story'})[0].find_all('a')
soup.a
soup('a')
soup.find(id="link3")
soup.find_all('a')
soup.find_all('a', {'class': 'sister'}) # compare with soup.find_all('a')
soup.find_all('a', {'class': 'sister'})[0]
soup.find_all('a', {'class': 'sister'})[0].text
soup.find_all('a', {'class': 'sister'})[0]['href']
soup.find_all('a', {'class': 'sister'})[0]['id']
soup.find_all(["a", "b"])
print(soup.get_text())
"""
Explanation: html
head
title
body
p (class = 'title', 'story' )
a (class = 'sister')
href/id
End of explanation
"""
from IPython.display import display_html, HTML
HTML('<iframe src=http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\
width=500 height=500></iframe>')
# the webpage we would like to crawl
"""
Explanation: Data crawling:
Fetching the content of a WeChat public-account article from its URL
Wang Chengjun (王成军)
wangchengjun@nju.edu.cn
Computational Communication (计算传播网) http://computational-communication.com
End of explanation
"""
url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd"
content = urllib2.urlopen(url).read() #获取网页的html文本
soup = BeautifulSoup(content, 'html.parser')
title = soup.title.text
rmml = soup.find('div', {'class': 'rich_media_meta_list'})
date = rmml.find(id = 'post-date').text
rmc = soup.find('div', {'class': 'rich_media_content'})
content = rmc.get_text()
print title
print date
print content
"""
Explanation: View the page source
Inspect
End of explanation
"""
|
StudyExchange/Udacity | MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
图像分类
在该项目中,你将会对来自 CIFAR-10 数据集 中的图像进行分类。数据集中图片的内容包括飞机(airplane)、狗(dogs)、猫(cats)及其他物体。你需要处理这些图像,接着对所有的样本训练一个卷积神经网络。
具体而言,在项目中你要对图像进行正规化处理(normalization),同时还要对图像的标签进行 one-hot 编码。接着你将会应用到你所学的技能来搭建一个具有卷积层、最大池化(Max Pooling)层、Dropout 层及全连接(fully connected)层的神经网络。最后,你会训练你的神经网络,会得到你神经网络在样本图像上的预测结果。
下载数据
运行如下代码下载 CIFAR-10 dataset for python。
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 2
sample_id = 1
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
探索数据集
为防止在运行过程中内存不足的问题,该数据集已经事先被分成了5批(batch),名为data_batch_1、data_batch_2等。每一批中都含有 图像 及对应的 标签,都是如下类别中的一种:
飞机
汽车
鸟
猫
鹿
狗
青蛙
马
船
卡车
理解数据集也是对数据进行预测的一部分。修改如下代码中的 batch_id 和 sample_id,看看输出的图像是什么样子。其中,batch_id 代表着批次数(1-5),sample_id 代表着在该批内图像及标签的编号。
你可以尝试回答如下问题:
* 可能出现的 标签 都包括哪些?
* 图像数据的取值范围是多少?
* 标签 的排列顺序是随机的还是有序的?
对这些问题的回答,会有助于更好地处理数据,并能更好地进行预测。
Answer:
The possible labels range from 0 to 9, corresponding in order to airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.
The image data values range from 0 to 255.
The labels in the given data are in random order.
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
return x/255.0
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
图像预处理功能的实现
正规化
在如下的代码中,修改 normalize 函数,使之能够对输入的图像数据 x 进行处理,输出一个经过正规化的、Numpy array 格式的图像数据。
注意:
处理后的值应当在 $[0,1]$ 的范围之内。返回值应当和输入值具有相同的形状。
End of explanation
"""
from sklearn import preprocessing
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
lb = preprocessing.LabelBinarizer()
labels = list(range(0, 10))
lb.fit(labels)
one_hot = lb.transform(x)
return one_hot
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint:
Look into LabelBinarizer in the preprocessing module of sklearn.
One-hot 编码
在如下代码中,你将继续实现预处理的功能,实现一个 one_hot_encode 函数。函数的输入 x 是 标签 构成的列表,返回值是经过 One_hot 处理过后的这列 标签 对应的 One_hot 编码,以 Numpy array 储存。其中,标签 的取值范围从0到9。每次调用该函数时,对相同的标签值,它输出的编码也是相同的。请确保在函数外保存编码的映射(map of encodings)。
提示:
你可以尝试使用 sklearn preprocessing 模块中的 LabelBinarizer 函数。
[Code review 2017-09-05]
At 19:34 on 2017-09-05 I found that one_hot_encode() was implemented incorrectly: the labels should be 0-9, i.e. range(0, 10), but I had directly written [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], which caused an error and in turn made the network diverge.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
随机打乱数据
正如你在上方探索数据部分所看到的,样本的顺序已经被随机打乱了。尽管再随机处理一次也没问题,不过对于该数据我们没必要再进行一次相关操作了。
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
对所有图像数据进行预处理并保存结果
运行如下代码,它将会预处理所有的 CIFAR-10 数据并将它另存为文件。此外,如下的代码还将会把 10% 的训练数据留出作为验证数据。
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
检查点
这是你的首个检查点。因为预处理完的数据已经被保存到硬盘上了,所以如果你需要回顾或重启该 notebook,你可以在这里重新开始。
End of explanation
"""
import tensorflow as tf
# TensorFlow GPU settings follow, but the GPU could not be used because the network is too big.
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
x = tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
return x
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
y = tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
return y
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
搭建神经网络
为搭建神经网络,你需要将搭建每一层的过程封装到一个函数中。大部分的代码你在函数外已经见过。为能够更透彻地测试你的代码,我们要求你把每一层都封装到一个函数中。这能够帮助我们给予你更好的回复,同时还能让我们使用 unittests 在你提交报告前检测出你项目中的小问题。
注意: 如果你时间紧迫,那么在该部分我们为你提供了一个便捷方法。在接下来的一些问题中,你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来搭建各层,不过不可以用他们搭建卷积-最大池化层。TF Layers 和 Keras 及 TFLean 中对层的抽象比较相似,所以你应该很容易上手。
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
不过,如果你希望能够更多地实践,我们希望你能够在不使用 TF Layers 的情况下解决所有问题。你依然能使用来自其他包但和 layers 中重名的函数。例如,你可以使用 TF Neural Network 版本的 conv2d,即 tf.nn.conv2d。
让我们开始吧!
输入
神经网络需要能够读取图像数据、经 one-hot 编码之后的标签及 dropout 中的保留概率。修改如下函数:
修改 neural_net_image_input 函数:
返回 TF Placeholder。
使用 image_shape 设定形状,设定批大小(batch size)为 None。
使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 "x"。
修改 neural_net_label_input 函数:
返回 TF Placeholder。
使用 n_classes 设定形状,设定批大小(batch size)为 None。
使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 "y"。
修改 neural_net_keep_prob_input 函数:
返回 TF Placeholder 作为 dropout 的保留概率(keep probability)。
使用 TF Placeholder 中的 Name 参数,命名该 TensorFlow placeholder 为 "keep_prob"。
我们会在项目最后使用这些名字,来载入你储存的模型。
注意:在 TensorFlow 中,对形状设定为 None,能帮助设定一个动态的大小。
I originally intended to use tensorflow-gpu here, but the GPU raised errors: the two layers connecting the convolutional part to the fully connected part have too many nodes, so GPU memory was insufficient. In addition, during validation the 5,000 instances were loaded all at once without being split, which also made the tensor too large.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
#input = tf.placeholder(tf.float32, (None, 32, 32, 3))
x_tensor_shape = x_tensor.get_shape().as_list()
print('x_tensor_shape:\t{0}'.format(x_tensor_shape))
print('conv_num_outputs:{0}'.format(conv_num_outputs))
print('conv_ksize:\t{0}'.format(conv_ksize))
print('conv_strides:\t{0}'.format(conv_strides))
print('pool_ksize:\t{0}'.format(pool_ksize))
print('pool_strides:\t{0}'.format(pool_strides))
filter_weights = tf.Variable(tf.truncated_normal((conv_ksize[0], conv_ksize[1], x_tensor_shape[3], conv_num_outputs), mean=0.0, stddev = 0.05)) # (height, width, input_depth, output_depth)
filter_bias = tf.Variable(tf.zeros(conv_num_outputs))
strides = [1, conv_strides[0], conv_strides[1], 1] # (batch, height, width, depth)
conv_layer = tf.nn.conv2d(x_tensor, filter_weights, strides=strides, padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, filter_bias)
# conv_layer = conv_layer + filter_bias
conv_layer = tf.nn.relu(conv_layer)
# Apply Max Pooling
conv_layer = tf.nn.max_pool(
conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
Hint:
When unpacking values as an argument in Python, look into the unpacking operator.
卷积-最大池(Convolution and Max Pooling)化层
卷积层在图像处理中取得了不小的成功。在这部分的代码中,你需要修改 conv2d_maxpool 函数来先后实现卷积及最大池化的功能。
使用 conv_ksize、conv_num_outputs 及 x_tensor 来创建权重(weight)及偏差(bias)变量。
对 x_tensor 进行卷积,使用 conv_strides 及权重。
我们建议使用 SAME padding,不过你也可尝试其他 padding 模式。
加上偏差。
对卷积结果加上一个非线性函数作为激活层。
基于 pool_kszie 及 pool_strides 进行最大池化。
我们建议使用 SAME padding,不过你也可尝试其他 padding 模式。
注意:
你不可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现这一层的功能。但是你可以使用 TensorFlow 的Neural Network包。
对于如上的快捷方法,你在其他层中可以尝试使用。
提示:
当你在 Python 中希望展开(unpacking)某个变量的值作为函数的参数,你可以参考 unpacking 运算符。
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
return tf.contrib.layers.flatten(x_tensor)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
展开层
修改 flatten 函数,来将4维的输入张量 x_tensor 转换为一个二维的张量。输出的形状应当是 (Batch Size, Flattened Image Size)。
快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
全连接层
修改 fully_conn 函数,来对形如 (batch Size, num_outputs) 的输入 x_tensor 应用一个全连接层。快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
return tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
输出层
修改 output 函数,来对形如 (batch Size, num_outputs) 的输入 x_tensor 应用一个全连接层。快捷方法:你可以使用来自 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的函数来实现该功能。不过你也可以只使用 TensorFlow 包中的函数来挑战自己。
注意:
激活函数、softmax 或者交叉熵(corss entropy)不应被加入到该层。
[Code review 2017-09-01]
Please note the requirement here: this layer should not apply activation, softmax, or cross entropy.
TensorFlow's fully connected function tf.contrib.layers.fully_connected uses relu as the nonlinear activation by default, so if you use tf.contrib.layers.fully_connected here you need to set the activation function (activation_fn) to None.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs1 = 32
conv_ksize1 = (4, 4)
conv_strides1 = (1, 1)
pool_ksize1 = (2, 2)
pool_strides1 = (2, 2)
conv_layer1 = conv2d_maxpool(x, conv_num_outputs1, conv_ksize1, conv_strides1, pool_ksize1, pool_strides1)
conv_layer1 = tf.nn.dropout(conv_layer1, keep_prob)
conv_num_outputs2 = 64
conv_ksize2 = (4, 4)
conv_strides2 = (1, 1)
pool_ksize2 = (2, 2)
pool_strides2 = (2, 2)
conv_layer2 = conv2d_maxpool(conv_layer1, conv_num_outputs2, conv_ksize2, conv_strides2, pool_ksize2, pool_strides2)
conv_layer2 = tf.nn.dropout(conv_layer2, keep_prob)
conv_num_outputs3 = 128
conv_ksize3 = (4, 4)
conv_strides3 = (1, 1)
pool_ksize3 = (2, 2)
pool_strides3 = (2, 2)
conv_layer3 = conv2d_maxpool(conv_layer2, conv_num_outputs3, conv_ksize3, conv_strides3, pool_ksize3, pool_strides3)
conv_layer3 = tf.nn.dropout(conv_layer3, keep_prob)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
conv_layer_flatten = flatten(conv_layer3)
print('conv_layer_flatten.shape:%s' %conv_layer_flatten.shape)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_num_outputs1 = 1024
fc_layer1 = fully_conn(conv_layer_flatten, fc_num_outputs1)
fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob)
fc_num_outputs2 = 512
fc_layer2 = fully_conn(fc_layer1, fc_num_outputs2)
fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
num_outputs = 10
nn_output = output(fc_layer2, num_outputs)
print('fc_num_outputs1:\t{0}'.format(fc_num_outputs1))
print('fc_num_outputs2:\t{0}'.format(fc_num_outputs2))
print('num_outputs:\t\t{0}'.format(num_outputs))
print('')
# TODO: return output
return nn_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
创建卷积模型
修改 conv_net 函数,使之能够生成一个卷积神经网络模型。该函数的输入为一批图像数据 x,输出为 logits。在函数中,使用上方你修改的创建各种层的函数来创建该模型:
使用 1 到 3 个卷积-最大池化层
使用一个展开层
使用 1 到 3 个全连接层
使用一个输出层
返回呼出结果
在一个或多个层上使用 TensorFlow's Dropout,对应的保留概率为 keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict={keep_prob: keep_probability, x: feature_batch, y: label_batch})
pass
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
训练该神经网络
最优化
修改 train_neural_network 函数以执行单次最优化。该最优化过程应在一个 session 中使用 optimizer 来进行该过程,它的 feed_dict 包括:
* x 代表输入图像
* y 代表标签
* keep_prob 为 Dropout 过程中的保留概率
对每批数据该函数都会被调用,因而 tf.global_variables_initializer() 已经被调用过。
注意:该函数并不要返回某个值,它只对神经网络进行最优化。
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.0 })
valid_accuracy = session.run(accuracy, feed_dict={ x: valid_features[0:400], y: valid_labels[0:400], keep_prob: 1.0 })
print('Loss: %.6f' %loss, end=' ')
print('Validation Accuracy: %.6f' %valid_accuracy)
pass
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Show Stats
Modify the print_stats function to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to compute the validation accuracy. Use a keep probability of 1.0 when computing the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 20
batch_size = 256
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
Hyperparameter Tuning
You need to tune the following parameters:
* Set epochs to the number of iterations at which the model stops learning or starts to overfit.
* Set batch_size to the largest value your machine's memory supports. Common choices are:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node during dropout.
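As a rough guide for the batch_size choice, the memory footprint of one input batch can be estimated directly (a sketch assuming 32x32x3 float32 CIFAR-10 images, ignoring activations and gradients):

```python
image_floats = 32 * 32 * 3          # pixels * channels per CIFAR-10 image
bytes_per_float = 4                 # float32
batch_size = 256

batch_bytes = batch_size * image_floats * bytes_per_float
print(batch_bytes / 1024, "KiB per input batch")  # 3072.0 KiB
```

Doubling batch_size doubles this figure, which is one reason the common choices above scale in powers of two.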
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Train on a Single CIFAR-10 Batch
Instead of training the neural network on all of the CIFAR-10 data, first train on a single batch. This saves time while you iterate on the model to improve its accuracy. Once the final validation accuracy reaches 50% or more, you can move on to the next section and run the model on all the data.
Started on the morning of 09/02: because of activation_fn=None, the neural network diverged. I built an identical network with Keras and it converged, but this one does not, so something is probably implemented incorrectly -- could you help check why the network diverges?
[CodeReview170905]
170905-19:34: found that one_hot_encode() was implemented incorrectly. The labels should be 0-9, i.e. range(10), but I had written [1,2,3,4,5,6,7,8,9,10], which caused the error and in turn made the network diverge.
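A correct one-hot encoding over the ten CIFAR-10 classes (labels 0-9, i.e. range(10)) can be sketched in plain Python as a sanity check (an illustrative helper, not the project's required implementation):

```python
def one_hot_encode(labels, n_classes=10):
    """Return one-hot rows for integer labels; CIFAR-10 labels run over range(10)."""
    return [[1.0 if i == label else 0.0 for i in range(n_classes)]
            for label in labels]

encoded = one_hot_encode([0, 3, 9])
print(encoded[0])  # [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Each row sums to exactly 1.0, which is an easy property to assert in a unit test.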
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Fully Train the Model
Since you obtained a good accuracy on a single batch of CIFAR-10 data, try training on all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
Checkpoint
The model has been saved to your disk.
Test Model
This section tests your model on the test dataset. The accuracy obtained here is your final accuracy. You should get an accuracy above 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
jbocharov-mids/W207-Machine-Learning | John_Bocharov_p2.ipynb | apache-2.0 | # This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
# General libraries.
import re
import numpy as np
import matplotlib.pyplot as plt
# SK-learn libraries for learning.
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.grid_search import GridSearchCV
# SK-learn libraries for evaluation.
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.metrics import classification_report
# SK-learn library for importing the newsgroup data.
from sklearn.datasets import fetch_20newsgroups
# SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import *
"""
Explanation: Project 2: Topic Classification
In this project, you'll work with text data from newsgroup postings on a variety of topics. You'll train classifiers to distinguish between the topics based on the text of the posts. Whereas with digit classification, the input is relatively dense: a 28x28 matrix of pixels, many of which are non-zero, here we'll represent each document with a "bag-of-words" model. As you'll see, this makes the feature representation quite sparse -- only a few words of the total vocabulary are active in any given document. The bag-of-words assumption here is that the label depends only on the words; their order is not important.
The SK-learn documentation on feature extraction will prove useful:
http://scikit-learn.org/stable/modules/feature_extraction.html
Each problem can be addressed succinctly with the included packages -- please don't add any more. Grading will be based on writing clean, commented code, along with a few short answers.
As always, you're welcome to work on the project in groups and discuss ideas on the course wall, but please prepare your own write-up and write your own code.
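To make the sparsity point concrete before diving in, here is a minimal bag-of-words sketch in plain Python (the toy documents are made up; CountVectorizer, used below, does this far more efficiently):

```python
docs = ["the cat sat", "the dog sat on the mat"]

# Vocabulary: every distinct word across all documents, in sorted order
vocab = sorted({word for doc in docs for word in doc.split()})

# One count vector per document, aligned with the vocabulary
vectors = [[doc.split().count(word) for word in vocab] for doc in docs]

nonzero = sum(1 for row in vectors for count in row if count)
print(vocab)
print("non-zero entries:", nonzero, "of", len(docs) * len(vocab))
```

With a realistic vocabulary of tens of thousands of words, the non-zero fraction per document becomes tiny, which is why a sparse matrix representation matters.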
End of explanation
"""
categories = ['alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space']
newsgroups_train = fetch_20newsgroups(subset='train',
remove=('headers', 'footers', 'quotes'),
categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test',
remove=('headers', 'footers', 'quotes'),
categories=categories)
num_test = len(newsgroups_test.target)
test_data, test_labels = newsgroups_test.data[num_test/2:], newsgroups_test.target[num_test/2:]
dev_data, dev_labels = newsgroups_test.data[:num_test/2], newsgroups_test.target[:num_test/2]
train_data, train_labels = newsgroups_train.data, newsgroups_train.target
print 'training label shape:', train_labels.shape
print 'test label shape:', test_labels.shape
print 'dev label shape:', dev_labels.shape
print 'labels names:', newsgroups_train.target_names
"""
Explanation: Load the data, stripping out metadata so that we learn classifiers that only use textual features. By default, newsgroups data is split into train and test sets. We further split the test so we have a dev set. Note that we specify 4 categories to use for this project. If you remove the categories argument from the fetch function, you'll get all 20 categories.
End of explanation
"""
#def P1(num_examples=5):
### STUDENT START ###
### STUDENT END ###
#P1()
"""
Explanation: (1) For each of the first 5 training examples, print the text of the message along with the label.
End of explanation
"""
#def P2():
### STUDENT START ###
### STUDENT END ###
#P2()
"""
Explanation: (2) Use CountVectorizer to turn the raw training text into feature vectors. You should use the fit_transform function, which makes 2 passes through the data: first it computes the vocabulary ("fit"), second it converts the raw text into feature vectors using the vocabulary ("transform").
The vectorizer has a lot of options. To get familiar with some of them, write code to answer these questions:
a. The output of the transform (also of fit_transform) is a sparse matrix: http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.sparse.csr_matrix.html. What is the size of the vocabulary? What is the average number of non-zero features per example? What fraction of the entries in the matrix are non-zero? Hint: use "nnz" and "shape" attributes.
b. What are the 0th and last feature strings (in alphabetical order)? Hint: use the vectorizer's get_feature_names function.
c. Specify your own vocabulary with 4 words: ["atheism", "graphics", "space", "religion"]. Confirm the training vectors are appropriately shaped. Now what's the average number of non-zero features per example?
d. Instead of extracting unigram word features, use "analyzer" and "ngram_range" to extract bigram and trigram character features. What size vocabulary does this yield?
e. Use the "min_df" argument to prune words that appear in fewer than 10 documents. What size vocabulary does this yield?
f. Using the standard CountVectorizer, what fraction of the words in the dev data are missing from the vocabulary? Hint: build a vocabulary for both train and dev and look at the size of the difference.
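The set-difference idea in the hint for (f) can be sketched with plain Python sets (toy vocabularies here, not the real newsgroup data):

```python
train_vocab = {"space", "orbit", "graphics", "render"}
dev_vocab = {"space", "nebula", "render", "shader"}

missing = dev_vocab - train_vocab          # dev words absent from the training vocabulary
fraction_missing = len(missing) / len(dev_vocab)
print(sorted(missing), fraction_missing)   # ['nebula', 'shader'] 0.5
```

The same difference computed on the two fitted vocabularies gives the fraction asked for above.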
End of explanation
"""
#def P3():
### STUDENT START ###
### STUDENT END ###
#P3()
"""
Explanation: (3) Use the default CountVectorizer options and report the f1 score (use metrics.f1_score) for a k nearest neighbors classifier; find the optimal value for k. Also fit a Multinomial Naive Bayes model and find the optimal value for alpha. Finally, fit a logistic regression model and find the optimal value for the regularization strength C using l2 regularization. A few questions:
a. Why doesn't nearest neighbors work well for this problem?
b. Any ideas why logistic regression doesn't work as well as Naive Bayes?
c. Logistic regression estimates a weight vector for each class, which you can access with the coef_ attribute. Output the sum of the squared weight values for each class for each setting of the C parameter. Briefly explain the relationship between the sum and the value of C.
End of explanation
"""
#def P4():
### STUDENT START ###
### STUDENT END ###
#P4()
"""
Explanation: ANSWER:
(4) Train a logistic regression model. Find the 5 features with the largest weights for each label -- 20 features in total. Create a table with 20 rows and 4 columns that shows the weight for each of these features for each of the labels. Create the table again with bigram features. Any surprising features in this table?
End of explanation
"""
#def empty_preprocessor(s):
# return s
#def better_preprocessor(s):
### STUDENT START ###
### STUDENT END ###
#def P5():
### STUDENT START ###
### STUDENT END ###
#P5()
"""
Explanation: ANSWER:
(5) Try to improve the logistic regression classifier by passing a custom preprocessor to CountVectorizer. The preprocessing function runs on the raw text, before it is split into words by the tokenizer. Your preprocessor should try to normalize the input in various ways to improve generalization. For example, try lowercasing everything, replacing sequences of numbers with a single token, removing various other non-letter characters, and shortening long words. If you're not already familiar with regular expressions for manipulating strings, see https://docs.python.org/2/library/re.html, and re.sub() in particular. With your new preprocessor, how much did you reduce the size of the dictionary?
For reference, I was able to improve dev F1 by 2 points.
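One possible shape for such a preprocessor is sketched below; the particular normalizations (lowercasing, collapsing digit runs, stripping punctuation) are illustrative choices, not the expected answer:

```python
import re

def better_preprocessor(s):
    s = s.lower()                    # case normalization
    s = re.sub(r"\d+", "0", s)       # collapse each run of digits to a single token
    s = re.sub(r"[^a-z0 ]", " ", s)  # replace other non-letter characters with spaces
    s = re.sub(r"\s+", " ", s)       # squeeze repeated whitespace
    return s.strip()

print(better_preprocessor("Call 555-1234 NOW!!!"))  # call 0 0 now
```

Because many surface variants now map to the same token, the fitted vocabulary shrinks, which is the size reduction the question asks you to measure.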
End of explanation
"""
#def P6():
# Keep this random seed here to make comparison easier.
#np.random.seed(0)
### STUDENT START ###
### STUDENT END ###
#P6()
"""
Explanation: (6) The idea of regularization is to avoid learning very large weights (which are likely to fit the training data, but not generalize well) by adding a penalty to the total size of the learned weights. That is, logistic regression seeks the set of weights that minimizes errors in the training data AND has a small size. The default regularization, L2, computes this size as the sum of the squared weights (see P3, above). L1 regularization computes this size as the sum of the absolute values of the weights. The result is that whereas L2 regularization makes all the weights relatively small, L1 regularization drives lots of the weights to 0, effectively removing unimportant features.
Train a logistic regression model using a "l1" penalty. Output the number of learned weights that are not equal to zero. How does this compare to the number of non-zero weights you get with "l2"? Now, reduce the size of the vocabulary by keeping only those features that have at least one non-zero weight and retrain a model using "l2".
Make a plot showing accuracy of the re-trained model vs. the vocabulary size you get when pruning unused features by adjusting the C parameter.
Note: The gradient descent code that trains the logistic regression model sometimes has trouble converging with extreme settings of the C parameter. Relax the convergence criteria by setting tol=.01 (the default is .0001).
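The two penalty terms can be compared directly on a toy weight vector (a plain-Python sketch of the definitions above):

```python
weights = [0.5, -0.2, 0.0, 1.5]

l2_penalty = sum(w ** 2 for w in weights)      # sum of squared weights
l1_penalty = sum(abs(w) for w in weights)      # sum of absolute values
n_nonzero = sum(1 for w in weights if w != 0)  # L1 regularization tends to drive this down

print(l2_penalty, l1_penalty, n_nonzero)
```

Counting the non-zero entries of coef_ after fitting with each penalty is exactly the comparison the exercise asks for.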
End of explanation
"""
#def P7():
### STUDENT START ###
## STUDENT END ###
#P7()
"""
Explanation: (7) Use the TfidfVectorizer -- how is this different from the CountVectorizer? Train a logistic regression model with C=100.
Make predictions on the dev data and show the top 3 documents where the ratio R is largest, where R is:
maximum predicted probability / predicted probability of the correct label
What kinds of mistakes is the model making? Suggest a way to address one particular issue that you see.
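For a single prediction, the ratio R can be sketched directly (toy probabilities stand in for the model's predicted class probabilities):

```python
probs = [0.05, 0.70, 0.20, 0.05]  # predicted probabilities over the 4 classes
true_label = 2                     # index of the correct class

R = max(probs) / probs[true_label]  # 0.70 / 0.20
print(R)
```

A large R means the model was both confident and wrong, so sorting the dev examples by R surfaces the most instructive mistakes.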
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/tensorflow/c_batched.ipynb | apache-2.0 | import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
"""
Explanation: <h1> 2c. Refactoring to add batching and feature-creation </h1>
In this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:
<ol>
<li> Refactor the input to read data in batches.
<li> Refactor the feature creation so that it is not one-to-one with inputs.
</ol>
The Pandas-based input function in the previous notebook also batched the data, but only after it had read the whole dataset into memory -- on a large dataset, this won't be an option.
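The streaming idea can be sketched with a plain-Python generator that yields fixed-size batches without ever materializing the full dataset (illustrative only; the actual pipeline below uses the tf.data API):

```python
def batched(records, batch_size):
    """Yield lists of at most batch_size records from any iterable, streaming."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial batch
        yield batch

print(list(batched(range(10), batch_size=4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Only one batch is ever held in memory at a time, which is the property tf.data's batch() gives us at scale.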
End of explanation
"""
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
"""
Explanation: <h2> 1. Refactor the input </h2>
Read data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.
End of explanation
"""
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
"""
Explanation: <h2> 2. Refactor the way features are created. </h2>
For now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.
End of explanation
"""
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = feature_cols, model_dir = OUTDIR)
model.train(input_fn = get_train(), steps = 100); # TODO: change the name of input_fn as needed
"""
Explanation: <h2> Create and train the model </h2>
Note that we train for num_steps * batch_size examples.
End of explanation
"""
def print_rmse(model, name, input_fn):
metrics = model.evaluate(input_fn = input_fn, steps = 1)
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', get_valid())
"""
Explanation: <h3> Evaluate model </h3>
As before, evaluate on the validation data. We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.
End of explanation
"""
|
lao-tseu-is-alive/mynotebooks | cgSVGDisplay.ipynb | gpl-2.0 | %config InlineBackend.figure_format = 'svg'
url_svg = 'http://clipartist.net/social/clipartist.net/B/base_tux_g_v_linux.svg'
from IPython.display import SVG, display, HTML
# Testing SVG inside Jupyter: the SVG(url=...) call below did not support a width parameter at the time of writing, so we use an <img> tag instead
#display(SVG(url=url_svg))
display(HTML('<img src="' + url_svg + '" width=150 height=50/>'))
"""
Explanation: What about writing SVG inside a cell in IPython or Jupyter
End of explanation
"""
%%writefile basic_circle.svg
<svg xmlns="http://www.w3.org/2000/svg">
<circle id="bluecircle" cx="30" cy="30" r="30" fill="blue" />
</svg>
url_svg = 'basic_circle.svg'
HTML('<img src="' + url_svg + '" width=70 height=70/>')
"""
Explanation: let's create a very simple SVG file
End of explanation
"""
class SvgScene:
def __init__(self,name="svg",height=500,width=500):
self.name = name
self.items = []
self.height = height
self.width = width
return
def add(self,item): self.items.append(item)
def strarray(self):
var = ["<?xml version=\"1.0\"?>\n",
"<svg height=\"%d\" width=\"%d\" xmlns=\"http://www.w3.org/2000/svg\" >\n" % (self.height,self.width),
" <g style=\"fill-opacity:1.0; stroke:black;\n",
" stroke-width:1;\">\n"]
for item in self.items: var += item.strarray()
var += [" </g>\n</svg>\n"]
return var
def write_svg(self,filename=None):
if filename:
self.svgname = filename
else:
self.svgname = self.name + ".svg"
file = open(self.svgname,'w')
file.writelines(self.strarray())
file.close()
return
def display(self):
url_svg = self.svgname
display(HTML('<img src="' + url_svg + '" width=' + str(self.width) + ' height=' + str(self.height) + '/>'))
return
class Line:
def __init__(self,start,end,color,width):
self.start = start
self.end = end
self.color = color
self.width = width
return
def strarray(self):
return [" <line x1=\"%d\" y1=\"%d\" x2=\"%d\" y2=\"%d\" style=\"stroke:%s;stroke-width:%d\"/>\n" %\
(self.start[0],self.start[1],self.end[0],self.end[1],colorstr(self.color),self.width)]
class Circle:
def __init__(self,center,radius,fill_color,line_color,line_width):
self.center = center
self.radius = radius
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
return
def strarray(self):
return [" <circle cx=\"%d\" cy=\"%d\" r=\"%d\"\n" %\
(self.center[0],self.center[1],self.radius),
" style=\"fill:%s;stroke:%s;stroke-width:%d\" />\n" % (colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Ellipse:
def __init__(self,center,radius_x,radius_y,fill_color,line_color,line_width):
self.center = center
self.radius_x = radius_x
self.radius_y = radius_y
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
def strarray(self):
return [" <ellipse cx=\"%d\" cy=\"%d\" rx=\"%d\" ry=\"%d\"\n" %\
(self.center[0],self.center[1],self.radius_x,self.radius_y),
" style=\"fill:%s;stroke:%s;stroke-width:%d\"/>\n" % (colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Polygon:
def __init__(self,points,fill_color,line_color,line_width):
self.points = points
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
def strarray(self):
polygon="<polygon points=\""
for point in self.points:
polygon+=" %d,%d" % (point[0],point[1])
return [polygon,\
"\" \nstyle=\"fill:%s;stroke:%s;stroke-width:%d\"/>\n" %\
(colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Rectangle:
def __init__(self,origin,height,width,fill_color,line_color,line_width):
self.origin = origin
self.height = height
self.width = width
self.fill_color = fill_color
self.line_color = line_color
self.line_width = line_width
return
def strarray(self):
return [" <rect x=\"%d\" y=\"%d\" height=\"%d\"\n" %\
(self.origin[0],self.origin[1],self.height),
" width=\"%d\" style=\"fill:%s;stroke:%s;stroke-width:%d\" />\n" %\
(self.width,colorstr(self.fill_color),colorstr(self.line_color),self.line_width)]
class Text:
def __init__(self,origin,text,size,color):
self.origin = origin
self.text = text
self.size = size
self.color = color
return
def strarray(self):
return [" <text x=\"%d\" y=\"%d\" font-size=\"%d\" fill=\"%s\">\n" %\
(self.origin[0],self.origin[1],self.size,colorstr(self.color)),
" %s\n" % self.text,
" </text>\n"]
def colorstr(rgb): return "#%x%x%x" % (rgb[0]//16,rgb[1]//16,rgb[2]//16)  # integer division keeps %x valid in Python 3
scene = SvgScene("test",300,300)
scene.add(Rectangle((100,100),200,200,(0,255,255),(0,0,0),1))
scene.add(Line((200,200),(200,300),(0,0,0),1))
scene.add(Line((200,200),(300,200),(0,0,0),1))
scene.add(Line((200,200),(100,200),(0,0,0),1))
scene.add(Line((200,200),(200,100),(0,0,0),1))
scene.add(Circle((200,200),30,(0,0,255),(0,0,0),1))
scene.add(Circle((200,300),30,(0,255,0),(0,0,0),1))
scene.add(Circle((300,200),30,(255,0,0),(0,0,0),1))
scene.add(Circle((100,200),30,(255,255,0),(0,0,0),1))
scene.add(Circle((200,100),30,(255,0,255),(0,0,0),1))
scene.add(Text((50,50),"Testing SVG 1",24,(0,0,0)))
scene.write_svg()
scene.display()
"""
Explanation: Now let's create an SVG scene builder,
inspired by Isendrak Skatasmid's code at:
http://code.activestate.com/recipes/578123-draw-svg-images-in-python-python-recipe-enhanced-v/
End of explanation
"""
|
jhillairet/scikit-rf | doc/source/examples/metrology/NanoVNA_V2_4port-splitter.ipynb | bsd-3-clause | import skrf
from skrf.calibration import TwoPortOnePath
# load networks of the raw calibration standard measurements
short_raw = skrf.Network('./data_MiniCircuits_splitter/cal_short_raw.s2p')
open_raw = skrf.Network('./data_MiniCircuits_splitter/cal_open_raw.s2p')
match_raw = skrf.Network('./data_MiniCircuits_splitter/cal_match_raw.s2p')
thru_raw = skrf.Network('./data_MiniCircuits_splitter/cal_thru_raw.s2p')
# create an ideal 50-Ohm line for the short, open, match and through reference responses ("ideals")
line = skrf.DefinedGammaZ0(frequency=short_raw.frequency, Z0=50)
# create and run the calibration
cal = TwoPortOnePath(ideals=[line.short(nports=2), line.open(nports=2), line.match(nports=2), line.thru()],
measured=[short_raw, open_raw, match_raw, thru_raw],
n_thrus=1, source_port=1)
cal.run()
"""
Explanation: Measuring a 4-Port With The 1.5-Port NanoVNA V2
The NanoVNA V2 is a 1.5-port VNA, meaning it can only measure the forward reflection and transmission coefficients $S_{11}$ and $S_{21}$ with its two ports. To measure the reverse coefficients $S_{12}$ and $S_{22}$ of a 2-port, it needs to be measured a second time with the port orientation physically flipped (or fake-flipped), as explained in this example. The issue gets worse if the device under test (DUT) has even more ports, as described in this example.
In this example, a 4-port SMA power splitter is measured with a NanoVNA V2 using the Matched Port technique. It involves measuring the 4-port in all port combinations (12 measurements) with the unused ports terminated with a 50-Ohm match. For the VNA calibration, four more measurements with the SMA calibration standards are required (SHORT, OPEN, MATCH, THROUGH). The manufacturer of the DUT provides measured S-parameters on their website, which will later be used for comparison.
Data Acquisition
To use scikit-rf's 2-port calibration classes, such as TwoPortOnePath, the individual measurement results of the DUT and of the calibration standards need to be provided as 2-port networks holding the data as $S_{11}$ and $S_{21}$. The following code snippet can be used to configure the NanoVNA, and aquire and save the data:
```python
import skrf
from skrf.vi import vna
connect to NanoVNA on /dev/ttyACM0 (Linux)
nanovna = skrf.vi.vna.NanoVNAv2('ASRL/dev/ttyACM0::INSTR')
for Windows users: ASRL1 for COM1
nanovna = skrf.vi.vna.NanoVNAv2('ASRL1::INSTR')
configure frequency sweep (for example 1 MHz to 4.4 GHz in 1 MHz steps)
f_start = 1e6
f_stop = 4.4e9
f_step = 1e6
num = int(1 + (f_stop - f_start) / f_step)
nanovna.set_frequency_sweep(f_start, f_stop, num)
measure all 12 combinations of the 4-port
n_ports = 4
for i_src in range(n_ports):
for i_sink in range(n_ports):
if i_sink != i_src:
input('Connect vna_p1 -> dut_p{}, vna_p2 -> dut_p{} and press ENTER:'.format(i_src + 1, i_sink + 1))
nw_raw = nanovna.get_snp_network(ports=(0, 1))
nw_raw.write_touchstone('./data_MiniCircuits_splitter/dut_raw_{}{}'.format(i_sink + 1, i_src + 1))
```
The calibration standards should be measured with the same repeated calls of get_snp_network(ports=(0, 1)) and write_touchstone().
Offline Calibration Using TwoPortOnePath
The measured data transferred from the NanoVNA via USB is always raw (uncalibrated), regardless of any calibration performed on the NanoVNA itself. This requires the correction of the data using an offline calibration. With the measurements of the calibration standards stored as individual 2-ports, a TwoPortOnePath calibration is easily created using scikit-rf. In this example, the impedances and phase delays of the measured SHORT, OPEN, MATCH, and THROUGH are assumed to be ideal, i.e. without any loss or offset:
End of explanation
"""
import numpy as np
# create an empty array (f x 4 x 4) for the 4-port to be filled
s = np.zeros((len(short_raw.frequency), 4, 4), dtype=complex)
splitter_cal = skrf.Network(frequency=short_raw.frequency, s=s)
# loop through all 12 measurements, apply the calibration and save it inside 4-port network
for i_src in range(4):
for i_recv in range(4):
if i_src != i_recv:
dut_raw_fwd = skrf.Network('./data_MiniCircuits_splitter/dut_raw_{}{}.s2p'.format(i_recv + 1, i_src + 1))
dut_raw_rev = skrf.Network('./data_MiniCircuits_splitter/dut_raw_{}{}.s2p'.format(i_src + 1, i_recv + 1))
dut_cal = cal.apply_cal((dut_raw_fwd, dut_raw_rev))
# dut_cal is now a fully populated and corrected 2-port; save it in splitter_cal
splitter_cal.s[:, i_src, i_src] = dut_cal.s[:, 0, 0]
splitter_cal.s[:, i_recv, i_src] = dut_cal.s[:, 1, 0]
splitter_cal.s[:, i_src, i_recv] = dut_cal.s[:, 0, 1]
splitter_cal.s[:, i_recv, i_recv] = dut_cal.s[:, 1, 1]
"""
Explanation: The 12 individual 2-port subnetworks can now be corrected with this calibration. For a full correction, the subnetworks with the forward and reverse measurements need to be provided in pairs, for example ($S_{32}$, $S_{22}$) paired with ($S_{23}$, $S_{33}$). A nested loop can take care of this. For the comparison with the measurements provided by the manufacturer, it is convenient to store the calibrated results in a single 4-port network, which can then easily be plotted:
End of explanation
"""
import matplotlib.pyplot as mplt
# load reference results by MiniCircuits
splitter_mc = skrf.Network('./data_MiniCircuits_splitter/MiniCircuits_ZX10Q-2-19-S+___Plus25degC.s4p')
# plot both results
fig, ax = mplt.subplots(4, 4)
fig.set_size_inches(12, 8)
for i in range(4):
for j in range(4):
splitter_cal.plot_s_db(i, j, ax=ax[i][j])
splitter_mc.plot_s_db(i, j, ax=ax[i][j])
ax[i][j].get_legend().remove()
ax[i][j].set_xlim(0, 4.4e9)
fig.legend(['NanoVNA_cal', 'Manufacturer'], loc='upper center', ncol=2)
fig.tight_layout(rect=(0, 0, 1, 0.95))
mplt.show()
"""
Explanation: The results are now corrected and assembled as a single 4-port network. For comparison, the magnitudes are plotted together with the measurements provided by the manufacturer:
End of explanation
"""
|
pmgbergen/porepy | tutorials/mpsa.ipynb | gpl-3.0 | import numpy as np
import porepy as pp
# Create grid
n = 5
g = pp.CartGrid([n,n])
g.compute_geometry()
"""
Explanation: Multi-point stress approximation (MPSA)
Porepy supports mpsa discretization for linear elasticity problem:
\begin{equation}
\nabla\cdot \sigma = -\vec f,\quad \vec x \in \Omega
\end{equation}
where $\vec f$ is a body force, and the stress $\sigma$ is given as a linear function of the displacement
\begin{equation}
\sigma = C:\vec u.
\end{equation}
The convention in porepy is that tension is positive. This means that the Cartesian component of the traction $\vec T = \sigma \cdot \vec n$, for a direction $\vec r$, is a positive number if the inner product $\vec T\cdot \vec r$ is positive. The displacements give the difference between the initial state of the rock and the deformed state. If we consider a point in its initial state $\vec x \in \Omega$ and let $\vec x^* \in \Omega$ be the same point in the deformed state, then, to be consistent with the convention we used for traction, the displacements are given by $\vec u = \vec x^* - \vec x$; that is, $\vec u$ points from the initial state to the final state.
To close the system we also need to define a set of boundary conditions. Here we have three possibilities: Neumann conditions, Dirichlet conditions, or Robin conditions, and we divide the boundary into three disjoint sets $\Gamma_N$, $\Gamma_D$ and $\Gamma_R$ for the three different types of boundary conditions
\begin{equation}
\vec u = g_D, \quad \vec x \in \Gamma_D\\
\sigma\cdot \vec n = g_N, \quad \vec x \in \Gamma_N\\
\sigma\cdot \vec n + W \vec u = g_R, \quad \vec x \in \Gamma_R
\end{equation}
To solve this system we first have to create the grid.
End of explanation
"""
# Create stiffness matrix
lam = np.ones(g.num_cells)
mu = np.ones(g.num_cells)
C = pp.FourthOrderTensor(mu, lam)
"""
Explanation: We also need to define the stiffness tensor $C$. In PorePy the constitutive law,
\begin{equation}
\sigma = C:u = 2 \mu \epsilon +\lambda \text{trace}(\epsilon) I, \quad \epsilon = \frac{1}{2}(\nabla u + (\nabla u)^\top)
\end{equation}
is implemented, and to get the tensor for this law we call:
End of explanation
"""
# Define boundary type
dirich = np.ravel(np.argwhere(g.face_centers[1] < 1e-10))
bound = pp.BoundaryConditionVectorial(g, dirich, ['dir']*dirich.size)
"""
Explanation: Then we need to define boundary conditions. We set the bottom boundary as a Dirichlet boundary, and the other boundaries are set to Neumann.
End of explanation
"""
top_faces = np.ravel(np.argwhere(g.face_centers[1] > n - 1e-10))
bot_faces = np.ravel(np.argwhere(g.face_centers[1] < 1e-10))
u_b = np.zeros((g.dim, g.num_faces))
u_b[1, top_faces] = -1 * g.face_areas[top_faces]
u_b[:, bot_faces] = 0
u_b = u_b.ravel('F')
"""
Explanation: We discretize the stresses by using the multi-point stress approximation (for details, please see: E. Keilegavlen and J. M. Nordbotten. “Finite volume methods for elasticity with weak symmetry”. In: International Journal for Numerical Methods in Engineering (2017)).
We now define the values we put on the boundaries. We clamp the bottom boundary, and push down by a constant force on the top boundary. Note that the value of the Neumann condition given on a face $\pi$ is the integrated traction $\int_\pi g_N d\vec x$.
End of explanation
"""
parameter_keyword = "mechanics"
mpsa_class = pp.Mpsa(parameter_keyword)
f = np.zeros(g.dim * g.num_cells)
specified_parameters = {"fourth_order_tensor": C, "source": f, "bc": bound, "bc_values": u_b}
data = pp.initialize_default_data(g, {}, parameter_keyword, specified_parameters)
mpsa_class.discretize(g, data)
A, b = mpsa_class.assemble_matrix_rhs(g, data)
u = np.linalg.solve(A.A, b)
"""
Explanation: We discretize this system using the Mpsa class. We assume zero body forces $f=0$
End of explanation
"""
pp.plot_grid(g, cell_value=u[1::2], figsize=(15, 12))
"""
Explanation: Now we can plot the y-displacement
End of explanation
"""
# Stress discretization
stress = data[pp.DISCRETIZATION_MATRICES][parameter_keyword][mpsa_class.stress_matrix_key]
# Discrete boundary conditions
bound_stress = data[pp.DISCRETIZATION_MATRICES][parameter_keyword][mpsa_class.bound_stress_matrix_key]
T = stress * u + bound_stress * u_b
T2d = np.reshape(T, (g.dim, -1), order='F')
u_b2d = np.reshape(u_b, (g.dim, -1), order='F')
assert np.allclose(np.abs(u_b2d[bound.is_neu]), np.abs(T2d[bound.is_neu]))
T = np.vstack((T2d, np.zeros(g.num_faces)))
pp.plot_grid(g, vector_value=T, figsize=(15, 12), alpha=0)
"""
Explanation: To understand the inner workings of the discretization, and to recover the traction on the grid faces, some more details are needed. The MPSA discretization creates two sparse matrices, "stress" and "bound_stress". They define the discretization of the cell-face traction:
\begin{equation}
T = \text{stress} \cdot u + \text{bound_stress} \cdot u_b
\end{equation}
Here $u$ is a vector of cell center displacements and has length g.dim * g.num_cells. The vector $u_b$ holds the boundary condition values. It is the displacement for Dirichlet boundaries and the traction for Neumann boundaries and has length g.dim * g.num_faces.
The global linear system is now formed by momentum balance on all cells. A row in the discretized system reads
\begin{equation}
-\int_{\Omega_k} f dv = \int_{\partial\Omega_k} T(n)dA = [div \cdot \text{stress} \cdot u + div\cdot\text{bound_stress}\cdot u_b]_k,
\end{equation}
The call to mpsa_class.assemble_matrix_rhs() creates the left-hand side matrix $\text{div} \cdot \text{stress}$ and the right-hand side vector, which consists of $f$ and $-\text{div} \cdot \text{bound_stress} \cdot u_b$ (note the sign change).
We can also retrieve the traction on the faces, by first accessing the discretization matrices stress and bound_stress
End of explanation
"""
|
mabevillar/rmtk | rmtk/plotting/hazard_outputs/plot_hazard_curves.ipynb | agpl-3.0 | %matplotlib inline
import matplotlib.pyplot as plt
from plot_hazard_outputs import HazardCurve, UniformHazardSpectra
hazard_curve_file = "../sample_outputs/hazard/hazard_curve.xml"
hazard_curves = HazardCurve(hazard_curve_file)
"""
Explanation: Hazard Curves and Uniform Hazard Spectra
This IPython notebook allows the user to visualise the hazard curves for individual sites generated from a probabilistic event-based hazard analysis or a classical PSHA-based hazard analysis, and to export the plots as png files. The user can also plot the uniform hazard spectra (UHS) for different sites.
Please specify the path of the xml file containing the hazard curve or uniform hazard spectra results in order to use the hazard curve plotter or the uniform hazard spectra plotter respectively.
End of explanation
"""
hazard_curves.loc_list
hazard_curves.plot('80.313820|28.786170','80.763820|29.986170')
"""
Explanation: Hazard Curve
End of explanation
"""
uhs_file = "../sample_outputs/hazard/uniform_hazard_spectra.xml"
uhs = UniformHazardSpectra(uhs_file)
uhs.plot(0)
"""
Explanation: Uniform Hazard Spectra
End of explanation
"""
|
Diyago/Machine-Learning-scripts | time series regression/DL aproach for timeseries/Air Pressure MLP.ipynb | apache-2.0 | from __future__ import print_function
import os
import sys
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import datetime
#set current working directory
os.chdir('D:/Practical Time Series')
#Read the dataset into a pandas.DataFrame
df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv')
print('Shape of the dataframe:', df.shape)
#Let's see the first five rows of the DataFrame
df.head()
"""
Explanation: In this notebook, we will use a multi-layer perceptron to develop time series forecasting models.
The dataset used for the examples of this notebook is on air pollution measured by concentration of
particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables
such as air pressure, air temperature, dew point and so on.
Two time series models are developed - one on air pressure and the other on pm2.5.
The dataset has been downloaded from UCI Machine Learning Repository.
https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
"""
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],
hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
#Let us draw a box plot to visualize the central tendency and dispersion of PRES
plt.figure(figsize=(5.5, 5.5))
g = sns.boxplot(df['PRES'])
g.set_title('Box plot of Air Pressure')
plt.savefig('plots/ch5/B07887_05_01.png', format='png', dpi=300)
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['PRES'])
g.set_title('Time series of Air Pressure')
g.set_xlabel('Index')
g.set_ylabel('Air Pressure readings in hPa')
plt.savefig('plots/ch5/B07887_05_02.png', format='png', dpi=300)
"""
Explanation: To make sure that the rows are in the right order of date and time of observations,
a new column datetime is created from the date and time related columns of the DataFrame.
The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order
over this column.
End of explanation
"""
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_PRES'] = scaler.fit_transform(np.array(df['PRES']).reshape(-1, 1))
"""
Explanation: Gradient descent algorithms perform better (for example, converge faster) if the variables are within the range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The PRES variable is min-max scaled to bound the transformed variable within [0, 1].
End of explanation
"""
"""
Let's start by splitting the dataset into train and validation. The dataset's time period is from
Jan 1st, 2010 to Dec 31st, 2014. The first four years - 2010 to 2013 is used as train and
2014 is kept for validation.
"""
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
#First five rows of train
df_train.head()
#First five rows of validation
df_val.head()
#Reset the indices of the validation set
df_val.reset_index(drop=True, inplace=True)
"""
The train and validation time series of standardized PRES are also plotted.
"""
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_train['scaled_PRES'], color='b')
g.set_title('Time series of scaled Air Pressure in train set')
g.set_xlabel('Index')
g.set_ylabel('Scaled Air Pressure readings')
plt.savefig('plots/ch5/B07887_05_03.png', format='png', dpi=300)
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_val['scaled_PRES'], color='r')
g.set_title('Time series of scaled Air Pressure in validation set')
g.set_xlabel('Index')
g.set_ylabel('Scaled Air Pressure readings')
plt.savefig('plots/ch5/B07887_05_04.png', format='png', dpi=300)
"""
Explanation: Before training the model, the dataset is split into two parts - a train set and a validation set.
The neural network is trained on the train set. This means that computation of the loss function, back propagation
and weight updates by a gradient descent algorithm are done on the train set. The validation set is
used to evaluate the model and to determine the number of epochs in model training. Increasing the number of
epochs will further decrease the loss function on the train set but might not necessarily have the same effect
for the validation set due to overfitting on the train set. Hence, the number of epochs is controlled by keeping
a tab on the loss function computed for the validation set. We use Keras with the TensorFlow backend to define and train
the model. All the steps involved in model training and validation are done by calling appropriate functions
of the Keras API.
End of explanation
"""
def makeXy(ts, nb_timesteps):
"""
Input:
ts: original time series
nb_timesteps: number of time steps in the regressors
Output:
X: 2-D array of regressors
y: 1-D array of target
"""
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_PRES'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_PRES'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
"""
Explanation: Now we need to generate regressors (X) and the target variable (y) for train and validation. A 2-D array of regressors and a 1-D array of targets are created from the original 1-D array of the column scaled_PRES in the DataFrames. For the time series forecasting model, the past seven days of observations are used to predict the next day. This is equivalent to an AR(7) model. We define a function which takes the original time series and the number of timesteps in the regressors as input to generate the arrays of X and y.
End of explanation
"""
from keras.layers import Dense, Input, Dropout
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances
input_layer = Input(shape=(7,), dtype='float32')
#Dense layers are defined with linear activation
dense1 = Dense(32, activation='linear')(input_layer)
dense2 = Dense(16, activation='linear')(dense1)
dense3 = Dense(16, activation='linear')(dense2)
"""
Explanation: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
End of explanation
"""
dropout_layer = Dropout(0.2)(dense3)
#Finally, the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
"""
Explanation: Multiple hidden layers and a large number of neurons in each hidden layer give neural networks the ability to model complex non-linearity in the underlying relations between regressors and target. However, deep neural networks can also overfit the train data and give poor results on the validation or test set. Dropout has been used effectively to regularize deep neural networks. In this example, a Dropout layer is added before the output layer. Dropout randomly sets a fraction p of input neurons to zero before passing them to the next layer. Randomly dropping inputs essentially acts as a bootstrap aggregating or bagging type of model ensembling. Random forest uses bagging by building trees on random subsets of input features. We use p=0.2 to drop out 20% of randomly selected input features.
End of explanation
"""
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mean_squared_error', optimizer='adam')
ts_model.summary()
"""
Explanation: The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making
predictions. Mean squared error (MSE) is used as the loss function.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike stochastic gradient descent, Adam uses
a different learning rate for each weight and updates them separately as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
End of explanation
"""
save_weights_at = os.path.join('keras_models', 'PRSA_data_Air_Pressure_MLP_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
"""
Explanation: The model is trained by calling the fit function on the model object and passing X_train and y_train. The training is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of the train set to be used for an instance of back propagation. The validation dataset is also passed to evaluate the model after every epoch completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch at which the loss function has been minimum.
End of explanation
"""
best_model = load_model(os.path.join('keras_models', 'PRSA_data_Air_Pressure_MLP_weights.13-0.0001.hdf5'))
preds = best_model.predict(X_val)
pred_PRES = scaler.inverse_transform(preds)
pred_PRES = np.squeeze(pred_PRES)
from sklearn.metrics import r2_score
r2 = r2_score(df_val['PRES'].loc[7:], pred_PRES)
print('R-squared for the validation set:', round(r2,4))
#Let's plot the first 50 actual and predicted values of air pressure.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['PRES'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_PRES[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted Air Pressure')
plt.ylabel('Air Pressure')
plt.xlabel('Index')
plt.savefig('plots/ch5/B07887_05_05.png', format='png', dpi=300)
"""
Explanation: Predictions are made for the air pressure from the best saved model. The model's predictions, which are on the scaled air pressure, are inverse transformed to get predictions on the original air pressure. The goodness-of-fit, or R-squared, is also calculated.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/tensorflow_extended/solutions/Vertex_AI_Training_and_Serving_with_TFX_and_Vertex_Pipelines.ipynb | apache-2.0 | # Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
"""
Explanation: Training and Serving with TFX and Vertex Pipelines
Learning objectives
Prepare example data.
Create a pipeline.
Run the pipeline on Vertex Pipelines.
Test with a prediction request.
Introduction
In this notebook, you will create and run a TFX pipeline which trains an ML model using Vertex AI Training service and publishes it to Vertex AI for serving. This notebook is based on the TFX pipeline we built in Simple TFX Pipeline for Vertex Pipelines Tutorial. If you have not read that tutorial yet, you should read it before proceeding with this notebook.
You can train models on Vertex AI using AutoML, or use custom training. In custom training, you can select many different machine types to power your training jobs, enable distributed training and use hyperparameter tuning. You can also serve prediction requests by deploying the trained model to Vertex AI Models and creating an endpoint.
In this notebook, we will use Vertex AI Training with custom jobs to train
a model in a TFX pipeline.
We will also deploy the model to serve prediction request using Vertex AI.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Install python packages
We will install required Python packages including TFX and KFP to author ML
pipelines and submit jobs to Vertex Pipelines.
End of explanation
"""
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Did you restart the runtime?
You can restart the runtime with the following cell.
End of explanation
"""
# Import necessary libraries and print their versions
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
"""
Explanation: Check the package versions.
End of explanation
"""
# Set the required variables
GOOGLE_CLOUD_PROJECT = 'qwiklabs-gcp-02-b8bef0a57866' # Replace this with your Project-ID
GOOGLE_CLOUD_REGION = 'us-central1' # Replace this with your bucket region
GCS_BUCKET_NAME = 'qwiklabs-gcp-02-b8bef0a57866' # Replace this with your Cloud Storage bucket
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
"""
Explanation: Set up variables
We will set up some variables used to customize the pipelines below. Following
information is required:
GCP Project id. See
Identifying your project id.
GCP Region to run pipelines. For more information about the regions that
Vertex Pipelines is available in, see the
Vertex AI locations guide.
Google Cloud Storage Bucket to store pipeline outputs.
Enter required values in the cell below before running it.
End of explanation
"""
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-vertex-training'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# Name of Vertex AI Endpoint.
ENDPOINT_NAME = 'prediction-' + PIPELINE_NAME
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
"""
Explanation: Set gcloud to use your project.
End of explanation
"""
# Create a directory and copy the dataset
!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/
"""
Explanation: Prepare example data
We will use the same
Palmer Penguins dataset
as
Simple TFX Pipeline Tutorial.
There are four numeric features in this dataset which were already normalized
to have range [0,1]. We will build a classification model which predicts the
species of penguins.
We need to make our own copy of the dataset. Because TFX ExampleGen reads
inputs from a directory, we need to create a directory and copy dataset to it
on GCS.
End of explanation
"""
# TODO 1
# Review the contents of the CSV file
!gsutil cat {DATA_ROOT}/penguins_processed.csv | head
"""
Explanation: Take a quick look at the CSV file.
End of explanation
"""
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and
# slightly modified run_fn() to add distribution_strategy.
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
}, _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
"""Generates features and label for training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
schema: schema of the input data.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
"""Creates a DNN Keras model for classifying penguin data.
Returns:
A Keras Model.
"""
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# NEW: Read `use_gpu` from the custom_config of the Trainer.
# if it uses GPU, enable MirroredStrategy.
def _get_distribution_strategy(fn_args: tfx.components.FnArgs):
if fn_args.custom_config.get('use_gpu', False):
logging.info('Using MirroredStrategy with one GPU.')
return tf.distribute.MirroredStrategy(devices=['device:GPU:0'])
return None
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
# NEW: If we have a distribution strategy, build a model in a strategy scope.
strategy = _get_distribution_strategy(fn_args)
if strategy is None:
model = _make_keras_model()
else:
with strategy.scope():
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
"""
Explanation: Create a pipeline
Our pipeline will be very similar to the pipeline we created in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
The pipeline will consist of three components: CsvExampleGen, Trainer and
Pusher. But we will use special Trainer and Pusher components. The Trainer component will move
training workloads to Vertex AI, and the Pusher component will publish the
trained ML model to Vertex AI instead of a filesystem.
TFX provides a special Trainer to submit training jobs to Vertex AI Training
service. All we have to do is use Trainer in the extension module
instead of the standard Trainer component along with some required GCP
parameters.
In this tutorial, we will run Vertex AI Training jobs only using CPUs first
and then with a GPU.
TFX also provides a special Pusher to upload the model to Vertex AI Models.
Pusher will create a Vertex AI Endpoint resource to serve online
predictions, too. See
Vertex AI documentation
to learn more about online predictions provided by Vertex AI.
Write model code.
The model itself is almost similar to the model in
Simple TFX Pipeline Tutorial.
We will add _get_distribution_strategy() function which creates a
TensorFlow distribution strategy
and it is used in run_fn to use MirroredStrategy if GPU is available.
End of explanation
"""
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
"""
Explanation: Copy the module file to GCS which can be accessed from the pipeline components.
Otherwise, you might want to build a container image including the module file
and use the image to run the pipeline and AI Platform Training jobs.
End of explanation
"""
# TODO 2
def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,
module_file: str, endpoint_name: str, project_id: str,
region: str, use_gpu: bool) -> tfx.dsl.Pipeline:
"""Implements the penguin pipeline with TFX."""
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = tfx.components.CsvExampleGen(input_base=data_root)
# NEW: Configuration for Vertex AI Training.
# This dictionary will be passed as `CustomJobSpec`.
vertex_job_spec = {
'project': project_id,
'worker_pool_specs': [{
'machine_spec': {
'machine_type': 'n1-standard-4',
},
'replica_count': 1,
'container_spec': {
'image_uri': 'gcr.io/tfx-oss-public/tfx:{}'.format(tfx.__version__),
},
}],
}
if use_gpu:
# See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype
# for available machine types.
vertex_job_spec['worker_pool_specs'][0]['machine_spec'].update({
'accelerator_type': 'NVIDIA_TESLA_K80',
'accelerator_count': 1
})
# Trains a model using Vertex AI Training.
# NEW: We need to specify a Trainer for GCP with related configs.
trainer = tfx.extensions.google_cloud_ai_platform.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5),
custom_config={
tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:
True,
tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:
region,
tfx.extensions.google_cloud_ai_platform.TRAINING_ARGS_KEY:
vertex_job_spec,
'use_gpu':
use_gpu,
})
# NEW: Configuration for pusher.
vertex_serving_spec = {
'project_id': project_id,
'endpoint_name': endpoint_name,
# Remaining argument is passed to aiplatform.Model.deploy()
# See https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#deploy_the_model
# for the detail.
#
# Machine type is the compute resource to serve prediction requests.
# See https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types
    # for available machine types and accelerators.
'machine_type': 'n1-standard-4',
}
# Vertex AI provides pre-built containers with various configurations for
# serving.
# See https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
# for available container images.
serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest'
if use_gpu:
vertex_serving_spec.update({
'accelerator_type': 'NVIDIA_TESLA_K80',
'accelerator_count': 1
})
serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-6:latest'
# NEW: Pushes the model to Vertex AI.
pusher = tfx.extensions.google_cloud_ai_platform.Pusher(
model=trainer.outputs['model'],
custom_config={
tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:
True,
tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:
region,
tfx.extensions.google_cloud_ai_platform.VERTEX_CONTAINER_IMAGE_URI_KEY:
serving_image,
tfx.extensions.google_cloud_ai_platform.SERVING_ARGS_KEY:
vertex_serving_spec,
})
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components)
"""
Explanation: Write a pipeline definition
We will define a function to create a TFX pipeline. It has the same three
Components as in
Simple TFX Pipeline Tutorial,
but we use a Trainer and Pusher component in the GCP extension module.
tfx.extensions.google_cloud_ai_platform.Trainer behaves like a regular
Trainer, but it just moves the computation for the model training to cloud.
It launches a custom job in Vertex AI Training service and the trainer
component in the orchestration system will just wait until the Vertex AI
Training job completes.
tfx.extensions.google_cloud_ai_platform.Pusher creates a Vertex AI Model and a Vertex AI Endpoint using the
trained model.
End of explanation
"""
import os
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
data_root=DATA_ROOT,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
endpoint_name=ENDPOINT_NAME,
project_id=GOOGLE_CLOUD_PROJECT,
region=GOOGLE_CLOUD_REGION,
# We will use CPUs only for now.
use_gpu=False))
"""
Explanation: Run the pipeline on Vertex Pipelines.
We will use Vertex Pipelines to run the pipeline as we did in
Simple TFX Pipeline for Vertex Pipelines Tutorial.
End of explanation
"""
# TODO 3
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
# Create a job to submit the pipeline
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.submit()
"""
Explanation: The generated definition file can be submitted using Google Cloud aiplatform
client in google-cloud-aiplatform package.
End of explanation
"""
ENDPOINT_ID='8646374722876997632' # Replace this with your ENDPOINT_ID
if not ENDPOINT_ID:
from absl import logging
logging.error('Please set the endpoint id.')
"""
Explanation: Now you can visit the link in the output above, or visit 'Vertex AI > Pipelines'
in the Google Cloud Console, to see the
progress.
It will take around 30 minutes to complete the pipeline.
Test with a prediction request
Once the pipeline completes, you will find a deployed model at one of the
endpoints in 'Vertex AI > Endpoints'. We need to know the id of the endpoint to
send a prediction request to the new endpoint. This is different from the
endpoint name we entered above. You can find the id on the Endpoints page in
the Google Cloud Console; it looks like a very long number.
Set ENDPOINT_ID below before running it.
End of explanation
"""
# TODO 4
# docs_infra: no_execute
import numpy as np
# The AI Platform services require regional API endpoints.
client_options = {
'api_endpoint': GOOGLE_CLOUD_REGION + '-aiplatform.googleapis.com'
}
# Initialize client that will be used to create and send requests.
client = aiplatform.gapic.PredictionServiceClient(client_options=client_options)
# Set data values for the prediction request.
# Our model expects 4 feature inputs and produces 3 output values for each
# species. Note that the output is logit value rather than probabilities.
# See the model code to understand input / output structure.
instances = [{
'culmen_length_mm':[0.71],
'culmen_depth_mm':[0.38],
'flipper_length_mm':[0.98],
'body_mass_g': [0.78],
}]
endpoint = client.endpoint_path(
project=GOOGLE_CLOUD_PROJECT,
location=GOOGLE_CLOUD_REGION,
endpoint=ENDPOINT_ID,
)
# Send a prediction request and get response.
response = client.predict(endpoint=endpoint, instances=instances)
# Uses argmax to find the index of the maximum value.
print('species:', np.argmax(response.predictions[0]))
"""
Explanation: We use the same aiplatform client to send a request to the endpoint. We will
send a prediction request for Penguin species classification. The input is the
four features that we used, and the model will return three values, because the
model outputs one value for each species.
For example, the following specific example has the largest value at index '2'
and will print '2'.
End of explanation
"""
|
CodyKochmann/battle_tested | tutorials/hardening_filters.ipynb | mit |
def list_of_strings_v1(iterable):
""" converts the iterable input into a list of strings """
# build the output
out = [str(i) for i in iterable]
# validate the output
for i in out:
assert type(i) == str
# return
return out
"""
Explanation: battle_tested was originally created to harden your safeties.
In the example below, battle_tested is being used to harden a function that is supposed to give you a predictable list of strings, so you can continue with your code knowing the input has already been sanitized.
End of explanation
"""
list_of_strings_v1(range(10))
"""
Explanation: Here's an example of what many programmers would consider enough of a test.
End of explanation
"""
from battle_tested import fuzz
fuzz(list_of_strings_v1)
"""
Explanation: The above proves it works and is pretty clean and understandable right?
End of explanation
"""
def list_of_strings_v2(iterable):
""" converts the iterable input into a list of strings """
try:
iter(iterable)
# build the output
out = [str(i) for i in iterable]
except TypeError: # raised when input was not iterable
out = [str(iterable)]
# validate the output
for i in out:
assert type(i) == str
# return
return out
fuzz(list_of_strings_v2)
"""
Explanation: And with 2 lines of code, that was proven wrong.
While you could argue that the input of the tests is crazy and would never happen with how you structured your code, let's see how hard it really is to rewrite this function so it actually can reliably act as your input's filter.
End of explanation
"""
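Fuzzing v2 still leaves one class of failure: elements whose own str() raises. A further-hardened sketch could catch that too (this is my own hypothetical v3, not part of the original tutorial — the real test would be running fuzz() against it):

```python
def list_of_strings_v3(iterable):
    """ converts any input into a list of strings, without raising """
    # fall back to wrapping non-iterable inputs in a list, as in v2
    try:
        iter(iterable)
    except TypeError:
        iterable = [iterable]
    out = []
    for i in iterable:
        try:
            out.append(str(i))
        except Exception:
            # str() itself can raise for pathological objects;
            # fall back to repr() (which can, in principle, also raise)
            out.append(repr(i))
    # validate the output
    for i in out:
        assert type(i) == str
    return out
```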
|
celiasmith/syde556 | SYDE 556 Lecture 5 Dynamics.ipynb | gpl-2.0 |
%pylab inline
import nengo
model = nengo.Network()
with model:
ensA = nengo.Ensemble(100, dimensions=1)
def feedback(x):
return x+1
conn = nengo.Connection(ensA, ensA, function=feedback, synapse = 0.1)
ensA_p = nengo.Probe(ensA, synapse=.01)
sim = nengo.Simulator(model)
sim.run(.5)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
"""
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Accompanying Readings: Chapter 8
ADMIN STUFF: Next assignment, project proposal due dates
Dynamics
Everything we've looked at so far has been feedforward
There's some pattern of activity in one group of neurons representing $x$
We want that to cause some pattern of activity in another group of neurons to represent $y=f(x)$
These can be chained together to make more complex systems $z=h(f(x)+g(y))$
What about recurrent networks?
What happens when we connect a neural group back to itself?
<img src="files/lecture5/recnet1.png">
Recurrent functions
What if we do exactly what we've done so far in the past, but instead of connecting one group of neurons to another, we just connect it back to itself
Instead of $y=f(x)$
We get $x=f(x)$ (???)
As written, this is clearly nonsensical
For example, if we do $f(x)=x+1$ then we'd have $x=x+1$, or $x-x=1$, or $0=1$
But don't forget about time
What if it was $x_{t+1} = f(x_t)$
Which makes more sense because we're talking about a real physical system
This is a lot like a differential equation
What would happen if we built this?
Try it out
Let's try implementing this kind of circuit
Start with $x_{t+1}=x_t+1$
End of explanation
"""
with model:
def feedback(x):
return -x
conn.function = feedback
sim = nengo.Simulator(model)
sim.run(.5)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
"""
Explanation: That sort of makes sense
$x$ increases quickly, then hits an upper bound
How quickly?
What parameters of the system affect this?
What are the precise dynamics?
What about $f(x)=-x$?
End of explanation
"""
from nengo.utils.functions import piecewise
with model:
stim = nengo.Node(piecewise({0:1, .2:-1, .4:0}))
nengo.Connection(stim, ensA)
sim = nengo.Simulator(model)
sim.run(.6)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
"""
Explanation: That also makes sense. What if we nudge it away from zero?
End of explanation
"""
with model:
stim.output = piecewise({.1:.2, .2:.4, .5:0})
def feedback(x):
return x*x
conn.function = feedback
sim = nengo.Simulator(model)
sim.run(.6)
plot(sim.trange(), sim.data[ensA_p])
ylim(-1.5,1.5);
"""
Explanation: With an input of 1, $x=0.5$
With an input of -1, $x=-0.5$
With an input of 0, it goes back to $x=0$
Does this make sense?
Why / why not?
And why that particular timing/curvature?
What about $f(x)=x^2$?
End of explanation
"""
import nengo
from nengo.utils.functions import piecewise
model = nengo.Network(seed=4)
with model:
stim = nengo.Node(piecewise({.3:1}))
ensA = nengo.Ensemble(100, dimensions=1)
def feedback(x):
return x
nengo.Connection(stim, ensA)
#conn = nengo.Connection(ensA, ensA, function=feedback)
stim_p = nengo.Probe(stim)
ensA_p = nengo.Probe(ensA, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
legend()
ylim(-.2,1.5);
"""
Explanation: Well that's weird
Stable at $x=0$ with no input
Stable at .2
Unstable at .4, shoots up high
Something very strange happens around $x=1$ when the input is turned off (why decay if $f(x) = x^2$?)
Why is this happening?
Making sense of dynamics
Let's go back to something simple
Just a single feed-forward neural population
Encode $x$ into current, compute spikes, decode filtered spikes into $\hat{x}$
Instead of a constant input, let's change the input
Change it suddenly from zero to one to get a sense of what's happening with changes
End of explanation
"""
with model:
ensA_p = nengo.Probe(ensA, synapse=0.03)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
legend()
ylim(-.2,1.5);
"""
Explanation: This was supposed to compute $f(x)=x$
For a constant input, that works
But we get something else when there's a change in the input
What is this difference?
What affects it?
End of explanation
"""
tau = 0.03
with model:
ensA_p = nengo.Probe(ensA, synapse=tau)
sim = nengo.Simulator(model)
sim.run(1)
stim_filt = nengo.Lowpass(tau).filt(sim.data[stim_p], dt=sim.dt)
plot(sim.trange(), sim.data[ensA_p], label="$\hat{x}$")
plot(sim.trange(), sim.data[stim_p], label="$x$")
plot(sim.trange(), stim_filt, label="$h(t)*x(t)$")
legend()
ylim(-.2,1.5);
"""
Explanation: The time constant of the post-synaptic filter
We're not getting $f(x)=x$
Instead we're getting $f(x(t))=x(t)*h(t)$
End of explanation
"""
import nengo
from nengo.utils.functions import piecewise
from nengo.utils.ensemble import tuning_curves
tau = 0.01
model = nengo.Network('Eye control', seed=8)
with model:
stim = nengo.Node(piecewise({.3:1, .6:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(20, dimensions=1)
def feedback(x):
return 1*x
conn = nengo.Connection(stim, velocity)
conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
stim_p = nengo.Probe(stim)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
sim = nengo.Simulator(model)
sim.run(1)
x, A = tuning_curves(position, sim)
plot(x,A)
figure()
plot(sim.trange(), sim.data[stim_p], label = "stim")
plot(sim.trange(), sim.data[position_p], label = "position")
plot(sim.trange(), sim.data[velocity_p], label = "velocity")
legend(loc="best");
"""
Explanation: So there are dynamics and filtering going on, since there is always a synaptic filter on a connection
Why isn't it exactly the same?
Recurrent connections are dynamic as well (i.e. passing past information to future state of the population)
Let's take a look more carefully
Recurrent connections
So a connection actually approximates $f(x(t))*h(t)$
So what does a recurrent connection do?
Also $x(t) = f(x(t))*h(t)$
where $$
h(t) = \begin{cases}
e^{-t/\tau} &\mbox{if } t > 0 \
0 &\mbox{otherwise}
\end{cases}
$$
How can we work with this?
General rule of thumb: convolutions are annoying, so let's get rid of them
We could do a Fourier transform
$X(\omega)=F(\omega)H(\omega)$
But, since we are studying the response of a system (rather than a continuous signal), there's a more general and appropriate transform that makes life even easier:
Laplace transform (it is more general because $s = a + j\omega$)
The Laplace transform of our equations are:
$X(s)=F(s)H(s)$
$H(s)={1 \over {1+s\tau}}$
Rearranging:
$X(s)=F(s){1 \over {1+s\tau}}$
$X(s)(1+s\tau) = F(s)$
$X(s) + X(s)s\tau = F(s)$
$sX(s) = {1 \over \tau} (F(s)-X(s))$
Convert back into the time domain (inverse Laplace):
${dx \over dt} = {1 \over \tau} (f(x(t))-x(t))$
Dynamics
This says that if we introduce a recurrent connection, we end up implementing a differential equation
So what happened with $f(x)=x+1$?
$\dot{x} = {1 \over \tau} (x+1-x)$
$\dot{x} = {1 \over \tau}$
What about $f(x)=-x$?
$\dot{x} = {1 \over \tau} (-x-x)$
$\dot{x} = {-2x \over \tau}$
Consistent with the figures above: at inputs of $\pm 1$, the equilibrium satisfies $0 = -2x \pm 1$, so $x = \pm 0.5$
And $f(x)=x^2$?
$\dot{x} = {1 \over \tau} (x^2-x)$
Consistent with figure, at input of .2, $0=x^2-x+.2=(x-.72)(x-.27)$, for input of .4 you get imaginary solutions.
For 0 input, x = 0,1 ... what if we get it over 1 before turning off input?
Synthesis
What if there's some differential equation we really want to implement?
We want $\dot{x} = f(x)$
So we do a recurrent connection of $f'(x)=\tau f(x)+x$
The resulting model will end up implementing $\dot{x} = {1 \over \tau} (\tau f(x)+x-x)=f(x)$
Inputs
What happens if there's an input as well?
We'll call the input $u$ from another population, and it is also computing some function $g(u)$
$x(t) = f(x(t))h(t)+g(u(t))h(t)$
Follow the same derivation steps
$\dot{x} = {1 \over \tau} (f(x)-x + g(u))$
So if you have some input that you want added to $\dot{x}$, you need to scale it by $\tau$
This lets us do any differential equation of the form $\dot{x}=f(x)+g(u)$
A derivation
Linear systems
Let's take a step back and look at just linear systems
The book shows that we can implement any equation of the form
$\dot{x}(t) = A x(t) + B u(t)$
Where $A$ and $x$ are a matrix and vector -- giving a standard control theoretic structure
<img src="files/lecture5/control_sys.png" width="600">
Our goal is to convert this to a structure which has $h(t)$ as the transfer function instead of the standard $\int$
<img src="files/lecture5/control_sysh.png" width="600">
Using Laplace on the standard form gives:
$sX(s) = A X(s) + B U(s)$
Laplace on the 'neural control' form gives (as before where $F(s) = A'X(s) + B'U(s)$):
$X(s) = {1 \over {1 + s\tau}} (A'X(s) + B'U(s))$
$X(s) + \tau sX(s) = (A'X(s) + B'U(s))$
$sX(s) = {1 \over \tau} (A'X(s) + B'U(s) - X(s))$
$sX(s) = {1 \over \tau} ((A' - I) X(s) + B'U(s))$
Making the 'standard' and 'neural' equations equal to one another, we find that for any system with a given A and B, the A' and B' of the equivalent neural system are given by:
$A' = \tau A + I$ and
$B' = \tau B$
where $I$ is the identity matrix
This is nice because lots of engineers think of the systems they build in these terms (i.e. as linear control systems).
Nonlinear systems
In fact, these same steps can be taken to account for nonlinear control systems as well:
$\dot{x}(t) = f(x(t),u(t),t)$
For a neural system with transfer function $h(t)$:
$X(s) = H(s)F'(X(s),U(s),s)$
$X(s) = {1 \over {1 + s\tau}} F'(X(s),U(s),s)$
$sX(s) = {1 \over \tau} (F'(X(s),U(s),s) - X(s))$
This gives the general result (slightly more general than what we saw earlier):
$F'(X(s),U(s),s) = \tau(F(X(s),U(s),s)) + X(s)$
Applications
Eye control
Part of the brainstem called the nuclei prepositus hypoglossi
Input is eye velocity $v$
Output is eye position $x$
$\dot{x}=v$
This is an integrator ($x$ is the integral of $v$)
It's a linear system, so, to get it in the standard control form $\dot{x}=Ax+Bu$ we have:
$A=0$
$B=1$
So that means we need $A'=\tau 0 + I = 1$ and $B'=\tau 1 = \tau$
<img src="files/lecture5/eye_sys.png" width="400">
End of explanation
"""
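Before building the spiking version, we can sanity-check the synthesis rule numerically. The sketch below (my own, no neurons — just Euler integration of the synapse equation $\tau \dot{x} = A'x + B'u - x$, with arbitrary constants) shows that $A'=1$, $B'=\tau$ turns the low-pass filter into an integrator:

```python
tau = 0.1    # synaptic time constant (arbitrary choice)
dt = 0.001

x = 0.0          # state of the filtered recurrent system
integral = 0.0   # direct numerical integral of the input, for comparison

for step in range(1000):
    t = step * dt
    v = 1.0 if 0.3 <= t < 0.6 else 0.0   # velocity pulse
    # recurrent system: tau*dx/dt = A'*x + B'*v - x, with A' = 1, B' = tau
    x += dt * (1.0 * x + tau * v - x) / tau
    # ground truth: dx/dt = v
    integral += dt * v

# the recurrent system tracks the integral of v (~0.3, the pulse area)
assert abs(x - integral) < 1e-9
```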
import nengo
from nengo.dists import Uniform
from nengo.utils.ensemble import tuning_curves
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(100, dimensions=1, max_rates=Uniform(100,200))
connection = nengo.Connection(neurons, neurons)
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)
plot(x,A)
figure()
plot(x, xhat-x)
axhline(0, color='k')
xlabel('$x$')
ylabel('$\hat{x}-x$');
"""
Explanation: That's pretty good... the area under the input is about equal to the magnitude of the output.
But, in order to be a perfect integrator, we'd need exactly $x=1\times x$
We won't get exactly that
Neural implementations are always approximations
Two forms of error:
$E_{distortion}$, the decoding error
$E_{noise}$, the random noise error
What will they do?
Distortion error
<img src="files/lecture5/integrator_error.png">
What affects this?
End of explanation
"""
import nengo
from nengo.utils.functions import piecewise
tau = 0.1
tau_c = 2.0
model = nengo.Network('Eye control', seed=5)
with model:
stim = nengo.Node(piecewise({.3:1, .6:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(200, dimensions=1)
def feedback(x):
return (-tau/tau_c + 1)*x
conn = nengo.Connection(stim, velocity)
conn = nengo.Connection(velocity, position, transform=tau, synapse=tau)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
stim_p = nengo.Probe(stim)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
sim = nengo.Simulator(model)
sim.run(5)
plot(sim.trange(), sim.data[stim_p], label = "stim")
plot(sim.trange(), sim.data[position_p], label = "position")
plot(sim.trange(), sim.data[velocity_p], label = "velocity")
legend(loc="best");
"""
Explanation: We can think of the distortion error as introducing a bunch of local attractors into the representation
Any 'downward' x-crossing will be a stable point ('upwards' is unstable).
There will be a tendency to drift towards one of these even if the input is zero.
Noise error
What will random noise do?
Push the representation back and forth
What if it is small?
What if it is large?
What will changing the post-synaptic time constant $\tau$ do?
How does that interact with noise?
Real neural integrators
But real eyes aren't perfect integrators
If you get someone to look at something, then turn off the lights but tell them to keep looking in the same direction, their eye will drift back to centre (with about a 70 s time constant)
How do we implement that?
$\dot{x}=-{1 \over \tau_c}x + v$
$\tau_c$ is the time constant of that return to centre
$A'=\tau {-1 \over \tau_c}+1$
$B' = \tau$
End of explanation
"""
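As a sanity check on the mapping $A' = -\tau/\tau_c + 1$, the plain-Euler sketch below (no neurons; the constants mirror the model in the next cell) confirms that the filtered recurrence decays with the slow time constant $\tau_c$, not the synaptic $\tau$:

```python
import math

tau = 0.1     # synaptic time constant
tau_c = 2.0   # desired decay-to-centre time constant
dt = 0.001

x = 1.0
for _ in range(int(tau_c / dt)):          # simulate for one time constant
    f_prime = (-tau / tau_c + 1.0) * x    # A' = -tau/tau_c + 1
    x += dt * (f_prime - x) / tau         # synapse: tau*dx/dt = f'(x) - x

# after t = tau_c the state should have decayed by a factor of e
assert abs(x - math.exp(-1)) < 0.01
```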
import nengo
from nengo.utils.functions import piecewise
tau = 0.1
model = nengo.Network('Controlled integrator', seed=1)
with model:
vel = nengo.Node(piecewise({.2:1.5, .5:0 }))
dec = nengo.Node(piecewise({.7:.2, .9:0 }))
velocity = nengo.Ensemble(100, dimensions=1)
decay = nengo.Ensemble(100, dimensions=1)
position = nengo.Ensemble(400, dimensions=2)
def feedback(x):
return -x[1]*x[0]+x[0], 0
conn = nengo.Connection(vel, velocity)
conn = nengo.Connection(dec, decay)
conn = nengo.Connection(velocity, position[0], transform=tau, synapse=tau)
conn = nengo.Connection(decay, position[1], synapse=0.01)
conn = nengo.Connection(position, position, function=feedback, synapse=tau)
position_p = nengo.Probe(position, synapse=.01)
velocity_p = nengo.Probe(velocity, synapse=.01)
decay_p = nengo.Probe(decay, synapse=.01)
sim = nengo.Simulator(model)
sim.run(1)
plot(sim.trange(), sim.data[decay_p])
lineObjects = plot(sim.trange(), sim.data[position_p])
plot(sim.trange(), sim.data[velocity_p])
legend(('decay','position','decay','velocity'),loc="best");
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/controlled_integrator.py.cfg")
"""
Explanation: That also looks right. Note that as $\tau_c \rightarrow \infty$ this will approach the integrator.
Humans (a) and Goldfish (b)
Humans have more neurons doing this than goldfish (~1000 vs ~40)
They also have slower decay (70 s vs. 10 s).
Why do these fit together?
<img src="files/lecture5/integrator_decay.png">
Controlled Integrator
What if we want an integrator where we can adjust the decay on-the-fly?
Separate input telling us what the decay constant $d$ should be
$\dot{x} = -d x + v$
So there are two inputs: $v$ and $d$
This is no longer in the standard $Ax + Bu$ form. Sort of...
Let $A = -d(t)$, so it's not a matrix
But it is of the more general form: ${dx \over dt}=f(x)+g(u)$
We need to compute a nonlinear function of an input ($d$) and the state variable ($x$)
How can we do this?
Going to 2D so we can compute the nonlinear function
Let's have the state variable be $[x, d]$
<img src="files/lecture5/controlled_integrator.png" width = "600">
End of explanation
"""
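The same trick can be checked without neurons. This sketch (my own, Euler on the synapse equation) uses the textbook form $f'(x) = \tau(-d\,x) + x$ — note the model in the next cell folds the $\tau$ scaling of the decay input in slightly differently — and shows the input being integrated while $d=0$, then leaking once $d>0$:

```python
tau = 0.1
dt = 0.001

x = 0.0
trace = []
for step in range(1000):
    t = step * dt
    v = 1.5 if 0.2 <= t < 0.5 else 0.0    # velocity input
    d = 0.2 if t >= 0.7 else 0.0          # decay control input
    f_prime = tau * (-d * x) + x          # f'(x) = tau*f(x) + x
    x += dt * (f_prime + tau * v - x) / tau   # effective dx/dt = -d*x + v
    trace.append(x)

# integrates v to ~0.45 while d == 0, then decays once d > 0
assert abs(trace[499] - 0.45) < 0.01
assert trace[-1] < trace[699]
```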
import nengo
model = nengo.Network('Oscillator')
freq = -.5
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
osc = nengo.Ensemble(200, dimensions=2)
def feedback(x):
return x[0]+freq*x[1], -freq*x[0]+x[1]
nengo.Connection(osc, osc, function=feedback, synapse=.01)
nengo.Connection(stim, osc)
osc_p = nengo.Probe(osc, synapse=.01)
sim = nengo.Simulator(model)
sim.run(.5)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[osc_p][:,0],sim.data[osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/oscillator.py.cfg")
"""
Explanation: Other fun functions
Oscillator
$F = -kx = m \ddot{x}$ let $\omega = \sqrt{\frac{k}{m}}$
$\frac{d}{dt} \begin{bmatrix}
\omega x \
\dot{x}
\end{bmatrix}
=
\begin{bmatrix}
0 & \omega \
-\omega & 0
\end{bmatrix}
\begin{bmatrix}
x_0 \
x_1
\end{bmatrix}$
Therefore, with the above, $\dot{x}=[x_1, -x_0]$
End of explanation
"""
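A quick non-neural check of the recurrent function above (my own sketch; plain Euler on the synapse equation, with a small dt because forward Euler slowly inflates the radius of an undamped oscillator) confirms it rotates the state around the origin:

```python
import math

tau = 0.01
freq = -0.5
dt = 0.0001

x0, x1 = 0.5, 0.5
for _ in range(1000):                    # 0.1 s of simulated time
    f0 = x0 + freq * x1                  # the recurrent function
    f1 = -freq * x0 + x1
    dx0 = dt * (f0 - x0) / tau           # synapse: tau*dx/dt = f(x) - x
    dx1 = dt * (f1 - x1) / tau
    x0, x1 = x0 + dx0, x1 + dx1

r = math.hypot(x0, x1)
assert abs(r - math.hypot(0.5, 0.5)) < 0.02   # radius (roughly) preserved
assert abs(x0 - 0.5) > 0.1                    # but the state has rotated
```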
import nengo
model = nengo.Network('Lorenz Attractor', seed=3)
tau = 0.1
sigma = 10
beta = 8.0/3
rho = 28
def feedback(x):
dx0 = -sigma * x[0] + sigma * x[1]
dx1 = -x[0] * x[2] - x[1]
dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho
return [dx0 * tau + x[0],
dx1 * tau + x[1],
dx2 * tau + x[2]]
with model:
lorenz = nengo.Ensemble(2000, dimensions=3, radius=60)
nengo.Connection(lorenz, lorenz, function=feedback, synapse=tau)
lorenz_p = nengo.Probe(lorenz, synapse=tau)
sim = nengo.Simulator(model)
sim.run(14)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[lorenz_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[lorenz_p][:,0],sim.data[lorenz_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/lorenz.py.cfg")
"""
Explanation: Lorenz Attractor (a chaotic attractor)
$\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$
End of explanation
"""
import nengo
model = nengo.Network('Square Oscillator')
tau = 0.02
r=6
def feedback(x):
if abs(x[1])>abs(x[0]):
if x[1]>0: dx=[r, 0]
else: dx=[-r, 0]
else:
if x[0]>0: dx=[0, -r]
else: dx=[0, r]
return [tau*dx[0]+x[0], tau*dx[1]+x[1]]
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
square_osc = nengo.Ensemble(1000, dimensions=2)
nengo.Connection(square_osc, square_osc, function=feedback, synapse=tau)
nengo.Connection(stim, square_osc)
square_osc_p = nengo.Probe(square_osc, synapse=tau)
sim = nengo.Simulator(model)
sim.run(2)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[square_osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[square_osc_p][:,0],sim.data[square_osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model) #do config
"""
Explanation: Note: This is not the original Lorenz attractor.
The original is $\dot{x}=[10x_1-10x_0, x_0 (28-x_2)-x_1, x_0 x_1 - {8 \over 3}(x_2)]$
Why change it to $\dot{x}=[10x_1-10x_0, -x_0 x_2-x_1, x_0 x_1 - {8 \over 3}(x_2+28)-28]$?
What's being changed here?
Oscillators with different paths
Since we can implement any function, we're not limited to linear oscillators
What about a "square" oscillator?
Instead of the value going in a circle, it traces out a square
$$
{\dot{x}} = \begin{cases}
[r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1>0 \
[-r, 0] &\mbox{if } |x_1|>|x_0| \wedge x_1<0 \
[0, -r] &\mbox{if } |x_1|<|x_0| \wedge x_0>0 \
[0, r] &\mbox{if } |x_1|<|x_0| \wedge x_0<0 \
\end{cases}
$$
End of explanation
"""
import nengo
model = nengo.Network('Heart Oscillator')
tau = 0.02
r=4
def feedback(x):
return [-tau*r*x[1]+x[0], tau*r*x[0]+x[1]]
def heart_shape(x):
theta = np.arctan2(x[1], x[0])
r = 2 - 2 * np.sin(theta) + np.sin(theta)*np.sqrt(np.abs(np.cos(theta)))/(np.sin(theta)+1.4)
return -r*np.cos(theta), r*np.sin(theta)
with model:
stim = nengo.Node(lambda t: [.5,.5] if t<.02 else [0,0])
heart_osc = nengo.Ensemble(1000, dimensions=2)
heart = nengo.Ensemble(100, dimensions=2, radius=4)
nengo.Connection(stim, heart_osc)
nengo.Connection(heart_osc, heart_osc, function=feedback, synapse=tau)
nengo.Connection(heart_osc, heart, function=heart_shape, synapse=tau)
heart_p = nengo.Probe(heart, synapse=tau)
sim = nengo.Simulator(model)
sim.run(4)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[heart_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[heart_p][:,0],sim.data[heart_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model) #do config
"""
Explanation: Does this do what you expect?
How is it affected by:
Number of neurons?
Post-synaptic time constant?
Decoding filter time constant?
Speed of oscillation (r)?
What about this shape?
End of explanation
"""
import nengo
from nengo.utils.functions import piecewise
model = nengo.Network('Controlled Oscillator')
tau = 0.1
freq = 20
def feedback(x):
return x[1]*x[2]*freq*tau+1.1*x[0], -x[0]*x[2]*freq*tau+1.1*x[1], 0
with model:
stim = nengo.Node(lambda t: [20,20] if t<.02 else [0,0])
freq_ctrl = nengo.Node(piecewise({0:1, 2:.5, 6:-1}))
ctrl_osc = nengo.Ensemble(500, dimensions=3)
nengo.Connection(ctrl_osc, ctrl_osc, function=feedback, synapse=tau)
nengo.Connection(stim, ctrl_osc[0:2])
nengo.Connection(freq_ctrl, ctrl_osc[2])
ctrl_osc_p = nengo.Probe(ctrl_osc, synapse=0.01)
sim = nengo.Simulator(model)
sim.run(8)
figure(figsize=(12,4))
subplot(1,2,1)
plot(sim.trange(), sim.data[ctrl_osc_p]);
xlabel('Time (s)')
ylabel('State value')
subplot(1,2,2)
plot(sim.data[ctrl_osc_p][:,0],sim.data[ctrl_osc_p][:,1])
xlabel('$x_0$')
ylabel('$x_1$');
from nengo_gui.ipython import IPythonViz
IPythonViz(model, "configs/controlled_oscillator.py.cfg")
"""
Explanation: We are doing things differently here
The actual $x$ value is a normal circle oscillator
The heart shape is a function of $x$
But that's just a different decoder
Would it be possible to do an oscillator where $x$ followed this shape?
How could we tell them apart in terms of neural behaviour?
Controlled Oscillator
Change the frequency of the oscillator on-the-fly
$\dot{x}=[x_1 x_2, -x_0 x_2]$
End of explanation
"""
|
amueller/scipy-2016-sklearn | notebooks/03 Data Representation for Machine Learning.ipynb | cc0-1.0 |
from sklearn.datasets import load_iris
iris = load_iris()
"""
Explanation: The use of watermark (above) is optional, and we use it to keep track of the changes while developing the tutorial material. (You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark).
SciPy 2016 Scikit-learn Tutorial
Representation and Visualization of Data
Machine learning is about fitting models to data; for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Data in scikit-learn
Data in scikit-learn, with very few exceptions, is assumed to be stored as a
two-dimensional array, of shape [n_samples, n_features]. Many algorithms also accept scipy.sparse matrices of the same shape.
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be Boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being "zeros" for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than NumPy arrays.
As we recall from the previous section (or Jupyter notebook), we represent samples (data points or instances) as rows in the data array, and we store the corresponding features, the "dimensions," as columns.
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the iris data stored by scikit-learn.
The data consist of measurements of iris flowers. There are three different species of iris
in this particular dataset, as illustrated below:
Iris Setosa
<img src="figures/iris_setosa.jpg" width="50%">
Iris Versicolor
<img src="figures/iris_versicolor.jpg" width="50%">
Iris Virginica
<img src="figures/iris_virginica.jpg" width="50%">
Quick Question:
Let's assume that we are interested in categorizing new observations; we want to predict whether unknown flowers are Iris-Setosa, Iris-Versicolor, or Iris-Virginica flowers. Based on what we've discussed in the previous section, how would we construct such a dataset?
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number j must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-learn
For future experiments with machine learning algorithms, we recommend you to bookmark the UCI machine learning repository, which hosts many of the commonly used datasets that are useful for benchmarking machine learning algorithms -- a very popular resource for machine learning practitioners and researchers. Conveniently, some of these datasets are already included in scikit-learn so that we can skip the tedious parts of downloading, reading, parsing, and cleaning these text/CSV files. You can find a list of available datasets in scikit-learn at: http://scikit-learn.org/stable/datasets/#toy-datasets.
For example, scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
<img src="figures/petal_sepal.jpg" alt="Sepal" style="width: 50%;"/>
(Image: "Petal-sepal". Licensed under CC BY-SA 3.0 via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg#/media/File:Petal-sepal.jpg)
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
"""
iris.keys()
"""
Explanation: The resulting dataset is a Bunch object: you can see what's available using
the method keys():
End of explanation
"""
n_samples, n_features = iris.data.shape
print('Number of samples:', n_samples)
print('Number of features:', n_features)
# the sepal length, sepal width, petal length and petal width of the first sample (first flower)
print(iris.data[0])
"""
Explanation: The features of each sample flower are stored in the data attribute of the dataset:
End of explanation
"""
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
import numpy as np
np.bincount(iris.target)
"""
Explanation: The information about the class of each sample is stored in the target attribute of the dataset:
End of explanation
"""
print(iris.target_names)
"""
Explanation: Using NumPy's bincount function (above), we can see that the classes are distributed uniformly in this dataset - there are 50 flowers from each species, where
class 0: Iris-Setosa
class 1: Iris-Versicolor
class 2: Iris-Virginica
These class names are stored in the last attribute, namely target_names:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
x_index = 3
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.hist(iris.data[iris.target==label, x_index],
label=iris.target_names[label],
color=color)
plt.xlabel(iris.feature_names[x_index])
plt.legend(loc='upper right')
plt.show()
x_index = 3
y_index = 0
colors = ['blue', 'red', 'green']
for label, color in zip(range(len(iris.target_names)), colors):
plt.scatter(iris.data[iris.target==label, x_index],
iris.data[iris.target==label, y_index],
label=iris.target_names[label],
c=color)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.legend(loc='upper left')
plt.show()
"""
Explanation: This data is four-dimensional, but we can visualize one or two of the dimensions
at a time using a simple histogram or scatter-plot. Again, we'll start by enabling
matplotlib inline mode:
End of explanation
"""
import pandas as pd
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
pd.plotting.scatter_matrix(iris_df, figsize=(8, 8));
"""
Explanation: Quick Exercise:
Change x_index and y_index in the above script
and find a combination of two parameters
which maximally separate the three classes.
This exercise is a preview of dimensionality reduction, which we'll see later.
An aside: scatterplot matrices
Instead of looking at the data one plot at a time, a common tool that analysts use is called the scatterplot matrix.
Scatterplot matrices show scatter plots between all features in the data set, as well as histograms to show the distribution of each feature.
End of explanation
"""
from sklearn import datasets
"""
Explanation: Other Available Data
Scikit-learn makes available a host of datasets for testing learning algorithms.
They come in three flavors:
Packaged Data: these small datasets are packaged with the scikit-learn installation,
and can be downloaded using the tools in sklearn.datasets.load_*
Downloadable Data: these larger datasets are available for download, and scikit-learn
includes tools which streamline this process. These tools can be found in
sklearn.datasets.fetch_*
Generated Data: there are several datasets which are generated from models based on a
random seed. These are available in the sklearn.datasets.make_*
You can explore the available dataset loaders, fetchers, and generators using IPython's
tab-completion functionality. After importing the datasets submodule from sklearn,
type
datasets.load_<TAB>
or
datasets.fetch_<TAB>
or
datasets.make_<TAB>
to see a list of available functions.
End of explanation
"""
from sklearn.datasets import get_data_home
get_data_home()
"""
Explanation: The data downloaded using the fetch_ scripts are stored locally,
within a subdirectory of your home directory.
You can use the following to determine where it is:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
n_samples, n_features = digits.data.shape
print((n_samples, n_features))
print(digits.data[0])
print(digits.target)
"""
Explanation: Be warned: many of these datasets are quite large, and can take a long time to download!
If you start a download within the IPython notebook
and you want to kill it, you can use ipython's "kernel interrupt" feature, available in the menu or using
the shortcut Ctrl-m i.
You can press Ctrl-m h for a list of all ipython keyboard shortcuts.
Loading Digits Data
Now we'll take a look at another dataset, one where we have to put a bit
more thought into how to represent the data. We can explore the data in
a similar manner as above:
End of explanation
"""
print(digits.data.shape)
print(digits.images.shape)
"""
Explanation: The target here is just the digit represented by the data. The data is an array of
length 64... but what does this data mean?
There's a clue in the fact that we have two versions of the data array:
data and images. Let's take a look at them:
End of explanation
"""
import numpy as np
print(np.all(digits.images.reshape((1797, 64)) == digits.data))
"""
Explanation: We can see that they're related by a simple reshaping:
End of explanation
"""
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
"""
Explanation: Let's visualize the data. It's little bit more involved than the simple scatter-plot
we used above, but we can do it rather quickly.
End of explanation
"""
from sklearn.datasets import make_s_curve
data, colors = make_s_curve(n_samples=1000)
print(data.shape)
print(colors.shape)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter(data[:, 0], data[:, 1], data[:, 2], c=colors)
ax.view_init(10, -60)
"""
Explanation: We see now what the features mean. Each feature is a real-valued quantity representing the
darkness of a pixel in an 8x8 image of a hand-written digit.
Even though each sample has data that is inherently two-dimensional, the data matrix flattens
this 2D data into a single vector, which can be contained in one row of the data matrix.
Generated Data: the S-Curve
One dataset often used as an example of a simple nonlinear dataset is the S-curve:
End of explanation
"""
from sklearn.datasets import fetch_olivetti_faces
# fetch the faces data
# Use a script like above to plot the faces image data.
# hint: plt.cm.bone is a good colormap for this data
"""
Explanation: This example is typically used with an unsupervised learning method called Locally
Linear Embedding. We'll explore unsupervised learning in detail later in the tutorial.
Exercise: working with the faces dataset
Here we'll take a moment for you to explore the datasets yourself.
Later on we'll be using the Olivetti faces dataset.
Take a moment to fetch the data (about 1.4MB), and visualize the faces.
You can copy the code used to visualize the digits above, and modify it for this data.
End of explanation
"""
# %load solutions/03A_faces_plot.py
"""
Explanation: Solution:
End of explanation
"""
|
itoledoc/python_coffee | .ipynb_checkpoints/itoledoc_coffee-checkpoint.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
"""
Explanation: Python Coffee, November 5, 2015
Import required libraries
End of explanation
"""
import plotly.tools as tls
import plotly.plotly as py
import cufflinks as cf
import plotly
plotly.offline.init_notebook_mode()
cf.offline.go_offline()
"""
Explanation: The previous import code requires that you have pandas, numpy and matplotlib installed. If you are using conda
you already have all of these libraries installed. Otherwise, use pip to install them. The magic command %matplotlib inline loads the required variables and tools needed to embed matplotlib figures in an IPython notebook.
Import optional libraries to use plotly.
Plot.ly is a cloud-based visualization tool with a mature Python API. It is very useful for creating professional-looking, interactive plots that are
shared publicly in the cloud, so be careful to publish only data that you want (and can) share.
Installing plot.ly is done easily with pip or conda, but it requires you to create an account and then request an API token. If you don't want to install it, you can skip this section.
End of explanation
"""
df = pd.read_csv('data_files/baseline_channels_phase.txt', sep=' ')
"""
Explanation: Import data file with pandas
End of explanation
"""
df.head()
"""
Explanation: df is an instance of the pandas object (data structure) pandas.DataFrame. A DataFrame instance has several methods (functions) to operate over the object. For example, it is easy to display the data for a first exploration of what it contains using .head()
End of explanation
"""
df.values
"""
Explanation: A DataFrame can be converted into a numpy array by using the .values attribute:
End of explanation
"""
df.iloc[0,1]
"""
Explanation: For numpy experts, there are also methods to access the data using numpy conventions. If you want to extract the data at coordinate (0,1) you can do:
End of explanation
"""
df.loc[3, 'ant1name']
"""
Explanation: But you can also use the column names and index keys to extract, for example, the name of the first antenna in a baseline pair from row 3:
End of explanation
"""
data_group = df.groupby(['ant1name', 'ant2name'])
df2 = data_group.agg({'freq': np.mean, 'chan': np.count_nonzero}).reset_index()
df2.head()
data_raw = df.groupby(['ant1name', 'ant2name', 'chan']).y.mean()
data_raw.head(30)
data_raw.unstack().head(20)
pd.options.display.max_columns = 200
data_raw.unstack().head(20)
data_raw = data_raw.unstack().reset_index()
data_raw.head()
data_raw.to_excel('test.xls', index=False)
todegclean = np.degrees(np.arcsin(np.sin(np.radians(data_raw.iloc[:,2:]))))
todegclean.head()
todegclean['mean'] = todegclean.mean(axis=1)
todegclean.head()
data_clean = todegclean.iloc[:,:-1].apply(lambda x: x - todegclean.iloc[:,-1])
data_clean.head(20)
data_ready = pd.merge(data_raw[['ant1name', 'ant2name']], todegclean, left_index=True, right_index=True)
data_ready.head()
"""
Explanation: DataFrames are objects containing tabular data that can be grouped by columns and then used to aggregate data. Let's say you want to obtain the mean frequency for the baselines and the number of channels used:
End of explanation
"""
data_clean2 = data_clean.unstack().reset_index().copy()
data_clean2.query('100 < level_1 < 200')
data_clean2.query('100 < level_1 < 200').iplot(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6,
title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',
layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})
ploting = data_clean2.query('100 < level_1 < 200').figure(kind='scatter3d', x='chan', y='level_1', mode='markers', z=0, size=6,
title='Phase BL', filename='phase_test', width=1, opacity=0.8, colors='blue', symbol='circle',
layout={'scene': {'aspectratio': {'x': 1, 'y': 3, 'z': 0.7}}})
# ploting
ploting.data[0]['marker']['color'] = 'blue'
ploting.data[0]['marker']['line'] = {'color': 'blue', 'width': 0.5}
ploting.data[0]['marker']['opacity'] = 0.5
plotly.offline.iplot(ploting)
"""
Explanation: Plot.ly
End of explanation
"""
fig=plt.figure()
ax=fig.gca(projection='3d')
X = np.arange(0, data_clean.shape[1],1)
Y = np.arange(0, data_clean.shape[0],1)
X, Y = np.meshgrid(X,Y)
surf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')
%matplotlib notebook
fig=plt.figure()
ax=fig.gca(projection='3d')
X = np.arange(0, data_clean.shape[1],1)
Y = np.arange(0, data_clean.shape[0],1)
X, Y = np.meshgrid(X,Y)
surf = ax.scatter(X, Y, data_clean, '.', c=data_clean,s=2,lw=0,cmap='winter')
data_clean2.plot(kind='scatter', x='chan', y=0)
import seaborn as sns
data_clean2.plot(kind='scatter', x='level_1', y=0)
data_ready['noise'] = todegclean.iloc[:,2:].std(axis=1)
data_ready[['ant1name', 'ant2name', 'noise']].head(10)
corr = data_ready[['ant1name', 'ant2name', 'noise']].pivot_table(index=['ant1name'], columns=['ant2name'])
corr.columns.levels[1]
corr2 = pd.DataFrame(corr.values, index=corr.index.values, columns=corr.columns.levels[1].values)
corr2.head(10)
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr2, cmap=cmap,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
?sns.heatmap
"""
Explanation: Matplotlib
End of explanation
"""
|
5agado/data-science-learning | deep learning/StyleGAN/StyleGAN - Explore Directions.ipynb | apache-2.0 | is_stylegan_v1 = False
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
from datetime import datetime
from tqdm import tqdm
# ffmpeg installation location, for creating videos
plt.rcParams['animation.ffmpeg_path'] = str('/usr/bin/ffmpeg')
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
from IPython.display import display
from ipywidgets import Button
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
%load_ext autoreload
%autoreload 2
# StyleGAN2 Repo
sys.path.append('/tf/notebooks/stylegan2')
# StyleGAN Utils
from stylegan_utils import load_network, gen_image_fun, synth_image_fun, create_video
# v1 override
if is_stylegan_v1:
from stylegan_utils import load_network_v1 as load_network
from stylegan_utils import gen_image_fun_v1 as gen_image_fun
from stylegan_utils import synth_image_fun_v1 as synth_image_fun
import run_projector
import projector
import training.dataset
import training.misc
# Data Science Utils
sys.path.append(os.path.join(*[os.pardir]*3, 'data-science-learning'))
from ds_utils import generative_utils
res_dir = Path.home() / 'Documents/generated_data/stylegan'
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-Network" data-toc-modified-id="Load-Network-1"><span class="toc-item-num">1 </span>Load Network</a></span></li><li><span><a href="#Explore-Directions" data-toc-modified-id="Explore-Directions-2"><span class="toc-item-num">2 </span>Explore Directions</a></span><ul class="toc-item"><li><span><a href="#Interactive" data-toc-modified-id="Interactive-2.1"><span class="toc-item-num">2.1 </span>Interactive</a></span></li></ul></li><li><span><a href="#Ganspace" data-toc-modified-id="Ganspace-3"><span class="toc-item-num">3 </span>Ganspace</a></span></li></ul></div>
End of explanation
"""
MODELS_DIR = Path.home() / 'Documents/models/stylegan2'
MODEL_NAME = 'original_ffhq'
SNAPSHOT_NAME = 'stylegan2-ffhq-config-f'
Gs, Gs_kwargs, noise_vars = load_network(str(MODELS_DIR / MODEL_NAME / SNAPSHOT_NAME) + '.pkl')
Z_SIZE = Gs.input_shape[1]
IMG_SIZE = Gs.output_shape[2:]
IMG_SIZE
img = gen_image_fun(Gs, np.random.randn(1, Z_SIZE), Gs_kwargs, noise_vars)
plt.imshow(img)
"""
Explanation: Load Network
End of explanation
"""
def plot_direction_grid(dlatent, direction, coeffs):
fig, ax = plt.subplots(1, len(coeffs), figsize=(15, 10), dpi=100)
for i, coeff in enumerate(coeffs):
new_latent = (dlatent.copy() + coeff*direction)
ax[i].imshow(synth_image_fun(Gs, new_latent, Gs_kwargs, randomize_noise=False))
ax[i].set_title(f'Coeff: {coeff:0.1f}')
ax[i].axis('off')
plt.show()
# load learned direction
direction = np.load('/tf/media/datasets/stylegan/learned_directions.npy')
nb_latents = 5
# generate dlatents from mapping network
dlatents = Gs.components.mapping.run(np.random.randn(nb_latents, Z_SIZE), None, truncation_psi=1.)
for i in range(nb_latents):
plot_direction_grid(dlatents[i:i+1], direction, np.linspace(-2, 2, 5))
"""
Explanation: Explore Directions
End of explanation
"""
# Setup plot image
dpi = 100
fig, ax = plt.subplots(dpi=dpi, figsize=(7, 7))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0, wspace=0)
plt.axis('off')
im = ax.imshow(gen_image_fun(Gs, np.random.randn(1, Z_SIZE),Gs_kwargs, noise_vars, truncation_psi=1))
#prevent any output for this cell
plt.close()
# fetch attributes names
directions_dir = MODELS_DIR / MODEL_NAME / 'directions' / 'set01'
attributes = [e.stem for e in directions_dir.glob('*.npy')]
# get names or projected images
data_dir = res_dir / 'projection' / MODEL_NAME / SNAPSHOT_NAME / ''
entries = [p.name for p in data_dir.glob("*") if p.is_dir()]
entries.remove('tfrecords')
# set target latent to play with
#dlatents = Gs.components.mapping.run(np.random.randn(1, Z_SIZE), None, truncation_psi=0.5)
#target_latent = dlatents[0:1]
#target_latent = np.array([np.load("/out_4/image_latents2000.npy")])
%matplotlib inline
@interact
def i_direction(attribute=attributes,
entry=entries,
coeff=(-10., 10.)):
direction = np.load(directions_dir / f'{attribute}.npy')
target_latent = np.array([np.load(data_dir / entry / "image_latents1000.npy")])
new_latent_vector = target_latent.copy() + coeff*direction
im.set_data(synth_image_fun(Gs, new_latent_vector, Gs_kwargs, True))
ax.set_title('Coeff: %0.1f' % coeff)
display(fig)
dest_dir = Path("C:/tmp/tmp_mona")
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
fig.savefig(dest_dir / (timestamp + '.png'), bbox_inches='tight')
"""
Explanation: Interactive
End of explanation
"""
|
Danghor/Formal-Languages | Python/Regexp-Tutorial.ipynb | gpl-2.0 | import re
"""
Explanation: Regular Expressions in Python (A Short Tutorial)
This is a tutorial showing how regular expressions are supported in Python.
The assumption is that the reader already has a grasp of the concept of
regular expressions as it is taught in lectures
on formal languages, for example in
Formal Languages and Their Application, but does not know how regular expressions are supported in Python.
In Python, regular expressions are not part of the core language but are rather implemented in the module re. This module is part of the Python standard library and therefore there is no need
to install this module. The full documentation of this module can be found at
https://docs.python.org/3/library/re.html.
End of explanation
"""
re.findall('a', 'abcabcABC')
"""
Explanation: Regular expressions are strings that describe <em style="color:blue">languages</em>, where a
<em style="color:blue">language</em> is defined as a <em style="color:blue">set of strings</em>.
In the following, let us assume that $\Sigma$ is the set of all Unicode characters and $\Sigma^*$ is the set
of strings consisting of Unicode characters. We will define the set $\textrm{RegExp}$ of regular expressions inductively.
In order to define the meaning of a regular expression $r$ we define a function
$$\mathcal{L}:\textrm{RegExp} \rightarrow 2^{\Sigma^*} $$
such that $\mathcal{L}(r)$ is the <em style="color:blue">language</em> specified by the regular expression $r$.
In order to demonstrate how regular expressions work we will use the function findall from the module
re. This function is called in the following way:
$$ \texttt{re.findall}(r, s, \textrm{flags}=0) $$
Here, the arguments are interpreted as follows:
- $r$ is a string that is interpreted as a regular expression,
- $s$ is a string that is to be searched by $r$, and
- $\textrm{flags}$ is an optional argument of type int which is set to $0$ by default.
This argument is useful to set flags that might be used to alter the interpretation of the regular
expression $r$.
For example, if the flag re.IGNORECASE is set, then the search performed by findall is not
case sensitive.
The function findall returns a list of those non-overlapping substrings of the string $s$ that
match the regular expression $r$. In the following example, the regular expression $r$ searches
for the letter a and since the string $s$ contains the character a two times, findall returns a
list with two occurrences of a:
End of explanation
"""
re.findall('a', 'abcabcABC', re.IGNORECASE)
"""
Explanation: In the next example, the flag re.IGNORECASE is set and hence the function call returns a list of length 3.
End of explanation
"""
re.findall('a', 'abaa')
"""
Explanation: To begin our definition of the set $\textrm{RegExp}$ of Python regular expressions, we first have to define
the set $\textrm{MetaChars}$ of all <em style="color:blue">meta-characters</em>:
MetaChars := { '.', '^', '$', '*', '+', '?', '{', '}', '[', ']', '\', '|', '(', ')' }
These characters are used as <em style="color:blue">operator symbols</em> or as
part of operator symbols inside of regular expressions.
Now we can start our inductive definition of regular expressions:
- Any Unicode character $c$ such that $c \not\in \textrm{MetaChars}$ is a regular expression.
The regular expression $c$ matches the character $c$, i.e. we have
$$ \mathcal{L}(c) = \{ c \}. $$
- If $c$ is a meta character, i.e. we have $c \in \textrm{MetaChars}$, then the string $\backslash c$
is a regular expression matching the meta-character $c$, i.e. we have
$$ \mathcal{L}(\backslash c) = \{ c \}. $$
End of explanation
"""
re.findall(r'\+', '+-+')
"""
Explanation: In the following example we have to use <em style="color:blue">raw strings</em> in order to prevent
the backslash character from being mistaken as an <em style="color:blue">escape sequence</em>. A string is a
<em style="color:blue">raw string</em> if the opening quote character is preceded with the character
r.
End of explanation
"""
re.findall(r'the', 'The horse, the dog, and the cat.', flags=re.IGNORECASE)
"""
Explanation: Concatenation
The next rule shows how regular expressions can be <em style="color:blue">concatenated</em>:
- If $r_1$ and $r_2$ are regular expressions, then $r_1r_2$ is a regular expression. This
regular expression matches any string $s$ that can be split into two substrings $s_1$ and $s_2$
such that $r_1$ matches $s_1$ and $r_2$ matches $s_2$. Formally, we have
$$\mathcal{L}(r_1r_2) :=
\bigl\{ s_1s_2 \mid s_1 \in \mathcal{L}(r_1) \wedge s_2 \in \mathcal{L}(r_2) \bigr\}.
$$
In the lecture notes we have used the notation $r_1 \cdot r_2$ instead of the Python notation $r_1r_2$.
Using concatenation of regular expressions, we can now find words.
End of explanation
"""
re.findall(r'The|a', 'The horse, the dog, and a cat.', flags=re.IGNORECASE)
"""
Explanation: Choice
Regular expressions provide the operator | that can be used to choose between
<em style="color:blue">alternatives:</em>
- If $r_1$ and $r_2$ are regular expressions, then $r_1|r_2$ is a regular expression. This
regular expression matches any string $s$ that can is matched by either $r_1$ or $r_2$.
Formally, we have
$$\mathcal{L}(r_1|r_2) := \mathcal{L}(r_1) \cup \mathcal{L}(r_2). $$
In the lecture notes we have used the notation $r_1 + r_2$ instead of the Python notation $r_1|r_2$.
End of explanation
"""
re.findall(r'a+', 'abaabaAaba.', flags=re.IGNORECASE)
"""
Explanation: Quantifiers
The most interesting regular expression operators are the <em style="color:blue">quantifiers</em>.
The official documentation calls them <em style="color:blue">repetition qualifiers</em> but in this notebook
they are called quantifiers, since this is shorter. Syntactically, quantifiers are
<em style="color:blue">postfix operators</em>.
- If $r$ is a regular expression, then $r+$ is a regular expression. This
regular expression matches any string $s$ that can be split into a list of $n$ substrings $s_1$,
$s_2$, $\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \in \{1,\cdots,n\}$.
Formally, we have
$$\mathcal{L}(r+) :=
\Bigl\{ s \Bigm| \exists n \in \mathbb{N}: \bigl(n \geq 1 \wedge
\exists s_1,\cdots,s_n : (s_1 \cdots s_n = s \wedge
\forall i \in \{1,\cdots, n\}: s_i \in \mathcal{L}(r))\bigr)
\Bigr\}.
$$
Informally, $r+$ matches $r$ any positive number of times.
End of explanation
"""
re.findall(r'a*', 'abaabbaaaba')
"""
Explanation: If $r$ is a regular expression, then $r*$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that can be split into a list of $n$ substrings $s_1$,
$s_2$, $\cdots$, $s_n$ such that $r$ matches $s_i$ for all $i \in \{1,\cdots,n\}$.
Formally, we have
$$\mathcal{L}(r*) := \bigl\{ \texttt{''} \bigr\} \cup
\Bigl\{ s \Bigm| \exists n \in \mathbb{N}: \bigl(n \geq 1 \wedge
\exists s_1,\cdots,s_n : (s_1 \cdots s_n = s \wedge
\forall i \in \{1,\cdots, n\}: s_i \in \mathcal{L}(r))\bigr)
\Bigr\}.
$$
Informally, $r*$ matches $r$ any number of times, including zero times. Therefore, in the following example the result also contains various empty strings. For example, in the string 'abaabaaaba' the regular expression a* will find an empty string at the beginning of each occurrence of the character 'b'. The final occurrence of the empty string is found at the end of the string:
End of explanation
"""
re.findall(r'a?', 'abaa')
"""
Explanation: If $r$ is a regular expression, then $r?$ is a regular expression. This
regular expression matches either the empty string or any string $s$ that is matched by $r$. Formally, we have
$$\mathcal{L}(r?) := \bigl\{ \texttt{''} \bigr\} \cup \mathcal{L}(r). $$
Informally, $r?$ matches $r$ at most one times but also zero times. Therefore, in the following example the result also contains two empty strings. One of these is found at the beginning of the character 'b', the second is found at the end of the string.
End of explanation
"""
re.findall(r'a{2,3}', 'aaaa')
"""
Explanation: If $r$ is a regular expression and $m,n\in\mathbb{N}$ such that $m \leq n$, then $r\{m,n\}$ is a
regular expression. This regular expression matches any number $k$ of repetitions of $r$ such that $m \leq k \leq n$.
Formally, we have
$$\mathcal{L}(r\{m,n\}) =
\Bigl\{ s \mid \exists k \in \mathbb{N}: \bigl(m \leq k \leq n \wedge
\exists s_1,\cdots,s_k : (s_1 \cdots s_k = s \wedge
\forall i \in \{1,\cdots, k\}: s_i \in \mathcal{L}(r))\bigr)
\Bigr\}.
$$
Informally, $r\{m,n\}$ matches $r$ at least $m$ times and at most $n$ times.
End of explanation
"""
re.findall(r'a{2}', 'aabaaaba')
"""
Explanation: Above, the regular expression r'a{2,3}' matches the string 'aaaa' only once since the first match consumes three occurrences of a and then there is only a single a left.
If $r$ is a regular expression and $n\in\mathbb{N}$, then $r\{n\}$ is a regular expression. This regular expression matches exactly $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r\{n\}) = \mathcal{L}(r\{n,n\}).$$
End of explanation
"""
re.findall(r'a{,2}', 'aabaaaba')
"""
Explanation: If $r$ is a regular expression and $n\in\mathbb{N}$, then $r\{,n\}$ is a regular expression. This regular expression matches up to $n$ repetitions of $r$. Formally, we have
$$\mathcal{L}(r\{,n\}) = \mathcal{L}(r\{0,n\}).$$
End of explanation
"""
re.findall(r'a{2,}', 'aabaaaba')
"""
Explanation: If $r$ is a regular expression and $n\in\mathbb{N}$, then $r\{n,\}$ is a regular expression. This regular expression matches $n$ or more repetitions of $r$. Formally, we have
$$\mathcal{L}(r\{n,\}) = \mathcal{L}(r\{n\}r*).$$
End of explanation
"""
re.findall(r'a{2,3}?', 'aaaa'), re.findall(r'a{2,3}', 'aaaa')
"""
Explanation: Non-Greedy Quantifiers
The quantifiers ?, +, *, {m,n}, {n}, {,n}, and {n,} are <em style="color:blue">greedy</em>, i.e. they
match the longest possible substrings. Suffixing these operators with the character ? makes them
<em style="color:blue">non-greedy</em>. For example, the regular expression a{2,3}? matches either
two occurrences or three occurrences of the character a but will prefer to match only two characters. Hence, the regular expression a{2,3}? will find two matches in the string 'aaaa', while the regular expression a{2,3} only finds a single match.
End of explanation
"""
re.findall(r'[abc]+', 'abcdcba')
"""
Explanation: Character Classes
In order to match a set of characters we can use a <em style="color:blue">character class</em>.
If $c_1$, $\cdots$, $c_n$ are Unicode characters, then $[c_1\cdots c_n]$ is a regular expression that
matches any of the characters from the set ${c_1,\cdots,c_n}$:
$$ \mathcal{L}\bigl([c_1\cdots c_n]\bigr) := \{ c_1, \cdots, c_n \} $$
End of explanation
"""
re.findall(r'[1-9][0-9]*|0', '11 abc 12 2345 007 42 0')
"""
Explanation: Character classes can also contain <em style="color:blue">ranges</em>. Syntactically, a range has the form
$c_1\texttt{-}c_2$, where $c_1$ and $c_2$ are Unicode characters.
For example, the regular expression [0-9] contains the range 0-9 and matches any decimal digit. To find all natural numbers embedded in a string we could use the regular expression [1-9][0-9]*|[0-9]. This regular expression matches either a single digit or a string that starts with a non-zero digit and is followed by any number of digits.
End of explanation
"""
re.findall(r'[0-9]|[1-9][0-9]*', '11 abc 12 2345 007 42 0')
"""
Explanation: Note that the next example looks quite similar but gives a different result:
End of explanation
"""
re.findall(r'[\dabc]+', '11 abc12 1a2 2b3c4d5')
"""
Explanation: Here, the regular expression starts with the alternative [0-9], which matches any single digit.
So once a digit is found, the resulting substring is returned and the search starts again. Therefore, if this regular expression is used in findall, it will only return a list of single digits.
There are some predefined character classes:
- \d matches any digit.
- \D matches any non-digit character.
- \s matches any whitespace character.
- \S matches any non-whitespace character.
- \w matches any alphanumeric character.
If we would use only <font style="font-variant: small-caps">Ascii</font> characters this would
be equivalent to the character class [0-9a-zA-Z_].
- \W matches any non-alphanumeric character.
- \b matches at a word boundary. The string that is matched is the empty string.
- \B matches at any place that is not a word boundary.
Again, the string that is matched is the empty string.
These escape sequences can also be used inside of square brackets.
End of explanation
"""
re.findall(r'[^abc]+', 'axyzbuvwchij')
re.findall(r'\b\w+\b', 'This is some text where we want to count the words.')
"""
Explanation: Character classes can be negated if the first character after the opening [ is the character ^.
For example, [^abc] matches any character that is different from a, b, or c.
End of explanation
"""
re.findall(r'\b(0|[1-9][0-9]*)\b', '11 abc 12 2345 007 42 0')
"""
Explanation: The following regular expression uses the character class \b to isolate numbers. Note that we had to use parentheses since concatenation of regular expressions binds stronger than the choice operator |.
End of explanation
"""
re.findall(r'(\d+)\s+\1', '12 12 23 23 17 18')
"""
Explanation: Grouping
If $r$ is a regular expression, then $(r)$ is a regular expression describing the same language as
$r$. There are two reasons for using parentheses:
- Parentheses can be used to override the precedence of an operator.
This concept is the same as in programming languages. For example, the regular expression ab+
matches the character a followed by any positive number of occurrences of the character b because
the precedence of a quantifier is higher than the precedence of concatenation of regular expressions.
However, (ab)+ matches the strings ab, abab, ababab, and so on.
- Parentheses can be used for <em style="color:blue">back-references</em> because inside
a regular expression we can refer to the substring matched by a regular expression enclosed in a pair of
parentheses using the syntax $\backslash n$ where $n \in \{1,\cdots,9\}$.
Here, $\backslash n$ refers to the $n$th parenthesized <em style="color:blue">group</em> in the regular
expression, where a group is defined as any part of the regular expression enclosed in parentheses.
Counting starts with the left parenthesis. For example, the regular expression
(a(b|c)*d)?ef(gh)+
has three groups:
1. (a(b|c)*d) is the first group,
2. (b|c) is the second group, and
3. (gh) is the third group.
For example, if we want to recognize a string that starts with a number followed by some white space and then
followed by the <b>same</b> number, we can use the regular expression (\d+)\s+\1.
End of explanation
"""
re.findall(r'c.*?t', 'ct cat caat could we look at that!')
"""
Explanation: In general, given a digit $n$, the expression $\backslash n$ refers to the string matched in the $n$-th group of the regular expression.
The Dot
The regular expression . matches any character except the newline. For example, c.*?t matches any string that starts with the character c and ends with the character t and does not contain any newline. If we are using the non-greedy version of the quantifier *, we can find all such words in the string below.
End of explanation
"""
data = \
'''
This is a text containing five lines, two of which are empty.
This is the second non-empty line,
and this is the third non-empty line.
'''
re.findall(r'^.*$', data, flags=re.MULTILINE)
"""
Explanation: The dot . does not have any special meaning when used inside a character range. Hence, the regular expression
[.] matches only the character ..
Start and End of a Line
The regular expression ^ matches at the start of a string. If we set the flag re.MULTILINE, which we
will usually do when working with this regular expression containing the expression ^,
then ^ also matches at the beginning of each line,
i.e. it matches after every newline character.
Similarly, the regular expression $ matches at the end of a string. If we set the flag re.MULTILINE, then $ also matches at the end of each line,
i.e. it matches before every newline character.
End of explanation
"""
text = 'Here is 1$, here are 21 €, and there are 42 $.'
L = re.findall(r'([0-9]+)(?=\s*\$)', text)
print(f'L = {L}')
sum(int(x) for x in L)
"""
Explanation: Lookahead Assertions
Sometimes we need to look ahead in order to know whether we have found what we are looking for. Consider the case that you want to add up all numbers followed by a dollar symbol but you are not interested in any other numbers. In this case a
lookahead assertion comes in handy. The syntax of a lookahead assertion is:
$$ r_1 (\texttt{?=}r_2) $$
Here $r_1$ and $r_2$ are regular expressions and ?= is the <em style="color:blue">lookahead operator</em>. $r_1$ is the regular expression you are searching for, while $r_2$ is the regular expression describing the lookahead. Note that the lookahead itself is not part of the match: it is only checked that $r_1$ is followed by $r_2$, and only the text matching $r_1$ is consumed. Syntactically, the
lookahead $r_2$ has to be preceded by the lookahead operator and both have to be surrounded by parentheses.
In the following example we are looking for all numbers that are followed by dollar symbols and we sum these numbers up.
End of explanation
"""
text = 'Here is 1$, here are 21 €, and there are 42 $.'
L = re.findall(r'[0-9]+(?![0-9]*\s*\$)', text)
print(f'L = {L}')
sum(int(x) for x in L)
"""
Explanation: There are also <em style="color:blue">negative lookahead assertions</em>. The syntax is:
$$ r_1 (\texttt{?!}r_2) $$
Here $r_1$ and $r_2$ are regular expressions and ?! is the <em style="color:blue">negative lookahead operator</em>.
The expression above checks for all occurrences of $r_1$ that are <b>not</b> followed by $r_2$.
In the following example we sum up all numbers that are <u>not</u> followed by a dollar symbol.
Note that the lookahead expression has to ensure that there are no additional digits. In general, negative lookahead is very tricky and I recommend against using it.
End of explanation
"""
with open('alice.txt', 'r') as f:
text = f.read()
print(text[:1020])
"""
Explanation: Examples
In order to have some strings to play with, let us read the file alice.txt, which contains the book
Alice's Adventures in Wonderland written by
Lewis Carroll.
End of explanation
"""
len(re.findall(r'^.*[^\s].*?$', text, flags=re.MULTILINE))
"""
Explanation: How many non-empty lines does this story have?
End of explanation
"""
set(re.findall(r'\b[dfs]\w{2}[kt]\b', text, flags=re.IGNORECASE))
"""
Explanation: Next, let us check whether this text is suitable for minors. In order to do so we search for all four
letter words that start with either d, f or s and end with k or t.
End of explanation
"""
L = re.findall(r'\b\w+\b', text.lower())
S = set(L)
print(f'There are {len(L)} words in this book and {len(S)} different words.')
"""
Explanation: How many words are in this text and how many different words are used?
End of explanation
"""
counts = pd.Series([632, 1638, 569, 115])
counts
"""
Explanation: Data Preparation using pandas
An initial step in statistical data analysis is the preparation of the data to be used in the analysis. In practice, ~~a little~~ ~~some~~ ~~much~~ the majority of the actual time spent on a statistical modeling project is typically devoted to importing, cleaning, validating and transforming the dataset.
This section will introduce pandas, an important third-party Python package for data analysis, as a tool for data preparation, and provide some general advice for what should or should not be done to data before it is analyzed.
Introduction to pandas
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It is a fundamental high-level building block for doing practical, real world data analysis in Python.
pandas is well suited for:
Tabular data with heterogeneously-typed columns, as you might find in an SQL table or Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data with row and column labels
Virtually any statistical dataset, labeled or unlabeled, can be converted to a pandas data structure for cleaning, transformation, and analysis.
Key features
Easy handling of missing data
Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically
Powerful, flexible group by functionality to perform split-apply-combine operations on data sets
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes
Robust IO tools for loading data from flat files, Excel files, databases, and HDF5
Time series functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
Series
A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector.
End of explanation
"""
counts.values
counts.index
"""
Explanation: If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object.
End of explanation
"""
bacteria = pd.Series([632, 1638, 569, 115],
index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'])
bacteria
"""
Explanation: We can assign meaningful labels to the index, if they are available. These counts are of bacteria taxa constituting the microbiome of hospital patients, so using the taxon of each bacterium is a useful index.
End of explanation
"""
bacteria['Actinobacteria']
bacteria[bacteria.index.str.endswith('bacteria')]
'Bacteroidetes' in bacteria
"""
Explanation: These labels can be used to refer to the values in the Series.
End of explanation
"""
bacteria[0]
"""
Explanation: Notice that the indexing operation preserved the association between the values and the corresponding indices.
We can still use positional indexing if we wish.
End of explanation
"""
bacteria.name = 'counts'
bacteria.index.name = 'phylum'
bacteria
"""
Explanation: We can give both the array of values and the index meaningful labels themselves:
End of explanation
"""
np.log(bacteria)
"""
Explanation: NumPy's math functions and other operations can be applied to Series without losing the data structure.
End of explanation
"""
bacteria[bacteria>1000]
"""
Explanation: We can also filter according to the values in the Series:
End of explanation
"""
bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115}
bact = pd.Series(bacteria_dict)
bact
"""
Explanation: A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict:
End of explanation
"""
bacteria2 = pd.Series(bacteria_dict,
index=['Cyanobacteria','Firmicutes','Proteobacteria','Actinobacteria'])
bacteria2
bacteria2.isnull()
"""
Explanation: Notice that the Series is created in key-sorted order.
If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corresponding values as missing. pandas uses the NaN (not a number) type for missing values.
End of explanation
"""
bacteria + bacteria2
"""
Explanation: Critically, the labels are used to align data when used in operations with other Series objects:
End of explanation
"""
bacteria_data = pd.DataFrame({'value':[632, 1638, 569, 115, 433, 1130, 754, 555],
'patient':[1, 1, 1, 1, 2, 2, 2, 2],
'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria',
'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']})
bacteria_data
"""
Explanation: Contrast this with NumPy arrays, where arrays of the same length will combine values element-wise; adding Series combines values with the same label in the resulting series. Notice also that the missing values were propagated by addition.
DataFrame
Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type.
A DataFrame is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data.
End of explanation
"""
bacteria_data[['phylum','value','patient']]
"""
Explanation: Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire:
End of explanation
"""
bacteria_data.columns
"""
Explanation: A DataFrame has a second index, representing the columns:
End of explanation
"""
bacteria_data['value']
bacteria_data.value
"""
Explanation: If we wish to access columns, we can do so either by dict-like indexing or by attribute:
End of explanation
"""
type(bacteria_data['value'])
"""
Explanation: Using the standard indexing syntax for a single column of data from a DataFrame returns the column as a Series.
End of explanation
"""
bacteria_data[['value']]
"""
Explanation: Passing the column name as a list returns the column as a DataFrame instead.
End of explanation
"""
bacteria_data.ix[3]
"""
Explanation: Notice that indexing works differently with a DataFrame than with a Series, where in the latter, dict-like indexing retrieved a particular element (row). If we want access to a row in a DataFrame, we index via its ix attribute.
End of explanation
"""
bacteria_data = pd.DataFrame({0: {'patient': 1, 'phylum': 'Firmicutes', 'value': 632},
1: {'patient': 1, 'phylum': 'Proteobacteria', 'value': 1638},
2: {'patient': 1, 'phylum': 'Actinobacteria', 'value': 569},
3: {'patient': 1, 'phylum': 'Bacteroidetes', 'value': 115},
4: {'patient': 2, 'phylum': 'Firmicutes', 'value': 433},
5: {'patient': 2, 'phylum': 'Proteobacteria', 'value': 1130},
6: {'patient': 2, 'phylum': 'Actinobacteria', 'value': 754},
7: {'patient': 2, 'phylum': 'Bacteroidetes', 'value': 555}})
bacteria_data
"""
Explanation: Since a row potentially contains different data types, the returned Series of values is of the generic object type.
If we want to create a DataFrame row-wise rather than column-wise, we can do so with a dict of dicts:
End of explanation
"""
bacteria_data = bacteria_data.T
bacteria_data
"""
Explanation: However, we probably want this transposed:
End of explanation
"""
vals = bacteria_data.value
vals
"""
Explanation: Views
It's important to note that the Series returned when a DataFrame is indexed is merely a view on the DataFrame, and not a copy of the data itself. So you must be cautious when manipulating this data.
For example, let's isolate a column of our dataset by assigning it as a Series to a variable.
End of explanation
"""
vals[5] = 0
vals
"""
Explanation: Now, let's assign a new value to one of the elements of the Series.
End of explanation
"""
bacteria_data
"""
Explanation: However, we may not anticipate that the value in the original DataFrame has also been changed!
End of explanation
"""
vals = bacteria_data.value.copy()
vals[5] = 1000
bacteria_data
"""
Explanation: We can avoid this by working with a copy when modifying subsets of the original data.
End of explanation
"""
bacteria_data.value[5] = 1130
"""
Explanation: So, as we have seen, we can create or modify columns by assignment; let's put back the value we accidentally changed.
End of explanation
"""
bacteria_data['year'] = 2013
bacteria_data
"""
Explanation: Or, we may wish to add a column representing the year the data were collected.
End of explanation
"""
bacteria_data.treatment = 1
bacteria_data
bacteria_data.treatment
"""
Explanation: But note, we cannot use the attribute indexing method to add a new column:
End of explanation
"""
treatment = pd.Series([0]*4 + [1]*2)
treatment
bacteria_data['treatment'] = treatment
bacteria_data
"""
Explanation: Auto-alignment
When adding a column that is not a simple constant, we need to be a bit more careful. Due to pandas' auto-alignment behavior, specifying a Series as a new column causes its values to be added according to the DataFrame's index:
End of explanation
"""
month = ['Jan', 'Feb', 'Mar', 'Apr']
bacteria_data['month'] = month
bacteria_data['month'] = ['Jan']*len(bacteria_data)
bacteria_data
"""
Explanation: Other Python data structures (ones without an index) need to be the same length as the DataFrame:
End of explanation
"""
del bacteria_data['month']
bacteria_data
"""
Explanation: We can use del to remove columns, in the same way dict entries can be removed:
End of explanation
"""
bacteria_data.drop('treatment', axis=1)
"""
Explanation: Or employ the drop method.
End of explanation
"""
bacteria_data.values
"""
Explanation: We can extract the underlying data as a simple ndarray by accessing the values attribute:
End of explanation
"""
df = pd.DataFrame({'foo': [1,2,3], 'bar':[0.4, -1.0, 4.5]})
df.values, df.values.dtype
"""
Explanation: Notice that because of the mix of string, integer and float (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accommodate all the columns.
End of explanation
"""
bacteria_data.index
"""
Explanation: pandas uses a custom data structure to represent the indices of Series and DataFrames.
End of explanation
"""
bacteria_data.index[0] = 15
"""
Explanation: Index objects are immutable:
End of explanation
"""
bacteria2.index = bacteria.index
bacteria2
"""
Explanation: This is so that Index objects can be shared between data structures without fear that they will be changed.
End of explanation
"""
# Write your answer here
"""
Explanation: Exercise: Indexing
From the bacteria_data table above, create an index to return all rows for which the phylum name ends in "bacteria" and the value is greater than 1000.
End of explanation
"""
!head ../data/olympics.1996.txt
"""
Explanation: Using pandas
In this section, we will import and clean up some of the datasets that we will be using later on in the tutorial. In doing so, we will introduce the key functionality of pandas that is required to use the software effectively.
Importing data
A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure:
genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '<f4')])
pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported.
Delimited data
The file olympics.1996.txt in the data directory contains counts of medals awarded at the 1996 Summer Olympic Games by country, along with the countries' respective population sizes. This data is stored in a tab-separated format.
End of explanation
"""
medals = pd.read_table('../data/olympics.1996.txt', sep='\t',
index_col=0,
header=None, names=['country', 'medals', 'population'])
medals.head()
"""
Explanation: This table can be read into a DataFrame using read_table.
End of explanation
"""
oecd_site = 'http://www.oecd.org/about/membersandpartners/list-oecd-member-countries.htm'
pd.read_html(oecd_site)
"""
Explanation: There is no header row in this dataset, so we specified this, and provided our own header names. If we had not specified header=None, the function would have assumed the first row contained column names.
The tab separator was passed to the sep argument as \t.
The sep argument can be customized as needed to accommodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately common in some datasets:
sep='\s+'
Scraping Data from the Web
We would like to add another variable to this dataset. Along with population, a country's economic development may be a useful predictor of Olympic success. A very simple indicator of this might be OECD membership status.
The OECD website contains a table listing OECD member nations, along with each country's year of membership. We would like to import this table and extract the countries that were members as of the 1996 games.
The read_html function accepts a URL argument, and will attempt to extract all the tables from that address, returning whatever it finds in a list of DataFrames.
End of explanation
"""
oecd = pd.read_html(oecd_site, header=0)[1][[1,2]]
oecd.head()
oecd['year'] = pd.to_datetime(oecd.Date).apply(lambda x: x.year)
oecd_year = oecd.set_index(oecd.Country.str.title())['year'].dropna()
oecd_year
"""
Explanation: There is typically some cleanup that is required of the returned data, such as the assignment of column names or conversion of types.
The table of interest is at index 1, and we will extract two columns from the table. Otherwise, this table is pretty clean.
End of explanation
"""
medals_data = medals.assign(oecd=medals.index.isin((oecd_year[oecd_year<1997]).index).astype(int))
"""
Explanation: We can create an indicator (binary) variable for OECD status by checking if each country is in the index of countries with membership year less than 1997.
The new DataFrame method assign is a convenient means for creating the new column from this operation.
End of explanation
"""
medals_data = medals_data.assign(log_population=np.log(medals.population))
"""
Explanation: Since the distribution of populations spans several orders of magnitude, we may wish to use the logarithm of the population size, which may be created similarly.
End of explanation
"""
medals_data.head()
"""
Explanation: The NumPy log function will return a pandas Series (or DataFrame when applied to one) instead of an ndarray; NumPy's ufuncs are compatible with pandas in this way.
End of explanation
"""
!cat ../data/microbiome/microbiome.csv
"""
Explanation: Comma-separated Values (CSV)
The most common form of delimited data is comma-separated values (CSV). Since CSV is so ubiquitous, read_csv is available as a convenience wrapper around read_table.
Consider some more microbiome data.
End of explanation
"""
mb = pd.read_csv("../data/microbiome/microbiome.csv")
mb.head()
"""
Explanation: This table can be read into a DataFrame using read_csv:
End of explanation
"""
pd.read_csv("../data/microbiome/microbiome.csv", skiprows=[3,4,6]).head()
"""
Explanation: If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument:
End of explanation
"""
few_recs = pd.read_csv("../data/microbiome/microbiome.csv", nrows=4)
few_recs
"""
Explanation: Conversely, if we only want to import a small number of rows from, say, a very large data file, we can use nrows:
End of explanation
"""
data_chunks = pd.read_csv("../data/microbiome/microbiome.csv", chunksize=15)
data_chunks
"""
Explanation: Alternatively, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. For example, our microbiome data are organized by bacterial phylum, with 15 patients represented in each:
End of explanation
"""
# Write your answer here
"""
Explanation: Exercise: Calculating summary statistics
Import the microbiome data, calculating the mean counts across all patients for each taxon, returning these values in a dictionary.
Hint: using chunksize makes this more efficient!
End of explanation
"""
mb = pd.read_csv("../data/microbiome/microbiome.csv", index_col=['Taxon','Patient'])
mb.head()
"""
Explanation: Hierarchical Indices
For a more useful index, we can specify the first two columns, which together provide a unique index to the data.
End of explanation
"""
mb.index
"""
Explanation: This is called a hierarchical index, which allows multiple dimensions of data to be represented in tabular form.
End of explanation
"""
mb.ix[('Firmicutes', 2)]
"""
Explanation: The corresponding index is a MultiIndex object that consists of a sequence of tuples, the elements of which are combinations of the two columns used to create the index. Where there are repeated values, pandas does not print the repeats, making it easy to identify groups of values.
Rows can be indexed by passing the appropriate tuple.
End of explanation
"""
mb.ix['Proteobacteria']
"""
Explanation: With a hierachical index, we can select subsets of the data based on a partial index:
End of explanation
"""
mb.xs(1, level='Patient')
"""
Explanation: To extract arbitrary levels from a hierarchical row index, the cross-section method xs can be used.
End of explanation
"""
mb.swaplevel('Patient', 'Taxon').head()
"""
Explanation: We may also reorder levels as we like.
End of explanation
"""
mb.Stool / mb.Tissue
"""
Explanation: Operations
DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects.
For example, we can perform arithmetic on the elements of two objects, such as calculating the ratio of bacteria counts between locations:
End of explanation
"""
mb_file = pd.ExcelFile('../data/microbiome/MID1.xls')
mb_file
"""
Explanation: Microsoft Excel
Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: xlrd and openpyxl (these may be installed with either pip or easy_install).
Importing Excel data to pandas is a two-step process. First, we create an ExcelFile object using the path of the file:
End of explanation
"""
mb1 = mb_file.parse("Sheet 1", header=None)
mb1.columns = ["Taxon", "Count"]
mb1.head()
"""
Explanation: Then, since modern spreadsheets consist of one or more "sheets", we parse the sheet with the data of interest:
End of explanation
"""
mb2 = pd.read_excel('../data/microbiome/MID2.xls', sheetname='Sheet 1', header=None)
mb2.head()
"""
Explanation: There is now a read_excel convenience function in pandas that combines these steps into a single call:
End of explanation
"""
import sqlite3
query = '''
CREATE TABLE samples
(taxon VARCHAR(15), patient INTEGER, tissue INTEGER, stool INTEGER);
'''
"""
Explanation: Relational Databases
If you are fortunate, your data will be stored in a database (relational or non-relational) rather than in arbitrary text files or spreadsheet. Relational databases are particularly useful for storing large quantities of structured data, where fields are grouped together in tables according to their relationships with one another.
pandas' DataFrame interacts with relational (i.e. SQL) databases, and even provides facilties for using SQL syntax on the DataFrame itself, which we will get to later. For now, let's work with a ubiquitous embedded database called SQLite, which comes bundled with Python. A SQLite database can be queried with the standard library's sqlite3 module.
End of explanation
"""
con = sqlite3.connect('microbiome.sqlite3')
con.execute(query)
con.commit()
few_recs.ix[0]
con.execute('INSERT INTO samples VALUES(\'{}\',{},{},{})'.format(*few_recs.ix[0]))
query = 'INSERT INTO samples VALUES(?, ?, ?, ?)'
con.executemany(query, few_recs.values[1:])
con.commit()
"""
Explanation: This query string will create a table to hold some of our microbiome data, which we can execute after connecting to a database (which will be created, if it does not exist).
End of explanation
"""
cursor = con.execute('SELECT * FROM samples')
rows = cursor.fetchall()
rows
"""
Explanation: Using SELECT queries, we can read from the database.
End of explanation
"""
pd.DataFrame(rows)
"""
Explanation: These results can be passed directly to a DataFrame
End of explanation
"""
table_info = con.execute('PRAGMA table_info(samples);').fetchall()
table_info
pd.DataFrame(rows, columns=np.transpose(table_info)[1])
"""
Explanation: To obtain the column names, we can retrieve the table information from the database via the special PRAGMA statement.
End of explanation
"""
pd.read_sql_query('SELECT * FROM samples', con)
"""
Explanation: A more direct approach is to pass the query to the read_sql_query function, which returns a populated DataFrame.
End of explanation
"""
more_recs = pd.read_csv("../data/microbiome/microbiome_missing.csv").head(20)
more_recs.to_sql('samples', con, if_exists='append', index=False)
cursor = con.execute('SELECT * FROM samples')
cursor.fetchall()
"""
Explanation: Correspondingly, we can append records into the database with to_sql.
End of explanation
"""
# Get rid of the database we created
!rm microbiome.sqlite3
"""
Explanation: There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include JSON, XML, HDF5, non-relational databases, and various web APIs.
End of explanation
"""
ebola_dirs = !ls ../data/ebola/
ebola_dirs
"""
Explanation: 2014 Ebola Outbreak Data
The ../data/ebola folder contains summarized reports of Ebola cases from three countries during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
From these data files, use pandas to import them and create a single data frame that includes the daily totals of new cases for each country.
We may use this compiled data for more advanced applications later in the course.
The data are taken from Caitlin Rivers' ebola GitHub repository, and are licenced for both commercial and non-commercial use. The tutorial repository contains a subset of this data from three countries (Sierra Leone, Liberia and Guinea) that we will use as an example. They reside in a nested subdirectory in the data directory.
End of explanation
"""
import glob
filenames = {data_dir[:data_dir.find('_')]: glob.glob('../data/ebola/{0}/*.csv'.format(data_dir)) for data_dir in ebola_dirs[1:]}
"""
Explanation: Within each country directory, there are CSV files containing daily information regarding the state of the outbreak for that country. The first step is to efficiently import all the relevant files.
Our approach will be to construct a dictionary containing a list of filenames to import. We can use the glob package to identify all the CSV files in each directory. This can all be placed within a dictionary comprehension.
End of explanation
"""
pd.read_csv('../data/ebola/sl_data/2014-08-12-v77.csv').head()
pd.read_csv('../data/ebola/guinea_data/2014-09-02.csv').head()
"""
Explanation: We are now in a position to iterate over the dictionary and import the corresponding files. However, the data layout of the files across the dataset is partially inconsistent.
End of explanation
"""
sample = pd.read_csv('../data/ebola/sl_data/2014-08-12-v77.csv')
"""
Explanation: Clearly, we will need to develop row masks to extract the data we need across all files, without having to manually extract data from each file.
Let's hack at one file to develop the mask.
End of explanation
"""
lower_vars = sample.variable.str.lower()
"""
Explanation: To prevent issues with capitalization, we will simply revert all labels to lower case.
End of explanation
"""
case_mask = (lower_vars.str.contains('new')
& (lower_vars.str.contains('case') | lower_vars.str.contains('suspect'))
& ~lower_vars.str.contains('non')
& ~lower_vars.str.contains('total'))
"""
Explanation: Since we are interested in extracting new cases only, we can use the string accessor attribute to look for key words that we would like to include or exclude.
End of explanation
"""
sample.loc[case_mask, ['date', 'variable', 'National']]
"""
Explanation: We could have instead used regular expressions to do the same thing.
Finally, we are only interested in three columns.
End of explanation
"""
datasets = []
for country in filenames:
country_files = filenames[country]
for f in country_files:
data = pd.read_csv(f)
# Convert to lower case to avoid capitalization issues
data.columns = data.columns.str.lower()
# Column naming is inconsistent. These procedures deal with that.
keep_columns = ['date']
if 'description' in data.columns:
keep_columns.append('description')
else:
keep_columns.append('variable')
if 'totals' in data.columns:
keep_columns.append('totals')
else:
keep_columns.append('national')
# Index out the columns we need, and rename them
keep_data = data[keep_columns]
keep_data.columns = 'date', 'variable', 'totals'
# Extract the rows we might want
lower_vars = keep_data.variable.str.lower()
# Of course we can also use regex to do this
case_mask = (lower_vars.str.contains('new')
& (lower_vars.str.contains('case') | lower_vars.str.contains('suspect')
| lower_vars.str.contains('confirm'))
& ~lower_vars.str.contains('non')
& ~lower_vars.str.contains('total'))
keep_data = keep_data[case_mask].dropna()
# Convert data types
keep_data['date'] = pd.to_datetime(keep_data.date)
keep_data['totals'] = keep_data.totals.astype(int)
# Assign country label and append to datasets list
datasets.append(keep_data.assign(country=country))
"""
Explanation: We can now embed this operation in a loop over all the filenames in the database.
End of explanation
"""
all_data = pd.concat(datasets)
all_data.head()
"""
Explanation: Now that we have a list populated with DataFrame objects for each day and country, we can call concat to concatenate them into a single DataFrame.
End of explanation
"""
all_data.index.is_unique
"""
Explanation: This works because the structure of each table was identical.
Manipulating indices
Notice from above, however, that the index contains redundant integer index values. We can confirm this:
End of explanation
"""
all_data = pd.concat(datasets).reset_index(drop=True)
all_data.head()
"""
Explanation: We can create a new unique index by calling the reset_index method on the new data frame after we import it, which will generate a new ordered, unique index.
End of explanation
"""
all_data.reindex(all_data.index[::-1])
"""
Explanation: Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, and optionally, fill in missing data if requested.
A simple use of reindex is to alter the order of the rows. For example, records are currently ordered first by country then by day, since this is the order in which they were iterated over and imported. We might arbitrarily want to reverse the order, which is performed by passing the appropriate index values to reindex.
End of explanation
"""
all_data.reindex(columns=['date', 'country', 'variable', 'totals']).head()
"""
Explanation: Notice that the reindexing operation is not performed "in-place"; the original DataFrame remains as it was, and the method returns a copy of the DataFrame with the new index. This is a common trait for pandas, and is a Good Thing.
We may also wish to reorder the columns this way.
End of explanation
"""
all_data_grouped = all_data.groupby(['country', 'date'])
daily_cases = all_data_grouped['totals'].sum()
daily_cases.head(10)
"""
Explanation: Group by operations
One of pandas' most powerful features is the ability to perform operations on subgroups of a DataFrame. These so-called group by operations define subunits of the dataset according to the values of one or more variables in the DataFrame.
For this data, we want to sum the new case counts by day and country; so we pass these two column names to the groupby method, then sum the totals column accross them.
End of explanation
"""
daily_cases[('liberia', '2014-09-02')]
"""
Explanation: The resulting series retains a hierarchical index from the group by operation. Hence, we can index out the counts for a given country on a particular day by indexing with the appropriate tuple.
End of explanation
"""
daily_cases.sort(ascending=False)
daily_cases.head(10)
"""
Explanation: One issue with the data we have extracted is that there appear to be serious outliers in the Liberian counts. The values are much too large to be a daily count, even during a serious outbreak.
End of explanation
"""
daily_cases = daily_cases[daily_cases<200]
"""
Explanation: We can filter these outliers using an appropriate threshold.
End of explanation
"""
daily_cases.unstack().head()
"""
Explanation: Plotting
pandas data structures have high-level methods for creating a variety of plots, which tends to be easier than generating the corresponding plot using matplotlib.
For example, we may want to create a plot of the cumulative cases for each of the three countries. The easiest way to do this is to remove the hierarchical index, and create a DataFrame of three columns, which will result in three lines when plotted.
First, call unstack to remove the hierarchical index:
End of explanation
"""
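What unstack does can be seen on a small synthetic series (hypothetical values): the innermost index level is pivoted into columns.

```python
import pandas as pd

# synthetic two-level series of counts
idx = pd.MultiIndex.from_product(
    [['guinea', 'liberia'], ['2014-09-01', '2014-09-02']],
    names=['country', 'date'])
s = pd.Series([15, 12, 7, 3], index=idx)

wide = s.unstack()        # innermost level ('date') becomes the columns
print(wide.shape)         # one row per country, one column per date
print(list(wide.index))
```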
daily_cases.unstack().T.head()
"""
Explanation: Next, transpose the resulting DataFrame to swap the rows and columns.
End of explanation
"""
daily_cases.unstack().T.fillna(0).head()
"""
Explanation: Since we have missing values for some dates, we will assume that the counts for those days were zero (the actual counts for that day may have been included in the next reporting day's data).
End of explanation
"""
daily_cases.unstack().T.fillna(0).cumsum().plot()
"""
Explanation: Finally, calculate the cumulative sum for all the columns, and generate a line plot, which we get by default.
End of explanation
"""
weekly_cases = daily_cases.unstack().T.resample('W').sum()  # resample(..., how='sum') is deprecated; chain .sum() instead
weekly_cases
weekly_cases.cumsum().plot()
"""
Explanation: Resampling
An alternative to filling days without case reports with zeros is to aggregate the data at a coarser time scale. New cases are often reported by week; we can use the resample method to summarize the data into weekly values.
End of explanation
"""
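A minimal, self-contained sketch of resample, using synthetic dates and the modern `.resample('W').sum()` spelling:

```python
import pandas as pd

# ten consecutive days of hypothetical counts, one case per day
days = pd.date_range('2014-09-01', periods=10, freq='D')
daily = pd.Series(1, index=days)

# aggregate into weekly totals (bins end on Sunday by default)
weekly = daily.resample('W').sum()
print(weekly)
```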
medals_data.to_csv("../data/medals.csv", index=False)
"""
Explanation: Writing Data to Files
As well as being able to read several data input formats, pandas can also export data to a variety of storage formats. We will bring your attention to just one of these, but the usage is similar across formats.
End of explanation
"""
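A quick sketch of those to_csv options, writing to an in-memory buffer rather than a file (the data are made up):

```python
import numpy as np
import pandas as pd
from io import StringIO

df = pd.DataFrame({'a': [1, np.nan], 'b': ['x', 'y']})

buf = StringIO()  # stand-in for a file path
df.to_csv(buf, sep=';', na_rep='NULL', index=False)
print(buf.getvalue())
# a;b
# 1.0;x
# NULL;y
```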
!head -n 20 ../data/microbiome/microbiome_missing.csv
pd.read_csv("../data/microbiome/microbiome_missing.csv").head(20)
"""
Explanation: The to_csv method writes a DataFrame to a comma-separated values (csv) file. You can specify custom delimiters (via the sep argument), how missing values are written (na_rep), whether the index is written (index), and whether the header is included (header), among other options.
Missing data
The occurrence of missing data is so prevalent that it pays to use tools like pandas, which seamlessly integrates missing-data handling so that it can be dealt with easily, and in the manner required by the analysis at hand.
Missing data are represented in Series and DataFrame objects by the NaN floating-point value. However, None is also treated as missing, since it is commonly used as such in Python code.
End of explanation
"""
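A small sketch showing that both NaN and None register as missing (hypothetical values):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, None])  # None is converted to NaN on construction
print(pd.isnull(s))                 # False, True, True
print(s.isnull().sum())             # 2 missing values
```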
pd.isnull(pd.read_csv("../data/microbiome/microbiome_missing.csv")).head(20)
"""
Explanation: Above, pandas recognized NA and an empty field as missing data.
End of explanation
"""
missing_sample = pd.read_csv("../data/microbiome/microbiome_missing.csv",
na_values=['?', -99999], nrows=20)
missing_sample
"""
Explanation: Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument:
End of explanation
"""
missing_sample.dropna()
"""
Explanation: These can be specified on a column-wise basis using an appropriate dict as the argument for na_values.
By default, dropna drops entire rows in which one or more values are missing.
End of explanation
"""
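A sketch of the column-wise form: the dict restricts the sentinel -99999 to the count column, so a literal '?' in a text column is kept (the file contents below are made up):

```python
import pandas as pd
from io import StringIO

# made-up file: '?' is a real value in 'taxon', -99999 marks missing counts
raw = StringIO("taxon,count\n?,10\nFirmicutes,-99999\n")

df = pd.read_csv(raw, na_values={'count': [-99999]})
print(df)
# the '?' survives; only the count column's -99999 became NaN
```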
missing_sample.dropna(axis=1)
"""
Explanation: If we want to drop missing values column-wise instead of row-wise, we use axis=1.
End of explanation
"""
missing_sample.fillna(-999)
"""
Explanation: Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero), a sentinel value, or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in pandas with the fillna argument.
End of explanation
"""
## Write your answer here
"""
Explanation: Sentinel values are useful in pandas because the NaN marker is itself a float, so a column containing missing values is upcast to float and a classic integer column cannot hold explicit missing values. Using some large (positive or negative) integer as a sentinel value allows the column to keep an integer type.
Exercise: Mean imputation
Fill the missing values in missing_sample with the mean count from the corresponding species across patients.
End of explanation
"""
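A sketch of the sentinel trick on a toy series (hypothetical values): the NaN forces a float dtype, and filling with a sentinel lets the column go back to integers.

```python
import numpy as np
import pandas as pd

counts = pd.Series([632, np.nan, 547])
print(counts.dtype)                     # float64 -- the NaN forces a float column

filled = counts.fillna(-1).astype(int)  # sentinel value restores an integer dtype
print(filled.dtype)
```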
# From biosustain/cameo-notebooks, 02-import-models.ipynb (Apache-2.0 license)
less data/e_coli_core.xml
from cameo import load_model
model = load_model('data/e_coli_core.xml')
model
"""
Explanation: Import models
Import models from files
The function :class:~cameo.io.load_model accepts a number of different input formats.
SBML (Systems Biology Markup Language)
JSON
Pickle (pickled models)
Model identifiers (from the BiGG Models database)
End of explanation
"""
from cameo import models
models.index_models_bigg()
models.index_models_minho()
"""
Explanation: Import models from the internet
In the quick start chapter we demonstrated how to use :class:~cameo.io.load_model to import a model by ID. But where did the model come from? Cameo currently has access to two model repositories on the internet, http://bigg.ucsd.edu and http://darwin.di.uminho.pt/models.
End of explanation
"""
models.bigg.iJN746
models.minho.iMM904
"""
Explanation: Models from BiGG and the University of Minho can conveniently be accessed via :class:~cameo.models.bigg and :class:~cameo.models.minho respectively.
End of explanation
"""
models.minho.validated.VvuMBEL943 # use TAB completion to see the other models
"""
Explanation: Models in the Minho database have been manually verified. The subset of models shown below can be used to run simulations as described in the corresponding publications.
End of explanation
"""