# Source: phoebe-project/phoebe2-docs, 2.1/tutorials/general_concepts.ipynb (gpl-3.0)
!pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: General Concepts
HOW TO RUN THIS FILE: if you're running this in a Jupyter notebook or Google Colab session, you can click on a cell and then shift+Enter to run the cell and automatically select the next cell. Alt+Enter will run a cell and create a new cell below it. Ctrl+Enter will run a cell but keep it selected. To restart from scratch, restart the kernel/runtime.
This tutorial introduces all the general concepts of dealing with Parameters, ParameterSets, and the Bundle. This tutorial aims to be quite complete - covering almost everything you can do with Parameters, so on first read you may just want to try to get familiar, and then return here as a reference for any details later.
All of these tutorials assume basic comfort with Python in general - particularly with the concepts of lists, dictionaries, and objects as well as basic comfort with using the numpy and matplotlib packages.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Let's get started with some basic imports
End of explanation
"""
logger = phoebe.logger(clevel='INFO', flevel='DEBUG', filename='tutorial.log')
"""
Explanation: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see warnings as phoebe.u and phoebe.c "hijack" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel). A convenience function is provided at the top-level via phoebe.logger to initialize the logger with any desired level.
The levels from most to least information are:
DEBUG
INFO
WARNING
ERROR
CRITICAL
End of explanation
"""
param = phoebe.parameters.StringParameter(qualifier='myparameter',
description='mydescription',
value='myvalue')
"""
Explanation: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with INFO, WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory.
Note: the logger messages are not included in the outputs shown below.
Parameters
Parameters hold a single value, but need to be aware of their own types, limits, and connections with other Parameters (more on this later when we discuss ParameterSets).
Note that generally you won't ever have to "create" or "define" your own Parameters; those will be created for you by helper functions. But we have to start somewhere... so let's create our first Parameter.
We'll start by creating a StringParameter since it is the most generic, and then discuss the specific differences for each type of Parameter.
End of explanation
"""
print(type(param))
"""
Explanation: If you ever need to know the type of a Parameter, you can always use python's built-in type functionality:
End of explanation
"""
print(param)
"""
Explanation: If we print the parameter object we can see a summary of information
End of explanation
"""
print(param.tags)
"""
Explanation: You can see here that we've defined a few things about the parameter: the qualifier, description, and value (others do exist, they just don't show up in the summary).
These "things" can be split into two groups: tags and attributes (although in a pythonic sense, both can be accessed as attributes). Don't worry too much about this distinction - it isn't really important except for the fact that tags are shared across all Parameters whereas attributes are dependent on the type of the Parameter.
The tags of a Parameter define the Parameter and how it connects to other Parameters (again, more on this when we get to ParameterSets). For now, just know that you can access a list of all the tags as follows:
End of explanation
"""
print(param['qualifier'], param.qualifier)
"""
Explanation: and that each of these is available through both a dictionary key and an object attribute. For example:
End of explanation
"""
param.attributes
"""
Explanation: The 'qualifier' attribute is essentially an abbreviated name for the Parameter.
These tags will be shared across all Parameters, regardless of their type.
Attributes, on the other hand, can be dependent on the type of the Parameter and tell the Parameter its rules and how to interpret its value. You can access a list of available attributes as follows:
End of explanation
"""
print(param['description'], param.description)
"""
Explanation: and again, each of these are available through both a dictionary key and as an object attribute. For example, all parameters have a 'description' attribute which gives additional information about what the Parameter means:
End of explanation
"""
print(param.get_value(), param['value'], param.value)
"""
Explanation: For the special case of the 'value' attribute, there is also a get_value method (will become handy later when we want to be able to request the value in a specific unit).
End of explanation
"""
param.set_value('newvalue')
print(param.get_value())
"""
Explanation: The value attribute is also the only attribute that you'll likely want to change, so it also has a set_value method:
End of explanation
"""
param = phoebe.parameters.ChoiceParameter(qualifier='mychoiceparameter',
description='mydescription',
choices=['choice1', 'choice2'],
value='choice1')
print(param)
print(param.attributes)
print(param['choices'], param.choices)
print(param.get_value())
#param.set_value('not_a_choice') # would raise a ValueError
param.set_value('choice2')
print(param.get_value())
"""
Explanation: The 'visible_if' attribute only comes into play when the Parameter is a member of a ParameterSet, so we'll discuss it at the end of this tutorial when we get to ParameterSets.
The 'copy_for' attribute is only used when the Parameter is in a particular type of ParameterSet called a Bundle (explained at the very end of this tutorial). We'll see the 'copy_for' capability in action later in the Datasets Tutorial, but for now, just know that you can view this property only and cannot change it... and most of the time it will just be an empty string.
StringParameters
We'll just mention StringParameters again for completeness, but we've already seen about all they can do - the value must cast to a valid string but no limits or checks are performed at all on the value.
ChoiceParameters
ChoiceParameters are essentially StringParameters with one very important exception: the value must match one of the prescribed choices.
Therefore, they have a 'choices' attribute and an error will be raised if trying to set the value to any string not in that list.
End of explanation
"""
param = phoebe.parameters.SelectParameter(qualifier='myselectparameter',
description='mydescription',
choices=['choice1', 'choice2'],
value=['choice1'])
print(param)
print(param.attributes)
print(param['choices'], param.choices)
print(param.get_value())
"""
Explanation: SelectParameters
NEW IN PHOEBE 2.1
SelectParameters are very similar to ChoiceParameters except that the value is a list, where each item must match one of the prescribed choices.
End of explanation
"""
param.set_value(["choice*"])
print(param.get_value())
print(param.expand_value())
"""
Explanation: However, SelectParameters also allow you to use * as a wildcard and will expand to any of the choices that match that wildcard. For example,
End of explanation
"""
param = phoebe.parameters.FloatParameter(qualifier='myfloatparameter',
description='mydescription',
default_unit=u.m,
limits=[None,20],
value=5)
print(param)
"""
Explanation: FloatParameters
FloatParameters are probably the most common Parameter used in PHOEBE and hold both a float and a unit, with the ability to retrieve the value in any other convertible unit.
End of explanation
"""
print(param.attributes)
"""
Explanation: You'll notice here a few new mentions in the summary... "Constrained by", "Constrains", and "Related to" are all referring to constraints which will be discussed in a future tutorial.
End of explanation
"""
print(param['limits'], param.limits)
#param.set_value(30) # would raise a ValueError
param.set_value(2)
print(param.get_value())
"""
Explanation: FloatParameters have an attribute which holds the "limits" - whenever a value is set, it will be checked to make sure it falls within the limits. If either the lower or upper limit is None, then there is no limit check for that extreme.
End of explanation
"""
print(param['default_unit'], param.default_unit)
"""
Explanation: FloatParameters have an attribute which holds the "default_unit" - this is the unit in which the value is stored and the unit that will be provided if not otherwise overridden.
End of explanation
"""
print(param.get_value())
"""
Explanation: Calling get_value will then return a float in these units
End of explanation
"""
print(param.get_value(unit=u.km), param.get_value(unit='km'))
"""
Explanation: But we can also request the value in a different unit, by passing an astropy Unit object or its string representation.
End of explanation
"""
print(param.get_quantity(), param.get_quantity(unit=u.km))
"""
Explanation: FloatParameters also have their own method to access an astropy Quantity object that includes both the value and the unit
End of explanation
"""
param.set_value(10)
print(param.get_quantity())
param.set_value(0.001*u.km)
print(param.get_quantity())
param.set_value(10, unit='cm')
print(param.get_quantity())
"""
Explanation: The set_value method also accepts a unit - this doesn't change the default_unit internally, but instead converts the provided value before storing.
End of explanation
"""
param.set_default_unit(u.km)
print(param.get_quantity())
"""
Explanation: If for some reason you want to change the default_unit, you can do so as well:
End of explanation
"""
print(param.limits)
"""
Explanation: But note that the limits are still stored as a quantity object in the originally defined default_units
End of explanation
"""
param = phoebe.parameters.IntParameter(qualifier='myintparameter',
description='mydescription',
limits=[0,None],
value=1)
print(param)
print(param.attributes)
"""
Explanation: IntParameters
IntParameters are essentially the same as FloatParameters except they always cast to an Integer and they have no units.
End of explanation
"""
print(param['limits'], param.limits)
"""
Explanation: Like FloatParameters above, IntParameters still have limits
End of explanation
"""
param.set_value(1.9)
print(param.get_value())
"""
Explanation: Note that if you try to set the value to a float it will not raise an error, but will cast that value to an integer (following python rules of truncation, not rounding)
End of explanation
"""
param = phoebe.parameters.BoolParameter(qualifier='myboolparameter',
description='mydescription',
value=True)
print(param)
print(param.attributes)
"""
Explanation: Bool Parameters
BoolParameters are even simpler - they accept True or False.
End of explanation
"""
param.set_value(0)
print(param.get_value())
param.set_value(None)
print(param.get_value())
"""
Explanation: Note that, like IntParameters, BoolParameters will attempt to cast anything you give it into True or False.
End of explanation
"""
param.set_value('')
print(param.get_value())
param.set_value('some_string')
print(param.get_value())
"""
Explanation: As with Python, an empty string will cast to False and a non-empty string will cast to True
End of explanation
"""
param.set_value('False')
print(param.get_value())
param.set_value('false')
print(param.get_value())
"""
Explanation: The only exception to this is that (unlike Python), 'true' or 'True' will cast to True and 'false' or 'False' will cast to False.
End of explanation
"""
param = phoebe.parameters.FloatArrayParameter(qualifier='myfloatarrayparameters',
description='mydescription',
default_unit=u.m,
value=np.array([0,1,2,3]))
print(param)
print(param.attributes)
print(param.get_value(unit=u.km))
"""
Explanation: FloatArrayParameters
FloatArrayParameters are essentially the same as FloatParameters (in that they have the same unit treatment, although obviously no limits) but hold numpy arrays rather than a single value.
By convention in PHOEBE, these will (almost) always have a pluralized qualifier.
End of explanation
"""
param1 = phoebe.parameters.FloatParameter(qualifier='param1',
description='param1 description',
default_unit=u.m,
limits=[None,20],
value=5,
context='context1',
kind='kind1')
param2 = phoebe.parameters.FloatParameter(qualifier='param2',
description='param2 description',
default_unit=u.deg,
limits=[0,2*np.pi],
value=0,
context='context2',
kind='kind2')
param3 = phoebe.parameters.FloatParameter(qualifier='param3',
description='param3 description',
default_unit=u.kg,
limits=[0,2*np.pi],
value=0,
context='context1',
kind='kind2')
ps = phoebe.parameters.ParameterSet([param1, param2, param3])
print(ps.to_list())
"""
Explanation: FloatArrayParameters also allow for built-in interpolation... but this requires them to be a member of a Bundle, so we'll discuss this in just a bit.
ParameterSets
ParameterSets are a collection of Parameters that can be filtered by their tags to return another ParameterSet.
For illustration, let's create 3 random FloatParameters and combine them to make a ParameterSet.
End of explanation
"""
print(ps)
"""
Explanation: If we print a ParameterSet, we'll see a listing of all the Parameters and their values.
End of explanation
"""
print(ps.tags)
"""
Explanation: Similarly to Parameters, we can access the tags of a ParameterSet
End of explanation
"""
print(ps.get('param1@kind1'))
"""
Explanation: Twigs
The string notation used for the Parameters is called a 'twig' - it's simply a combination of all the tags joined with the '@' symbol and gives a very convenient way to access any Parameter.
The order of the tags doesn't matter, and you only need to provide enough tags to produce a unique match. Since there is only one parameter with kind='kind1', we do not need to provide the extraneous context='context1' in the twig to get a match.
End of explanation
"""
print(ps.get('param1@kind1').description)
"""
Explanation: Note that this returned the ParameterObject itself, so you can now use any of the Parameter methods or attributes we saw earlier. For example:
End of explanation
"""
ps.set_value('param1@kind1', 10)
print(ps.get_value('param1@kind1'))
"""
Explanation: But we can also use set and get_value methods from the ParameterSet itself:
End of explanation
"""
print(ps.meta.keys())
"""
Explanation: Tags
Each Parameter has a number of tags, and the ParameterSet has the same tags - where the value of any given tag is None if not shared by all Parameters in that ParameterSet.
So let's just print the names of the tags again and then describe what each one means.
End of explanation
"""
print(ps.context)
"""
Explanation: Most of these "metatags" act as labels - for example, you can give a component tag to each of the components for easier referencing.
But a few of these tags are fixed and not editable:
qualifier: literally the name of the parameter.
kind: tells what kind a parameter is (ie whether a component is a star or an orbit).
context: tells what context this parameter belongs to.
twig: a shortcut to the parameter in a single string.
uniquetwig: the minimal twig needed to reach this parameter.
uniqueid: an internal representation used to reach this parameter.
These contexts are (you'll notice that most are represented in the tags):
setting
history
system
component
feature
dataset
constraint
compute
model
fitting [not yet supported]
feedback [not yet supported]
plugin [not yet supported]
One way to distinguish between context and kind is with the following question and answer:
"What kind of [context] is this? It's a [kind] tagged [context]=[tag-with-same-name-as-context]."
In different cases, this will then become:
"What kind of component is this? It's a star tagged component=starA." (context='component', kind='star', component='starA')
"What kind of feature is this? It's a spot tagged feature=spot01." (context='feature', kind='spot', feature='spot01')
"What kind of dataset is this? It's a LC (light curve) tagged dataset=lc01." (context='dataset', kind='LC', dataset='lc01')
"What kind of compute (options) are these? They're phoebe (compute options) tagged compute=preview." (context='compute', kind='phoebe', compute='preview')
As we saw before, these tags can be accessed at the Parameter level as either a dictionary key or as an object attribute. For ParameterSets, the tags are only accessible through object attributes.
End of explanation
"""
print(ps.contexts)
"""
Explanation: This returns None since not all objects in this ParameterSet share a single context. But you can see all the options for a given tag by providing the plural version of that tag name:
End of explanation
"""
print(ps.filter(context='context1'))
"""
Explanation: Filtering
Any of the tags can also be used to filter the ParameterSet:
End of explanation
"""
print(ps.filter(context='context1').filter(kind='kind1'))
"""
Explanation: Here we were returned a ParameterSet of all Parameters that matched the filter criteria. Since we're returned another ParameterSet, we can chain additional filter calls together.
End of explanation
"""
print(ps.filter(context='context1', kind='kind1'))
"""
Explanation: Now we see that we have drilled down to a single Parameter. Note that a ParameterSet is still returned - filter will always return a ParameterSet.
We could have accomplished the exact same thing with a single call to filter:
End of explanation
"""
print(ps.filter(context='context1', kind='kind1').get())
print(ps.get(context='context1', kind='kind1'))
"""
Explanation: If you want to access the actual Parameter, you must use get instead of (or in addition to) filter. All of the following lines do the exact same thing:
End of explanation
"""
print(ps['context1@kind1'])
print(ps['context1']['kind1'])
"""
Explanation: Or we can use those twigs. Remember that twigs are just a combination of these tags separated by the @ symbol. You can use these for dictionary access in a ParameterSet - without needing to provide the name of the tag, and without having to worry about order. And whenever this returns a ParameterSet, these are also chainable, so the following two lines will do the same thing:
End of explanation
"""
print(ps['context1'])
print(ps['context1@kind1'])
"""
Explanation: You may notice that the final result was a Parameter, not a ParameterSet. Twig dictionary access tries to be smart - if exactly 1 Parameter is found, it will return that Parameter instead of a ParameterSet. Notice the difference between the two following lines:
End of explanation
"""
print(ps['context1@kind1']['description'])
"""
Explanation: Of course, once you get the Parameter you can then use dictionary keys to access any attributes of that Parameter.
End of explanation
"""
print(ps['description@context1@kind1'])
"""
Explanation: So we decided we might as well allow access to those attributes directly from the twig as well
End of explanation
"""
b = phoebe.Bundle()
print(b)
"""
Explanation: The Bundle
The Bundle is nothing more than a glorified ParameterSet with some extra methods to compute models, add new components and datasets, etc.
You can initialize an empty Bundle as follows:
End of explanation
"""
print(b.filter(context='system'))
"""
Explanation: and filter just as you would for a ParameterSet
End of explanation
"""
param1 = phoebe.parameters.ChoiceParameter(qualifier='what_is_this',
choices=['matter', 'aether'],
value='matter',
context='context1')
param2 = phoebe.parameters.FloatParameter(qualifier='mass',
default_unit=u.kg,
value=5,
visible_if='what_is_this:matter',
context='context1')
b = phoebe.Bundle([param1, param2])
print(b.filter())
"""
Explanation: Visible If
As promised earlier, the 'visible_if' attribute of a Parameter controls whether its visible to a ParameterSet... but it only does anything if the Parameter belongs to a Bundle.
Let's make a new ParameterSet in which the visibility of one parameter is dependent on the value of another.
End of explanation
"""
b.set_value('what_is_this', 'aether')
print(b.filter())
"""
Explanation: It doesn't make much sense to need to define a mass if this thing isn't baryonic. So if we change the value of 'what_is_this' to 'aether' then the 'mass' Parameter will temporarily hide itself.
End of explanation
"""
xparam = phoebe.parameters.FloatArrayParameter(qualifier='xs',
default_unit=u.d,
value=np.linspace(0,1,10),
context='context1')
yparam = phoebe.parameters.FloatArrayParameter(qualifier='ys',
default_unit=u.m,
value=np.linspace(0,1,10)**2,
context='context1')
b = phoebe.Bundle([xparam, yparam])
b.filter('ys').get().twig
b['ys'].get_value()
"""
Explanation: FloatArrayParameters: interpolation
As mentioned earlier, when a part of a Bundle, FloatArrayParameters can handle simple linear interpolation with respect to another FloatArrayParameter in the same Bundle.
End of explanation
"""
b['ys'].interp_value(xs=0)
b['ys'].interp_value(xs=0.2)
"""
Explanation: Now we can interpolate the 'ys' param for any given value of 'xs'
End of explanation
"""
# Source: VUInformationRetrieval/IR2016_2017, make_dataset.ipynb (gpl-2.0)
from Bio import Entrez
# NCBI requires you to set your email address to make use of NCBI's E-utilities
Entrez.email = "Your.Name.Here@example.org"
"""
Explanation: Building the dataset of research papers
The Entrez module, part of the Biopython library, will be used to interface with PubMed.
You can download Biopython from the Biopython website (biopython.org).
In this notebook we will be covering several of the steps taken in the Biopython Tutorial, specifically in Chapter 9 Accessing NCBI’s Entrez databases.
End of explanation
"""
import pickle, bz2, os
"""
Explanation: The datasets will be saved as serialized Python objects, compressed with bzip2.
Saving/loading them will therefore require the pickle and bz2 modules.
End of explanation
"""
# accessing extended information about the PubMed database
pubmed = Entrez.read( Entrez.einfo(db="pubmed"), validate=False )[u'DbInfo']
# list of possible search fields for use with ESearch:
search_fields = { f['Name']:f['Description'] for f in pubmed["FieldList"] }
"""
Explanation: EInfo: Obtaining information about the Entrez databases
End of explanation
"""
search_fields
"""
Explanation: In search_fields, we find 'TIAB' ('Free text associated with Abstract/Title') as a possible search field to use in searches.
End of explanation
"""
example_authors = ['Haasdijk E']
example_search = Entrez.read( Entrez.esearch( db="pubmed", term=' AND '.join([a+'[AUTH]' for a in example_authors]) ) )
example_search
"""
Explanation: ESearch: Searching the Entrez databases
To have a look at the kind of data we get when searching the database, we'll perform a search for papers authored by Haasdijk:
End of explanation
"""
type( example_search['IdList'][0] )
"""
Explanation: Note how the result being produced is not in Python's native string format:
End of explanation
"""
example_ids = [ int(id) for id in example_search['IdList'] ]
print(example_ids)
"""
Explanation: The part of the query's result we are most interested in is accessible through
End of explanation
"""
search_term = 'malaria'
Ids_file = 'data/' + search_term + '__Ids.pkl.bz2'
if os.path.exists( Ids_file ):
Ids = pickle.load( bz2.BZ2File( Ids_file, 'rb' ) )
else:
# determine the number of hits for the search term
search = Entrez.read( Entrez.esearch( db="pubmed", term=search_term+'[TIAB]', retmax=0 ) )
total = int( search['Count'] )
# `Ids` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Ids_str = []
retrieve_per_query = 10000
for start in range( 0, total, retrieve_per_query ):
print('Fetching IDs of results [%d,%d]' % ( start, start+retrieve_per_query ) )
s = Entrez.read( Entrez.esearch( db="pubmed", term=search_term+'[TIAB]', retstart=start, retmax=retrieve_per_query ) )
Ids_str.extend( s[ u'IdList' ] )
# convert Ids to integers (and ensure that the conversion is reversible)
Ids = [ int(id) for id in Ids_str ]
for (id_str, id_int) in zip(Ids_str, Ids):
if str(id_int) != id_str:
raise Exception('Conversion of PubMed ID %s from string to integer it not reversible.' % id_str )
# Save list of Ids
pickle.dump( Ids, bz2.BZ2File( Ids_file, 'wb' ) )
total = len( Ids )
print('%d documents contain the search term "%s".' % ( total, search_term ) )
"""
Explanation: PubMed IDs dataset
We will now assemble a dataset of research articles containing the search term "malaria" in either their titles or abstracts.
End of explanation
"""
Ids[:5]
"""
Explanation: Taking a look at what we just retrieved, here are the first 5 elements of the Ids list:
End of explanation
"""
example_paper = Entrez.read( Entrez.esummary(db="pubmed", id='27749938') )[0]
def print_dict( p ):
for k,v in p.items():
print(k)
print('\t', v)
print_dict(example_paper)
"""
Explanation: ESummary: Retrieving summaries from primary IDs
To have a look at the kind of metadata we get from a call to Entrez.esummary(), we now fetch the summary of one of Haasdijk's papers (using one of the PubMed IDs we obtained in the previous section:
End of explanation
"""
( example_paper['Title'], example_paper['AuthorList'], int(example_paper['PubDate'][:4]), example_paper['DOI'] )
"""
Explanation: For now, we'll keep just some basic information for each paper: title, list of authors, publication year, and DOI.
In case you are not familiar with the DOI system, know that the paper above can be accessed through the link http://dx.doi.org/10.1007/s12065-012-0071-x (which is http://dx.doi.org/ followed by the paper's DOI).
End of explanation
"""
Summaries_file = 'data/' + search_term + '__Summaries.pkl.bz2'
if os.path.exists( Summaries_file ):
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
else:
# `Summaries` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Summaries = []
retrieve_per_query = 500
print('Fetching Summaries of results: ')
for start in range( 0, len(Ids), retrieve_per_query ):
if (start % 10000 == 0):
print('')
print(start, end='')
else:
print('.', end='')
# build comma separated string with the ids at indexes [start, start+retrieve_per_query)
query_ids = ','.join( [ str(id) for id in Ids[ start : start+retrieve_per_query ] ] )
s = Entrez.read( Entrez.esummary( db="pubmed", id=query_ids ) )
# out of the retrieved data, we will keep only a tuple (title, authors, year, DOI), associated with the paper's id.
# (all values converted to native Python formats)
f = [
( int( p['Id'] ), (
str( p['Title'] ),
[ str(a) for a in p['AuthorList'] ],
int( p['PubDate'][:4] ), # keeps just the publication year
str( p.get('DOI', '') ) # papers for which no DOI is available get an empty string in their place
) )
for p in s
]
Summaries.extend( f )
# Save Summaries, as a dictionary indexed by Ids
Summaries = dict( Summaries )
pickle.dump( Summaries, bz2.BZ2File( Summaries_file, 'wb' ) )
"""
Explanation: Summaries dataset
We are now ready to assemble a dataset containing the summaries of all the paper Ids we previously fetched.
To reduce the memory footprint, and to ensure the saved datasets won't depend on Biopython being installed to be properly loaded, values returned by Entrez.read() are converted to their corresponding native Python types, with str() and int() applied to each field as it is stored.
End of explanation
"""
{ id : Summaries[id] for id in Ids[:3] }
"""
Explanation: Let us take a look at the first 3 retrieved summaries:
End of explanation
"""
q = Entrez.read( Entrez.efetch(db="pubmed", id='27749938', retmode="xml") )
"""
Explanation: EFetch: Downloading full records from Entrez
Entrez.efetch() is the function that will allow us to obtain paper abstracts. Let us start by taking a look at the kind of data it returns when we query PubMed's database.
End of explanation
"""
type(q), len(q)
"""
Explanation: q is a list, with each member corresponding to a queried id. Because here we only queried for one id, its results are then in q[0].
End of explanation
"""
type(q[0]), q[0].keys()
print_dict( q[0][ 'PubmedData' ] )
"""
Explanation: At q[0] we find a dictionary containing two keys, the contents of which we print below.
End of explanation
"""
print_dict( { k:v for k,v in q[0][ 'MedlineCitation' ].items() if k!='Article' } )
print_dict( q[0][ 'MedlineCitation' ][ 'Article' ] )
"""
Explanation: The key 'MedlineCitation' maps into another dictionary. In that dictionary, most of the information is contained under the key 'Article'. To minimize the clutter, below we show the contents of 'MedlineCitation' excluding its 'Article' member, and below that we then show the contents of 'Article'.
End of explanation
"""
{ int(q[0]['MedlineCitation']['PMID']) : str(q[0]['MedlineCitation']['Article']['Abstract']['AbstractText'][0]) }
"""
Explanation: A paper's abstract can therefore be accessed with:
End of explanation
"""
print_dict( Entrez.read( Entrez.efetch(db="pubmed", id='17782550', retmode="xml") )[0]['MedlineCitation']['Article'] )
"""
Explanation: A paper for which no abstract is available will simply not contain the 'Abstract' key in its 'Article' dictionary:
End of explanation
"""
r = Entrez.read( Entrez.efetch(db="pubmed", id='24027805', retmode="xml") )
print_dict( r[0][ 'PubmedBookData' ] )
print_dict( r[0][ 'BookDocument' ] )
"""
Explanation: Some of the ids in our dataset refer to books from the NCBI Bookshelf, a collection of freely available, downloadable, on-line versions of selected biomedical books. For such ids, Entrez.efetch() returns a slightly different structure, where the keys [u'BookDocument', u'PubmedBookData'] take the place of the [u'MedlineCitation', u'PubmedData'] keys we saw above.
Here is an example of the data we obtain for the id corresponding to the book The Social Biology of Microbial Communities:
End of explanation
"""
{ int(r[0]['BookDocument']['PMID']) : str(r[0]['BookDocument']['Abstract']['AbstractText'][0]) }
"""
Explanation: In a book from the NCBI Bookshelf, its abstract can then be accessed as such:
End of explanation
"""
Abstracts_file = 'data/' + search_term + '__Abstracts.pkl.bz2'
import http.client
from collections import deque
if os.path.exists( Abstracts_file ):
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
else:
# `Abstracts` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Abstracts = deque()
retrieve_per_query = 500
print('Fetching Abstracts of results: ')
for start in range( 0, len(Ids), retrieve_per_query ):
if (start % 10000 == 0):
print('')
print(start, end='')
else:
print('.', end='')
# build comma separated string with the ids at indexes [start, start+retrieve_per_query)
query_ids = ','.join( [ str(id) for id in Ids[ start : start+retrieve_per_query ] ] )
# issue requests to the server, until we get the full amount of data we expect
while True:
try:
s = Entrez.read( Entrez.efetch(db="pubmed", id=query_ids, retmode="xml" ) )
except http.client.IncompleteRead:
print('r', end='')
continue
break
i = 0
for p in s:
abstr = ''
if 'MedlineCitation' in p:
pmid = p['MedlineCitation']['PMID']
if 'Abstract' in p['MedlineCitation']['Article']:
abstr = p['MedlineCitation']['Article']['Abstract']['AbstractText'][0]
elif 'BookDocument' in p:
pmid = p['BookDocument']['PMID']
if 'Abstract' in p['BookDocument']:
abstr = p['BookDocument']['Abstract']['AbstractText'][0]
else:
raise Exception('Unrecognized record type, for id %d (keys: %s)' % (Ids[start+i], str(p.keys())) )
Abstracts.append( (int(pmid), str(abstr)) )
i += 1
# Save Abstracts, as a dictionary indexed by Ids
Abstracts = dict( Abstracts )
pickle.dump( Abstracts, bz2.BZ2File( Abstracts_file, 'wb' ) )
"""
Explanation: Abstracts dataset
We can now assemble a dataset mapping paper ids to their abstracts.
End of explanation
"""
Abstracts[27749938]
"""
Explanation: Taking a look at one paper's abstract:
End of explanation
"""
CA_search_term = search_term+'[TIAB] AND PLoS computational biology[JOUR]'
CA_ids = Entrez.read( Entrez.esearch( db="pubmed", term=CA_search_term ) )['IdList']
CA_ids
CA_summ = {
p['Id'] : ( p['Title'], p['AuthorList'], p['PubDate'][:4], p['FullJournalName'], p.get('DOI', '') )
for p in Entrez.read( Entrez.esummary(db="pubmed", id=','.join( CA_ids )) )
}
CA_summ
"""
Explanation: ELink: Searching for related items in NCBI Entrez
To understand how to obtain paper citations with Entrez, we will first assemble a small set of PubMed IDs, and then query for their citations.
To that end, we search here for papers published in the PLOS Computational Biology journal (as before, having also the word "malaria" in either the title or abstract):
End of explanation
"""
CA_citing = {
id : Entrez.read( Entrez.elink(
cmd = "neighbor", # ELink command mode: "neighbor", returns
# a set of UIDs in `db` linked to the input UIDs in `dbfrom`.
dbfrom = "pubmed", # Database containing the input UIDs: PubMed
db = "pmc", # Database from which to retrieve UIDs: PubMed Central
LinkName = "pubmed_pmc_refs", # Name of the Entrez link to retrieve: "pubmed_pmc_refs", gets
# "Full-text articles in the PubMed Central Database that cite the current articles"
from_uid = id # input UIDs
) )
for id in CA_ids
}
CA_citing['22511852']
"""
Explanation: Because we restricted our search to papers in an open-access journal, you can then follow their DOIs to freely access their PDFs at the journal's website.
We will now issue calls to Entrez.elink() using these PubMed IDs, to retrieve the IDs of papers that cite them.
The database from which the IDs will be retrieved is PubMed Central, a free digital database of full-text scientific literature in the biomedical and life sciences.
A complete list of the kinds of links you can retrieve with Entrez.elink() can be found here.
End of explanation
"""
cits = [ l['Id'] for l in CA_citing['22511852'][0]['LinkSetDb'][0]['Link'] ]
cits
"""
Explanation: We have in CA_citing[paper_id][0]['LinkSetDb'][0]['Link'] the list of papers citing paper_id. To get it as just a list of ids, we can do
End of explanation
"""
cits_pm = Entrez.read( Entrez.elink( dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed", from_uid=",".join(cits)) )
cits_pm
ids_map = { pmc_id : link['Id'] for (pmc_id,link) in zip(cits_pm[0]['IdList'], cits_pm[0]['LinkSetDb'][0]['Link']) }
ids_map
"""
Explanation: However, one more step is needed, as what we have now are PubMed Central IDs, and not PubMed IDs. Their conversion can be achieved through an additional call to Entrez.elink():
End of explanation
"""
{ p['Id'] : ( p['Title'], p['AuthorList'], p['PubDate'][:4], p['FullJournalName'], p.get('DOI', '') )
for p in Entrez.read( Entrez.esummary(db="pubmed", id=','.join( ids_map.values() )) )
}
"""
Explanation: And to check these papers:
End of explanation
"""
Citations_file = 'data/' + search_term + '__Citations.pkl.bz2'
Citations = []
"""
Explanation: Citations dataset
We have now seen all the steps required to assemble a dataset of citations to each of the papers in our dataset.
End of explanation
"""
import http.client
if Citations == [] and os.path.exists( Citations_file ):
Citations = pickle.load( bz2.BZ2File( Citations_file, 'rb' ) )
if len(Citations) < len(Ids):
i = len(Citations)
    checkpoint = len(Ids) // 10 + 1 # save to hard drive at every 10% of Ids fetched (integer division, so the modulo check below stays exact)
for pm_id in Ids[i:]: # either starts from index 0, or resumes from where we previously left off
while True:
try:
# query for papers archived in PubMed Central that cite the paper with PubMed ID `pm_id`
c = Entrez.read( Entrez.elink( dbfrom = "pubmed", db="pmc", LinkName = "pubmed_pmc_refs", id=str(pm_id) ) )
c = c[0]['LinkSetDb']
if len(c) == 0:
# no citations found for the current paper
c = []
else:
c = [ l['Id'] for l in c[0]['Link'] ]
# convert citations from PubMed Central IDs to PubMed IDs
p = []
retrieve_per_query = 500
for start in range( 0, len(c), retrieve_per_query ):
query_ids = ','.join( c[start : start+retrieve_per_query] )
r = Entrez.read( Entrez.elink( dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed", from_uid=query_ids ) )
# select the IDs. If no matching PubMed ID was found, [] is returned instead
p.extend( [] if r[0]['LinkSetDb']==[] else [ int(link['Id']) for link in r[0]['LinkSetDb'][0]['Link'] ] )
c = p
except http.client.BadStatusLine:
# Presumably, the server closed the connection before sending a valid response. Retry until we have the data.
print('r')
continue
break
Citations.append( (pm_id, c) )
if (i % 10000 == 0):
print('')
print(i, end='')
if (i % 100 == 0):
print('.', end='')
i += 1
if i % checkpoint == 0:
print('\tsaving at checkpoint', i)
pickle.dump( Citations, bz2.BZ2File( Citations_file, 'wb' ) )
print('\n done.')
# Save Citations, as a dictionary indexed by Ids
Citations = dict( Citations )
pickle.dump( Citations, bz2.BZ2File( Citations_file, 'wb' ) )
"""
Explanation: At least one server query will be issued per paper in Ids. Because NCBI allows for at most 3 queries per second (see here), this dataset will take a long time to assemble. Should you need to interrupt it for some reason, or the connection fail at some point, it is safe to just rerun the cell below until all data is collected.
End of explanation
"""
Citations[24130474]
"""
Explanation: To see that we have indeed obtained the data we expected, you can match the ids below with the ids listed at the end of the last section.
End of explanation
"""
|
jaduimstra/nilmtk | notebooks/experimental/mle.ipynb | apache-2.0 | import numpy as np
import pandas as pd
from os.path import join
from pylab import rcParams
import matplotlib.pyplot as plt
%matplotlib inline
rcParams['figure.figsize'] = (13, 6)
#plt.style.use('ggplot')
from datetime import datetime as datetime2
from datetime import timedelta
import nilmtk
from nilmtk.disaggregate.maximum_likelihood_estimation import MLE
from nilmtk import DataSet, TimeFrame, MeterGroup, HDFDataStore
from scipy.stats import poisson, norm
from sklearn import mixture
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: Maximum Likelihood Estimation algorithm
This script is an example of the Maximum Likelihood Estimation (MLE) algorithm for energy disaggregation.
It is suitable when a single appliance is to be disaggregated rather than many. Besides, the appliance should be mostly resistive for the method to be accurate.
It is based on paired events:
- OnPower: delta value when the appliance is turned on.
- OffPower: delta value when the appliance is turned off.
- Duration: time elapsed between OnPower and OffPower.
Also, to discard many unlikely events, three constraints are set:
- PowerNoise: minimum delta value below which a delta is considered noise.
- PowerPair: maximum difference between OnPower and OffPower (assuming appliances with constant energy consumption).
- TimeWindow: maximum time frame between OnPower and OffPower.
The features above are modeled with Gaussian, Gaussian Mixture or Poisson distributions. For each incoming paired event, the algorithm extracts these three features and evaluates the maximum likelihood of that paired event belonging to a given appliance.
IMPORTS
End of explanation
"""
def get_all_appliances(appliance):
# Filtering by appliances:
print "Fetching " + appliance + " over data loaded to nilmtk."
metergroup = nilmtk.global_meter_group.select_using_appliances(type=appliance)
if len(metergroup.appliances) == 0:
print "None " + appliance + " found on memory."
pass
# Selecting only single meters:
print "Filtering to get one meter for each " + appliance
meters = [meter for meter in metergroup.meters if (len(meter.appliances) == 1)]
metergroup = MeterGroup(meters)
print metergroup
print "Found " + str(len(metergroup.meters)) + " " + appliance
return metergroup
def get_all_trainings(appliance, train):
# Filtering by appliances:
print "Fetching " + appliance + " over data train data."
elecs = []
for building in train.buildings:
print "Building " + str(building) + "..."
elec = train.buildings[building].elec[appliance]
if len(elec.appliances) == 1:
print elec
print "Fetched elec."
elecs.append(elec)
else:
print elec
print "Groundtruth does not exist. Many appliances or None"
metergroup = MeterGroup(elecs)
return metergroup
"""
Explanation: Functions
End of explanation
"""
path = '../../../nilmtk/data/ukdale'
ukdale = train = DataSet(join(path, 'ukdale.h5'))
"""
Explanation: Loading data
End of explanation
"""
train = DataSet(join(path, 'ukdale.h5'))
test = DataSet(join(path, 'ukdale.h5'))
train.set_window(end="17-5-2013")
test.set_window(start="17-5-2013")
#zoom.set_window(start="17-5-2013")
print('loaded ' + str(len(ukdale.buildings)) + ' buildings')
"""
Explanation: And splitting into train and test data
End of explanation
"""
# Appliance to disaggregate:
applianceName = 'kettle'
# Groundtruth from the training data:
metergroup = get_all_trainings(applianceName,train)
"""
Explanation: Getting the training data
The selected appliance must only be trained from ElecMeters where no other appliances are present, so that we can extract the groundtruth.
End of explanation
"""
mle = MLE()
"""
Explanation: MLE algorithm
Training
First, we create the model
End of explanation
"""
# setting parameters in the model:
mle.update(appliance=applianceName, resistive=True, units=('power', 'active'),
           thDelta=1500, thLikelihood=1e-10, powerNoise=50, powerPair=100,
           timeWindow=400, sample_period='10S', sampling_method='first')
# Settings the features parameters by guessing:
mle.onpower = {'name':'gmm', 'model': mixture.GMM(n_components=2)}
mle.offpower = {'name':'gmm', 'model': mixture.GMM(n_components=2)}
mle.duration = {'name':'poisson', 'model': poisson(0)}
"""
Explanation: Then, we update the model parameter with some guessing values.
First guess for the features: Gaussian mixtures for onpower and offpower, and a Poisson distribution for duration.
End of explanation
"""
mle.train(metergroup)
"""
Explanation: Training the model
We train the model with all occurrences of that model of appliance found in the training data
End of explanation
"""
mle.featuresHist_colors()
"""
Explanation: And then we visualize features with featureHist_colors() to see the distribution and how many samples we have for each appliance (same model from different houses).
End of explanation
"""
mle.no_overfitting()
mle.featuresHist_colors()
"""
Explanation: Sometimes we have more events from some houses than others, as we see in the figure above. Therefore, we need to crop the data to keep the same number of samples for every house.
End of explanation
"""
mle.featuresHist()
"""
Explanation: There is another visualization tool to see how model distributions fit over the data:
End of explanation
"""
mle.duration = {'name':'gmm', 'model': mixture.GMM(n_components=10)}
"""
Explanation: Onpower and Offpower seem to fit the data well, but we need to change the model for duration.
End of explanation
"""
mle.train(metergroup)
mle.no_overfitting()
mle.featuresHist()
"""
Explanation: And then we retrain the model and use no_overfitting
End of explanation
"""
mle.check_cdfIntegrity(step=10)
"""
Explanation: Once we have the final model distribution for each feature, we need to check the integrity of each distribution: each CDF has to be bounded by one.
End of explanation
"""
# Building to disaggregate:
building = 2
mains = test.buildings[building].elec.mains()
# File to store the disaggregation
filename= '/home/energos/Escritorio/ukdale-disag-ml.h5'
output = HDFDataStore(filename, 'w')
"""
Explanation: Disaggregation
End of explanation
"""
mle.disaggregate(mains, output)
"""
Explanation: The next step will take a few minutes
End of explanation
"""
## Groundtruth
kettle = test.buildings[building].elec.select_using_appliances(type=applianceName)
output.load(mains.key).next().plot()
kettle.plot()
output.close()
"""
Explanation: We also receive some information, such as the total number of events, the number of onpower events, the number of on-events that have not been paired, and the number of chunks disaggregated
Comparing disaggregation with the groundtruth
End of explanation
"""
|
rashikaranpuria/Machine-Learning-Specialization | Clustering_&_Retrieval/Week6/.ipynb_checkpoints/6_hierarchical_clustering_blank-checkpoint.ipynb | mit | import graphlab
import matplotlib.pyplot as plt
import numpy as np
import sys
import os
import time
from scipy.sparse import csr_matrix
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
%matplotlib inline
"""
Explanation: Hierarchical Clustering
Hierarchical clustering refers to a class of clustering methods that seek to build a hierarchy of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import packages
End of explanation
"""
wiki = graphlab.SFrame('people_wiki.gl/')
"""
Explanation: Load the Wikipedia dataset
End of explanation
"""
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
"""
Explanation: As we did in previous assignments, let's extract the TF-IDF features:
End of explanation
"""
from em_utilities import sframe_to_scipy # converter
# This will take about a minute or two.
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
"""
Explanation: To run k-means on this dataset, we should convert the data matrix into a sparse matrix.
End of explanation
"""
from sklearn.preprocessing import normalize
tf_idf = normalize(tf_idf)
"""
Explanation: To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.
End of explanation
"""
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
'''cluster: should be a dictionary containing the following keys
* dataframe: original dataframe
* matrix: same data, in matrix format
* centroid: centroid for this particular cluster'''
data_matrix = cluster['matrix']
dataframe = cluster['dataframe']
# Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1)
kmeans_model.fit(data_matrix)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
# Divide the data matrix into two parts using the cluster assignments.
data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
data_matrix[cluster_assignment==1]
# Divide the dataframe into two parts, again using the cluster assignments.
cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion
dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
dataframe[cluster_assignment_sa==1]
# Package relevant variables for the child clusters
cluster_left_child = {'matrix': data_matrix_left_child,
'dataframe': dataframe_left_child,
'centroid': centroids[0]}
cluster_right_child = {'matrix': data_matrix_right_child,
'dataframe': dataframe_right_child,
'centroid': centroids[1]}
return (cluster_left_child, cluster_right_child)
"""
Explanation: Bipartition the Wikipedia dataset using k-means
Recall our workflow for clustering text data with k-means:
Load the dataframe containing a dataset, such as the Wikipedia text dataset.
Extract the data matrix from the dataframe.
Run k-means on the data matrix with some value of k.
Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).
Let us modify the workflow to perform bipartitioning:
Load the dataframe containing a dataset, such as the Wikipedia text dataset.
Extract the data matrix from the dataframe.
Run k-means on the data matrix with k=2.
Divide the data matrix into two parts using the cluster assignments.
Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.
Visualize the bipartition of data.
We'd like to be able to repeat Steps 3-6 multiple times to produce a hierarchy of clusters such as the following:
(root)
|
+------------+-------------+
| |
Cluster Cluster
+------+-----+ +------+-----+
| | | |
Cluster Cluster Cluster Cluster
Each parent cluster is bipartitioned to produce two child clusters. At the very top is the root cluster, which consists of the entire dataset.
Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:
dataframe: a subset of the original dataframe that correspond to member rows of the cluster
matrix: same set of rows, stored in sparse matrix format
centroid: the centroid of the cluster (not applicable for the root cluster)
Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).
End of explanation
"""
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=8, seed=1)
"""
Explanation: The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.
Note. For the purpose of the assignment, we set an explicit seed (seed=1) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.
End of explanation
"""
left_child
"""
Explanation: Let's examine the contents of one of the two clusters, which we call the left_child, referring to the tree visualization above.
End of explanation
"""
right_child
"""
Explanation: And here is the content of the other cluster we named right_child.
End of explanation
"""
def display_single_tf_idf_cluster(cluster, map_index_to_word):
    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''
wiki_subset = cluster['dataframe']
tf_idf_subset = cluster['matrix']
centroid = cluster['centroid']
# Print top 5 words with largest TF-IDF weights in the cluster
idx = centroid.argsort()[::-1]
for i in xrange(5):
print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),
print('')
# Compute distances from the centroid to all data points in the cluster.
distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
# compute nearest neighbors of the centroid within the cluster.
nearest_neighbors = distances.argsort()
# For 8 nearest neighbors, print the title as well as first 180 characters of text.
# Wrap the text at 80-character mark.
for i in xrange(8):
text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])
print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],
distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))
print('')
"""
Explanation: Visualize the bipartition
We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.
End of explanation
"""
display_single_tf_idf_cluster(left_child, map_index_to_word)
display_single_tf_idf_cluster(right_child, map_index_to_word)
"""
Explanation: Let's visualize the two child clusters:
End of explanation
"""
athletes = left_child
non_athletes = right_child
"""
Explanation: The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows:
Wikipedia
+
|
+--------------------------+--------------------+
| |
+ +
Athletes Non-athletes
Is this hierarchy good enough? When building a hierarchy of clusters, we must keep our particular application in mind. For instance, we might want to build a directory for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the athletes and non-athletes clusters.
Perform recursive bipartitioning
Cluster of athletes
To help identify the clusters we've built so far, let's give them easy-to-read aliases:
End of explanation
"""
# Bipartition the cluster of athletes
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=8, seed=1)
"""
Explanation: Using the bipartition function, we produce two child clusters of the athlete cluster:
End of explanation
"""
display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)
"""
Explanation: The left child cluster mainly consists of baseball players:
End of explanation
"""
display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)
"""
Explanation: On the other hand, the right child cluster is a mix of football players and ice hockey players:
End of explanation
"""
baseball = left_child_athletes
ice_hockey_football = right_child_athletes
"""
Explanation: Note. Concerning use of "football"
The occurrences of the word "football" above refer to association football. This sports is also known as "soccer" in United States (to avoid confusion with American football). We will use "football" throughout when discussing topic representation.
Our hierarchy of clusters now looks like this:
Wikipedia
+
|
+--------------------------+--------------------+
| |
+ +
Athletes Non-athletes
+
|
+-----------+--------+
| |
| +
+ football/
baseball ice hockey
Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, we would like to achieve similar level of granularity for all clusters.
Notice that the right child cluster is more coarse than the left child cluster. The right cluster posseses a greater variety of topics than the left (ice hockey/football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters.
Let's give the clusters aliases as well:
End of explanation
"""
# Bipartition the cluster of non-athletes
left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=8, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)
"""
Explanation: Cluster of ice hockey players and football players
In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights.
Quiz Question. Bipartition the cluster of ice hockey and football players. Which of the two child clusters should be further subdivided?
Note. To achieve consistent results, use the arguments maxiter=100, num_runs=8, seed=1 when calling the bipartition function.
The left child cluster
The right child cluster
Caution. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.
If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster. Thus, we may be misled if we judge the purity of clusters solely by their top documents and words.
Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization. We may need to subdivide further to discover new topics. For instance, subdividing the ice_hockey_football cluster led to the appearance of golf.
Quiz Question. Which diagram best describes the hierarchy right after splitting the ice_hockey_football cluster? Refer to the quiz form for the diagrams.
Cluster of non-athletes
Now let us subdivide the cluster of non-athletes.
End of explanation
"""
scholars_politicians_etc = left_child_non_athletes
musicians_artists_etc = right_child_non_athletes
"""
Explanation: The first cluster consists of scholars, politicians, and government officials whereas the second consists of musicians, artists, and actors. Run the following code cell to make convenient aliases for the clusters.
End of explanation
"""
|
gregstarr/PyGPS | Examples/.ipynb_checkpoints/Combining Pseudorange and Phase-checkpoint.ipynb | agpl-3.0 | files = glob("/home/greg/Documents/Summer Research/rinex files/ma*")
poop=rinexobs(files[6])
plt.figure(figsize=(14,14))
ax1 = plt.subplot(211)
ax1.xaxis.set_major_formatter(fmt)
plt.plot(2.85*(poop[:,23,'P2','data']*1.0E9/3.0E8-poop[:,23,'C1','data']*1.0E9/3.0E8)[10:],
'.',markersize=3,label='pr tec')
plt.plot(2.85E9*((poop[:,23,'L1','data'])/f1-(poop[:,23,'L2','data'])/f2)[10:],
'.',markersize=3,label='ph tec')
plt.title('mah13 sv23, biased')
plt.xlabel('time')
plt.ylabel('TECu')
plt.legend()
plt.grid()
plt.show()
"""
Explanation: These graphs have all been produced according to appendix A of "Effect of GPS System Biases on Differential
Group Delay Measurements" available here: http://www.dtic.mil/dtic/tr/fulltext/u2/a220284.pdf. It is referenced in Anthea Coster's 1992 paper "Ionospheric Monitoring System". Equation 2.3 in the paper states that TEC in TECu is equal to the propogation time difference between the two frequencies. In order to find the propogation time, I divided the distance by the speed of light. I did the same thing with the phase but first I had to multiply the cycles (which is available from the rinex file) by the wavelength.
Basically what I want to explore with this notebook is combining pseudorange and phase data to get better TEC plots. Pseudorange is an absolute measurement but is subject to a lot of noise. Phase is low noise but has an unknown cycle ambiguity. According to the previously mentioned paper, the cycle ambiguity in phase data can be estimated by finding the average difference between the pseudorange and phase. In this script I converted pseudorange into cycles by dividing by wavelength. Then I calculated TEC according to equation 2.3 and plotted it next to the pseudorange calculated TEC and the biased phase calculated TEC.
Shown above is only a short time period free of cycle slips and loss of lock. In order to implement this averaging strategy, I think you have to re-average every time there is a cycle slip or missing data.
End of explanation
"""
sl=200
plt.figure(figsize=(15,15))
ax1=plt.subplot(211)
ax1.xaxis.set_major_formatter(fmt)
plt.plot(2.85E9*(poop[:,23,'P2','data']/3.0E8
-poop[:,23,'C1','data']/3.0E8),'b.',label='prtec',markersize=3)
for i in range(int(len(poop[:,23,'L1','data'])/sl)):
phtec = 2.85E9*(poop[poop.labels[sl*i:sl*(i+1)],23,'L1','data']/f1
-poop[poop.labels[sl*i:sl*(i+1)],23,'L2','data']/f2)
prtec = 2.85E9*(poop[poop.labels[sl*i:sl*(i+1)],23,'P2','data']/3.0E8
-poop[poop.labels[sl*i:sl*(i+1)],23,'C1','data']/3.0E8)
b = np.average((phtec-prtec)[np.logical_not(np.isnan(phtec-prtec))])
plt.plot(phtec-b,'r-',linewidth=3,label='')
plt.axis([poop.labels[10],poop.labels[10000],-50,50])
plt.title('bias corrected phase data')
plt.xlabel('time')
plt.ylabel('TECu')
plt.grid()
plt.legend()
plt.show()
"""
Explanation: This plot is uncorrected; it is a remake of the plot in Anthea's email on Wed, Jun 15, 2016 at 7:06 AM. The phase-calculated TEC appears to match the pseudorange-calculated TEC in parts, but other parts are offset by a bias. The next script and plot show how the phase data can be shifted to line up with the average of the pseudorange data, according to how large the slice value is. Basically, it slices the whole data set into ranges of a specifiable number of points and does the averaging on each range individually in order to fix phase-lock cycle slips. This isn't the best way of doing it; the program should check for loss of lock and shift the phase data in chunks based on that. I am going to read more about cycle slips now.
End of explanation
"""
f1 = 1575.42E6
f2 = 1227.6E6
svn = 23
L1 = -1*3.0E8*poop[:,svn,'L1','data']/f1 #(1a)
L2 = -1*3.0E8*poop[:,svn,'L2','data']/f2 #(1b)
P1 = poop[:,svn,'C1','data'] #(1c)
P2 = poop[:,svn,'P2','data'] #(1d)
#wide lane combination
wld = 3.0E8/(f1-f2)
Ld = (f1*L1-f2*L2)/(f1-f2) #(3)
prd = (f1*P1+f2*P2)/(f1+f2) #(4)
bd = (Ld-prd)/wld #(5)
#wide lane cycle slip detection
bdmean = bd[1]
rms = 0
g=np.empty((bd.shape))
g[:]=np.nan
p=np.empty((bd.shape))
p[:]=np.nan
for i in range(2,len(bd)):
if not np.isnan(bd[i]):
g[i] = abs(bd[i]-bdmean)
p[i] = np.sqrt(rms)
rms = rms+((bd[i]-bdmean)**2-rms)/(i-1) #(8b)
bdmean = bdmean+(bd[i]-bdmean)/(i-1) #(8a)
plt.figure(figsize=(12,12))
plt.subplot(211).xaxis.set_major_formatter(fmt)
plt.plot(bd.keys(),g,label='bd[i]-bias average')
plt.plot(bd.keys(),4*p,label='rms')
plt.legend()
plt.grid()
plt.title('if current bd>4*rms then it is a wide-lane cycle slip')
plt.ylabel('cycles')
plt.xlabel('time')
plt.show()
#ionospheric combination
LI = L1-L2 #(6)
PI = P2-P1 #(7)
#ionospheric cycle slip detection
plt.figure(figsize=(10,10))
# get x and y vectors
mask=~np.isnan(PI)
x = np.arange(len(PI[mask]))
y = PI[mask]
# calculate polynomial
z = np.polyfit(x, y, 6)
f = np.poly1d(z)
# calculate new x's and y's
x_new = np.linspace(x[0], x[-1], len(x))
Q = f(x_new)
residual = LI[mask]-Q
plt.plot(residual[1:])
plt.show()
for i in range(1,len(residual)):
if(residual[i]-residual[i-1]>1):
print(i,residual[i]-residual[i-1])
"""
Explanation: try some stuff out from "An Automatic Editing Algorithm for GPS Data" by Blewitt
End of explanation
"""
|
tiagoft/curso_audio | afinacao.ipynb | mit | referencia_inicial = 440.0 # Hz
frequencias = [] # Esta lista recebera todas as frequencias de uma escala
f = referencia_inicial
while len(frequencias) < 12:
if f > (referencia_inicial * 2):
f /= 2.
frequencias.append(f)
f *= (3/2.)
frequencias.sort()
print frequencias
print f
"""
Explanation: Tuning and Musical Notes
Objective
After this unit, the student will be able to apply mathematical models to relate the perceptual phenomenon of pitch, the physical phenomenon of fundamental frequency, and the cultural phenomenon of musical notes.
Prerequisites
To follow this unit properly, the student should be comfortable with the following ideas:
1. Musical notes can be organized into octaves,
1. The names of the musical notes repeat in each octave,
1. Some notes are referred to using a sharp or flat modifier,
1. When two notes sound together, the resulting sound can be dissonant or consonant.
How to tune a harpsichord
The concept of tuning is closely tied to the concept of intervals. An interval is the perceptual difference in pitch between two tones sounding simultaneously. Measured, absolute tones (for example: a sinusoid at 440 Hz) only became possible after the invention of modern tone-generating instruments and of precise measurement devices. Before these technological advances, instruments could only be tuned using the perception of intervals.
In ancient Greece there were already wind and string instruments (see the flute of Bacchus or the lyre of Apollo, for example). This means there was some way of tuning them (especially the string instrument, which quickly goes out of tune with heat). The Greeks therefore already understood that a string produces a higher sound when stretched more tightly, and a lower sound when stretched with less force.
Pythagorean tuning
Pythagoras is said to be one of the pioneers in systematizing a way of tuning instruments. He observed that two strings of the same material, stretched under the same tension but with different lengths, produce sounds of different pitches. We know today that each different pitch corresponds to a different fundamental frequency.
The simultaneous sound of the two strings produces an interval. Pythagoras observed that the resulting interval was much more pleasing to the ear (that is, consonant) when the ratio between the string lengths was 1 to 2. In this case, we know that the fundamental frequency of one vibration is half of the other, and this interval is called an octave. The second most pleasing interval occurred when the string lengths were in a 2 to 3 ratio. In this case, an interval called a fifth arises.
Pythagoras then defined the following method to find the pitches of a scale:
1. Start with a reference tone.
1. Tune the next note a fifth above the current one.
1. If the interval relative to the reference exceeds an octave, move down one octave.
1. If all the notes of an octave have been tuned, stop.
1. Use the newly obtained note as the reference and continue from step 2.
End of explanation
"""
print(100*(f - (referencia_inicial * 2)) / (referencia_inicial*2))
"""
Explanation: Note that an interesting phenomenon appears here. The note that should have a frequency of 880.0 Hz (an octave above the initial reference) is in fact computed as roughly 892 Hz. This represents an error we can calculate, in the form:
End of explanation
"""
frequencias_t = [] # this list will hold all the frequencies of one scale
ft = referencia_inicial
while len(frequencias_t) < 12:
    frequencias_t.append(ft)
    ft *= 2**(1/12.)
frequencias_t.sort()
print(frequencias_t)
print(ft)
"""
Explanation: This error, of 1.36%, is well known and is called the Pythagorean comma. It represents a kind of accumulated perceptual error of the tuning system. Note the paradox: an accumulated perceptual error causes a sensation of dissonance even though the entire tuning uses only perfectly consonant intervals. It is a paradox that musicians have had to deal with throughout history.
In fact, this is a mathematically inevitable phenomenon. The series of frequencies generated by Pythagorean tuning has the form:
$$f \times \left(\frac{3}{2}\right)^N,$$
excluding the adjustment octaves.
An octave interval is generated by multiplying the initial frequency by a power of two and, as we know, there is no case in which a power of two equals a power of three.
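As a quick numerical check (a small sketch added here, not part of the original code cells), twelve perfect fifths overshoot seven octaves by exactly this comma:

```python
# twelve just fifths vs. seven octaves: their ratio is the Pythagorean comma
twelve_fifths = (3/2.) ** 12   # 129.746...
seven_octaves = 2 ** 7         # 128
comma = twelve_fifths / seven_octaves
print(comma)                   # ~1.0136, i.e. about 1.36% sharp
```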
Names of the musical notes
For historical and purely cultural reasons, the civilizations descended from ancient Greece used twelve notes as the parts of a scale. There could be more (as in some Asian cultures) or fewer (as in some African cultures). The notes received their names (do, re, mi, fa, sol, la, si) based on a Latin poem called Ut Queant Laxis, which is a hymn to John the Baptist. It is also common to use letter notation (C, D, E, F, G, A, B) and the flat (Cb, Db, Eb, etc.) and sharp (C#, D#, E#, etc.) signs to denote accidentals.
Circle of fifths
The Pythagorean tuning system implicitly determines a path through the musical notes of a scale that has historically been called the circle of fifths. It is a cycle containing all the musical notes (excluding the octave), such that one step in either direction traverses an interval of a fifth:
C, G, D, A, E, B, F#, C#, G#, D#, A#, F, C
Equal temperament tuning
The relation between the vibration frequency of a string and the sensation of pitch associated with it was first studied (at least in these terms) by Vincenzo Galilei, in the 16th century. This knowledge made it possible to use language closer to contemporary usage when referring to the frequency of vibrating strings. Even so, there were still no instruments to detect frequencies precisely, and therefore tuning systems still depended on perceptual interval relations.
A relative tuning, such as the Pythagorean one, varies with the key. This means that if a harpsichord is tuned starting from a C, it will be tuned differently than if it starts from a D. Thus, keyboard instruments had to be retuned whenever a piece was played in another key. Throughout history, several proposals appeared showing alternative paths for tuning, redistributing the comma so that it accumulates in rarely used notes. However, especially with the need to play the harpsichord across a vast repertoire without long pauses, the historical-cultural process consolidated equal temperament tuning.
Equal temperament had been known since Vincenzo Galilei, but gained great momentum after the appearance of the pieces in Bach's Well-Tempered Clavier series. Equal temperament distributes the comma equally across all the notes of the scale, so that none sounds especially out of tune. The cost is that all notes sound slightly out of tune.
In equal temperament, the ratio between the frequencies of two consecutive notes is equal to $\sqrt[12]{2}$, so that after 12 notes the frequency obtained will be $f \times (\sqrt[12]{2})^{12} = f \times 2$.
End of explanation
"""
intervalos_diatonica = [2, 3, 4, 5, 6, 7]
intervalos_cromatica = [2, 4, 5, 7, 9, 11]
razoes = [9/8., 5/4., 4/3., 3/2., 5/3., 15/8.]
for i in range(len(intervalos_diatonica)):
    frequencia_ideal = referencia_inicial * razoes[i]
    frequencia_pitagorica = frequencias[intervalos_cromatica[i]]
    frequencia_temperada = frequencias_t[intervalos_cromatica[i]]
    erro_pitagorica = 100*(frequencia_pitagorica - frequencia_ideal) / frequencia_ideal
    erro_temperada = 100*(frequencia_temperada - frequencia_ideal) / frequencia_ideal
    print("Interval:", intervalos_diatonica[i])
    print("Pythagorean error:", erro_pitagorica)
    print("Tempered error:", erro_temperada)
"""
Explanation: How out of tune is a tuning system?
In the following experiment, we compute how out of tune each tuning system is relative to the intervals of the diatonic scale. It is important to remember that these intervals were defined over the course of a historical process, not as the result of a calculation. We will use the following:
| Interval | Frequency ratio |
|:-----------:|:------------:|
| II | 9/8 |
| III | 5/4 |
| IV | 4/3 |
| V | 3/2 |
| VI | 5/3 |
| VII | 15/8 |
End of explanation
"""
|
suresh/notes | python/In Depth - Kernel Density Estimation.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
import pandas as pd
"""
Explanation: Gaussian Mixture Models (GMM) are a kind of hybrid between a clustering estimator and a density estimator. A density estimator is an algorithm which takes a D-dimensional dataset and produces an estimate of the D-dimensional probability distribution which that data is drawn from. The GMM algorithm accomplishes this by representing the density as a weighted sum of Gaussian distributions.
Kernel density estimation (KDE) in some sense takes the mixture of Gaussians to its logical extreme: it uses a mixture consisting of one Gaussian component per point, resulting in an essentially non-parametric estimator of density.
End of explanation
"""
def make_data(N, f=0.3, rseed=1087):
rand = np.random.RandomState(rseed)
x = rand.randn(N)
x[int(f*N):] += 5
return x
x = make_data(1000)
"""
Explanation: Motivation for KDE - Histograms
A density estimator is an algorithm which seeks to model the probability distribution that generated a dataset. This is simplest to see in 1-dimensional data, as the histogram. A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner.
End of explanation
"""
hist = plt.hist(x, bins=30, normed=True)
"""
Explanation: A standard count-based histogram can be created with the plt.hist() function. The normed parameter of this function makes the heights of the bars reflect probability density:
End of explanation
"""
density, bins, patches = hist
widths = bins[1:] - bins[:-1]
(density * widths).sum()
"""
Explanation: This histogram is equally binned, hence the normalization simply changes the scale on the y-axis, keeping the shape of the histogram constant. normed keeps the total area under the histogram equal to 1, as we can confirm below:
End of explanation
"""
x = make_data(20)
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharex=True, sharey=True,
subplot_kw = {'xlim': (-4, 9), 'ylim': (-0.02, 0.3)})
fig.subplots_adjust(wspace=0.05)
for i, offset in enumerate([0.0, 0.6]):
ax[i].hist(x, bins=bins+offset, normed=True)
ax[i].plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
"""
Explanation: One problem with the histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.
In the following example of 20 points, the choice of bins can lead to an entirely different interpretation of the data.
End of explanation
"""
fig, ax = plt.subplots()
bins = np.arange(-3, 8)
ax.plot(x, np.full_like(x, -0.1), '|k',
markeredgewidth=1)
for count, edge in zip(*np.histogram(x, bins)):
for i in range(count):
ax.add_patch(plt.Rectangle((edge, i), 1, 1,
alpha=0.5))
ax.set_xlim(-4, 8)
ax.set_ylim(-0.2, 8)
"""
Explanation: We can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset. Let's view this in the following chart:
End of explanation
"""
x_d = np.linspace(-4, 8, 2000)
density = sum((abs(xi - x_d) < 0.5) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 8]);
x_d[:50]
"""
Explanation: The difference between the two binnings comes from the fact that the height of the block stack often reflects not the actual density of points nearby, but coincidences of how the bins align with the data points. This mis-alignment between points and their blocks is a potential cause of the poor histogram results.
What if, instead of stacking the blocks aligned with the bins, we were to stack the blocks aligned with the points they represent? If we do this the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.
End of explanation
"""
from scipy.stats import norm
x_d = np.linspace(-4, 8, 1000)
density = sum(norm(xi).pdf(x_d) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 5]);
"""
Explanation: Rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data. In order to smooth them out, we might decide to replace the block at each location with a smooth function, like a Gaussian. Let's use a standard normal curve at each point instead of a block:
End of explanation
"""
from sklearn.neighbors import KernelDensity
# instantiate and fit the KDE model
kde = KernelDensity(bandwidth=1.0, kernel='gaussian')
kde.fit(x[:, None])
# score samples returns the log of the probability density
logprob = kde.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
plt.ylim(-0.02, 0.30);
"""
Explanation: This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).
Kernel Density Estimation in Practice
The free parameters of kernel density estimation are the kernel, which specifies the shape of the distribution placed at each point, and the kernel bandwidth, which controls the size of the kernel at each point. Scikit-Learn has a choice of 6 kernels.
End of explanation
"""
KernelDensity().get_params().keys()
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import LeaveOneOut
bandwidths = 10 ** np.linspace(-1, 1, 100)
grid = GridSearchCV(KernelDensity(kernel='gaussian'), {'bandwidth': bandwidths}, cv=LeaveOneOut(len(x)))
grid.fit(x[:, None])
grid.best_params_
"""
Explanation: Selecting the bandwidth via cross-validation
The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias-variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (over-fitting), where the presence or absence of a single point makes a large difference.
In machine learning contexts, we've seen that such hyperparameter tuning often is done empirically via a cross-validation approach.
End of explanation
"""
|
DawesLab/LabNotebooks | Double Slit Correlation Model.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sp
from numpy import pi, sin, cos, linspace, exp, real, imag, abs, conj, meshgrid, log, log10, angle, zeros, complex128, random
from numpy.fft import fft, fftshift, ifft
from mpl_toolkits.mplot3d import axes3d
import BeamOptics as bopt
%matplotlib inline
b=.08*1e-3 # the slit width
a=.5*1e-3 # the slit spacing
k=2*pi/(795*1e-9) # longitudinal wavenumber
wt=0 # let time be zero
C=1 # unit amplitude
L=1.8 # distance from slits to CCD
d=.016 # distance from signal to LO at upstream end (used to calculate k_perp)
ccdwidth = 1300 # number of pixels
pixwidth = 20e-6 # pixel width (in meters)
y = linspace(-pixwidth*ccdwidth/2,pixwidth*ccdwidth/2,ccdwidth)
# define the various double slit fields and LO:
def alpha(y,a):
return k*a*y/(2*L)
def beta(y,b):
return k*b*y/(2*L)
def E_ds(y,a,b):
""" Double-slit field """
# From Hecht p 458:
#return b*C*(sin(beta(y)) / beta(y)) * (sin(wt-k*L) + sin(wt-k*L+2*alpha(y)))
# drop the time-dep term as it will average away:
return 2*b*C*(sin(beta(y,b)) / beta(y,b)) * cos(alpha(y,a)) #* sin(wt - k*L + alpha(y))
def E_dg(y,a,b):
""" Double gaussian field """
# The width needs to be small enough to see interference
# otherwise the beam doesn't diffract and shows no interference.
# We're using b for the gaussian width (i.e. equal to the slit width)
w=b
#return C*exp(1j*k*0.1*d*y/L)
return 5e-3*(bopt.gaussian_beam(0,y-a/2,L,E0=1,wavelambda=795e-9,w0=w,k=[0,0,k]) +
bopt.gaussian_beam(0,y+a/2,L,E0=1,wavelambda=795e-9,w0=w,k=[0,0,k]))
def E_lo(y,d):
"""Plane-wave LO beam incident at small angle, transverse wavenumber k*d*y/L"""
return C*exp(-1j*k*d*y/L)
"""
Explanation: Double-slit correlation model
Based on Double Slit Model notebook, extended to model correlations with phase variation
End of explanation
"""
def plotFFT(d,a,b):
"""Single function version of generating the FFT output"""
TotalField = E_dg(y,a,b)+E_lo(y,d)
TotalIntensity=TotalField*TotalField.conj()
plt.plot(abs(fft(TotalIntensity)),".-")
plt.ylim([0,1e-2])
plt.xlim([0,650])
plt.title("FFT output")
plotFFT(d=0.046,a=0.5e-3,b=0.08e-3)
"""
Explanation: Define a single function to explore the FFT:
End of explanation
"""
# bopt.gaussian_beam(x, y, z, E0, wavelambda, w0, k)
# set to evaluate gaussian at L (full distance to CCD) with waist width of 2 cm
# using d=0.046 for agreement with experiment
d=0.046
E_lo_gauss = bopt.gaussian_beam(0,y,L,E0=1,wavelambda=795e-9,w0=0.02,k=[0,k*d/L,k])
frames = 59
rounds = 20
drift_type= 3
# SG I made a few drift modes to model the phase drift that would be present in the lab
# drift mode two appears to be the most similar to the phase shifts we observe in the lab
time=linspace(0,2*pi,rounds*frames)
phase=[]
if drift_type == 0:
phase= [sin(t) for t in time]
#mode 0 is just a sine wave in time
elif drift_type == 1:
phase= [sin(t+random.randn()/2) for t in time]
#phase= [sin(t)+random.randn()/2 for t in time]
#mode 1 is a sine wave with some randomness added to each data point
elif drift_type == 2:
phase=[0]
for i in range(len(time)-1):
phase.append(phase[-1]+random.randn()/4*sin(time[i]))
#mode 2 is a sine wave with some randomness added to each data point, and also considering
#the location of the previous data point
elif drift_type == 3:
phase=[0]
for i in range(len(time)-1):
phase.append(phase[-1]+0.1*(random.randn()))
    #mode 3 is a random walk: each step adds Gaussian noise to the
    #previous phase value, with no underlying sine wave
raw_intensity_data = zeros([1300,frames,rounds],dtype=complex128)
scaled = zeros([1300,frames,rounds],dtype=complex128)
i=0
for r in range(rounds):
for f in range(frames):
TotalField = E_dg(y,a,b)*exp(-1j*phase[i]) + E_lo_gauss #adds the appropriate phase
#TotalField = E_dg(y,a,b) + E_lo_gauss
TotalIntensity = TotalField * TotalField.conj()
raw_intensity_data[:,f,r] = TotalIntensity
scaled[:,f,r]=fft(TotalIntensity)
i=i+1 #increases index
#checking how phase moves around
plt.polar(phase,time,'-')
plt.title("phase shift with (simulated) time")
plt.plot((np.unwrap(angle(scaled[461,:,:].flatten("F")))))
plt.plot((np.unwrap(angle(scaled[470,:,:].flatten("F")))))
#plt.ylim(0,1e-2)
#TODO -unwrapping the phase (numpy)
plt.plot(abs(fft(TotalIntensity)),".-")
print(TotalIntensity.shape)
plt.ylim([0,0.01]) # Had to lower the LO power quite a bit, and then zoom way in.
plt.xlim([430,500])
"""
Explanation: Replace with Gaussian LO: import gaussian beam function, and repeat:
End of explanation
"""
mode_of_interest = 440
mode_offset = 300
range_to_analyze = 300
# Calculate the correlation matrix between phase of each mode.
modes = range(0,range_to_analyze)
PearsonPhase = np.zeros((range_to_analyze,range_to_analyze))
for m in modes:
output = scaled[m+mode_offset,:,:].flatten('F') # Choose the mode to analyze
x = np.angle(output)
for l in modes:
#SG added np.unwrap call to the angle
Pearson, p = sp.pearsonr(np.unwrap(np.angle(scaled[l+mode_offset].flatten('F'))), x)
if (m==l):
PearsonPhase[m,l] = 0 #AMCD Null the 1.0 auto-correlation
else:
PearsonPhase[m,l] = Pearson
plt.imshow(PearsonPhase,interpolation='none')
plt.title("Phase")
print(type(PearsonPhase))
print("max value =",np.amax(PearsonPhase))
plt.imshow(PearsonPhase,interpolation='none')
plt.title("Phase")
print("max value =",np.amax(PearsonPhase))
"""
Explanation: Adding different phase drifts to individual modes
original signal -> FFT ->
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_lcmv_beamformer.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
"""
Explanation: Compute LCMV beamformer on evoked data
Compute LCMV beamformer solutions on an evoked dataset for three different
choices of source orientation and store the solutions in stc files for
visualisation.
End of explanation
"""
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads', selection=left_temporal_channels)
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=True,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
plt.close('all')
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation', 'Normal orientation', 'Max-power '
'orientation']
colors = ['b', 'k', 'r']
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.05,
pick_ori=pick_ori)
# View activation time-series
label = mne.read_label(fname_label)
stc_label = stc.in_label(label)
plt.plot(1e3 * stc_label.times, np.mean(stc_label.data, axis=0), color,
hold=True, label=desc)
plt.xlabel('Time (ms)')
plt.ylabel('LCMV value')
plt.ylim(-0.8, 2.2)
plt.title('LCMV in %s' % label_name)
plt.legend()
plt.show()
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
"""
Explanation: Get epochs
End of explanation
"""
|
sdpython/actuariat_python | _doc/notebooks/sessions/seance5_approche_fonctionnelle_enonce.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pyensae
from pyquickhelper.helpgen import NbImage
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
"""
Explanation: Data, functional approaches - problem statement
The functional approach is a way of processing data while keeping only a small part of it in memory. Broadly speaking, it applies to every computation that can be expressed in the SQL language. This notebook uses data from a mortality table extracted from the mortality tables from 1960 to 2010 (the link is broken since data-publica no longer provides this data; the notebook retrieves a copy), obtained with the function table_mortalite_euro_stat.
End of explanation
"""
it = iter([0,1,2,3,4,5,6,7,8])
print(it, type(it))
"""
Explanation: Iterator, Generator
iterator
The notion of an iterator is essential in this kind of functional approach. An iterator walks through the elements of a collection. This is the case for the range function.
End of explanation
"""
[0,1,2,3,4,5,6,7,8]
"""
Explanation: An iterator must be distinguished from a list, which is a container.
End of explanation
"""
import sys
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8])))
print(sys.getsizeof(iter([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14])))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8]))
print(sys.getsizeof([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14]))
"""
Explanation: To convince ourselves, we compare the size of an iterator with that of a list: the size of the iterator does not change regardless of the list, while the size of the list grows with the number of elements it contains.
End of explanation
"""
it = iter([0,1,2,3,4,5,6,7,8])
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
"""
Explanation: An iterator can do only one thing: move to the next element, raising a StopIteration exception when it reaches the end.
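As a small added aside (not in the original notebook), the StopIteration can be caught explicitly, or avoided entirely by passing a default value to next:

```python
it = iter([0, 1])
print(next(it))  # 0
print(next(it))  # 1
try:
    next(it)  # the iterator is exhausted
except StopIteration:
    print("end of iteration")
# next also accepts a default value and then never raises
print(next(iter([]), "default"))
```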
End of explanation
"""
def genere_nombre_pair(n):
for i in range(0,n):
yield 2*i
genere_nombre_pair(5)
"""
Explanation: generator
A generator behaves like an iterator: it returns elements one after the other, whether those elements are in a container or not.
End of explanation
"""
def genere_nombre_pair(n):
for i in range(0,n):
        print("passing through here", i, n)
yield 2*i
genere_nombre_pair(5)
"""
Explanation: Called as above, a generator does nothing. We can convince ourselves of this by inserting a print statement into the function:
End of explanation
"""
list(genere_nombre_pair(5))
"""
Explanation: But if we build a list with all these numbers, we can check that the function genere_nombre_pair is indeed executed:
End of explanation
"""
def genere_nombre_pair(n):
for i in range(0,n):
yield 2*i
it = genere_nombre_pair(5)
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
print(next(it))
"""
Explanation: The next function works the same way:
End of explanation
"""
it = genere_nombre_pair(5)
for nombre in it:
print(nombre)
"""
Explanation: The simplest way to walk through the elements returned by an iterator or a generator is a for loop:
End of explanation
"""
def genere_nombre_pair(n):
for i in range(0,n):
print("pair", i)
yield 2*i
def genere_multiple_six(n):
for pair in genere_nombre_pair(n):
print("six", pair)
yield 3*pair
print(genere_multiple_six)
for i in genere_multiple_six(3):
print(i)
"""
Explanation: Generators can be combined:
End of explanation
"""
def addition(x, y):
return x + y
addition(1, 3)
additionl = lambda x,y : x+y
additionl(1, 3)
"""
Explanation: benefits
Iterators and generators are functions that walk through sets of elements, or give that illusion.
They serve only to move to the next element.
They do so only when explicitly asked, with a for loop for example. This is why we speak of lazy evaluation.
Iterators / generators can be combined.
Think of iterators and generators as streams: one or several inflows of elements and one outflow of elements; nothing happens until water is sent to turn the wheel.
lambda functions
A lambda function is a shorter way to write very simple functions.
End of explanation
"""
notes = [dict(nom="A", juge=1, note=8),
dict(nom="A", juge=2, note=9),
dict(nom="A", juge=3, note=7),
dict(nom="A", juge=4, note=4),
dict(nom="A", juge=5, note=5),
dict(nom="B", juge=1, note=7),
dict(nom="B", juge=2, note=4),
dict(nom="B", juge=3, note=7),
dict(nom="B", juge=4, note=9),
dict(nom="B", juge=1, note=10),
dict(nom="C", juge=2, note=0),
dict(nom="C", juge=3, note=10),
dict(nom="C", juge=4, note=8),
dict(nom="C", juge=5, note=8),
dict(nom="C", juge=5, note=8),
]
import pandas
pandas.DataFrame(notes)
import cytoolz.itertoolz as itz
import cytoolz.dicttoolz as dtz
from functools import reduce
from operator import add
"""
Explanation: Exercice 1 : application aux grandes bases de données
Imaginons qu'on a une base de données de 10 milliards de lignes. On doit lui appliquer deux traitements : f1, f2. On a deux options possibles :
Appliquer la fonction f1 sur tous les éléments, puis appliquer f2 sur tous les éléments transformés par f1.
Application la combinaison des générateurs f1, f2 sur chaque ligne de la base de données.
Que se passe-t-il si on a fait une erreur d'implémentation dans la fonction f2 ?
Map/Reduce, approche fonctionnelle avec cytoolz
On a vu les fonctions iter et next mais on ne les utilise quasiment jamais. La programmation fonctionnelle consiste le plus souvent à combiner des itérateurs et générateurs pour ne les utiliser qu'au sein d'une boucle. C'est cette boucle qui appelle implicitement les deux fonctions iter et next.
La combinaison d'itérateurs fait sans cesse appel aux mêmes schémas logiques. Python implémente quelques schémas qu'on complète par un module tel que cytoolz. Les deux modules toolz et cytoolz sont deux implémentations du même ensemble de fonctions décrit par la documentation : pytoolz. toolz est une implémentation purement Python. cytoolz s'appuie sur le langage C++, elle est plus rapide.
Par défault, les éléments entrent et sortent dans le même ordre. La liste qui suit n'est pas exhaustive (voir itertoolz).
schémas simples:
filter : sélectionner des éléments, $n$ qui entrent, $<n$ qui sortent.
map : transformer les éléments, $n$ qui entrent, $n$ qui sortent.
take : prendre les $k$ premiers éléments, $n$ qui entrent, $k <= n$ qui sortent.
drop : passer les $k$ premiers éléments, $n$ qui entrent, $n-k$ qui sortent.
sorted : tri les éléments, $n$ qui entrent, $n$ qui sortent dans un ordre différent.
reduce : aggréger (au sens de sommer) les éléments, $n$ qui entrent, 1 qui sort.
concat : fusionner deux séquences d'éléments définies par deux itérateurs, $n$ et $m$ qui entrent, $n+m$ qui sortent.
schémas complexes
Certains schémas sont la combinaison de schémas simples mais il est plus efficace d'utiliser la version combinée.
join : associe deux séquences, $n$ et $m$ qui entrent, au pire $nm$ qui sortent.
groupby : classe les éléments, $n$ qui entrent, $p<=n$ groupes d'éléments qui sortent.
reduceby : combinaison (groupby, reduce), $n$ qui entrent, $p<=n$ qui sortent.
schéma qui retourne un seul élément
all : vrai si tous les éléments sont vrais.
any : vrai si un éléments est vrai.
first : premier élément qui entre.
last : dernier élément qui sort.
min, max, sum, len...
schéma qui aggrège
add : utilisé avec la fonction reduce pour aggréger les éléments et n'en retourner qu'un.
API PyToolz décrit l'ensemble des fonctions disponibles.
Exercice 2 : cytoolz
La note d'un candidat à un concours de patinage artistique fait la moyenne de trois moyennes parmi cinq, les deux extrêmes n'étant pas prises en compte. Il faut calculer cette somme pour un ensemble de candidats avec cytoolz.
End of explanation
"""
df.to_csv("mortalite_compresse.csv", index=False)
from pyquickhelper.filehelper import gzip_files
gzip_files("mortalite_compresse.csv.gz", ["mortalite_compresse.csv"], encoding="utf-8")
"""
Explanation: Blaze, odo : interfaces communes
Blaze fournit une interface commune, proche de celle des Dataframe, pour de nombreux modules comme bcolz... odo propose des outils de conversions dans de nombreux formats.
Pandas to Blaze
Ils sont présentés dans un autre notebook. On reproduit ce qui se fait une une ligne avec odo.
End of explanation
"""
import dask.dataframe as dd
fd = dd.read_csv('mortalite_compresse*.csv.gz', compression='gzip', blocksize=None)
#fd = dd.read_csv('mortalite_compresse.csv', blocksize=None)
"""
Explanation: Parallélisation avec dask
dask
dask propose de paralléliser les opérations usuelles qu'on applique à un dataframe.
L'opération suivante est très rapide, signifiant que dask attend de savoir quoi faire avant de charger les données :
End of explanation
"""
fd.head()
fd.npartitions
fd.divisions
s = fd.sample(frac=0.01)
s.head()
life = fd[fd.indicateur=='LIFEXP']
life
life.head()
"""
Explanation: Extraire les premières lignes prend très peu de temps car dask ne décompresse que le début :
End of explanation
"""
|
domino14/macondo | notebooks/preendgame_heuristics/preendgame_heuristics.ipynb | gpl-3.0 | from copy import deepcopy
import csv
from datetime import date
import numpy as np
import pandas as pd
import seaborn as sns
import time
log_folder = '../logs/'
log_file = log_folder + 'log_20200515_preendgames.csv'
todays_date = date.today().strftime("%Y%m%d")
final_spread_dict = {}
out_first_dict = {}
win_dict = {}
"""
Explanation: Background
This notebook seeks to quantify the value of leaving a certain number of tiles in the bag during the pre-endgame based on a repository of games. We will then implement these values as a pre-endgame heuristic in the Macondo speedy player to improve simulation quality.
Initial questions:
1. What is the probability that you will go out first if you make a play leaving N tiles in the bag?
2. What is the expected difference between your end-of-turn spread and end-of-game spread?
3. What's your win probability?
Implementation details
Similar
Assumptions
We're only analyzing complete games
Next steps
Standardize sign convention for spread.
Start figuring out how to calculate pre-endgame spread
Quackle values for reference
0,0.0
1,-8.0
2,0.0
3,-0.5
4,-2.0
5,-3.5
6,-2.0
7,2.0
8,10.0
9,7.0
10,4.0
11,-1.0,
12,-2.0
Runtime
I was able to run this script on my local machine for ~20M rows in 2 minutes.
End of explanation
"""
t0 = time.time()
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
if i<10:
print(row)
if row[0]=='p1':
final_spread_dict[row[1]] = int(row[6])-int(row[11])
else:
final_spread_dict[row[1]] = int(row[11])-int(row[6])
out_first_dict[row[1]] = row[0]
# This flag indicates whether p1 won or not, with 0.5 as the value if the game was tied.
win_dict[row[1]] = (np.sign(final_spread_dict[row[1]])+1)/2
preendgame_boundaries = [8,21] # how many tiles are in the bag before we count as pre-endgame?
preendgame_tile_range = range(preendgame_boundaries[0],preendgame_boundaries[1]+1)
counter_dict = {x:{y:0 for y in range(x-7,x+1)} for x in preendgame_tile_range}
end_of_turn_spread_counter_dict = deepcopy(counter_dict)
equity_counter_dict = deepcopy(counter_dict)
final_spread_counter_dict = deepcopy(counter_dict)
game_counter_dict = deepcopy(counter_dict)
out_first_counter_dict = deepcopy(counter_dict)
win_counter_dict = deepcopy(counter_dict)
t0=time.time()
print('There are {} games'.format(len(final_spread_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
beginning_of_turn_tiles_left = int(row[10])
end_of_turn_tiles_left = int(row[10])-int(row[7])
if (beginning_of_turn_tiles_left >= preendgame_boundaries[0] and
beginning_of_turn_tiles_left <= preendgame_boundaries[1]):
end_of_turn_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] +=\
int(row[6])-int(row[11])
equity_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] +=\
float(row[9])-float(row[5])
game_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += 1
out_first_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += \
out_first_dict[row[1]] == row[0]
if row[0]=='p1':
final_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += final_spread_dict[row[1]]
win_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += win_dict[row[1]]
else:
final_spread_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] -= final_spread_dict[row[1]]
win_counter_dict[beginning_of_turn_tiles_left][end_of_turn_tiles_left] += (1-win_dict[row[1]])
# if i<1000:
# print(row)
# print(game_counter_dict[beginning_of_turn_tiles_left])
# print(end_of_turn_spread_counter_dict[beginning_of_turn_tiles_left])
# print(equity_counter_dict[beginning_of_turn_tiles_left])
# print(final_spread_counter_dict[beginning_of_turn_tiles_left])
# print(win_counter_dict[beginning_of_turn_tiles_left])
# print(out_first_counter_dict[beginning_of_turn_tiles_left])
count_df = pd.DataFrame(game_counter_dict)
end_of_turn_spread_df = pd.DataFrame(end_of_turn_spread_counter_dict)
equity_df = pd.DataFrame(equity_counter_dict)
final_spread_df = pd.DataFrame(final_spread_counter_dict)
out_first_df = pd.DataFrame(out_first_counter_dict)
win_df = pd.DataFrame(win_counter_dict)
spread_delta_df = final_spread_df-end_of_turn_spread_df
avg_spread_delta_df = spread_delta_df/count_df
avg_equity_df = equity_df/count_df
out_first_pct_df = out_first_df/count_df
win_pct_df = 100*win_df/count_df
tst_df = avg_spread_delta_df-avg_equity_df
win_pct_df
np.mean(tst_df,axis=1)
sns.heatmap(tst_df)
avg_spread_delta_plot = sns.heatmap(avg_spread_delta_df)
fig = avg_spread_delta_plot.get_figure()
fig.savefig("average_spread_delta.png")
quackle_peg_dict = {
1:-8.0,
2:0.0,
3:-0.5,
4:-2.0,
5:-3.5,
6:-2.0,
7:2.0,
8:10.0,
9:7.0,
10:4.0,
11:-1.0,
12:-2.0
}
quackle_peg_series = pd.Series(quackle_peg_dict, name='quackle_values')
df = pd.concat([df,quackle_peg_series],axis=1)
df['quackle_macondo_delta'] = df['quackle_values']-df['avg_spread_delta']
df = df.reset_index().rename({'index':'tiles_left_after_play'}, axis=1)
df
sns.barplot(x='tiles_left_after_play',y='final_spread',data=df)
sns.barplot(x='tiles_left_after_play',y='avg_spread_delta',data=df)
sns.barplot(x='tiles_left_after_play',y='out_first_pct',data=df)
sns.barplot(x='tiles_left_after_play',y='win_pct',data=df)
"""
Explanation: Store the final spread of each game for comparison. The assumption here is that the last row logged is the final turn of the game, so for each game ID we overwrite the final move dictionary until there are no more rows from that game
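The overwrite pattern can be sketched with hypothetical (game_id, spread) rows standing in for the real move log:

```python
rows = [("g1", 10), ("g2", -5), ("g1", 37), ("g2", 12)]  # hypothetical log rows
final_spread = {}
for game_id, spread in rows:
    final_spread[game_id] = spread  # later rows overwrite earlier ones
# final_spread now holds each game's last logged spread
```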
End of explanation
"""
df['avg_spread_delta'].to_csv('peg_heuristics_' + todays_date + '.csv')
df.to_csv('peg_summary_' + todays_date + '.csv')
"""
Explanation: Save a summary and a verbose version of preendgame heuristic values.
End of explanation
"""
|
gwu-libraries/notebooks | 20170720-building-social-network-graphs-CSV.ipynb | mit | import sys
import json
import re
import numpy as np
from datetime import datetime
import pandas as pd
tweetfile = '/home/soominpark/sfmproject/Work/Network Graphs/food_security.csv'
tweets = pd.read_csv(tweetfile)
"""
Explanation: Exports nodes and edges from tweets (Retweets, Mentions, or Replies) [CSV]
Exports nodes and edges from tweets (from retweets, mentions, or replies) in CSV format as exported from SFM, and saves them in a file format compatible with various social network graph tools such as Gephi, Cytoscape, and Kumu. These are directed graphs.
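The retweet-source extraction used below boils down to a single regular expression; a sketch on a hypothetical tweet:

```python
import re

text = "RT @openlibrary: new scans are up"  # hypothetical tweet text
m = re.search(r"RT @(\w+):", text)
original = m.group(1) if m else None       # screen name of the retweeted user
```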
End of explanation
"""
# 1. Export edges from Retweets
retweets = tweets[tweets['is_retweet'] == 'Yes'].copy()  # .copy() avoids SettingWithCopyWarning
retweets['original_twitter'] = retweets['text'].str.extract(r'RT @(\w+):', expand=True)
edges = retweets[['screen_name', 'original_twitter','created_at']]
edges.columns = ['Source', 'Target', 'Strength']
# 2. Export edges from Mentions
mentions = tweets[tweets['mentions'].notnull()]
edges = pd.DataFrame(columns=('Source','Target','Strength'))
for index, row in mentions.iterrows():
mention_list = row['mentions'].split(", ")
for mention in mention_list:
edges = edges.append(pd.DataFrame([[row['screen_name'],
mention,
row['created_at']]]
, columns=('Source','Target','Strength')), ignore_index=True)
# 3. Export edges from Replies
replies = tweets[tweets['in_reply_to_screen_name'].notnull()]
edges = replies[['screen_name', 'in_reply_to_screen_name','created_at']]
edges.columns = ['Source', 'Target', 'Strength']
"""
Explanation: 1. Export edges from Retweets, Mentions, or Replies
Run one of three blocks of codes below for your purpose.
End of explanation
"""
strengthLevel = 3  # Network connection strength level: the total number of times each tweeter responded to or mentioned the other.
# With 1, every tweeter who mentioned or replied to another at least once is shown; with 5, only those
# who mentioned or replied to a particular tweeter at least 5 times, i.e. only the strongest bonds.
edges2 = edges.groupby(['Source','Target'])['Strength'].count()
edges2 = edges2.reset_index()
edges2 = edges2[edges2['Strength'] >= strengthLevel]
"""
Explanation: 2. Leave only the tweets whose strength level >= user specified level (directed)
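The groupby-count-filter step amounts to counting (Source, Target) pairs and keeping those at or above the threshold; a stdlib sketch with hypothetical edges:

```python
from collections import Counter

edges = [("a", "b"), ("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]  # hypothetical
strength = Counter(edges)
strong = {pair: n for pair, n in strength.items() if n >= 2}
```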
End of explanation
"""
# Export nodes from the edges and add node attributes for both Sources and Targets.
users = tweets[['screen_name','followers_count','friends_count']]
users = users.sort_values(['screen_name','followers_count'], ascending=[True, False])
users = users.drop_duplicates(['screen_name'], keep='first')
ids = edges2['Source'].append(edges2['Target']).to_frame()
ids.columns = ['screen_name']
ids['Label'] = ids['screen_name']
ids = ids.drop_duplicates(['screen_name'], keep='first')
nodes = pd.merge(ids, users, on='screen_name', how='left')
print(nodes.shape)
print(edges2.shape)
"""
Explanation: 3. Export nodes
End of explanation
"""
# change column names for Kumu import (Run this when using Kumu)
edges2.columns = ['From','To','Strength']
# Print nodes to check
nodes.head(3)
# Print edges to check
edges2.head(3)
# Export nodes and edges to csv files
nodes.to_csv('nodes.csv', encoding='utf-8', index=False)
edges2.to_csv('edges.csv', encoding='utf-8', index=False)
"""
Explanation: 4. Export nodes and edges to csv files
End of explanation
"""
|
KUrushi/knocks | chapter2/UNIX command.ipynb | mit | with open("hightemp.txt") as f:
count = len(f.readlines())
print(count)
"""
Explanation: 10. Counting lines
Count the number of lines. Use the wc command to verify.
End of explanation
"""
%%bash
wc -l hightemp.txt
"""
Explanation: wc
Counts and displays the number of bytes, words, and lines in a file.
Tokens separated by whitespace are treated as words.
Output: lines words bytes
wc [-clw] [--bytes] [--chars] [--lines] [--words] [file]
Options
-c, --bytes, --chars: count and display bytes only
-w, --words: count and display words only
-l, --lines: count and display lines only
file: the file to read
End of explanation
"""
def replace_tab2space(file):
with open(file) as f:
for i in f.readlines():
print(i.strip('\n').replace('\t', ' '))
replace_tab2space('hightemp.txt')
"""
Explanation: 11. Replace tabs with spaces
Replace each tab character with a single space. Use the sed, tr, or expand command to verify.
End of explanation
"""
%%bash
expand -t 1 hightemp.txt
"""
Explanation: expand
Converts tabs to spaces
expand [-i, --initial] [-t NUMBER, --tabs=NUMBER] [FILE...]
Options
-i, --initial: ignore tabs that follow non-blank characters (convert only leading tabs)
-t NUMBER, --tabs=NUMBER: set the tab width to NUMBER (default is 8)
FILE: the text file(s) to process
End of explanation
"""
def write_col(col):
with open("hightemp.txt", 'r') as f:
writing = [i.split('\t')[col-1]+"\n" for i in f.readlines()]
with open('col{}.txt'.format(col), 'w') as f:
f.write("".join(writing))
write_col(1)
write_col(2)
"""
Explanation: 12. Save column 1 to col1.txt and column 2 to col2.txt
Extract only the first column of each line and save it as col1.txt, and only the second column as col2.txt. Use the cut command to verify.
End of explanation
"""
%%bash
cut -f 1 hightemp.txt > cut_col1.txt
cut -f 2 hightemp.txt > cut_col2.txt
"""
Explanation: cut
Extracts a portion of each line of a text file
cut [-b byte-list] [-c character-list] [-d delim] [-f field-list] [-s] [file]
Options
-b, --bytes byte-list: display only the bytes at the positions given by byte-list
-c, --characters character-list: display only the characters at the positions given by character-list
-d, --delimiter delim: set the field delimiter (default is tab)
-f, --fields field-list: display only the fields given by field-list
-s, --only-delimited: skip lines that contain no delimiter
file: the text file to process
End of explanation
"""
with open('col1.txt', 'r') as f1:
col1 = [i.strip('\n') for i in f1.readlines()]
with open('col2.txt', 'r') as f2:
col2 = [i.strip('\n') for i in f2.readlines()]
writing = ""
for i in range(len(col1)):
writing += col1[i] + '\t' + col2[i] + '\n'
with open('marge.txt', 'w') as f:
f.write(writing)
"""
Explanation: 13. Merge col1.txt and col2.txt
Combine col1.txt and col2.txt created in exercise 12 into a text file whose columns 1 and 2 are the original file's columns 1 and 2, tab-separated. Use the paste command to verify.
End of explanation
"""
%%bash
paste col1.txt col2.txt > paste_marge.txt
"""
Explanation: paste
Concatenates files horizontally
paste [options] [FILE]
Options
-d, --delimiters=LIST: use the characters in LIST as delimiters instead of tab
-s, --serial: concatenate vertically, one file at a time
FILE: the files to concatenate
End of explanation
"""
def head(N):
with open('marge.txt') as f:
        return "".join(f.readlines()[:N])
print(head(3))
"""
Explanation: 14. Output the first N lines
Receive a natural number N (e.g. as a command-line argument) and
display only the first N lines of the input. Use the head command to verify.
End of explanation
"""
%%bash
head -n 3 marge.txt
"""
Explanation: head
Displays the beginning of a file
head [-c N[bkm]] [-n N] [-qv] [--bytes=N[bkm]] [--lines=N] [--quiet] [--silent] [--verbose] [file...]
Options
-c N, --bytes N: display the first N bytes of the file. Appending b multiplies N by 512, k by 1024, and m by 1048576
-n N, --lines N: display the first N lines of the file
-q, --quiet, --silent: never print file names
-v, --verbose: always print file names
file: the file(s) to display
End of explanation
"""
def tail(N):
with open('marge.txt') as f:
        tail = "".join(f.readlines()[-N:])
return tail
print(tail(3))
"""
Explanation: 15. Output the last N lines
Receive a natural number N and
display only the last N lines of the input. Use the tail command to verify.
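For reference, the tail command check can be run on a small throwaway sample (a sketch with a hypothetical /tmp file):

```shell
printf '1\n2\n3\n4\n5\n' > /tmp/tail_sample.txt
tail -n 3 /tmp/tail_sample.txt
```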
End of explanation
"""
def split_file(name, N):
    with open(name, 'r') as f:
        lines = f.readlines()
    size = -(-len(lines) // N)  # ceiling division: lines per chunk
    return ["".join(lines[i:i + size]) for i in range(0, len(lines), size)]

print(split_file("marge.txt", 3)[0])
"""
Explanation: 16. Split a file into N parts
Receive a natural number N and
split the input file line-wise into N parts. Achieve the same with the split command.
End of explanation
"""
%%bash
split -l 3 marge.txt split_marge.txt
"""
Explanation: split
Splits a file
split [-lines] [-l lines] [-b bytes[bkm]] [-C bytes[bkm]] [--lines=lines] [--bytes=bytes[bkm]] [--line-bytes=bytes[bkm]] [infile [outfile-prefix]]
Options
-lines, -l lines, --lines=lines: split the input every given number of lines and write each chunk to an output file
-b bytes[bkm], --bytes=bytes[bkm]: split the input every given number of bytes. A suffix changes the unit: k is kilobytes, m is megabytes
-C bytes[bkm], --line-bytes=bytes[bkm]: write as many complete lines as fit within the given byte limit per output file; a line that would exceed the limit starts a new file
infile: the input file
outfile-prefix: the base name for the output files; each output is the base name plus an alphabetic suffix
End of explanation
"""
def kinds_col(file_name, col=1):
    with open(file_name, 'r') as f:
        return set(line.strip('\n').split('\t')[col - 1] for line in f)

print(kinds_col('col1.txt'))
"""
Explanation: 17. Distinct strings in the first column
Find the set of distinct strings (the different kinds of strings) in the first column.
Use the sort and uniq commands to verify.
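The sort and uniq check can be sketched on a small hypothetical sample:

```shell
printf 'saitama\ntokyo\nsaitama\n' > /tmp/uniq_sample.txt
sort /tmp/uniq_sample.txt | uniq
```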
End of explanation
"""
def sorted_list(filename, col):
with open(filename, 'r') as f:
return_list = [i.strip("\n").split('\t') for i in f.readlines()]
    return sorted(return_list, key=lambda x: float(x[col]), reverse=True)
print(sorted_list("hightemp.txt", 2))
"""
Explanation: 18. Sort lines in descending order of the third column
Sort the lines in reverse (descending) order of the numeric value in the third column
(note: sort without modifying the content of each line).
Use the sort command to verify
(the result need not match what the command produces exactly).
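The equivalent sort invocation, sketched on a small tab-separated sample (the real hightemp.txt has the same shape, with the numeric value in the third field):

```shell
printf 'a\tx\t2\nb\ty\t10\nc\tz\t5\n' > /tmp/sort_sample.tsv
sort -t "$(printf '\t')" -k3,3 -n -r /tmp/sort_sample.tsv
```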
End of explanation
"""
def frequency_sort(filename, col):
from collections import Counter
with open(filename, 'r') as f:
return_list = [i.strip("\n").split('\t')[col-1] for i in f.readlines()]
return [i[0] for i in Counter(return_list).most_common()]
print(frequency_sort("hightemp.txt", 1))
"""
Explanation: 19. Frequency of strings in the first column, in descending order of frequency
Find how often each string in the first column appears, and display the strings in descending order of frequency. Use the cut, uniq, and sort commands to verify.
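The command-line equivalent chains sort, uniq -c, and a numeric reverse sort; a sketch on a hypothetical sample of column-1 values:

```shell
printf 'tokyo\nosaka\ntokyo\n' > /tmp/freq_sample.txt
sort /tmp/freq_sample.txt | uniq -c | sort -n -r
```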
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.22/_downloads/f094864c4eeae2b4353a90789dd18b2b/plot_mixed_source_space_inverse.ipynb | bsd-3-clause | # Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
from nilearn import plotting
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse
# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
"""
Explanation: Compute MNE inverse solution on evoked data with a mixed source space
Create a mixed source space and compute an MNE inverse solution on an
evoked dataset.
End of explanation
"""
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
"""
Explanation: Set up our source space
List substructures we are interested in. We select only the
sub structures we want to include in the source space:
End of explanation
"""
src = mne.setup_source_space(subject, spacing='oct5',
add_dist=False, subjects_dir=subjects_dir)
"""
Explanation: Get a surface-based source space, here with few source points for speed
in this demonstration, in general you should use oct6 spacing!
End of explanation
"""
vol_src = mne.setup_volume_source_space(
subject, mri=fname_aseg, pos=10.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir,
add_interpolator=False, # just for speed, usually this should be True
verbose=True)
# Generate the mixed source space
src += vol_src
print(f"The source space contains {len(src)} spaces and "
f"{sum(s['nuse'] for s in src)} vertices")
"""
Explanation: Now we create a mixed src space by adding the volume regions specified in the
list labels_vol. First, read the aseg file and the source space bounds
using the inner skull surface (here using 10mm spacing to save time,
we recommend something smaller like 5.0 in actual analyses):
End of explanation
"""
src.plot(subjects_dir=subjects_dir)
"""
Explanation: View the source space
End of explanation
"""
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True, overwrite=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
"""
Explanation: We could write the mixed source space with::
write_source_spaces(fname_mixed_src, src, overwrite=True)
We can also export source positions to nifti file and visualize it again:
End of explanation
"""
fwd = mne.make_forward_solution(
fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
del src # save memory
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
print(f"The fwd source space contains {len(fwd['src'])} spaces and "
f"{sum(s['nuse'] for s in fwd['src'])} vertices")
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
"""
Explanation: Compute the fwd matrix
End of explanation
"""
snr = 3.0 # use smaller SNR for raw data
inv_method = 'dSPM' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
loose = dict(surface=0.2, volume=1.)
lambda2 = 1.0 / snr ** 2
inverse_operator = make_inverse_operator(
evoked.info, fwd, noise_cov, depth=None, loose=loose, verbose=True)
del fwd
stc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
src = inverse_operator['src']
"""
Explanation: Compute inverse solution
End of explanation
"""
initial_time = 0.1
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori='vector')
brain = stc_vec.plot(
hemi='both', src=inverse_operator['src'], views='coronal',
initial_time=initial_time, subjects_dir=subjects_dir)
"""
Explanation: Plot the mixed source estimate
End of explanation
"""
brain = stc.surface().plot(initial_time=initial_time,
subjects_dir=subjects_dir)
"""
Explanation: Plot the surface
End of explanation
"""
fig = stc.volume().plot(initial_time=initial_time, src=src,
subjects_dir=subjects_dir)
"""
Explanation: Plot the volume
End of explanation
"""
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
subject, parc=parc, subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(
[stc], labels_parc, src, mode='mean', allow_empty=True)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stc.times, label_ts[0][-1, :].T, 'r', label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
"""
Explanation: Process labels
Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space
End of explanation
"""
|
planet-os/notebooks | api-examples/Metno_wind_demo.ipynb | mit | %matplotlib notebook
import urllib.request
import numpy as np
import simplejson as json
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import warnings
import datetime
import dateutil.parser
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
import requests
from netCDF4 import Dataset
from dh_py_access.lib.dataset import dataset as dataset
import dh_py_access.package_api as package_api
import dh_py_access.lib.datahub as datahub
np.warnings.filterwarnings('ignore')
"""
Explanation: Wind energy production forecast from Met.no weather forecast
With this notebook we illustrate how one might improve weather forecast for wind energy production, considering that the height of wind turbines doesn’t match the height of wind speed commonly used in weather forecasts. Also, that wind energy production does not depend only on wind speed, but also air density.
In this Notebook we will use three weather forecast from Planet OS Datahub:
GFS - A global model produced by NOAA that provides a 10-day forecast at a resolution of 0.25 degrees (15 days with lower resolution and fewer variables);
FMI HIRLAM - A regional model limited to Northern-Europe and Northern-Atlantic with a forecast period of two days and resolution of 0.07 degrees;
Met.no HARMONIE & Met.no HARMONIE dedicated wind forecast A regional model limited to Scandinavia with a resolution of 0.025 x 0.05 degrees. The dedicated wind forecast version is augmented by Planet OS to include the air density as explicit variable.
API documentation is available at http://docs.planetos.com. If you have questions or comments, join the Planet OS Slack community to chat with our development team. For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/
End of explanation
"""
API_key = open("APIKEY").read().strip()
dh=datahub.datahub_main(API_key)
"""
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
"""
fmi_hirlam_surface = dataset('fmi_hirlam_surface',dh)
metno_harmonie_metcoop = dataset('metno_harmonie_metcoop',dh)
metno_harmonie_wind = dataset('metno_harmonie_wind_det',dh)
gfs = dataset('noaa_gfs_pgrb2_global_forecast_recompute_0.25degree',dh)
"""
Explanation: We use a simple dh_py_access library for controlling REST requests.
End of explanation
"""
## Does not look good in github
##gfs.variable_names()
"""
Explanation: dh_py_access provides some easy to use functions to list dataset metadata
End of explanation
"""
lon = 22
lat = 59+6./60
today = datetime.datetime.today()
reft = datetime.datetime(today.year,today.month,today.day,int(today.hour/6)*6) - datetime.timedelta(hours=12)
reft = reft.isoformat()
##reft = "2018-02-11T18:00:00"
arg_dict = {'lon':lon,'lat':lat,'reftime_start':reft,'reftime_end':reft,'count':250}
arg_dict_metno_wind_det = dict(arg_dict, **{'vars':'wind_u_z,wind_v_z,air_density_z'})
arg_dict_metno_harm_metcoop = dict(arg_dict, **{'vars':'u_wind_10m,v_wind_10m'})
arg_dict_hirlam = dict(arg_dict, **{'vars':'u-component_of_wind_height_above_ground,v-component_of_wind_height_above_ground'})
arg_dict_gfs = dict(arg_dict, **{'vars':'ugrd_m,vgrd_m','count':450})
"""
Explanation: Initialize coordinates and reference time. Note that a more recent reftime than the one computed below may be available for a particular dataset; we just use a conservative example for the demo
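The rounding done below snaps to the most recent 6-hour model cycle and then steps back a safety lag; a self-contained sketch of that logic (the cycle and lag values are assumptions taken from the cell below):

```python
from datetime import datetime, timedelta

def latest_cycle(now, cycle_hours=6, lag_hours=12):
    # Snap to the most recent multiple of cycle_hours, then step back lag_hours.
    snapped = datetime(now.year, now.month, now.day,
                       (now.hour // cycle_hours) * cycle_hours)
    return snapped - timedelta(hours=lag_hours)

reft = latest_cycle(datetime(2018, 2, 12, 14, 30))  # -> 2018-02-12T00:00:00
```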
End of explanation
"""
dmw = metno_harmonie_wind.get_json_data_in_pandas(**arg_dict_metno_wind_det)
dmm = metno_harmonie_metcoop.get_json_data_in_pandas(**arg_dict_metno_harm_metcoop)
dhs = fmi_hirlam_surface.get_json_data_in_pandas(**arg_dict_hirlam)
dgfs = gfs.get_json_data_in_pandas(**arg_dict_gfs)
## show how to filter Pandas
## dgfs[dgfs['z']==80]
"""
Explanation: Fetch data and convert to Pandas dataframe
End of explanation
"""
vel80_metno = np.array(np.sqrt(dmw[dmw['z']==80]['wind_u_z']**2 + dmw[dmw['z']==80]['wind_v_z']**2))
vel10_metno = np.array(np.sqrt(dmm['u_wind_10m']**2 + dmm['v_wind_10m']**2))
vel10_hirlam = np.array(np.sqrt(dhs['u-component_of_wind_height_above_ground']**2 +
dhs['v-component_of_wind_height_above_ground']**2))
vel10_gfs = np.sqrt(dgfs[dgfs['z']==10]['ugrd_m']**2+dgfs[dgfs['z']==10]['vgrd_m']**2)
vel80_gfs = np.sqrt(dgfs[dgfs['z']==80]['ugrd_m']**2+dgfs[dgfs['z']==80]['vgrd_m']**2)
t_metno = [dateutil.parser.parse(i) for i in dmw[dmw['z']==80]['time']]
t_metno_10 = [dateutil.parser.parse(i) for i in dmm['time']]
t_hirlam = [dateutil.parser.parse(i) for i in dhs['time']]
t_gfs_10 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==10]['time']]
t_gfs_80 = [dateutil.parser.parse(i) for i in dgfs[dgfs['z']==80]['time']]
fig, ax = plt.subplots()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.set_ylabel("wind speed")
ax.plot(t_metno, vel80_metno, label='Metno 80m')
ax.plot(t_metno_10, vel10_metno, label='Metno 10m')
ax.plot(t_hirlam, vel10_hirlam, label='HIRLAM 10m')
gfs_lim=67
ax.plot(t_gfs_10[:gfs_lim], vel10_gfs[:gfs_lim], label='GFS 10m')
ax.plot(t_gfs_80[:gfs_lim], vel80_gfs[:gfs_lim], label='GFS 80m')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
#fig.autofmt_xdate()
plt.legend()
plt.grid()
plt.savefig("model_comp")
"""
Explanation: Filter out necessary data from the DataFrames
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
days = mdates.DayLocator()
daysFmt = mdates.DateFormatter('%Y-%m-%d')
hours = mdates.HourLocator()
ax.plot(t_metno,vel80_metno)
aird80 = dmw[dmw['z']==80]['air_density_z']
ax2.plot(t_metno,aird80,c='g')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
ax.set_ylabel("wind speed")
ax2.set_ylabel("air density")
fig.tight_layout()
#fig.autofmt_xdate()
plt.savefig("density_80m")
"""
Explanation: We can easily see that the differences between models can be larger than the difference between 10 m and 80 m winds within the same model.
What role does density play in energy production?
From the theory we know that wind energy production is roughly
$\frac{1}{2} A \rho \mathbf{v}^3$,
where $A$ is area, $\rho$ is air density and $\mathbf{v}$ is wind speed. We are not concerned about $A$, which is a turbine parameter, but we can analyse the linear relation of density and cube relation of wind speed itself.
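Numerically, the cube law means a given relative change in wind speed matters roughly three times more than the same relative change in density; a sketch (the reference values are assumptions, not data from the forecast):

```python
def relative_power(rho, v, rho0=1.25, v0=10.0):
    # P ~ 0.5 * A * rho * v**3; A and the 0.5 cancel in the ratio.
    return (rho * v ** 3) / (rho0 * v0 ** 3)

density_up_2pct = relative_power(1.25 * 1.02, 10.0)  # +2% density -> +2% power
speed_up_2pct = relative_power(1.25, 10.0 * 1.02)    # +2% speed  -> ~+6% power
```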
First, let's see how the density varies over time
End of explanation
"""
fig, ax = plt.subplots()
ax2 = ax.twinx()
ax2.set_ylabel("energy production")
ax.set_ylabel("wind speed")
ax.plot(t_metno,vel80_metno, c='b', label='wind speed')
ax2.plot(t_metno,aird80*vel80_metno**3, c='r', label='energy prod w.dens')
ax.xaxis.set_major_locator(days)
ax.xaxis.set_major_formatter(daysFmt)
ax.xaxis.set_minor_locator(hours)
fig.autofmt_xdate()
lines, labels = ax.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
"""
Explanation: Now let's see how energy production looks compared to the wind speed itself
End of explanation
"""
density = package_api.package_api(dh,'metno_harmonie_wind_det','air_density_z',-20,60,10,80,'full_domain_harmonie')
density.make_package()
density.download_package()
density_data = Dataset(density.get_local_file_name())
## biggest change of density in one location during forecast period
maxval = np.nanmax(density_data.variables['air_density_z'],axis=0)
minval = np.nanmin(density_data.variables['air_density_z'],axis=0)
"""
Explanation: Finally, let's analyse how much the density varies over the whole domain during one forecast. For this purpose, we download the full density field with the package API
End of explanation
"""
print(np.nanmax(maxval-minval),np.nanmean(maxval-minval))
"""
Explanation: The maximum and domain-mean change of air density at a single location over the forecast period are:
End of explanation
"""
|
tommyogden/maxwellbloch | docs/examples/mbs-lambda-weak-pulse-cloud-atoms-with-coupling-store.ipynb | mit | mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"detuning": 0.0,
"label": "probe",
"rabi_freq": 1.0e-3,
"rabi_freq_t_args":
{
"ampl": 1.0,
"centre": 0.0,
"fwhm": 1.0
},
"rabi_freq_t_func": "gaussian"
},
{
"coupled_levels": [[1, 2]],
"detuning": 0.0,
"detuning_positive": false,
"label": "coupling",
"rabi_freq": 5.0,
"rabi_freq_t_args":
{
"ampl": 1.0,
"fwhm": 0.2,
"off": 4.0,
"on": 6.0
},
"rabi_freq_t_func": "ramp_offon"
}
],
"num_states": 3
},
"t_min": -2.0,
"t_max": 12.0,
"t_steps": 140,
"z_min": -0.2,
"z_max": 1.2,
"z_steps": 140,
"z_steps_inner": 50,
"num_density_z_func": "gaussian",
"num_density_z_args": {
"ampl": 1.0,
"fwhm": 0.5,
"centre": 0.5
},
"interaction_strengths": [1.0e3, 1.0e3],
"savefile": "mbs-lambda-weak-pulse-cloud-atoms-some-coupling-store"
}
"""
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
"""
Explanation: Λ-Type Three-Level: Weak Pulse with Time-Dependent Coupling in a Cloud — Storage and Retrieval
Time taken to solve this problem on a 2013 MacBook Pro:
2h 32min 15s
Define the Problem
End of explanation
"""
%time Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
"""
Explanation: Solve the Problem
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
fig = plt.figure(1, figsize=(16, 12))
# Probe
ax = fig.add_subplot(211)
cmap_range = np.linspace(0.0, 1.0e-3, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Probe',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Coupling
ax = fig.add_subplot(212)
cmap_range = np.linspace(0.0, 8.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[1]/(2*np.pi)),
cmap_range, cmap=plt.cm.Greens)
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
ax.text(0.02, 0.95, 'Coupling',
verticalalignment='top', horizontalalignment='left',
transform=ax.transAxes, color='grey', fontsize=16)
plt.colorbar(cf)
# Both
for ax in fig.axes:
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.tight_layout();
"""
Explanation: Plot Output
End of explanation
"""
|
InsightSoftwareConsortium/SimpleITK-Notebooks | Python/00_Setup.ipynb | apache-2.0 | import importlib
from distutils.version import LooseVersion
# check that all packages are installed (see requirements.txt file)
required_packages = {
"jupyter",
"numpy",
"matplotlib",
"ipywidgets",
"scipy",
"pandas",
"numba",
"multiprocess",
"SimpleITK",
}
problem_packages = list()
# Iterate over the required packages: If the package is not installed
# ignore the exception.
for package in required_packages:
try:
p = importlib.import_module(package)
except ImportError:
problem_packages.append(package)
if len(problem_packages) == 0:
print("All is well.")
else:
print(
"The following packages are required but not installed: "
+ ", ".join(problem_packages)
)
import SimpleITK as sitk
%run update_path_to_download_script
from downloaddata import fetch_data, fetch_data_all
from ipywidgets import interact
print(sitk.Version())
"""
Explanation: Welcome to SimpleITK Jupyter Notebooks
Newcomers to Jupyter Notebooks:
We use two types of cells, code and markdown.
To run a code cell, select it (mouse or arrow key so that it is highlighted) and then press shift+enter which also moves focus to the next cell or ctrl+enter which doesn't.
Closing the browser window does not close the Jupyter server. To close the server, go to the terminal where you ran it and press ctrl+c twice.
For additional details see the Jupyter project documentation on Jupyter Notebook or JupyterLab.
SimpleITK Environment Setup
Check that SimpleITK and auxiliary program(s) are correctly installed in your environment, and that you have the SimpleITK version which you expect (<b>requires network connectivity</b>).
You can optionally download all of the data used in the notebooks in advance. This step is only necessary if you expect to run the notebooks without network connectivity.
The following cell checks that all expected packages are installed.
End of explanation
"""
# Uncomment the line below to change the default external viewer to your viewer of choice and test that it works.
#%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP
# Retrieve an image from the network, read it and display using the external viewer.
# The show method will also set the display window's title and by setting debugOn to True,
# will also print information with respect to the command it is attempting to invoke.
# NOTE: The debug information is printed to the terminal from which you launched the notebook
# server.
sitk.Show(sitk.ReadImage(fetch_data("SimpleITK.jpg")), "SimpleITK Logo", debugOn=True)
"""
Explanation: We expect that you have an external image viewer installed. The default viewer is <a href="https://fiji.sc/#download">Fiji</a>. If you have another viewer (i.e. ITK-SNAP or 3D Slicer) you will need to set an environment variable to point to it. This can be done from within a notebook as shown below.
End of explanation
"""
interact(lambda x: x, x=(0, 10));
"""
Explanation: Now we check that the ipywidgets will display correctly. When you run the following cell you should see a slider.
If you don't see a slider, please shut down the Jupyter server (at the command line prompt, press Control-c twice) and then run the following command:
jupyter nbextension enable --py --sys-prefix widgetsnbextension
End of explanation
"""
import os

fetch_data_all(os.path.join("..", "Data"), os.path.join("..", "Data", "manifest.json"))
"""
Explanation: Download all of the data in advance if you expect to be working offline (may take a couple of minutes).
End of explanation
"""

jhamrick/original-nbgrader | examples/grade_assignment/Submitted Assignment.ipynb | mit

NAME = "Jane Doe"
COLLABORATORS = "n/a"
"""
Explanation: Example Assignment
<a href="#Problem-1">Problem 1</a>
<a href="#Problem-2">Problem 2</a>
<a href="#Part-A">Part A</a>
<a href="#Part-B">Part B</a>
<a href="#Part-C">Part C</a>
Before you turn this problem in, make sure everything runs as expected. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your name and collaborators below:
End of explanation
"""
# import plotting libraries
%matplotlib inline
import matplotlib.pyplot as plt
"""
Explanation:
End of explanation
"""
def squares(n):
"""Compute the squares of numbers from 1 to n, such that the
ith element of the returned list equals i^2.
"""
    return [i**2 for i in range(1, n + 1)]
"""
Explanation: Problem 1
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation
"""
squares(10)
"""
Explanation: Your function should return [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation
"""
def sum_of_squares(n):
"""Compute the sum of the squares of numbers from 1 to n."""
return sum(squares(n))
"""
Explanation: Problem 2
Part A
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
End of explanation
"""
sum_of_squares(10)
"""
Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation
"""
fig, ax = plt.subplots() # do not delete this line!
x = range(1, 16)
y = [sum_of_squares(n) for n in x]
ax.plot(x, y)
ax.set_title("Sum of squares from 1 to $n$")
ax.set_xlabel("$n$")
ax.set_ylabel("sum")
"""
Explanation: Part B
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
$\sum_{i=1}^n i^2$
Part C
Create a plot of the sum of squares for $n=1$ to $n=15$. Make sure to appropriately label the $x$-axis and $y$-axis, and to give the plot a title. Set the $x$-axis limits to be 1 (minimum) and 15 (maximum).
End of explanation
"""

ES-DOC/esdoc-jupyterhub | notebooks/ipsl/cmip6/models/sandbox-2/aerosol.ipynb | gpl-3.0

# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'aerosol')
"""
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
"""
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
"""
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.24/_downloads/686e03eb7a01e30e026e3dd11e64df18/30_filtering_resampling.ipynb | bsd-3-clause |
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
# use just 60 seconds of data and mag channels, to save memory
raw.crop(0, 60).pick_types(meg='mag', stim=True).load_data()
"""
Explanation: Filtering and resampling data
This tutorial covers filtering and resampling, and gives examples of how
filtering can be used for artifact repair.
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. We'll also crop the data to 60 seconds
(to save memory on the documentation server):
End of explanation
"""
raw.plot(duration=60, proj=False, n_channels=len(raw.ch_names),
remove_dc=False)
"""
Explanation: Background on filtering
A filter removes or attenuates parts of a signal. Usually, filters act on
specific frequency ranges of a signal — for example, suppressing all
frequency components above or below a certain cutoff value. There are many
ways of designing digital filters; see disc-filtering for a longer
discussion of the various approaches to filtering physiological signals in
MNE-Python.
Repairing artifacts by filtering
Artifacts that are restricted to a narrow frequency range can sometimes
be repaired by filtering the data. Two examples of frequency-restricted
artifacts are slow drifts and power line noise. Here we illustrate how each
of these can be repaired by filtering.
Slow drifts
Low-frequency drifts in raw data can usually be spotted by plotting a fairly
long span of data with the :meth:~mne.io.Raw.plot method, though it is
helpful to disable channel-wise DC shift correction to make slow drifts
more readily visible. Here we plot 60 seconds, showing all the magnetometer
channels:
End of explanation
"""
for cutoff in (0.1, 0.2):
raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)
fig = raw_highpass.plot(duration=60, proj=False,
n_channels=len(raw.ch_names), remove_dc=False)
fig.subplots_adjust(top=0.9)
fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large',
weight='bold')
"""
Explanation: A half-period of this slow drift appears to last around 10 seconds, so a full
period would be 20 seconds, i.e., $\frac{1}{20} \mathrm{Hz}$. To be
sure those components are excluded, we want our highpass to be higher than
that, so let's try $\frac{1}{10} \mathrm{Hz}$ and $\frac{1}{5}
\mathrm{Hz}$ filters to see which works best:
End of explanation
"""
filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],
l_freq=0.2, h_freq=None)
"""
Explanation: Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.
Notice that the text output summarizes the relevant characteristics of the
filter that was created. If you want to visualize the filter, you can pass
the same arguments used in the call to :meth:raw.filter()
<mne.io.Raw.filter> above to the function :func:mne.filter.create_filter
to get the filter parameters, and then pass the filter parameters to
:func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires
parameters data (a :class:NumPy array <numpy.ndarray>) and sfreq
(the sampling frequency of the data), so we'll extract those from our
:class:~mne.io.Raw object:
End of explanation
"""
mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))
"""
Explanation: Notice that the output is the same as when we applied this filter to the data
using :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter
parameters (and the sampling frequency) to :func:~mne.viz.plot_filter to
plot the filter:
End of explanation
"""
def add_arrows(axes):
# add some arrows at 60 Hz and its harmonics
for ax in axes:
freqs = ax.lines[-1].get_xdata()
psds = ax.lines[-1].get_ydata()
for freq in (60, 120, 180, 240):
idx = np.searchsorted(freqs, freq)
# get ymax of a small region around the freq. of interest
y = psds[(idx - 4):(idx + 5)].max()
ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red',
width=0.1, head_width=3, length_includes_head=True)
fig = raw.plot_psd(fmax=250, average=True)
add_arrows(fig.axes[:2])
"""
Explanation: Power line noise
Power line noise is an environmental artifact that manifests as persistent
oscillations centered around the AC power line frequency_. Power line
artifacts are easiest to see on plots of the spectrum, so we'll use
:meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little
function that adds arrows to the spectrum plot to highlight the artifacts:
End of explanation
"""
meg_picks = mne.pick_types(raw.info, meg=True)
freqs = (60, 120, 180, 240)
raw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)
for title, data in zip(['Un', 'Notch '], [raw, raw_notch]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
"""
Explanation: It should be evident that MEG channels are more susceptible to this kind of
interference than EEG channels, which are recorded in the magnetically shielded room.
Removing power-line noise can be done with a notch filter,
applied directly to the :class:~mne.io.Raw object, specifying an array of
frequencies to be attenuated. Since the EEG channels are relatively
unaffected by the power line noise, we'll also specify a picks argument
so that only the magnetometers and gradiometers get filtered:
End of explanation
"""
raw_notch_fit = raw.copy().notch_filter(
freqs=freqs, picks=meg_picks, method='spectrum_fit', filter_length='10s')
for title, data in zip(['Un', 'spectrum_fit '], [raw, raw_notch_fit]):
fig = data.plot_psd(fmax=250, average=True)
fig.subplots_adjust(top=0.85)
fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')
add_arrows(fig.axes[:2])
"""
Explanation: :meth:~mne.io.Raw.notch_filter also has parameters to control the notch
width, transition bandwidth and other aspects of the filter. See the
docstring for details.
It's also possible to try to use a spectrum fitting routine to notch filter.
In principle it can automatically detect the frequencies to notch, but our
implementation generally does not do so reliably, so we specify the
frequencies to remove instead, and it does a good job of removing the
line noise at those frequencies:
End of explanation
"""
raw_downsampled = raw.copy().resample(sfreq=200)
for data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):
fig = data.plot_psd(average=True)
fig.subplots_adjust(top=0.9)
fig.suptitle(title)
plt.setp(fig.axes, xlim=(0, 300))
"""
Explanation: Resampling
EEG and MEG recordings are notable for their high temporal precision, and are
often recorded with sampling rates around 1000 Hz or higher. This is good
when precise timing of events is important to the experimental design or
analysis plan, but also consumes more memory and computational resources when
processing the data. In cases where high-frequency components of the signal
are not of interest and precise timing is not needed (e.g., computing EOG or
ECG projectors on a long recording), downsampling the signal can be a useful
time-saver.
In MNE-Python, the resampling methods (:meth:raw.resample()
<mne.io.Raw.resample>, :meth:epochs.resample() <mne.Epochs.resample> and
:meth:evoked.resample() <mne.Evoked.resample>) apply a low-pass filter to
the signal to avoid aliasing, so you don't need to explicitly filter it
yourself first. This built-in filtering that happens when using
:meth:raw.resample() <mne.io.Raw.resample>, :meth:epochs.resample()
<mne.Epochs.resample>, or :meth:evoked.resample() <mne.Evoked.resample> is
a brick-wall filter applied in the frequency domain at the Nyquist
frequency of the desired new sampling rate. This can be clearly seen in the
PSD plot, where a dashed vertical line indicates the filter cutoff; the
original data had an existing lowpass at around 172 Hz (see
raw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets
automatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a
target rate of 200 Hz):
End of explanation
"""
current_sfreq = raw.info['sfreq']
desired_sfreq = 90 # Hz
decim = np.round(current_sfreq / desired_sfreq).astype(int)
obtained_sfreq = current_sfreq / decim
lowpass_freq = obtained_sfreq / 3.
raw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)
events = mne.find_events(raw_filtered)
epochs = mne.Epochs(raw_filtered, events, decim=decim)
print('desired sampling frequency was {} Hz; decim factor of {} yielded an '
'actual sampling frequency of {} Hz.'
.format(desired_sfreq, decim, epochs.info['sfreq']))
"""
Explanation: Because resampling involves filtering, there are some pitfalls to resampling
at different points in the analysis stream:
Performing resampling on :class:~mne.io.Raw data (before epoching) will
negatively affect the temporal precision of Event arrays, by causing
jitter_ in the event timing. This reduced temporal precision will
propagate to subsequent epoching operations.
Performing resampling after epoching can introduce edge artifacts on
every epoch, whereas filtering the :class:~mne.io.Raw object will only
introduce artifacts at the start and end of the recording (which is often
far enough from the first and last epochs to have no effect on the
analysis).
The following section suggests best practices to mitigate both of these
issues.
Best practices
To avoid the reduction in temporal precision of events that comes with
resampling a :class:~mne.io.Raw object, and also avoid the edge artifacts
that come with filtering an :class:~mne.Epochs or :class:~mne.Evoked
object, the best practice is to:
low-pass filter the :class:~mne.io.Raw data at or below
$\frac{1}{3}$ of the desired sample rate, then
decimate the data after epoching, by either passing the decim
parameter to the :class:~mne.Epochs constructor, or using the
:meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have
been created.
<div class="alert alert-danger"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at
$\frac{1}{3}$ of the desired sample rate is a fairly safe rule of
thumb based on the default settings in :meth:`raw.filter()
<mne.io.Raw.filter>` (which are different from the filter settings used
inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you
use a customized lowpass filter (specifically, if your transition
bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the
lowpass cutoff may still not be enough to avoid `aliasing`_, and
MNE-Python will not warn you about it (because the :class:`raw.info
<mne.Info>` object only keeps track of the lowpass cutoff, not the
transition bandwidth). Conversely, if you use a steeper filter, the
warning may be too sensitive. If you are unsure, plot the PSD of your
filtered data *before decimating* and ensure that there is no content in
the frequencies above the `Nyquist frequency`_ of the sample rate you'll
end up with *after* decimation.</p></div>
Note that this method of manually filtering and decimating is exact only when
the original sampling frequency is an integer multiple of the desired new
sampling frequency. Since the sampling frequency of our example data is
600.614990234375 Hz, ending up with a specific sampling frequency like (say)
90 Hz will not be possible:
End of explanation
"""
|
AlDanial/cloc | tests/inputs/Trapezoid_Rule.ipynb | gpl-2.0 |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return (x-3)*(x-5)*(x-7)+85
x = np.linspace(0, 10, 200)
y = f(x)
"""
Explanation: Basic Numerical Integration: the Trapezoid Rule
A simple illustration of the trapezoid rule for definite integration:
$$
\int_{a}^{b} f(x)\, dx \approx \frac{1}{2} \sum_{k=1}^{N} \left( x_{k} - x_{k-1} \right) \left( f(x_{k}) + f(x_{k-1}) \right).
$$
<br>
First, we define a simple function and sample it between 0 and 10 at 200 points
End of explanation
"""
a, b = 1, 8 # the left and right boundaries
N = 5 # the number of points
xint = np.linspace(a, b, N)
yint = f(xint)
"""
Explanation: Choose a region to integrate over and take only a few points in that region
End of explanation
"""
plt.plot(x, y, lw=2)
plt.axis([0, 9, 0, 140])
plt.fill_between(xint, 0, yint, facecolor='gray', alpha=0.4)
plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)dx$", horizontalalignment='center', fontsize=20);
"""
Explanation: Plot both the function and the area below it in the trapezoid approximation
End of explanation
"""
from __future__ import print_function
from scipy.integrate import quad
integral, error = quad(f, a, b)
integral_trapezoid = sum( (xint[1:] - xint[:-1]) * (yint[1:] + yint[:-1]) ) / 2
print("The integral is:", integral, "+/-", error)
print("The trapezoid approximation with", len(xint), "points is:", integral_trapezoid)
"""
Explanation: Compute the integral both at high accuracy and with the trapezoid approximation
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_source_label_time_frequency.ipynb | bsd-3-clause |
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_induced_power
print(__doc__)
"""
Explanation: Compute power and phase lock in label of the source space
Compute time-frequency maps of power and phase lock in the source space.
The inverse method is linear based on dSPM inverse operator.
The example also shows the difference in the time-frequency maps
when they are computed with and without subtracting the evoked response
from each epoch. The former results in induced activity only while the
latter also includes evoked (stimulus-locked) activity.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
label_name = 'Aud-rh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
tmin, tmax, event_id = -0.2, 0.5, 2
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Picks MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# Load epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject,
preload=True)
# Compute a source estimate per frequency band including and excluding the
# evoked response
frequencies = np.arange(7, 30, 2) # define frequencies of interest
label = mne.read_label(fname_label)
n_cycles = frequencies / 3. # different number of cycle per frequency
# subtract the evoked response in order to exclude evoked activity
epochs_induced = epochs.copy().subtract_evoked()
plt.close('all')
for ii, (this_epochs, title) in enumerate(zip([epochs, epochs_induced],
['evoked + induced',
'induced only'])):
# compute the source space power and the inter-trial coherence
power, itc = source_induced_power(
this_epochs, inverse_operator, frequencies, label, baseline=(-0.1, 0),
baseline_mode='percent', n_cycles=n_cycles, n_jobs=1)
power = np.mean(power, axis=0) # average over sources
itc = np.mean(itc, axis=0) # average over sources
times = epochs.times
##########################################################################
# View time-frequency plots
plt.subplots_adjust(0.1, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 2, 2 * ii + 1)
plt.imshow(20 * power,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=0., vmax=30., cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('Power (%s)' % title)
plt.colorbar()
plt.subplot(2, 2, 2 * ii + 2)
plt.imshow(itc,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=0, vmax=0.7,
cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('ITC (%s)' % title)
plt.colorbar()
plt.show()
"""
Explanation: Set parameters
End of explanation
"""
|
aje/POT | docs/source/auto_examples/plot_otda_d2.ipynb | mit |
# Authors: Remi Flamary <remi.flamary@unice.fr>
# Stanislas Chambon <stan.chambon@gmail.com>
#
# License: MIT License
import matplotlib.pylab as pl
import ot
import ot.plot
"""
Explanation: OT for domain adaptation on empirical distributions
This example introduces a domain adaptation problem in a 2D setting, makes
the problem explicit, and presents some optimal transport approaches to
solve it.
Quantities such as optimal couplings, the largest coupling coefficients, and
transported samples are plotted in order to give a visual understanding
of what the transport methods are doing.
End of explanation
"""
n_samples_source = 150
n_samples_target = 150
Xs, ys = ot.datasets.get_data_classif('3gauss', n_samples_source)
Xt, yt = ot.datasets.get_data_classif('3gauss2', n_samples_target)
# Cost matrix
M = ot.dist(Xs, Xt, metric='sqeuclidean')
"""
Explanation: generate data
End of explanation
"""
# EMD Transport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization
ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0)
ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt)
# transport source samples onto target samples
transp_Xs_emd = ot_emd.transform(Xs=Xs)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs)
transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs)
"""
Explanation: Instantiate the different transport algorithms and fit them
End of explanation
"""
pl.figure(1, figsize=(10, 10))
pl.subplot(2, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')
pl.subplot(2, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Target samples')
pl.subplot(2, 2, 3)
pl.imshow(M, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Matrix of pairwise distances')
pl.tight_layout()
"""
Explanation: Fig 1 : plots source and target samples + matrix of pairwise distance
End of explanation
"""
pl.figure(2, figsize=(10, 6))
pl.subplot(2, 3, 1)
pl.imshow(ot_emd.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nEMDTransport')
pl.subplot(2, 3, 2)
pl.imshow(ot_sinkhorn.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornTransport')
pl.subplot(2, 3, 3)
pl.imshow(ot_lpl1.coupling_, interpolation='nearest')
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornLpl1Transport')
pl.subplot(2, 3, 4)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_emd.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nEMDTransport')
pl.subplot(2, 3, 5)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_sinkhorn.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nSinkhornTransport')
pl.subplot(2, 3, 6)
ot.plot.plot2D_samples_mat(Xs, Xt, ot_lpl1.coupling_, c=[.5, .5, 1])
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.title('Main coupling coefficients\nSinkhornLpl1Transport')
pl.tight_layout()
"""
Explanation: Fig 2 : plots optimal couplings for the different methods
End of explanation
"""
# display transported samples
pl.figure(4, figsize=(10, 4))
pl.subplot(1, 3, 1)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nEmdTransport')
pl.legend(loc=0)
pl.xticks([])
pl.yticks([])
pl.subplot(1, 3, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornTransport')
pl.xticks([])
pl.yticks([])
pl.subplot(1, 3, 3)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.5)
pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.title('Transported samples\nSinkhornLpl1Transport')
pl.xticks([])
pl.yticks([])
pl.tight_layout()
pl.show()
"""
Explanation: Fig 3 : plot transported samples
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/miroc-es2l/land.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2l', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2L
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of the land surface model (i.e. the time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match that of the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
%%javascript
$.getScript('misc/kmahelona_ipython_notebook_toc.js')
"""
Explanation: <h1 id="tocheading">Table of Contents</h1>
<div id="toc"></div>
End of explanation
"""
from IPython.core.display import HTML
HTML("<iframe src=https://jupyter.readthedocs.io/en/latest/_images/notebook_components.png width=800 height=400></iframe>")
"""
Explanation: see Installation_Customization_Resources.txt for useful info and links
IPython: beyond plain Python
IPython provides a rich toolkit to help you make the most out of using Python, with:
*TAB COMPLETION*
Powerful Python shells (terminal and Qt-based).
A web-based notebook with the same core features but support for code, text, mathematical expressions, inline plots and other rich media.
Support for interactive data visualization and use of GUI toolkits.
Flexible, embeddable interpreters to load into your own projects.
Easy to use, high performance tools for parallel computing.
http://ipython.org/ipython-doc/stable/interactive/tutorial.html
http://ipython.readthedocs.io/en/stable/
Jupyter notebook
The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.
Check out the "command palette" for useful commands and shortcuts.
End of explanation
"""
!pwd
!ls -la
var = !ls
len(var), var[:3]
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
"""
Explanation: Navigation
Esc and Enter to toggle between "Command" and "Edit" mode --> see colour indicator of frame surrounding the cell
when in "Command" mode:
use cursor keys to navigate
a to insert cell Above
b to insert cell Below
dd to delete cell
when in "Edit" mode:
Tab for tab completion of file-names
Shift + Tab in parenthesis of function to display docstring
Cmd + / toggle line comment
Running code, getting help
In the notebook, to run a cell of code, hit Shift + Enter. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternatively, you can use:
Alt + Enter to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
Control + Enter executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
Esc and Enter to toggle between "Command" and "Edit" mode --> see colour indicator of frame surrounding the cell
h for Help, or check out "Help" --> "Keyboard Shortcuts" in the menu
Shift + Tab inside of the brackets of a function to get the Docstring
Cmd + / to toggle line comment
Shell commands
End of explanation
"""
import math

# ?math.log
help(math.log)
"""
Explanation: helpful IPython commands
command | description
-------- | -----------
? | Introduction and overview of IPython’s features.
%quickref | Quick reference.
help | Python’s own help system.
?object | Details about ‘object’, use ‘??object’ for extra details.
End of explanation
"""
# math.log()
"""
Explanation: press Shift + Tab inside of the brackets to get the Docstring
End of explanation
"""
# %magic
import numpy as np
import pandas as pd  # used in the cells below

values = range(1, 1001)
%%timeit
results = []
for val in values:
new_val = math.log(val, 10)
results.append(new_val)
%%timeit
results = [math.log(val, 10) for val in values]
%%timeit
results = np.log10(values)
"""
Explanation: Magic
End of explanation
"""
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s
"""
Explanation: Introduction to Pandas
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It is a fundamental high-level building block for doing practical, real-world data analysis in Python.
pandas is well suited for:
Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet
Ordered and unordered (not necessarily fixed-frequency) time series data.
Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels
Any other form of observational / statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure
Key features:
Easy handling of missing data
Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically
Powerful, flexible group by functionality to perform split-apply-combine operations on data sets
Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
Intuitive merging and joining data sets
Flexible reshaping and pivoting of data sets
Hierarchical labeling of axes
Robust IO tools for loading data from flat files, Excel files, databases, and HDF5
Time series functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
Pandas cheat sheet
(download the pdf or find it in the git repo of the current workshop)
https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
Object creation
Creating a Series by passing a list of values, letting pandas create a default integer index:
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df
"""
Explanation: Creating a DataFrame by passing a numpy array, with labeled columns:
End of explanation
"""
df2 = pd.DataFrame({'A' : 1.,
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1, index=list(range(4)), dtype='float32'),
'D' : np.array([3] * 4, dtype='int32'),
'E' : pd.Categorical(["test","train","test","train"]),
'F' : 'foo',
'tinyRick': pd.Series([2000])})
df2
"""
Explanation: Creating a DataFrame by passing a dict of objects that can be converted to series-like.
End of explanation
"""
df2.dtypes
"""
Explanation: Having specific dtypes (data types)
End of explanation
"""
df2["D"] = df2["D"].astype('float64')
"""
Explanation: change the dtype
End of explanation
"""
df.head()
df.tail(3)
"""
Explanation: Viewing Data
See the top & bottom rows of the frame
End of explanation
"""
df.index
df.columns
df.values
"""
Explanation: Display the index, columns, and the underlying numpy data
End of explanation
"""
df.describe()
"""
Explanation: Describe shows a quick statistic summary of your data excluding NaN (Not a Number) values
End of explanation
"""
df2.info()
"""
Explanation: Concise summary of a DataFrame
End of explanation
"""
df.T
"""
Explanation: Transposing your data
End of explanation
"""
df.sort_values(by='B')
"""
Explanation: Sorting by values
End of explanation
"""
df["A"]
"""
Explanation: Selection
important ones:
- .loc works on label of index and boolean array
- .iloc works on integer position (from 0 to length-1 of the axis)
for completeness/comparison:
- .ix first tries label-based lookup; if that fails it falls back to integer-based --> don't use it (deprecated, prone to mistakes)
- .at get scalar values. It's a very fast loc
- .iat Get scalar values. It's a very fast iloc
CAVEATS: label-based slicing in pandas is inclusive. The primary reason for this is that it is often not possible to easily determine the “successor” or next element after a particular label in an index.
Getting
Selecting a single column, which yields a Series.
End of explanation
"""
df.A
"""
Explanation: equivalent
End of explanation
"""
df[["A", "C"]]
"""
Explanation: selecting with a list of column names, yields a data frame (also with a single column name)
End of explanation
"""
df[0:3]
"""
Explanation: Selecting via [], which slices the rows.
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df
# Let's transpose the DataFrame and change the order of the columns (to force us to use the string labels of the new row index).
# row-index: Strings, columns: Integers
dfx = df.T
import random
columns = dfx.columns.tolist()
random.shuffle(columns)
dfx.columns = columns
dfx
dfx.loc["A":"C", ]
dfx.loc[["C", "A", "D"], ]
dfx.loc[["A", "D"], 0:2]
"""
Explanation: Selection by Label
df.loc[rows, columns]
.loc is primarily label based, but may also be used with a boolean array. .loc will raise KeyError when the items are not found. Allowed inputs are:
- A single label, e.g. 5 or 'a', (note that 5 is interpreted as a label of the index. This use is not an integer position along the index)
- A list or array of labels ['a', 'b', 'c']
- A slice object with labels 'a':'f', (note that contrary to usual python slices, both the start and the stop are included!)
- A boolean array
- A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above)
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df
### transpose
# df = df.T
# df
df.iloc[0:3, :]
"""
Explanation: Selection by Position
df.iloc[rows, columns]
.iloc is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. .iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow out-of-bounds indexing. (this conforms with python/numpy slice semantics). Allowed inputs are:
- An integer e.g. 5
- A list or array of integers [4, 3, 0]
- A slice object with ints 1:7
- A boolean array
- A callable function with one argument (the calling Series, DataFrame or Panel) and that returns valid output for indexing (one of the above)
End of explanation
"""
df.iloc[[1, 3, 5], 1:3]
"""
Explanation: Uncomment the transpose and execute the two cells above again; notice that, unlike label-based indexing, this doesn't raise an error.
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df
"""
Explanation: Boolean indexing
End of explanation
"""
df[df["A"] > 0]
"""
Explanation: Using a single column’s values to select data.
End of explanation
"""
df[df > 0]
"""
Explanation: A where operation for getting.
End of explanation
"""
cond = df["A"] > 0
print(type(cond))
cond
"""
Explanation: This comparison yields a pandas Series of booleans.
End of explanation
"""
sum(cond), len(cond) # or cond.sum(), cond.shape[0]
"""
Explanation: Summing boolean arrays can come in handy (True=1, False=0)
End of explanation
"""
df.loc[cond, ["A", "C"]]
"""
Explanation: Frequent use case: combining a Series of Bools with specific column names to select data.
End of explanation
"""
df['E'] = ['one', 'one','two','three','four','three']
df
"""
Explanation: Let's add a column to the DataFrame
End of explanation
"""
df[df['E'].isin(['two','four'])]
"""
Explanation: Using the isin() method for filtering:
End of explanation
"""
dates = pd.date_range('20130101', periods=6)
dfx = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
dfx
s1 = pd.Series([1,2,3,4,5,6], index=pd.date_range('20130102', periods=6))
s1 = s1.sort_values(ascending=False)
s1
dfx["F"] = s1
dfx
"""
Explanation: Setting
Setting a new column automatically aligns the data by the indexes
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df['E'] = ['one', 'one','two','three','four','three']
df
df.loc[5, "A"] = 0.815
"""
Explanation: Setting values by label
End of explanation
"""
df.iloc[0, 4] = "zero"
"""
Explanation: Setting values by position
End of explanation
"""
df.drop([3, 5])
"""
Explanation: delete (drop) some rows
End of explanation
"""
df = df.drop("E", axis=1)
df
"""
Explanation: delete (drop) a column
End of explanation
"""
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))
df
"""
Explanation: Axis
default: axis=0
axis=0 is "column-wise", for each column along the rows, horizontal
axis=1 is "row-wise", for each row along the columns, vertical
End of explanation
"""
df[df > 0] = -df
df
"""
Explanation: A where operation with setting.
End of explanation
"""
df = df * -1
df
"""
Explanation: Multiply with a scalar.
End of explanation
"""
df["F"] = df["A"] / df["B"]
df
"""
Explanation: Element-wise division of one column by another
End of explanation
"""
df.loc[0, ] = np.nan
df.loc[2, ["A", "C"]] = np.nan
df
"""
Explanation: Missing Data
pandas primarily uses the value np.nan to represent missing data.
End of explanation
"""
df.dropna()
"""
Explanation: To drop any rows that have missing data.
End of explanation
"""
df.dropna(how='all')
"""
Explanation: Drop only rows where all values are missing.
End of explanation
"""
df.fillna(value=5)
"""
Explanation: Fill missing values
End of explanation
"""
df.isnull() # or pd.isnull(df)
df.notnull() == ~df.isnull()
"""
Explanation: see replace method to replace values given in 'to_replace' with 'value'.
To get the boolean mask where values are nan
End of explanation
"""
dfx = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
dfx
"""
Explanation: Duplicates
End of explanation
"""
dfx.drop_duplicates(subset="A")
"""
Explanation: Drop duplicates except for the first occurrence, considering only a certain column.
End of explanation
"""
df
"""
Explanation: see also duplicated method to return boolean Series denoting duplicate rows, optionally only considering certain columns
End of explanation
"""
df.mean()
"""
Explanation: Operations
Stats
Operations in general exclude missing data.
Performing a descriptive statistic
End of explanation
"""
df.median(axis=1)
"""
Explanation: On the other axis
End of explanation
"""
df.cov()
"""
Explanation: The full range of basic statistics that are quickly calculable and built into the base Pandas package are:
| Function | Description |
|:---:|:---:|
| count | Number of non-null observations |
| sum | Sum of values |
| mean | Mean of values |
| mad | Mean absolute deviation |
| median | Arithmetic median of values |
| min | Minimum |
| max | Maximum |
| mode | Mode |
| abs | Absolute Value |
| prod | Product of values |
| std | Bessel-corrected sample standard deviation |
| var | Unbiased variance |
| sem | Standard error of the mean |
| skew | Sample skewness (3rd moment) |
| kurt | Sample kurtosis (4th moment) |
| quantile | Sample quantile (value at %) |
| cumsum | Cumulative sum |
| cumprod | Cumulative product |
| cummax | Cumulative maximum |
| cummin | Cumulative minimum |
The need for custom functions is minimal unless you have very specific requirements. c.f.
http://pandas.pydata.org/pandas-docs/stable/basics.html
e.g. compute pairwise covariance of columns, excluding NA/null values
End of explanation
"""
df = pd.DataFrame(np.random.randint(0,100,size=(6, 4)), columns=list('ABCD'))
df
df.apply(np.cumsum, axis=0)
df.apply(lambda x: x.max() - x.min())
"""
Explanation: Apply
Applying functions to the data
End of explanation
"""
df = pd.DataFrame(np.random.randn(10, 4))
df
# break it into multiple pieces
pieces = [df[3:7], df[:3], df[7:]]
pd.concat(pieces)
"""
Explanation: Merge
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Concat
End of explanation
"""
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['bar', 'foo'], 'rval': [4, 5]})
left
right
pd.merge(left, right, on='key')
"""
Explanation: Join
SQL style merges.
End of explanation
"""
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
s = df.iloc[3]
df
df.append(s, ignore_index=True)  # note: DataFrame.append was removed in pandas 2.0; prefer pd.concat
"""
Explanation: Append
Append rows to a dataframe.
End of explanation
"""
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : np.random.randn(8),
'D' : np.random.randn(8)})
df
"""
Explanation: Grouping
By “group by” we are referring to a process involving one or more of the following steps
Splitting the data into groups based on some criteria
Applying a function to each group independently
Combining the results into a data structure
End of explanation
"""
df.groupby('A').sum()
"""
Explanation: Grouping and then applying a function sum to the resulting groups.
End of explanation
"""
df.groupby(['A','B']).sum()
"""
Explanation: Grouping by multiple columns forms a hierarchical index, to which we then apply the function.
End of explanation
"""
df.groupby('A')["C"].nlargest(2)
"""
Explanation: Series.nlargest: Return the largest n elements.
Series.nsmallest: Return the smallest n elements.
End of explanation
"""
grouped = df.groupby(['A','B'])
for name, group in grouped:
print "Name:", name
print group
print "#"*50
"""
Explanation: Splitting
Iterate over a groupby object, just to illustrate the split part of the groupby method.
End of explanation
"""
grouped.aggregate(["sum", "count", "median", "mean"]) # see `Operations` above for more built-in methods
"""
Explanation: Applying
Aggregation: computing a summary statistic (or statistics) about each group. Some examples:
Compute group sums or means
Compute group sizes / counts
Transformation: perform some group-specific computations and return a like-indexed. Some examples:
Standardizing data (zscore) within group
Filling NAs within groups with a value derived from each group
Filtration: discard some groups, according to a group-wise computation that evaluates True or False. Some examples:
Discarding data that belongs to groups with only a few members
Filtering out data based on the group sum or mean
Aggregation
"reduces" the DataFrame, meaning the df_original.shape > df_aggregated.shape
End of explanation
"""
df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
'b': [1, 2, 3, 4, 5, 6],
'c': ['q', 'q', 'q', 'q', 'w', 'w'],
'd': ['z','z','z','o','o','o']})
df['e'] = df['a'] + df['b']
df
df['f'] = (df.groupby(['c', 'd'])['e'].transform('sum'))
df
assert df.loc[0, "f"] == df.loc[( (df["c"] == "q") & (df["d"] == "z") ), "e"].sum()
"""
Explanation: Transformation
returns an object that is indexed the same (same size) as the one being grouped. Thus, the passed transform function should return a result that is the same size as the group chunk.
End of explanation
"""
df = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})
df
df.groupby('B').filter(lambda x: len(x) > 2)
"""
Explanation: Filtration
returns a subset of the original object.
End of explanation
"""
def example_custom_function(number):
if number >= 6:
return number / 5.0
elif number <= 3:
return number * 88
else:
return np.nan
df["A"].apply(example_custom_function)
"""
Explanation: Apply
Another frequent operation is applying a function on 1D arrays to each column or row.
Apply a custom function to the entire DataFrame or a Series. axis indicates row or column-wise application. (default: axis=0)
apply on a Series performs an element-wise operation
End of explanation
"""
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
def example_custom_function_on_series_1(series):
return series / series.sum()
### notice how example_custom_function_on_series_1 returns a Series of the same length as the input --> transformation
df.apply(example_custom_function_on_series_1)
def example_custom_function_on_series_2(series):
return series.max() - series.min()
### notice how example_custom_function_on_series_2 returns a single value per Series --> aggregation
df.apply(example_custom_function_on_series_2)
"""
Explanation: apply on a DataFrame perform a Series-wise operation
End of explanation
"""
formater = lambda x: '%.2f' % x
df.applymap(formater)
"""
Explanation: Applymap
Element-wise Python functions can be used, too. You can do this with applymap:
End of explanation
"""
df["A"].map(formater)
"""
Explanation: Map
The reason for the name applymap is that Series has a map method for applying an element-wise function:
End of explanation
"""
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two',
'one', 'two', 'one', 'two']]))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])
df2 = df[:4]
df2
stacked = df2.stack()
stacked
"""
Explanation: Reshaping
Stack
“compresses” a level in the DataFrame’s columns.
End of explanation
"""
stacked.unstack() # in this particular case equivalent to stacked.unstack(2)
stacked.unstack(0)
stacked.unstack(1)
"""
Explanation: Unstack
the inverse operation of stack() is unstack(), which by default unstacks the last level
End of explanation
"""
df = pd.DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
'B' : ['a', 'b', 'c'] * 4,
'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
'D' : np.random.randn(12),
'E' : np.random.randn(12)})
df
pd.pivot_table(df, values=['D', "E"], index=['A', 'B'], columns=['C'])
"""
Explanation: Pivot
We can produce pivot tables from this data very easily:
End of explanation
"""
df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
'B': {0: 1, 1: 3, 2: 5},
'C': {0: 2, 1: 4, 2: 6}})
df
pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
"""
Explanation: Melt
“Unpivots” a DataFrame from wide format to long format, optionally leaving identifier variables set. This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.
End of explanation
"""
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
df.plot()
df.boxplot()
df.plot(kind="scatter", x="A", y="B")
df[["C", "D"]].sum().plot(kind="bar", x="C", y="D")
axs = pd.scatter_matrix(df, figsize=(16,12), diagonal='kde')
"""
Explanation: Plotting
On DataFrame, plot() is a convenience to plot all of the columns with labels. Using %matplotlib inline magic function enables plotting within the jupyter notebook cells (instead of e.g. in a separate Qt window).
End of explanation
"""
fn = r"data/Saliva.txt"
df = pd.read_csv(fn, sep='\t')
df.head(3)
"""
Explanation: Getting data in and out
reading data from file
tabular data such as CSV or tab-delimited txt files
End of explanation
"""
xls = pd.ExcelFile('data/microbiome/MID1.xls')
print(xls.sheet_names)  # see all sheet names
df = xls.parse("Sheet 1", header=None)
df.columns = ["Taxon", "Count"]
df.head()
"""
Explanation: from Excel
End of explanation
"""
fn_out = r"data/Tidy_data.txt"
df.to_csv(fn_out, sep='\t', header=True, index=False)
"""
Explanation: a current list of I/O tools:
- read_csv
- read_excel
- read_hdf
- read_sql
- read_json
- read_msgpack (experimental)
- read_html
- read_gbq (experimental)
- read_stata
- read_sas
- read_clipboard
- read_pickle
checkout the documentation http://pandas.pydata.org/pandas-docs/stable/ for more infos
writing data to file
End of explanation
"""
xls = pd.ExcelFile('data/microbiome/MID1.xls')
df = xls.parse("Sheet 1", header=None)
df.columns = ["Taxon", "Count"]
df.head(3)
df["superkingdom"], df["rest"] = zip(*df["Taxon"].apply(lambda x: x.split(" ", 1)))
df.head(3)
df.superkingdom.unique()
"""
Explanation: Miscellaneous
zip example
End of explanation
"""
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s
s.str.lower()
"""
Explanation: string processing methods
for Series and Index, to make it easy to operate on each element of the array. These methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods.
End of explanation
"""
df = pd.DataFrame(np.array([.2, 1.4, 2.5, 6.2, 9.7, np.nan, 2.1]), columns=["value"])
df["category"] = pd.cut(df["value"], 3, retbins=False, labels=["good", "medium", "bad"])
df
"""
Explanation: Cut
Return indices of half-open bins to which each value of x belongs.
The cut function can be useful for going from a continuous variable to a categorical variable. For example, cut could convert ages to groups of age ranges.
End of explanation
"""
import rpy2.robjects as robjects
import rpy2.robjects.packages as rpackages
from rpy2.robjects.numpy2ri import numpy2ri
robjects.conversion.py2ri = numpy2ri
dict_ = {"Intensity First" : [5.0, 2.0, 3.0, 4.0],
"Intensity Second" : [4.0, 1.0, 4.0, 2.0],
"Intensity Third" : [3.0, 4.0, 6.0, 8.0]}
df = pd.DataFrame(dict_, index=list("ABCD"))
df
def R_function_normalizeQuantiles():
rpackages.importr('limma')
normalizeQuantiles = robjects.r['normalizeQuantiles']
return normalizeQuantiles
def quantile_normalize(df, cols_2_norm):
"""
:param df: DataFrame
:param cols_2_norm: ListOfString (Columns to normalize)
:return: DataFrame
"""
normalizeQuantiles = R_function_normalizeQuantiles()
# set Zero to NaN and transform to log-space
df[cols_2_norm] = df[cols_2_norm].replace(to_replace=0.0, value=np.nan)
df[cols_2_norm] = np.log10(df[cols_2_norm])
# quantile normalize
df[cols_2_norm] = np.array(normalizeQuantiles(df[cols_2_norm].values))
# Transform back to non-log space and replace NaN with Zero
df[cols_2_norm] = np.power(10, df[cols_2_norm])
df[cols_2_norm] = df[cols_2_norm].replace(to_replace=np.nan, value=0.0)
return df
df_norm = quantile_normalize(df, df.columns.tolist())
df_norm
"""
Explanation: rpy2 example
call an R function from Python
https://en.wikipedia.org/wiki/Quantile_normalization
https://bioconductor.org/packages/release/bioc/html/limma.html
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_stats_cluster_1samp_test_time_frequency.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Non-parametric 1 sample cluster statistic on single trial power
This script shows how to estimate significant clusters
in time-frequency power estimates. It uses a non-parametric
statistical procedure based on permutations and cluster
level statistics.
The procedure consists of:
extracting epochs
compute single trial power estimates
baseline-correct the power estimates (power ratios)
compute stats to see if ratio deviates from 1.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# Take only one channel
ch_name = 'MEG 1332'
epochs.pick_channels([ch_name])
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
frequencies = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, frequencies, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(0., 0.4)
tfr_epochs.crop(0., 0.4)
epochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel
"""
Explanation: Set parameters
End of explanation
"""
threshold = 2.5
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=100,
threshold=threshold, tail=0)
"""
Explanation: Compute statistic
End of explanation
"""
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs, cmap=plt.cm.gray,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2])
plt.show()
"""
Explanation: View time-frequency plots
End of explanation
"""
|
dalya/WeirdestGalaxies | outlier_detection_RF_demo.ipynb | mit | %pylab inline
import numpy
import sklearn.ensemble  # import the submodule explicitly; "import sklearn" alone does not expose sklearn.ensemble
import matplotlib.pyplot as plt
"""
Explanation: Outlier Detection Algorithm with unsupervised Random Forest (RF)
This notebook shows the basic steps of the outlier detection algorithm we developed, based on unsupervised RF, and its results on simulated 2D data.
The steps of the following notebook include:
* Simulate 2D data on which we will perform the anomaly detection. We call these data 'real'.
The outlier detection algorithm follows the following steps:
* Build synthetic data with the same sample size as the 'real' data, and the same marginal distribution in its features. We call these data 'synthetic'.
* Label the 'real' data as Class I and the 'synthetic' data as Class II. Merge the two samples with their labels into a data matrix X and label vector Y.
* Train the RF to distinguish between Class I and Class II.
* Once we have the trained forest, we propagate the 'real' data through the RF to obtain a pair-wise distance for all the objects in our sample.
* Sum the distances from a given object to all the rest to obtain the final weirdness score.
* Sort the weirdness score vector and extract the N weirdest objects in the sample.
End of explanation
"""
mean = [50, 60]
cov = [[5,5],[100,200]]
x1,y1 = numpy.random.multivariate_normal(mean,cov,1000).T
mean = [65, 70]
cov = [[20,10],[2,10]]
x2,y2 = numpy.random.multivariate_normal(mean,cov,1000).T
# and additional noises
mean = [60, 60]
cov = [[100,0],[0,100]]
x3,y3 = numpy.random.multivariate_normal(mean,cov,200).T
# concatenate it all to a single vector
x_total = numpy.concatenate((x1, x2, x3))
y_total = numpy.concatenate((y1, y2, y3))
X = numpy.array([x_total, y_total]).T
# create object IDs that will be just integers
obj_ids = numpy.arange(len(x_total))
plt.rcParams['figure.figsize'] = 4, 4
plt.title("real data")
plt.plot(X[:, 0], X[:, 1], "ok", markersize=3)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
"""
Explanation: Simulating 2D data for anomaly detection
In the next cell we will simulate the data on which we will perform outlier detection.
We use 2D data for illustration purposes. If you have trouble constructing your input data on which to perform outlier detection, please let us know (dalyabaron@gmail.com) and we will help!
End of explanation
"""
def return_synthetic_data(X):
"""
The function returns a matrix with the same dimensions as X but with synthetic data
based on the marginal distributions of its featues
"""
features = len(X[0])
X_syn = numpy.zeros(X.shape)
    for i in range(features):
obs_vec = X[:,i]
syn_vec = numpy.random.choice(obs_vec, len(obs_vec)) # here we chose the synthetic data to match the marginal distribution of the real data
X_syn[:,i] += syn_vec
return X_syn
X_syn = return_synthetic_data(X)
"""
Explanation: Outlier detection algorithm
We wish to detect the outlying objects in the given 2D data from the previous cell (which we call 'real'). We will do that step by step in the following cells.
Step 1: create synthetic data with the same sample size as the 'real' data, and the same marginal distributions in its features. We call these data 'synthetic'.
We build the 'synthetic' data with the same sample size as the 'real' data because we want the RF to train on a balanced sample. That is, the RF performs better when the different classes have approximately the same number of objects. Otherwise, the trained forest will perform better on the bigger class and worse on the smaller class.
We build the 'synthetic' data with the same marginal distributions because we wish to detect objects with outlying covariance (and higher moments). We show in the paper that this method works best for anomaly detection on galaxy spectra. Other possible choices are discussed by Shi & Horvath (2006).
End of explanation
"""
plt.rcParams['figure.figsize'] = 8, 4
plt.subplot(1, 2, 1)
plt.title("real data")
plt.plot(X[:, 0], X[:, 1], "ok", markersize=3)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.subplot(1, 2, 2)
plt.title("synthetic data")
plt.plot(X_syn[:, 0], X_syn[:, 1], "og", markersize=3)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.tight_layout()
"""
Explanation: Now let's plot the 'real' and 'synthetic' data to examine the properties of the 'synthetic' data:
End of explanation
"""
plt.rcParams['figure.figsize'] = 8, 6
plt.subplot(2, 2, 1)
plt.title("real data")
tmp = plt.hist(X[:, 0], bins=numpy.linspace(0, 140, 20), color="k", label="Feature 1")
plt.legend(loc="best")
plt.xlabel("Feature 1")
plt.ylabel("N")
plt.subplot(2, 2, 2)
plt.title("real data")
tmp = plt.hist(X[:, 1], bins=numpy.linspace(0, 140, 20), color="k", label="Feature 2")
plt.legend(loc="best")
plt.xlabel("Feature 2")
plt.ylabel("N")
plt.subplot(2, 2, 3)
plt.title("synthetic data")
tmp = plt.hist(X_syn[:, 0], bins=numpy.linspace(0, 140, 20), color="g", label="Feature 1")
plt.legend(loc="best")
plt.xlabel("Feature 1")
plt.ylabel("N")
plt.subplot(2, 2, 4)
plt.title("synthetic data")
tmp = plt.hist(X_syn[:, 1], bins=numpy.linspace(0, 140, 20), color="g", label="Feature 2")
plt.legend(loc="best")
plt.xlabel("Feature 2")
plt.ylabel("N")
plt.tight_layout()
"""
Explanation: Now let's plot the marginal distributions of the 'real' and 'synthetic' data and make sure that they match for a given feature:
End of explanation
"""
def merge_work_and_synthetic_samples(X, X_syn):
"""
The function merges the data into one sample, giving the label "1" to the real data and label "2" to the synthetic data
"""
# build the labels vector
Y = numpy.ones(len(X))
Y_syn = numpy.ones(len(X_syn)) * 2
Y_total = numpy.concatenate((Y, Y_syn))
X_total = numpy.concatenate((X, X_syn))
return X_total, Y_total
X_total, Y_total = merge_work_and_synthetic_samples(X, X_syn)
# declare an RF
N_TRAIN = 500 # number of trees in the forest
rand_f = sklearn.ensemble.RandomForestClassifier(n_estimators=N_TRAIN)
rand_f.fit(X_total, Y_total)
"""
Explanation: Step 2: Once we have the 'real' and 'synthetic' data, we merge them into a single sample and assign classes to each of these. We will then train an RF to distinguish between the different classes.
This step essentially converts the problem from unsupervised to supervised, since we have labels for the data.
We train the forest on the entire data, without dividing it into training, validation, and test sets as typically done in supervised learning. This is because we do not need to test the algorithm's performance on new data, but we need it to learn as much as possible from the input ('real') data in order to detect outliers.
In this demo we do not perform parallel training since the sample is small. In case of parallel training one must:
* Select a random subset of objects from X.
* Select a random subset of features from X.
* Build 'synthetic' data with the same dimensions as the subset of the 'real' data.
* Train N decision trees for this 'real' and 'synthetic' data.
* Repeat the process M times, each time with a new subset of objects and features.
Merge all the decision trees into a forest; the forest will contain NxM decision trees.
End of explanation
"""
xx, yy = numpy.meshgrid(numpy.linspace(0, 140, 100), numpy.linspace(0, 140, 100))
Z = rand_f.predict_proba(numpy.c_[xx.ravel(), yy.ravel()])[:, 0]
Z = Z.reshape(xx.shape)
plt.rcParams['figure.figsize'] = 6, 6
plt.pcolormesh(xx, yy, Z, cmap='viridis')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.colorbar()
plt.title("Probability map for Class I (real)")
plt.xticks((0, 20, 40, 60, 80, 100, 120))
plt.yticks((0, 20, 40, 60, 80, 100, 120))
plt.xlim(0, 140)
plt.ylim(0, 140)
plt.tight_layout()
"""
Explanation: Let's plot the probability of an object, which is described by the coordinates (Feature 1, Feature 2), to be classified as 'real' by the trained RF. This will give us a sense of the fitting that is done.
End of explanation
"""
import io
import base64
from IPython.display import HTML
video = io.open('rf_unsup_example.m4v', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
"""
Explanation: One can see that in the parameter range of Feature 1: 40-80, and Feature 2: 40-100, the classifier performs well and is able to describe the boundaries of the 'real' data well. This is not true outside this range, since we do not have 'real' data there. However, this is not an issue since we wish to detect outliers where 'real' data actually exists.
Step 3: Having a trained RF, we now build the similarity matrix that describes the pair-wise similarity of all the 'real' objects in our sample. We note that from this point, we do not need the 'synthetic' data any more.
The algorithm presented by Shi & Horvath (2006) propagates each pair of objects in the decision trees and counts how many times these objects ended up in the same terminal node (leaf). Since a leaf in the tree describes the same route inside the tree, objects that end up in the same leaf are described by the same model within the same tree and therefore are similar. The similarity measure can vary from 0 (objects never end up in the same leaf) to N_trees (objects ended up in the same leaf in all the decision trees).
The next cell shows the schematic process of measuring the similarity measure:
End of explanation
"""
def build_similarity_matrix(rand_f, X):
"""
The function builds the similarity matrix based on the feature matrix X for the results Y
based on the random forest we've trained
the matrix is normalised so that the biggest similarity is 1 and the lowest is 0
This function counts only leaves in which the object is classified as a "real" object
it is also implemented to optimize running time, asumming one has enough running memory
"""
# apply to get the leaf indices
apply_mat = rand_f.apply(X)
# find the predictions of the sample
is_good_matrix = numpy.zeros(apply_mat.shape)
for i, est in enumerate(rand_f.estimators_):
d = est.predict_proba(X)[:, 0] == 1
is_good_matrix[:, i] = d
# mark leaves that make the wrong prediction as -1, in order to remove them from the distance measurement
apply_mat[is_good_matrix == False] = -1
# now calculate the similarity matrix
    same_leaf = apply_mat[:, None] == apply_mat[None, :]
    both_valid = (apply_mat[:, None] != -1) & (apply_mat[None, :] != -1)
    n_valid = numpy.asfarray(numpy.sum([apply_mat != -1], axis=2))
    sim_mat = numpy.sum(same_leaf & both_valid, axis=2) / n_valid
return sim_mat
sim_mat = build_similarity_matrix(rand_f, X)
dis_mat = 1 - sim_mat
"""
Explanation: In our case, we find that counting all the leaves, regardless of their prediction, is not optimal.
Instead, we propagate the objects through the decision trees and count how many times these objects ended up in the same leaf which ALSO predicts both of the objects to be real.
In our demo example this does not change the outliers that we get, but for outlier detection on galaxy spectra we find that galaxies with very low signal-to-noise ratio (essentially no signal detected), which are often predicted to be 'synthetic' by the algorithm, add significant noise to the final distance measure.
In the next cells we compute the similarity matrix when taking into account only leaves that predict both of the objects to be 'real':
End of explanation
"""
sum_vec = numpy.sum(dis_mat, axis=1)
sum_vec /= float(len(sum_vec))
plt.rcParams['figure.figsize'] = 6, 4
plt.title("Weirdness score histogram")
tmp = plt.hist(sum_vec, bins=60, color="g")
plt.ylabel("N")
plt.xlabel("weirdness score")
"""
Explanation: We defined the distance matrix to be 1 - the similarity matrix.
The definition of the distance matrix should obey the triangle inequality and should preserve the inner ordering of the data. Therefore, there are different distance definitions that will result in the same outlier ranking (such as square root of 1 - the similarity matrix).
Step 4: Here we sum the distance matrix. This is essentialy the weirdness score. Objects that have on average large distances from the rest will have a higher sum, therefore a higher weirdness score.
End of explanation
"""
N_outliers = 200
sum_vec_outliers = numpy.sort(sum_vec)[::-1][:N_outliers]
obj_ids_outliers = obj_ids[numpy.argsort(sum_vec)][::-1][:N_outliers]
plt.rcParams['figure.figsize'] = 5, 5
plt.title("Data and outliers")
plt.plot(X[:,0], X[:,1], "ok", label="input data", markersize=4)
plt.plot(X[obj_ids_outliers, 0], X[obj_ids_outliers, 1], "om", label="outliers", markersize=4)
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.legend(loc="best")
"""
Explanation: We can see that for this dataset most of the objects have an average distance of 0.998 from each other, and outliers are objects with average distance that approaches 1.
We show in the paper that for the entire SDSS galaxy sample, the weirdness distribution decreases for increasing weirdness score.
Lets plot the 200 most outlying objects in our sample:
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/1_machine_learning_foundations/assignment/week3/Analyzing product sentiment.ipynb | mit | import graphlab
"""
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
"""
products.head()
"""
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
"""
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
"""
Explanation: Build the word count vector for each review
End of explanation
"""
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
"""
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
"""
products['rating'].show(view='Categorical')
"""
Explanation: Build a sentiment classifier
End of explanation
"""
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
"""
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while those with a rating of 2 or lower will be considered negative.
End of explanation
"""
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
"""
Explanation: Let's train the sentiment classifier
End of explanation
"""
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
"""
Explanation: Evaluate the sentiment model
End of explanation
"""
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
"""
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
"""
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
"""
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
"""
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
"""
Explanation: Most positive reviews for the giraffe
End of explanation
"""
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
"""
Explanation: Show most negative reviews for giraffe
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/9460321824116e4964fbe6d88d27462e/plot_cluster_stats_evoked.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.stats import permutation_cluster_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation F-test on sensor data with 1D cluster level
This example tests whether the evoked response is significantly different
between conditions. The multiple comparisons problem is addressed
with a cluster-level permutation test.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
channel = 'MEG 1332' # include only this channel in analysis
include = [channel]
"""
Explanation: Set parameters
End of explanation
"""
picks = mne.pick_types(raw.info, meg=False, eog=True, include=include,
exclude='bads')
event_id = 1
reject = dict(grad=4000e-13, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition1 = epochs1.get_data() # as 3D matrix
event_id = 2
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject)
condition2 = epochs2.get_data() # as 3D matrix
condition1 = condition1[:, 0, :] # take only one channel to get a 2D array
condition2 = condition2[:, 0, :] # take only one channel to get a 2D array
"""
Explanation: Read epochs for the channel of interest
End of explanation
"""
threshold = 6.0
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_test([condition1, condition2], n_permutations=1000,
threshold=threshold, tail=1, n_jobs=1)
"""
Explanation: Compute statistic
End of explanation
"""
times = epochs1.times
plt.close('all')
plt.subplot(211)
plt.title('Channel : ' + channel)
plt.plot(times, condition1.mean(axis=0) - condition2.mean(axis=0),
label="ERF Contrast (Event 1 - Event 2)")
plt.ylabel("MEG (T / m)")
plt.legend()
plt.subplot(212)
for i_c, c in enumerate(clusters):
c = c[0]
if cluster_p_values[i_c] <= 0.05:
h = plt.axvspan(times[c.start], times[c.stop - 1],
color='r', alpha=0.3)
else:
plt.axvspan(times[c.start], times[c.stop - 1], color=(0.3, 0.3, 0.3),
alpha=0.3)
hf = plt.plot(times, T_obs, 'g')
plt.legend((h, ), ('cluster p-value < 0.05', ))
plt.xlabel("time (ms)")
plt.ylabel("f-values")
plt.show()
"""
Explanation: Plot
End of explanation
"""
|
julienchastang/unidata-python-workshop | failing_notebooks/casestudy.ipynb | mit | import xarray
ds = xarray.open_dataset('https://motherlode.ucar.edu/repository/opendap/41f2b38a-4e70-4135-8ff8-dbf3d1dcbfc1/entry.das',
decode_times=False)
print(ds)
"""
Explanation: netCDF File Visualization Case Study
I was asked by a colleague to visualize data contained within this netCDF file (OPeNDAP link) with Python. What follows is an exploration of how I achieved that objective. Because this exercise touches upon many technologies related to Unidata, it makes for an interesting case study. We will be meandering through,
netCDF
WMO GRIB metadata
Map projections
xarray data analysis library
cartopy visualization library
Crack Open the File
To get our bearings let's see what there is inside our netCDF file. We will be using the xarray library to dig inside our netCDF data. xarray is similar to pandas, but for the Common Data Model. We could have just used the netcdf4-python library but xarray has output that is more nicely formatted. Let's first import xarray and open the dataset.
End of explanation
"""
print(ds['th'])
"""
Explanation: Dimensions, Coordinates, Data Variables
As far as the dimensions and coordinates go, the most relevant and important coordinates variables are x and y. We can see the data variables, such as, temperature (t), mixing ratio (mr), and potential temperature (th), are mostly on a 1901 x 1801 grid. There is also the mysterious nav dimension and associated data variables which we will be examining later.
Let's set a goal of visualizing potential temperature with the Cartopy plotting package.
The first step is to get more information concerning the variables we are interested in. For example, let's look at potential temperature or th.
End of explanation
"""
th = ds['th'].values[0][0]
print(th)
"""
Explanation: potential temperature (th)
Let's grab the data array for potential temperature (th).
End of explanation
"""
print(ds['grid_type_code'])
"""
Explanation: To Visualize the Data, We have to Decrypt the Projection
In order to visualize data contained within a two-dimensional array on a map that represents a three-dimensional globe, we need to understand the projection of the data.
We can make an educated guess that these are contained in the data variables with the nav coordinate variable.
Specifically,
grid_type
grid_type_code
x_dim
y_dim
Nx
Ny
La1
Lo1
LoV
Latin1
Latin2
Dx
Dy
But what are these??
For Grins, Let's Scrutinize the grid_type_code
End of explanation
"""
print(ds['grid_type_code'].values[0])
"""
Explanation: Google to the Rescue
A simple Google search of GRIB-1 GDS data representation type takes us to
A GUIDE TO THE CODE FORM FM 92-IX Ext. GRIB Edition 1 from 1994 document. Therein one can find an explanation of the variables needed to understand the map projection. Let's review these variables.
End of explanation
"""
grid_type = ds['grid_type'].values
print('The grid type is ', grid_type[0])
"""
Explanation: What is grid_type_code of 5?
Let's look at Table 6. A grid_type_code of 5 corresponds to a Polar Stereographic projection.
Next up: grid_type
End of explanation
"""
nx, ny = ds['Nx'].values[0], ds['Ny'].values[0]
print(nx, ny)
"""
Explanation: Uh oh! Polar Stereographic or Lambert Conformal??
Note that this newest piece of information relating to a Lambert Conformal projection disagrees with the earlier projection information about a Polar Stereographic projection. There is a bug in the metadata description of the projection.
Moving on anyway: Nx and Ny
According to the grib documentation Nx and Ny represent the number grid points along the x and y axes. Let's grab those.
End of explanation
"""
la1, lo1 = ds['La1'].values[0], ds['Lo1'].values[0]
print(la1, lo1)
"""
Explanation: La1 and Lo1
Next let's get La1 and Lo1, which are defined as the "first grid points". These are probably the latitude and longitude of one of the corners of the grid.
End of explanation
"""
latin1, latin2 = ds['Latin1'].values[0], ds['Latin2'].values[0]
print(latin1, latin2)
"""
Explanation: Latin1 and Latin2
Next up are the rather mysteriously named Latin1 and Latin2 variables. When I first saw these identifiers, I thought they referred to a Unicode block, but in fact they relate to the secants of the projection cone. I do not know why they are called "Latin" and this name is confusing. At any rate, we can feel comfortable that we are dealing with Lambert Conformal rather than Polar Stereographic.
Credit: http://www.geo.hunter.cuny.edu/~jochen
End of explanation
"""
lov = ds['LoV'].values[0]
print(lov)
"""
Explanation: The Central Meridian for the Lambert Conformal Projection, LoV
If we are defining a Lambert Conformal projection, we will require the central meridian that the GRIB documentation refers to as LoV.
End of explanation
"""
print(ds['Dx'])
print(ds['Dy'])
"""
Explanation: Dx and Dy
Finally, let's look at the grid increments. In particular, we need to find the units.
End of explanation
"""
dx,dy = ds['Dx'].values[0],ds['Dy'].values[0]
print(dx,dy)
"""
Explanation: Units for Dx and Dy
The units for the deltas are in meters.
End of explanation
"""
%matplotlib inline
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import matplotlib as mpl
"""
Explanation: Let's Review What We Have
We now have all the information we need to understand the Lambert projection:
The secants of the Lambert Conformal projection (Latin1, Latin2)
The central meridian of the projection (LoV)
Moreover, we have additional information that shows how the data grid relates to the projection:
The number of grid points in x and y (Nx, Ny)
The delta in meters between grid points (Dx, Dy)
The first latitude and longitude of the data (La1, Lo1).
We are Ready for Visualization (almost)!
Let's import cartopy and matplotlib.
End of explanation
"""
proj = ccrs.LambertConformal(central_longitude=lov,standard_parallels=(latin1,latin2))
"""
Explanation: Define the Lambert Conformal Projection with Cartopy
End of explanation
"""
pc = ccrs.PlateCarree()
"""
Explanation: Lambert Conformal Grid Extents
To plot the data we need the left,right,bottom,top extents of the grid expressed in Lambert Conformal
coordinates.
Key point: The projection coordinate systems have flat topology and Euclidean distance.
Calculating the Extents
Remember, we have:
The number of grid points in x and y (Nx, Ny)
The delta in meters between grid points (Dx, Dy)
The first latitude and longitude of the data (La1, Lo1).
We have one of the corners in latitude and longitude, but we need to convert it LC coordinates and derive the other corner.
Plate Carrée Projection
The Plate Carrée projection is a very simple X,Y/Cartesian projection. It is used a lot in Cartopy because it allows you to express coordinates in familiar latitudes and longitudes. Remember: the projection coordinate systems have flat topology and Euclidean distance.
Plate Carrée
Source: Wikipedia
Create the PlateCarree Cartopy Projection
End of explanation
"""
left,bottom = proj.transform_point(lo1,la1,pc)
print(left,bottom)
"""
Explanation: Convert Corner from Lat/Lon Plate Carrée to LC
The transform_point method translates coordinates from one projection coordinate system to the other.
End of explanation
"""
right,top = left + nx*dx,bottom + ny*dy
print(right,top)
"""
Explanation: Derive Opposite Corner
Derive the opposite corner from the number of points and the delta. Again, we can do this because the projection coordinate systems have flat topology and Euclidean distance.
End of explanation
"""
#Define the figure
fig = plt.figure(figsize=(12, 12))
# Define the extents and add the data
ax = plt.axes(projection=proj)
extents = (left, right, bottom, top)
ax.contourf(th, origin='lower', extent=extents, transform=proj)
# Add bells and whistles
ax.coastlines(resolution='50m', color='black', linewidth=2)
ax.add_feature(ccrs.cartopy.feature.STATES)
ax.add_feature(ccrs.cartopy.feature.BORDERS, linewidth=1, edgecolor='black')
ax.gridlines()
plt.show()
"""
Explanation: Plot It Up!
Now that we have the extents, we are ready to plot.
End of explanation
"""
|
mikarubi/notebooks | worker/notebooks/bolt/tutorials/basic-usage.ipynb | mit | from bolt import ones
a = ones((2,3,4))
a.shape
"""
Explanation: Basic usage
The primary object in Bolt is the Bolt array. We can construct these arrays using familiar operators (like zeros and ones), or from an existing array, and manipulate them like ndarrays whether in local or distributed settings. This notebook highlights the core functionality, much of which hides the underlying implementation (by design!); see other tutorials for more about what's going on under the hood, and for more advanced usage.
Local array
The local Bolt array is just like a NumPy array, and we can construct it without any special arguments.
End of explanation
"""
a.sum()
a.transpose(2,1,0).shape
"""
Explanation: The local array is basically a wrapper for a NumPy array, so that we can write applications against the BoltArray and support either local or distributed settings regardless of which we're in. As such, it has the usual NumPy functionality.
End of explanation
"""
a.sum(axis=0).toarray()
"""
Explanation: The toarray method always returns the underlying array
End of explanation
"""
b = ones((2, 3, 4), sc)
b.shape
"""
Explanation: Distributed array
To construct Bolt arrays backed by other engines, like Spark, we just add additional arguments to the constructor. For Spark, we add a SparkContext
End of explanation
"""
from numpy import arange
x = arange(2*3*4).reshape(2, 3, 4)
from bolt import array
b = array(x, sc)
b.shape
"""
Explanation: We can also construct from an existing local array
End of explanation
"""
b.sum()
b.sum(axis=0).toarray()
b.max(axis=(0,1)).toarray()
"""
Explanation: Array operations
We can use many of the ndarray operations we're familiar with, including aggregations along axes
End of explanation
"""
b[:,:,0:2].shape
b[0,0:2,0:2].toarray()
b[[0,1],[0,1],[0,1]].toarray()
"""
Explanation: indexing with either slices or integer lists
End of explanation
"""
b.shape
b.reshape(2, 4, 3).shape
b[:,:,0:1].squeeze().shape
b.transpose(2, 1, 0).shape
"""
Explanation: and reshaping, squeezing, and transposing
End of explanation
"""
a = ones((2, 3, 4), sc)
a.map(lambda x: x * 2).toarray()
"""
Explanation: Functional operators
The Bolt array also supports functional-style operations, like map, reduce, and filter. We can use map to apply functions in parallel
End of explanation
"""
a.map(lambda x: x.sum(), axis=(0,)).toarray()
"""
Explanation: If we map over the 0th axis with the sum function, we are taking the sum of 2 arrays each 3x4
End of explanation
"""
a.map(lambda x: x.sum(), axis=(0,1)).toarray()
"""
Explanation: If we instead map over the 0th and 1st axes, we are taking the sum of 2x3 = 6 arrays, each of size 4
End of explanation
"""
a.map(lambda x: x * 2, axis=(0,)).sum(axis=(0,1)).toarray()
"""
Explanation: And we can chain these functional operations alongside array operations
End of explanation
"""
|
xpharry/Udacity-DLFoudation | tutorials/intro-to-tensorflow/intro_to_tensorflow_solution.ipynb | mit | # Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
a = 0.1
b = 0.9
grayscale_min = 0
grayscale_max = 255
return a + ( ( (image_data - grayscale_min)*(b - a) )/( grayscale_max - grayscale_min ) )
"""
Explanation: Solutions
Problem 1
Implement the Min-Max scaling function ($X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$) with the parameters:
$X_{\min }=0$
$X_{\max }=255$
$a=0.1$
$b=0.9$
End of explanation
"""
features_count = 784
labels_count = 10
# Problem 2 - Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Problem 2 - Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
"""
Explanation: Problem 2
Use tf.placeholder() for features and labels since they are the inputs to the model.
Any math operations must have the same type on both sides of the operator. The weights are float32, so the features and labels must also be float32.
Use tf.Variable() to allow weights and biases to be modified.
The weights must be the dimensions of features by labels. The number of features is the size of the image, 28*28=784. The size of labels is 10.
The biases must be the dimensions of the labels, which is 10.
End of explanation
"""
|
chengsoonong/mclass-sky | projects/alasdair/notebooks/04_learning_curves.ipynb | bsd-3-clause | # remove after testing
%load_ext autoreload
%autoreload 2
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import product
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.utils import shuffle
from mclearn.classifier import (train_classifier,
grid_search_logistic,
grid_search_svm_poly,
grid_search_svm_rbf,
learning_curve)
from mclearn.preprocessing import balanced_train_test_split
from mclearn.tools import results_exist, load_results
from mclearn.viz import plot_learning_curve, plot_average_learning_curve, plot_validation_accuracy_heatmap
%matplotlib inline
sns.set_style('ticks')
fig_dir = '../thesis/figures/'
target_col = 'class'
sdss_features = ['psfMag_r_w14', 'psf_u_g_w14', 'psf_g_r_w14', 'psf_r_i_w14',
'psf_i_z_w14', 'petroMag_r_w14', 'petro_u_g_w14', 'petro_g_r_w14',
'petro_r_i_w14', 'petro_i_z_w14', 'petroRad_r']
vstatlas_features = ['rmagC', 'umg', 'gmr', 'rmi', 'imz', 'rmw1', 'w1m2']
sdss = pd.read_hdf('../data/sdss.h5', 'sdss')
vstatlas = pd.read_hdf('../data/vstatlas.h5', 'vstatlas')
X_sdss, _, y_sdss, _ = balanced_train_test_split(
sdss[sdss_features], sdss[target_col], train_size=10000, test_size=0, random_state=2)
X_vstatlas, _, y_vstatlas, _ = balanced_train_test_split(
vstatlas[vstatlas_features], vstatlas[target_col], train_size=2360, test_size=0, random_state=2)
"""
Explanation: Learning Curves
End of explanation
"""
sdss_rbf_path = '../pickle/04_learning_curves/sdss_rbf_scores.pickle'
sdss_rbf_heat = fig_dir + '4_expt1/sdss_grid_rbf.pdf'
sdss_poly_path = '../pickle/04_learning_curves/sdss_poly_scores.pickle'
sdss_poly_heat = fig_dir + '4_expt1/sdss_grid_poly.pdf'
sdss_logistic_path = '../pickle/04_learning_curves/sdss_logistic_scores.pickle'
sdss_logistic_heat = fig_dir + '4_expt1/sdss_grid_logistic.pdf'
sdss_paths = [sdss_rbf_path, sdss_poly_path, sdss_logistic_path]
vstatlas_rbf_path = '../pickle/04_learning_curves/vstatlas_rbf_scores.pickle'
vstatlas_rbf_heat = fig_dir + '4_expt1/vstatlas_grid_rbf.pdf'
vstatlas_poly_path = '../pickle/04_learning_curves/vstatlas_poly_scores.pickle'
vstatlas_poly_heat = fig_dir + '4_expt1/vstatlas_grid_poly.pdf'
vstatlas_logistic_path = '../pickle/04_learning_curves/vstatlas_logistic_scores.pickle'
vstatlas_logistic_heat = fig_dir + '4_expt1/vstatlas_grid_logistic.pdf'
vstatlas_paths = [vstatlas_rbf_path, vstatlas_poly_path, vstatlas_logistic_path]
logistic_labels = ['Degree 1, OVR, L1-norm',
'Degree 1, OVR, L2-norm',
'Degree 1, Multinomial, L2-norm',
'Degree 2, OVR, L1-norm',
'Degree 2, OVR, L2-norm',
'Degree 2, Multinomial, L2-norm',
'Degree 3, OVR, L1-norm',
'Degree 3, OVR, L2-norm',
'Degree 3, Multinomial, L2-norm']
poly_labels = ['Degree 1, OVR, Squared Hinge, L1-norm',
'Degree 1, OVR, Squared Hinge, L2-norm',
'Degree 1, OVR, Hinge, L2-norm',
'Degree 1, Crammer-Singer',
'Degree 2, OVR, Squared Hinge, L1-norm',
'Degree 2, OVR, Squared Hinge, L2-norm',
'Degree 2, OVR, Hinge, L2-norm',
'Degree 2, Crammer-Singer',
'Degree 3, OVR, Squared Hinge, L1-norm',
'Degree 3, OVR, Squared Hinge, L2-norm',
'Degree 3, OVR, Hinge, L2-norm',
'Degree 3, Crammer-Singer']
C_rbf_range = np.logspace(-2, 10, 13)
C_range = np.logspace(-6, 6, 13)
gamma_range = np.logspace(-9, 3, 13)
if not results_exist(sdss_rbf_path):
grid_search_svm_rbf(X_sdss, y_sdss, pickle_path=sdss_rbf_path)
if not results_exist(sdss_poly_path):
grid_search_svm_poly(X_sdss, y_sdss, pickle_path=sdss_poly_path)
if not results_exist(sdss_logistic_path):
grid_search_logistic(X_sdss, y_sdss, pickle_path=sdss_logistic_path)
sdss_rbf, sdss_poly, sdss_logistic = load_results(sdss_paths)
if not results_exist(vstatlas_rbf_path):
grid_search_svm_rbf(X_vstatlas, y_vstatlas, pickle_path=vstatlas_rbf_path)
if not results_exist(vstatlas_poly_path):
grid_search_svm_poly(X_vstatlas, y_vstatlas, pickle_path=vstatlas_poly_path)
if not results_exist(vstatlas_logistic_path):
grid_search_logistic(X_vstatlas, y_vstatlas, pickle_path=vstatlas_logistic_path)
vstatlas_rbf, vstatlas_poly, vstatlas_logistic = load_results(vstatlas_paths)
"""
Explanation: Hyperparameter Optimization
We start by optimising the hyperparameters with grid search. For each combination, we use a 5-fold cross validation, each fold having 300 examples in the training set and 300 in the test set.
End of explanation
"""
fig = plt.figure(figsize=(7, 3))
im = plot_validation_accuracy_heatmap(sdss_logistic, x_range=C_range, x_label='$C$', power10='x')
plt.yticks(np.arange(0, 9), logistic_labels)
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(sdss_logistic_heat, bbox_inches='tight')
"""
Explanation: Logistic Regression
The best parameters for the SDSS dataset are {degree=3, multi_class='ovr', penalty='l2', C=0.01}, but this is too slow to run, so we restrict ourselves to degree 2. The best combination there is {degree=2, multi_class='ovr', penalty='l1', C=1}.
End of explanation
"""
fig = plt.figure(figsize=(7, 3))
im = plot_validation_accuracy_heatmap(vstatlas_logistic, x_range=C_range, x_label='$C$', power10='x')
plt.yticks(np.arange(0, 9), logistic_labels)
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(vstatlas_logistic_heat, bbox_inches='tight')
"""
Explanation: Multinomial has the highest score, but it doesn't give us reliable probability estimates. The next best option for VST ATLAS is {degree=2, multi_class='ovr', penalty='l1', C=100}.
End of explanation
"""
fig = plt.figure(figsize=(9, 3.5))
im = plot_validation_accuracy_heatmap(sdss_poly, x_range=C_range, x_label='$C$', power10='x')
plt.yticks(np.arange(0, 12), poly_labels)
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(sdss_poly_heat, bbox_inches='tight')
"""
Explanation: SVM with Polynomial Kernel
The best parameters for the SDSS dataset are {degree=2, multi_class='ovr', loss='squared_hinge', penalty='l1', C=0.1}:
End of explanation
"""
fig = plt.figure(figsize=(9, 3.5))
im = plot_validation_accuracy_heatmap(vstatlas_poly, x_range=C_range, x_label='$C$', power10='x')
plt.yticks(np.arange(0, 12), poly_labels)
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(vstatlas_poly_heat, bbox_inches='tight')
"""
Explanation: The best parameters for the VST ATLAS dataset are {degree=1, multi_class='crammer-singer', C=1000}:
End of explanation
"""
fig = plt.figure(figsize=(8, 4))
im = plot_validation_accuracy_heatmap(sdss_rbf, x_range=gamma_range,
y_range=C_rbf_range, y_label='$C$', x_label='$\gamma$', power10='both')
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(sdss_rbf_heat, bbox_inches='tight')
"""
Explanation: SVM with RBF Kernel
The best one is {C = 10,000, gamma=0.001}.
End of explanation
"""
fig = plt.figure(figsize=(8, 4))
im = plot_validation_accuracy_heatmap(vstatlas_rbf, x_range=gamma_range,
y_range=C_rbf_range, y_label='$C$', x_label='$\gamma$', power10='both')
plt.tick_params(top='off', right='off')
plt.colorbar(im)
fig.savefig(vstatlas_rbf_heat, bbox_inches='tight')
"""
Explanation: The best one is {C = 1,000,000, gamma=0.001}.
End of explanation
"""
X = np.asarray(sdss[sdss_features])
y = np.asarray(sdss[target_col])
cv = StratifiedShuffleSplit(y, n_iter=5, test_size=200000, train_size=300001, random_state=29)
rbf = SVC(kernel='rbf', gamma=0.1, C=10, cache_size=2000, class_weight='auto')
poly = LinearSVC(C=0.1, loss='squared_hinge', penalty='l1', dual=False, multi_class='ovr',
fit_intercept=True, class_weight='auto', random_state=21)
logistic = LogisticRegression(penalty='l1', dual=False, C=1, multi_class='ovr', solver='liblinear', class_weight='auto', random_state=21)
forest = RandomForestClassifier(n_estimators=300, n_jobs=-1, class_weight='subsample', random_state=21)
classifiers = [forest, logistic, rbf, poly, poly]
degrees = [1, 2, 1, 2, 3]
sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000),
np.arange(10000, 100001, 10000), [200000, 300000]))
curve_labels = ['Random Forest', 'Logistic Regression', 'RBF SVM', 'Degree 2 Polynomial SVM', 'Degree 3 Polynomial SVM']
pickle_paths = ['../pickle/04_learning_curves/sdss_lc_forest.pickle',
'../pickle/04_learning_curves/sdss_lc_logistic_2.pickle',
'../pickle/04_learning_curves/sdss_lc_rbf.pickle',
'../pickle/04_learning_curves/sdss_lc_poly_2.pickle',
'../pickle/04_learning_curves/sdss_lc_poly_3.pickle']
for classifier, degree, pickle_path in zip(classifiers, degrees, pickle_paths):
if not results_exist(pickle_path):
learning_curve(classifier, X, y, cv, sample_sizes, degree, pickle_path)
all_learning_curves = load_results(pickle_paths)
for c in all_learning_curves:
print(np.array(c)[:, -1])
fig = plt.figure(figsize=(4, 4))
ax = plot_average_learning_curve(sample_sizes, all_learning_curves, curve_labels)
ax.set_xscale('log')
fig.savefig(fig_dir + '4_expt1/sdss_learning_curves.pdf', bbox_inches='tight')
"""
Explanation: Learning Curves
SDSS Learning Curves
Note that logistic regression with degree 3 polynomial transformation takes too long, so we skip obtaining its learning curves.
End of explanation
"""
logistic_lc = np.array(all_learning_curves[1])
rbf_lc = np.array(all_learning_curves[2])
print(np.mean(logistic_lc[:,-1]))
print(np.mean(rbf_lc[:,-1]))
"""
Explanation: Upper bounds for Logistic Regression and RBF SVM
End of explanation
"""
X = np.asarray(vstatlas[vstatlas_features])
y = np.asarray(vstatlas[target_col])
cv = StratifiedShuffleSplit(y, n_iter=5, test_size=0.3, train_size=0.7, random_state=29)
rbf = SVC(kernel='rbf', gamma=0.001, C=1000000, cache_size=2000, class_weight='auto')
poly = LinearSVC(C=1000, multi_class='crammer_singer',
fit_intercept=True, class_weight='auto', random_state=21)
logistic = LogisticRegression(penalty='l1', dual=False, C=100, multi_class='ovr', solver='liblinear', class_weight='auto', random_state=21)
forest = RandomForestClassifier(n_estimators=300, n_jobs=-1, class_weight='subsample', random_state=21)
classifiers = [forest, logistic, rbf, poly]
degrees = [1, 2, 1, 1]
sample_sizes = np.concatenate((np.arange(100, 1000, 100), np.arange(1000, 10000, 1000),
np.arange(10000, 30001, 10000), [35056]))
curve_labels = ['Random Forest', 'Logistic Regression', 'RBF SVM', 'Linear SVM']
pickle_paths = ['../pickle/04_learning_curves/vstatlas_lc_forest.pickle',
'../pickle/04_learning_curves/vstatlas_lc_logistic.pickle',
'../pickle/04_learning_curves/vstatlas_lc_rbf.pickle',
'../pickle/04_learning_curves/vstatlas_lc_poly.pickle']
for classifier, degree, pickle_path in zip(classifiers, degrees, pickle_paths):
if not results_exist(pickle_path):
learning_curve(classifier, X, y, cv, sample_sizes, degree, pickle_path)
all_learning_curves = load_results(pickle_paths)
for c in all_learning_curves:
print(np.array(c)[:, -1])
fig = plt.figure(figsize=(4, 4))
ax = plot_average_learning_curve(sample_sizes, all_learning_curves, curve_labels)
ax.set_xscale('log')
fig.savefig(fig_dir + '4_expt1/vstatlas_learning_curves.pdf', bbox_inches='tight')
logistic_lc = np.array(all_learning_curves[1])
rbf_lc = np.array(all_learning_curves[2])
print(np.max(logistic_lc[:,-1]))
print(np.max(rbf_lc[:,-1]))
"""
Explanation: VST ATLAS Learning Curves
End of explanation
"""
transformer = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True)
X_poly = transformer.fit_transform(X)
%%timeit -n 1 -r 1
rbf = SVC(kernel='rbf', gamma=0.001, C=1000, cache_size=2000, class_weight='auto', probability=True)
rbf.fit(X, y)
%%timeit -n 1 -r 1
rbf = SVC(kernel='rbf', gamma=0.1, C=10, cache_size=2000, class_weight='auto', probability=True)
rbf.fit(X, y)
%%timeit -n 1 -r 1
poly = LinearSVC(C=1000, multi_class='crammer_singer',
fit_intercept=True, class_weight='auto', random_state=21)
poly.fit(X, y)
%%timeit -n 1 -r 1
poly = LinearSVC(C=0.1, loss='squared_hinge', penalty='l1', dual=False, multi_class='ovr',
fit_intercept=True, class_weight='auto', random_state=21)
poly.fit(X_poly, y)
%%timeit -n 1 -r 1
logistic = LogisticRegression(penalty='l1', dual=False, C=100, multi_class='ovr', solver='liblinear', random_state=21)
logistic.fit(X_poly, y)
"""
Explanation: Appendix: Time Complexity
End of explanation
"""
|
automl/SpySMAC | examples/autopytorch/apt_notebook.ipynb | bsd-3-clause | # Remove the old example output
import os
import logging
import tempfile
import shutil
log_dir = "logs/apt-cave-notebook/"
rerun_apt = False
logging.basicConfig(level=logging.DEBUG)
from autoPyTorch import AutoNetClassification
import os as os
import openml
import json
from ConfigSpace.read_and_write import json as pcs_json
# Logging
from autoPyTorch.components.metrics.additional_logs import *
from autoPyTorch.pipeline.nodes import LogFunctionsSelector
if rerun_apt:
# Remove old results
if os.path.exists(log_dir):
archive_path = shutil.make_archive(os.path.join(tempfile.mkdtemp(), '.OLD'), 'zip', log_dir)
shutil.rmtree(log_dir)
os.makedirs(log_dir)
shutil.move(archive_path, log_dir)
else:
os.makedirs(log_dir)
task = openml.tasks.get_task(task_id=31)
X, y = task.get_X_and_y()
ind_train, ind_test = task.get_train_test_split_indices()
X_train, Y_train = X[ind_train], y[ind_train]
X_test, Y_test = X[ind_test], y[ind_test]
autopytorch = AutoNetClassification(config_preset="medium_cs",
result_logger_dir=log_dir,
log_every_n_datapoints=10,
use_tensorboard_logger=True,
additional_logs=[test_result.__name__,
test_cross_entropy.__name__,
test_balanced_accuracy.__name__],
)
# Get data from the openml task "Supervised Classification on credit-g (https://www.openml.org/t/31)"
task = openml.tasks.get_task(task_id=31)
X, y = task.get_X_and_y()
ind_train, ind_test = task.get_train_test_split_indices()
X_train, Y_train = X[ind_train], y[ind_train]
X_test, Y_test = X[ind_test], y[ind_test]
# Equip autopytorch with additional logs
gl = GradientLogger()
lw_gl = LayerWiseGradientLogger()
additional_logs = [gradient_max(gl), gradient_mean(gl), gradient_median(gl), gradient_std(gl),
gradient_q10(gl), gradient_q25(gl), gradient_q75(gl), gradient_q90(gl),
layer_wise_gradient_max(lw_gl), layer_wise_gradient_mean(lw_gl),
layer_wise_gradient_median(lw_gl), layer_wise_gradient_std(lw_gl),
layer_wise_gradient_q10(lw_gl), layer_wise_gradient_q25(lw_gl),
layer_wise_gradient_q75(lw_gl), layer_wise_gradient_q90(lw_gl),
gradient_norm()]
for additional_log in additional_logs:
autopytorch.pipeline[LogFunctionsSelector.get_name()].add_log_function(name=type(additional_log).__name__,
log_function=additional_log)
#sampling_space["additional_logs"].append(type(additional_log).__name__)
autopytorch.pipeline[LogFunctionsSelector.get_name()].add_log_function(name=test_result.__name__,
log_function=test_result(autopytorch, X[ind_test], y[ind_test]))
autopytorch.pipeline[LogFunctionsSelector.get_name()].add_log_function(name=test_cross_entropy.__name__,
log_function=test_cross_entropy(autopytorch, X[ind_test], y[ind_test]))
autopytorch.pipeline[LogFunctionsSelector.get_name()].add_log_function(name=test_balanced_accuracy.__name__,
log_function=test_balanced_accuracy(autopytorch, X[ind_test], y[ind_test]))
# Fit to find an incumbent configuration with BOHB
results_fit = autopytorch.fit(X_train=X_train,
Y_train=Y_train,
validation_split=0.3,
max_runtime=750,
min_budget=10,
max_budget=50,
refit=True,
)
autopytorch.refit_all_incumbents(X_train, Y_train)
"""
Explanation: Using CAVE with AutoPyTorch
AutoPyTorch provides a framework for automated neural-network-configuration. Currently it supports BOHB for hyperparameter search.
CAVE integrates with AutoPyTorch, providing further insights and visualizations.
This notebook provides an exemplary pipeline for using CAVE on / with AutoPyTorch.
We will generate some AutoPyTorch output.
You can use your own AutoPyTorch routine here; we will use an openml task, inspired by AutoPyTorch's tutorial notebook.
Note: this example is being adapted to the refactored APT project.
Since logging is not yet fully implemented in the APT project, this example is not necessarily fully executable.
However, feel free to open issues in the issue tracker for any errors you encounter.
End of explanation
"""
if rerun_apt:
# Save fit results as json
with open(os.path.join(log_dir, "results_fit.json"), "w") as f:
json.dump(results_fit, f, indent=2)
# Also necessary information (can be migrated either to CAVE or (preferably) to autopytorch)
with open(os.path.join(log_dir, 'configspace.json'), 'w') as f:
f.write(pcs_json.write(autopytorch.get_hyperparameter_search_space(X_train=X_train,
Y_train=Y_train)))
with open(os.path.join(log_dir, 'autonet_config.json'), 'w') as f:
json.dump(autopytorch.get_current_autonet_config(), f, indent=2)
"""
Explanation: Note: APT is supposed to automatically log the results to the output directory. Until then, do it manually:
End of explanation
"""
from cave.cavefacade import CAVE
cave_output_dir = "cave_output"
cave = CAVE([log_dir], # List of folders holding results
cave_output_dir, # Output directory
['.'], # Target Algorithm Directory (only relevant for SMAC)
file_format="APT",
verbose="DEBUG")
cave.apt_overview()
cave.compare_default_incumbent()
"""
Explanation: Next, spin up CAVE and pass along the output directory.
End of explanation
"""
cave.apt_tensorboard()
"""
Explanation: Other analyzers also run on the APT-data:
End of explanation
"""
|
JBed/Pandas_Analysis_Worksheet | Pandas_Worksheet_Solutions.ipynb | apache-2.0 | import pandas as pd
%matplotlib inline
"""
Explanation: Pandas Worksheet Solutions
End of explanation
"""
df = pd.read_csv('nba-shot-logs.zip')
"""
Explanation: The goal of this worksheet is to provide practical examples of aggregating (with group by), plotting, and pivoting data with the Pandas Python package.
This worksheet is available as a jupyter notebook on github here: https://github.com/JBed/Pandas_Analysis_Worksheet
Get the data here: https://www.kaggle.com/dansbecker/nba-shot-logs
Finally, if you have any questions, comments, or believe that I did anything incorrectly feel free to email me here: jason@jbedford.net
End of explanation
"""
df.head(2)
df.columns
"""
Explanation: The data is structured so that each row corresponds to one shot taken during the 2014-2015 NBA season (we exclude free throws).
End of explanation
"""
df.set_index('GAME_ID').loc[21400899]['MATCHUP'].unique()
"""
Explanation: Most of the column names are self-explanatory. One thing that initially confused me was that there is no column telling us the team of the player taking the shot. It turns out that that information is hidden in the MATCHUP column.
End of explanation
"""
shot_result_by_matchup = df.groupby(['MATCHUP','SHOT_RESULT']).size().unstack()
shot_result_by_matchup.head()
df.pivot_table(index='MATCHUP', columns='SHOT_RESULT', \
values='W',aggfunc=lambda x: len(x)).head()
"""
Explanation: We see that the name of the team of the player taking the shot is the first team listed after the date. It turns out that having things structured this way is actually very convenient.
Part 1: Questions about SHOT_RESULT for one team in one game
Here it makes sense to restructure our data so that each row refers to one team in one game and the columns give us the number of shots made or missed. This can be done with either DataFrame.groupby or pandas.pivot_table, as I show below.
End of explanation
"""
shot_result_by_matchup.sort_values(by='made').head()
shot_result_by_matchup.sort_values(by='made', ascending=False).head()
"""
Explanation: Personally I find the groupby operation to be more expressive.
Q1.1 which team made the most and least shots in a game
This is done by sorting the above on made
End of explanation
"""
shot_result_by_matchup['total'] = shot_result_by_matchup.sum(axis=1)
shot_result_by_matchup.sort_values(by='total').head()
"""
Explanation: Q1.2: Which team took the most and the fewest shots in a game?
We'll make a new column called total and sort on that.
End of explanation
"""
shot_result_by_matchup.sort_values(by='total',ascending=False).head()
"""
Explanation: These are likely games that were canceled
End of explanation
"""
shot_result_by_matchup['make_percent'] = \
round((shot_result_by_matchup['made'] / shot_result_by_matchup['total'])*100,1)
shot_result_by_matchup.sort_values(by='make_percent').head()
shot_result_by_matchup.sort_values(by='make_percent', ascending=False).head()
"""
Explanation: Q1.3: Which team made the most shots as a percentage of all shots taken in a game?
We'll make a derived column called make_percent and sort on that.
End of explanation
"""
shot_and_game_result_by_matchup = df.groupby(['MATCHUP','W','SHOT_RESULT']).size().unstack()
shot_and_game_result_by_matchup.head()
"""
Explanation: Part 2: Questions about SHOT_RESULT and W for one team in one game
Here we’ll make a dataframe similar to the above but with 'W' as a column
End of explanation
"""
shot_and_game_result_by_matchup = \
shot_and_game_result_by_matchup.reset_index().set_index('MATCHUP')
shot_and_game_result_by_matchup.head()
"""
Explanation: We want 'W' to be a column not part of the index as it is now
End of explanation
"""
shot_and_game_result_by_matchup['make_percent'] = \
round((shot_and_game_result_by_matchup['made']/\
shot_and_game_result_by_matchup.sum(axis=1))*100,1)
shot_and_game_result_by_matchup[\
shot_and_game_result_by_matchup['W']=='W'].sort_values(by='make_percent').head()
shot_and_game_result_by_matchup[\
shot_and_game_result_by_matchup['W']=='L'].sort_values(by='make_percent', \
ascending=False).head()
"""
Explanation: Q2.1: Which team had the lowest make percentage in a single game but still won that game?
Again we'll make a derived column called make_percent
End of explanation
"""
shot_and_game_result_by_matchup.groupby('W')['make_percent'].describe().unstack()
shot_and_game_result_by_matchup.boxplot(column='make_percent', by='W',figsize=(12,8))
"""
Explanation: Q2.2 Did winning teams have a higher make percentage on average than losing teams?
We'll explore this question with a group and a plot
End of explanation
"""
shot_result_by_gameid_and_w = \
df.groupby(['GAME_ID','W','SHOT_RESULT']).size().unstack()
shot_result_by_gameid_and_w.head(4)
shot_result_by_gameid_and_w['make_percent'] = \
round((shot_result_by_gameid_and_w['made']/\
shot_result_by_gameid_and_w.sum(axis=1))*100,1)
shot_result_by_gameid_and_w.head(4)
shot_result_by_gameid_and_w = \
shot_result_by_gameid_and_w.reset_index()
shot_result_by_gameid_and_w.head(4)
make_percent_by_gameid = \
shot_result_by_gameid_and_w.pivot(index='GAME_ID', columns='W')
make_percent_by_gameid.head()
make_percent_by_gameid[\
make_percent_by_gameid['make_percent']['W']<\
make_percent_by_gameid['make_percent']['L']].head()
len(make_percent_by_gameid[\
make_percent_by_gameid['make_percent']['W']<\
make_percent_by_gameid['make_percent']['L']])
len(make_percent_by_gameid)
(197./904)*100
"""
Explanation: Part 3: Questions comparing two teams in one game
Q3.1: How often did the winning team have a lower make percentage than the losing team?
Here it looks like we need to compare two rows in the shot_and_game_result_by_matchup dataframe. Don't do that! Remember that columns are for comparing, not rows. Let me say that again:
compare columns not rows!!!
OK, so how do we turn two rows into two columns?
End of explanation
"""
|
computational-class/cjc2016 | code/04.PythonCrawler_beautifulsoup.ipynb | mit | import requests
from bs4 import BeautifulSoup
help(requests.get)
url = 'http://computational-class.github.io/bigdata/data/test.html'
content = requests.get(url)
help(content)
print(content.text)
content.encoding
"""
Explanation: Data scraping:
An introduction to Requests, Beautifulsoup, and Xpath
王成军
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
Basic principles of web crawling
http://www.cnblogs.com/zhaof/p/6898138.html
Problems we need to solve
- parsing pages
- retrieving source data hidden behind Javascript
- automatic pagination
- automatic login
- connecting to API endpoints
For ordinary data scraping, requests combined with beautifulsoup is enough.
This is especially true for pages whose url changes in a regular pattern as you page through: you only need to handle the patterned urls.
A simple example is scraping the posts about a given keyword on the Tianya forum.
On Tianya, the first page of posts about smog is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
The second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
Our first crawler
Beautifulsoup Quick Start
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
http://computational-class.github.io/bigdata/data/test.html
End of explanation
"""
url = 'http://computational-class.github.io/bigdata/data/test.html'
content = requests.get(url)
content = content.text
soup = BeautifulSoup(content, 'html.parser')
soup
print(soup.prettify())
"""
Explanation: Beautiful Soup
Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:
Beautiful Soup provides a few simple methods. It doesn't take much code to write an application
Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding.
Beautiful Soup sits on top of popular Python parsers like lxml and html5lib.
Install beautifulsoup4
open your terminal/cmd
$ pip install beautifulsoup4
html.parser
Beautiful Soup supports the html.parser included in Python’s standard library
lxml
but it also supports a number of third-party Python parsers. One is the lxml parser lxml. Depending on your setup, you might install lxml with one of these commands:
$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml
html5lib
Another alternative is the pure-Python html5lib parser html5lib, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:
$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib
End of explanation
"""
soup.select('body > p.title > b')[0].text
"""
Explanation: The document tree:
html
  head
    title
  body
    p (class = 'title', 'story')
      a (class = 'sister')
        href/id
The select method
- tag names are written with no prefix
- class names are prefixed with a dot (.)
- id names are prefixed with a hash (#)
We can use this to filter elements with the soup.select() method, which returns a list.
Three steps for the select method
1. Inspect
2. Copy
3. Copy Selector
Select the title "The Dormouse's story" with the mouse and right-click > Inspect,
move the mouse onto the highlighted source code,
then right-click > Copy --> Copy Selector:
body > p.title > b
End of explanation
"""
soup.select('title')
soup.select('a')
soup.select('b')
"""
Explanation: The select method: finding by tag name
End of explanation
"""
soup.select('.title')
soup.select('.sister')
soup.select('.story')
"""
Explanation: The select method: finding by class name
End of explanation
"""
soup.select('#link1')
soup.select('#link1')[0]['href']
"""
Explanation: The select method: finding by id
End of explanation
"""
soup.select('p #link1')
"""
Explanation: The select method: combined queries
Combine tag names, class names, and id names.
For example, find the content with id equal to link1 inside a p tag.
End of explanation
"""
soup.select("head > title")
soup.select("body > p")
"""
Explanation: The select method: hierarchical queries
Add structural constraints to the selector:
- levels of the hierarchy are joined with > (e.g. head > title)
- a tag and its own class/id belong to the same node, so write them together with no space in between.
End of explanation
"""
soup('p')
soup.find_all('p')
[i.text for i in soup('p')]
for i in soup('p'):
print(i.text)
for tag in soup.find_all(True):
print(tag.name)
soup('head') # or soup.head
soup('body') # or soup.body
soup('title') # or soup.title
soup('p')
soup.p
soup.title.name
soup.title.string
soup.title.text
# the text method is recommended
soup.title.parent.name
soup.p
soup.p['class']
soup.find_all('p', {'class', 'title'})
soup.find_all('p', class_= 'title')
soup.find_all('p', {'class', 'story'})
soup.find_all('p', {'class', 'story'})[0].find_all('a')
soup.a
soup('a')
soup.find(id="link3")
soup.find_all('a')
soup.find_all('a', {'class', 'sister'}) # compare with soup.find_all('a')
soup.find_all('a', {'class', 'sister'})[0]
soup.find_all('a', {'class', 'sister'})[0].text
soup.find_all('a', {'class', 'sister'})[0]['href']
soup.find_all('a', {'class', 'sister'})[0]['id']
soup.find_all(["a", "b"])
print(soup.get_text())
"""
Explanation: The find_all method
End of explanation
"""
from IPython.display import display_html, HTML
HTML(url = 'http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd')
# the webpage we would like to crawl
"""
Explanation: Data scraping:
scraping the content of a WeChat public-account article
王成军
wangchengjun@nju.edu.cn
Computational Communication: http://computational-communication.com
End of explanation
"""
url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd"
content = requests.get(url).text  # fetch the page's HTML text
soup = BeautifulSoup(content, 'html.parser')
title = soup.select("#activity-name") # #activity-name
title[0].text.strip()
soup.find('h2', {'class', 'rich_media_title'}).text.strip()
print(soup.find('div', {'class', 'rich_media_meta_list'}) )
soup.select('#publish_time')
article = soup.find('div', {'class' , 'rich_media_content'}).text
print(article)
rmml = soup.find('div', {'class', 'rich_media_meta_list'})
#date = rmml.find(id = 'post-date').text
rmc = soup.find('div', {'class', 'rich_media_content'})
content = rmc.get_text()
print(title[0].text.strip())
#print(date)
print(content)
"""
Explanation: View the page source with Inspect
End of explanation
"""
!pip install wechatsogou --upgrade
import wechatsogou
# Configurable parameters
# Direct connection
ws_api = wechatsogou.WechatSogouAPI()
# number of retries after entering a wrong captcha; the default is 1
ws_api = wechatsogou.WechatSogouAPI(captcha_break_time=3)
# All requests-library parameters can be used here,
# e.g. to configure proxies; the proxy list must contain at least one HTTPS proxy, and the proxies must work
ws_api = wechatsogou.WechatSogouAPI(proxies={
"http": "127.0.0.1:8889",
"https": "127.0.0.1:8889",
})
# e.g. to set a timeout
ws_api = wechatsogou.WechatSogouAPI(timeout=0.1)
ws_api =wechatsogou.WechatSogouAPI()
ws_api.get_gzh_info('南航青年志愿者')
articles = ws_api.search_article('南京航空航天大学')
for i in articles:
print(i)
"""
Explanation: wechatsogou
pip install wechatsogou --upgrade
https://github.com/Chyroc/WechatSogou
End of explanation
"""
import requests
from lxml import etree
url = 'https://movie.douban.com/subject/26611804/'
data = requests.get(url).text
s = etree.HTML(data)
"""
Explanation: requests + Xpath, illustrated with Douban movies
Xpath, the XML Path Language, is a language for locating parts of an XML document.
Xpath builds on XML's tree structure and provides the ability to find nodes in that tree. It was originally proposed as a general syntax model sitting between XPointer and XSL, but developers quickly adopted it as a small query language.
Getting an element's Xpath and extracting its text:
The "element's Xpath" has to be obtained manually, as follows:
- locate the target element
- on the page, click in order: right-click > Inspect
- copy xpath
- xpath + '/text()'
Reference: https://mp.weixin.qq.com/s/zx3_eflBCrrfOqFEWjAUJw
End of explanation
"""
title = s.xpath('//*[@id="content"]/h1/span[1]/text()')[0]
director = s.xpath('//*[@id="info"]/span[1]/span[2]/a/text()')
actors = s.xpath('//*[@id="info"]/span[3]/span[2]/a/text()')
type1 = s.xpath('//*[@id="info"]/span[5]/text()')
type2 = s.xpath('//*[@id="info"]/span[6]/text()')
type3 = s.xpath('//*[@id="info"]/span[7]/text()')
time = s.xpath('//*[@id="info"]/span[11]/text()')
length = s.xpath('//*[@id="info"]/span[13]/text()')
score = s.xpath('//*[@id="interest_sectl"]/div[1]/div[2]/strong/text()')[0]
print(title, director, actors, type1, type2, type3, time, length, score)
"""
Explanation: If xpath_info is the xpath of the Douban movie title, then the title is expressed as:
title = s.xpath('xpath_info/text()')
where xpath_info is:
//*[@id="content"]/h1/span[1]
End of explanation
"""
import requests
# https://movie.douban.com/subject/26611804/
url = 'https://api.douban.com/v2/movie/subject/26611804?apikey=0b2bdeda43b5688921839c8ecb20399b&start=0&count=20&client=&udid='
jsonm = requests.get(url).json()
jsonm.keys()
#jsonm.values()
jsonm['rating']
jsonm['alt']
jsonm['casts'][0]
jsonm['directors']
jsonm['genres']
"""
Explanation: Douban API
https://developers.douban.com/wiki/?title=guide
https://github.com/computational-class/douban-api-docs
End of explanation
"""
import requests
from bs4 import BeautifulSoup
from lxml import etree
url0 = 'https://movie.douban.com/top250?start=0&filter='
data = requests.get(url0).text
s = etree.HTML(data)
s.xpath('//*[@id="content"]/div/div[1]/ol/li[1]/div/div[2]/div[1]/a/span[1]/text()')[0]
s.xpath('//*[@id="content"]/div/div[1]/ol/li[2]/div/div[2]/div[1]/a/span[1]/text()')[0]
s.xpath('//*[@id="content"]/div/div[1]/ol/li[3]/div/div[2]/div[1]/a/span[1]/text()')[0]
import requests
from bs4 import BeautifulSoup
url0 = 'https://movie.douban.com/top250?start=0&filter='
data = requests.get(url0).text
soup = BeautifulSoup(data, 'lxml')
movies = soup.find_all('div', {'class', 'info'})
len(movies)
movies[0].a['href']
movies[0].find('span', {'class', 'title'}).text
movies[0].find('div', {'class', 'star'})
movies[0].find('span', {'class', 'rating_num'}).text
people_num = movies[0].find('div', {'class', 'star'}).find_all('span')[-1]
people_num.text.split('人评价')[0]
for i in movies:
url = i.a['href']
title = i.find('span', {'class', 'title'}).text
des = i.find('div', {'class', 'star'})
rating = des.find('span', {'class', 'rating_num'}).text
rating_num = des.find_all('span')[-1].text.split('人评价')[0]
print(url, title, rating, rating_num)
for i in range(0, 250, 25):
print('https://movie.douban.com/top250?start=%d&filter='% i)
import requests
from bs4 import BeautifulSoup
dat = []
for j in range(0, 250, 25):
urli = 'https://movie.douban.com/top250?start=%d&filter='% j
data = requests.get(urli).text
soup = BeautifulSoup(data, 'lxml')
movies = soup.find_all('div', {'class', 'info'})
for i in movies:
url = i.a['href']
title = i.find('span', {'class', 'title'}).text
des = i.find('div', {'class', 'star'})
rating = des.find('span', {'class', 'rating_num'}).text
rating_num = des.find_all('span')[-1].text.split('人评价')[0]
listi = [url, title, rating, rating_num]
dat.append(listi)
import pandas as pd
df = pd.DataFrame(dat, columns = ['url', 'title', 'rating', 'rating_num'])
df['rating'] = df.rating.astype(float)
df['rating_num'] = df.rating_num.astype(int)
df.head()
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(df.rating_num)
plt.show()
plt.hist(df.rating)
plt.show()
# viz
fig = plt.figure(figsize=(16, 16),facecolor='white')
plt.plot(df.rating_num, df.rating, 'bo')
for i in df.index:
plt.text(df.rating_num[i], df.rating[i], df.title[i],
fontsize = df.rating[i],
color = 'red', rotation = 45)
plt.show()
df[df.rating > 9.4]
alist = []
for i in df.index:
alist.append( [df.rating_num[i], df.rating[i], df.title[i] ])
blist =[[df.rating_num[i], df.rating[i], df.title[i] ] for i in df.index]
alist
from IPython.display import display_html, HTML
HTML('<iframe src=http://nbviewer.jupyter.org/github/computational-class/bigdata/blob/gh-pages/vis/douban250bubble.html \
width=1000 height=500></iframe>')
"""
Explanation: Exercise: scrape the Douban Movies Top 250
End of explanation
"""
# headers = {
# 'Accept': 'application/json, text/javascript, */*; q=0.01',
# 'Accept-Encoding': 'gzip, deflate',
# 'Accept-Language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6',
# 'Cache-Control': 'no-cache',
# 'Connection': 'keep-alive',
# 'Cookie': 'JSESSIONID=992CB756ADE61B87409672DC808FDD92',
# 'DNT': '1',
# 'Host': 'www.jszx.gov.cn',
# 'Pragma': 'no-cache',
# 'Referer': 'http://www.jszx.gov.cn/zxta/2019ta/',
# 'Upgrade-Insecure-Requests': '1',
# 'User-Agent': 'Mozilla/5.0 (iPad; CPU OS 11_0 like Mac OS X) AppleWebKit/604.1.34 (KHTML, like Gecko) Version/11.0 Mobile/15A5341f Safari/604.1'
# }
"""
Explanation: Exercises:
- scrape the latest issue of the Fudan New Media WeChat public account
- simulate logging in to Douban with requests.post (including fetching the captcha):
https://blog.csdn.net/zhuzuwei/article/details/80875538
- scrape ten years of proposals of the Jiangsu Provincial CPPCC
End of explanation
"""
import requests
from bs4 import BeautifulSoup
form_data = {'year':2019,
'pagenum':1,
'pagesize':20
}
url = 'http://www.jszx.gov.cn/wcm/zxweb/proposalList.jsp'
content = requests.get(url, form_data)
content.encoding = 'utf-8'
js = content.json()
js['data']['totalcount']
dat = js['data']['list']
pagenum = js['data']['pagecount']
"""
Explanation: Open http://www.jszx.gov.cn/zxta/2019ta/
Click to the next page: the URL does not change!
So the data updates are pushed via js.
- analyze the content of the Network panel and find proposalList.jsp
- look at its header and find the form_data
<img src = './img/form_data.png'>
http://www.jszx.gov.cn/zxta/2019ta/
End of explanation
"""
for i in range(2, pagenum+1):
print(i)
form_data['pagenum'] = i
content = requests.get(url, form_data)
content.encoding = 'utf-8'
js = content.json()
for j in js['data']['list']:
dat.append(j)
len(dat)
dat[0]
import pandas as pd
df = pd.DataFrame(dat)
df.head()
df.groupby('type').size()
"""
Explanation: Scrape the links to all proposals
End of explanation
"""
url_base = 'http://www.jszx.gov.cn/wcm/zxweb/proposalInfo.jsp?pkid='
urls = [url_base + i for i in df['pkid']]
import sys
def flushPrint(www):
sys.stdout.write('\r')
sys.stdout.write('%s' % www)
sys.stdout.flush()
text = []
for k, i in enumerate(urls):
flushPrint(k)
content = requests.get(i)
content.encoding = 'utf-8'
js = content.json()
js = js['data']['binfo']['_content']
soup = BeautifulSoup(js, 'html.parser')
text.append(soup.text)
len(text)
df['content'] = text
df.head()
df.to_csv('../data/jszx2019.csv', index = False)
dd = pd.read_csv('../data/jszx2019.csv')
dd.head()
"""
Explanation: Scrape the content of each proposal
http://www.jszx.gov.cn/zxta/2019ta/index_61.html?pkid=18b1b347f9e34badb8934c2acec80e9e
http://www.jszx.gov.cn/wcm/zxweb/proposalInfo.jsp?pkid=18b1b347f9e34badb8934c2acec80e9e
End of explanation
"""
|
TomAugspurger/PracticalPandas | Practical Pandas 01 - Reading the Data.ipynb | mit | import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from IPython import display
"""
Explanation: Practical Pandas: Cycling (Part 1)
This is the first post in a series where I'll show how I use pandas on real-world datasets.
For this post, we'll look at data I collected with Cyclemeter on
my daily bike ride to and from school last year.
I had to manually start and stop the tracking at the beginning and end of each ride. There may
have been times where I forgot to do that, so we'll see if we can find those.
Let's begin in the usual fashion, a bunch of imports and loading our data.
End of explanation
"""
!ls data | head -n 5
"""
Explanation: Each day has data recorded in two formats, CSVs and KMLs.
For now I've just uploaded the CSVs to the data/ directory.
We'll start with the those, and come back to the KMLs later.
End of explanation
"""
df = pd.read_csv('data/Cyclemeter-Cycle-20130801-0707.csv')
df.head()
df.info()
"""
Explanation: Take a look at the first one to see how the file's laid out.
End of explanation
"""
date_cols = ["Time", "Ride Time", "Stopped Time", "Pace", "Average Pace"]
df = pd.read_csv("data/Cyclemeter-Cycle-20130801-0707.csv",
parse_dates=date_cols)
display.display_html(df.head())
df.info()
"""
Explanation: Pandas has automatically parsed the headers, but it could use a bit of help on some dtypes.
We can see that the Time column is a datetime but it's been parsed as an object dtype.
This is pandas' fallback dtype that can store anything, but its operations won't be optimized like
they would on an float or bool or datetime[64]. read_csv takes a parse_dates parameter, which
we'll give a list of column names.
End of explanation
"""
import os
csvs = (f for f in os.listdir('data') if f.startswith('Cyclemeter')
and f.endswith('.csv'))
"""
Explanation: One minor issue is that some of the dates are parsed as datetimes when they're really just times.
We'll take care of that later. Pandas store everything as datetime64. For now we'll keep them as
datetimes, and remember that they're really just times.
Now let's do the same thing, but for all the files.
Use a generator expression to filter down to just csv's that match the simple
condition of having the correct naming style.
I try to use lazy generators instead of lists wherever possible.
In this case the list is so small that it really doesn't matter, but it's
a good habit.
End of explanation
"""
def read_ride(path_, i):
"""
read in csv at path, and assign the `ride_id` variable to i.
"""
date_cols = ["Time", "Ride Time", "Stopped Time", "Pace", "Average Pace"]
df = pd.read_csv(path_, parse_dates=date_cols)
df['ride_id'] = i
return df
dfs = (read_ride(os.path.join('data', csv), i)
for (i, csv) in enumerate(csvs))
"""
Explanation: I see a potential problem: We'll potentailly want to concatenate each csv together
into a single DataFrame. However we'll want to retain some idea of which specific
ride an observation came from. So let's create a ride_id variable, which will
just be an integar ranging from $0 \ldots N$, where $N$ is the number of rides.
Make a simple helper function to do this, and apply it to each csv.
End of explanation
"""
df = pd.concat(dfs, ignore_index=True)
df.head()
"""
Explanation: Now concatenate together. The original indicies are meaningless, so we'll ignore them in the concat.
End of explanation
"""
df.to_hdf('data/cycle_store.h5', key='merged',
format='table')
"""
Explanation: Great! The data itself is clean enough that we didn't have to do too much munging.
Let's persist the merged DataFrame. Writing it out to a csv would be fine, but I like to use
pandas' HDF5 integration (via pytables) for personal projects.
End of explanation
"""
|
zerothi/sids | docs/visualization/viz_module/showcase/PdosPlot.ipynb | lgpl-3.0 | import sisl
import sisl.viz
# This is just for convenience to retrieve files
siesta_files = sisl._environ.get_environ_variable("SISL_FILES_TESTS") / "sisl" / "io" / "siesta"
"""
Explanation: PdosPlot
End of explanation
"""
plot = sisl.get_sile(siesta_files / "SrTiO3.PDOS").plot(Erange=[-10,10])
"""
Explanation: We are going to get the PDOS from a SIESTA .PDOS file, but we could get it from a hamiltonian as well.
End of explanation
"""
plot
"""
Explanation: By default, a PDOS plot shows the total density of states:
End of explanation
"""
plot.update_settings(requests=[{"name": "My first PDOS (Oxygen)", "species": ["O"], "n": 2, "l": 1}])
# or (it's equivalent)
plot.update_settings(requests=[{
"name": "My first PDOS (Oxygen)", "species": ["O"],
"orbitals": ["2pzZ1", "2pzZ2", "2pxZ1", "2pxZ2", "2pyZ1", "2pyZ2"]
}])
"""
Explanation: PDOS requests
There's a very important setting in the PdosPlot: requests. This setting expects a list of PDOS requests, where each request is a dictionary that can specify
- species
- atoms
- orbitals (the orbital name)
- n, l, m (the quantum numbers)
- Z (the Z shell of the orbital)
- spin
involved in the PDOS line that you want to draw. Apart from that, a request also accepts the name, color, linewidth and dash keys that manage the aesthetics of the line and normalize, which indicates if the PDOS should be normalized (divided by number of orbitals).
Here is an example of how to use the requests setting to create a line that displays the Oxygen 2p PDOS:
End of explanation
"""
plot.update_settings(requests=[
{"name": "Oxygen", "species": ["O"], "color": "darkred", "dash": "dash", "normalize": True},
{"name": "Titanium", "species": ["Ti"], "color": "grey", "linewidth": 3, "normalize": True},
{"name": "Sr", "species": ["Sr"], "color": "green", "normalize": True},
], Erange=[-5, 5])
"""
Explanation: And now we are going to create three lines, one for each species
End of explanation
"""
# Let's import the AtomZ and AtomOdd categories just to play with them
from sisl.geom import AtomZ, AtomOdd
plot.update_settings(requests=[
{"atoms": [0,1], "name": "Atoms 0 and 1"},
{"atoms": {"Z": 8}, "name": "Atoms with Z=8"},
{"atoms": AtomZ(8) & ~ AtomOdd(), "name": "Oxygens with even indices"}
])
"""
Explanation: It's interesting to note that the atoms key of each request accepts the same possibilities as the atoms argument of the Geometry methods. Therefore, you can use indices, categories, dictionaries, strings...
For example:
End of explanation
"""
plot.split_DOS()
"""
Explanation: Easy and fast DOS splitting
As you might have noticed, sometimes it might be cumbersome to build all the requests you want. If your needs are simple and you don't need the flexibility of defining every parameter by yourself, there is a set of methods that will help you explore your PDOS data faster than ever before. These are: split_DOS, split_requests, update_requests, remove_requests and add_requests.
Let's begin with split_DOS. As you can imagine, this method splits the density of states:
End of explanation
"""
plot.split_DOS(on="atoms")
"""
Explanation: By default, it splits on the different species, but you can use the on argument to change that.
End of explanation
"""
plot.split_DOS(on="atoms", species=["O"], name="Oxygen $atoms")
"""
Explanation: Now we have the contribution of each atom.
But here comes the powerful part: split_DOS accepts as keyword arguments all the keys that a request accepts. Then, it adds that extra constrain to the splitting by adding the value to each request. So, if we want to get the separate contributions of all oxygen atoms, we can impose an extra constraint on species:
End of explanation
"""
plot.split_DOS(on="atoms", exclude=[1,3])
"""
Explanation: and then we have only the oxygen atoms, which are all equivalent.
Note that we also set a name for all requests, with the additional twist that we used the templating supported by split_DOS. If you are splitting on parameter, you can use $parameter inside your name and the method will replace it with the value for each request. In this case parameter was atoms, but it could be anything you are splitting the DOS on.
You can also exclude some values of the parameter you are splitting on:
End of explanation
"""
plot.split_DOS(on="atoms", only=[0,2])
"""
Explanation: Or indicate the only values that you want:
End of explanation
"""
plot.split_DOS(on="n+l+m", species=["O"], name="Oxygen")
"""
Explanation: Finally, if you want to split on multiple parameters at the same time, you can use + between different parameters. For example, to get all the oxygen orbitals:
End of explanation
"""
plot.split_DOS(name="$species")
"""
Explanation: Managing existing requests
Not only you can create requests easily with split_DOS, but it's also easy to manage the requests that you have created.
The methods that help you accomplish this are split_requests, update_requests, remove_requests. All three methods accept an undefined number of arguments that are used to select the requests you want to act on. You can refer to requests by their name (using a str) or their position (using an int). It's very easy to understand with examples. Then, keyword arguments depend on the functionality of each method.
For example, let's say that we have split the DOS on species
End of explanation
"""
plot.remove_requests("Sr", 2)
"""
Explanation: and we want to remove the Sr and O lines. That's easy:
End of explanation
"""
plot.split_DOS(name="$species", normalize=True)
"""
Explanation: We have indicated that we wanted to remove the request with name "Sr" and the 2nd request. Simple, isn't it?
Now that we know how to indicate the requests that we want to act on, let's use this to get the total Sr contribution, together with the Ti and O contributions split by n and l.
It sounds difficult, but it's actually not. Just split the DOS on species:
End of explanation
"""
plot.split_requests("Sr", 2, on="n+l", dash="dot")
"""
Explanation: And then use split_requests to split only the requests that we want to split:
End of explanation
"""
plot.update_requests("Ti", color="red", linewidth=2)
"""
Explanation: Notice how we've also set dash for all the requests that split_requests has generated. We can do this because split_requests works exactly as split_DOS, with the only difference that splits specific requests.
Just as a last thing, we will let you figure out how update_requests works:
End of explanation
"""
thumbnail_plot = plot.update_requests("Ti", color=None, linewidth=1)
if thumbnail_plot:
thumbnail_plot.show("png")
"""
Explanation: We hope you enjoyed what you learned!
This next cell is just to create the thumbnail for the notebook in the docs
End of explanation
"""
|
google/physics-math-tutorials | colabs/statistics1.ipynb | apache-2.0 | import numpy as np
def normalize(x):
return x / np.sum(x)
def posterior_covid(observed, prevalence=None, sensitivity=None):
# observed = 0 for negative test, 1 for positive test
# hidden state = 0 if no-covid, 1 if have-covid
if sensitivity is None:
sensitivity = 0.875
specificity = 0.975
    TPR = sensitivity
    FNR = 1 - TPR
    TNR = specificity
    FPR = 1 - TNR
# likelihood(hidden, obs)
likelihood_fn = np.array([[TNR, FPR], [FNR, TPR]])
# prior(hidden)
if prevalence is None:
prevalence = 0.1
prior = np.array([1-prevalence, prevalence])
likelihood = likelihood_fn[:, observed].T
posterior = normalize(prior * likelihood)
return posterior
"""
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Statistics - Session 1 - Exercises
Below are a few simple exercises or examples on probability to be discussed in the breakout rooms.
Exercise 1 - Bayes rule for COVID diagnosis
Ref: slides.
Consider estimating whether someone has COVID ($H=1$) or not ($H=0$) on the basis of a PCR test. The test can either return a positive result ($Y=1$) or a negative result ($Y=0$). The reliability of the test is given by the following confusion matrix:
<table>
<tr>
<th colspan="2" rowspan="2">
<th colspan="2">Observation
</tr>
<tr>
<th>0
<th>1
</tr>
<tr>
<th rowspan="2">Truth
<th>0
<td>True Negative Rate (TNR) = Specificity
<td>False Positive Rate (FPR) = 1 - TNR
</tr>
<tr>
<th>1
<td>False Negative Rate (FNR) = 1 - TPR
<td>True Positive Rate (TPR) = Sensitivity
</tr>
</table>
Using data from the New York Times, we set sensitivity to 87.5\% and specificity to 97.5\%.
We also need to specify the prior probability $p(H=1)$; known as the prevalence. This varies over time and place, but let's pick $p(H=1)=0.1$ as a reasonable estimate.
If you test positive:
\begin{align}
p(H=1|Y=1)
&= \frac{p(Y=1|H=1) p(H=1)}
{p(Y=1|H=1) p(H=1) + p(Y=1|H=0) p(H=0)}
= 0.795
\end{align}
If you test negative:
\begin{align}
p(H=1|Y=0)
&= \frac{p(Y=0|H=1) p(H=1)}
{p(Y=0|H=1) p(H=1) + p(Y=0|H=0) p(H=0)}
=0.014
\end{align}
Exercise: Using the code below, explore how the resulting probabilities depend on the sensitivity and specificity of the test, as well as on the prevalence.
End of explanation
"""
print(posterior_covid(1)[1]*100)
print(posterior_covid(0)[1]*100)
"""
Explanation: For a prevalence of $p(H=1)=0.1$
End of explanation
"""
print(posterior_covid(1, 0.01)[1]*100)
print(posterior_covid(0, 0.01)[1]*100)
"""
Explanation: For a prevalence of $p(H=1)=0.01$
End of explanation
"""
|
hankcs/HanLP | plugins/hanlp_demo/hanlp_demo/zh/tok_restful.ipynb | apache-2.0 | pip install hanlp_restful -U
"""
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/tok_restful.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Ftok_restful.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
Whether you are on Windows, Linux, or macOS, installing HanLP takes a single line:
End of explanation
"""
from hanlp_restful import HanLPClient
HanLP = HanLPClient('https://www.hanlp.com/api', auth=None, language='zh') # auth=None is anonymous; language: zh = Chinese, mul = multilingual
"""
Explanation: Creating a client
End of explanation
"""
HanLP.tokenize('商品和服务。阿婆主来到北京立方庭参观自然语义科技公司。')
"""
Explanation: Applying for an API key
Because server compute is limited, anonymous users are limited to 2 calls per minute. If you need more calls, consider applying for a free public-service API key (auth).
Tokenization
HanLP's online model is trained on a comprehensive 99.7-million-character corpus covering news, social media, finance, law, and other domains; to our knowledge it is the largest Chinese word-segmentation corpus in the world. Corpus size determines real-world quality: a production-oriented corpus should be on the order of tens of millions of characters. Natural Semantics' linguists continuously annotate this corpus to keep its segmentation quality state of the art.
As for segmentation standards, HanLP provides two granularities: fine-grained, suited to search-engine applications, and coarse-grained, suited to text mining.
Fine-grained tokenization
Fine-grained is the default:
End of explanation
"""
HanLP('商品和服务。阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()
"""
Explanation: You can also call HanLP directly as a function and pretty-print the segmentation result:
End of explanation
"""
HanLP.tokenize('商品和服务。阿婆主来到北京立方庭参观自然语义科技公司', coarse=True)
"""
Explanation: The return type is Document, a subclass of dict extended with many methods for manipulating all kinds of linguistic structures.
Both interfaces split the text into sentences, so the result is always a list of sentences. To maximize tokenization speed, we recommend passing in whole articles, as long as they do not exceed the maximum length allowed by the server.
Coarse-grained tokenization
To run coarse-grained segmentation:
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()
"""
Explanation: Or call it directly as a function:
End of explanation
"""
HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok*')
"""
Explanation: Running fine-grained and coarse-grained tokenization at once
End of explanation
"""
HanLP(['In 2021, HanLPv2.1 delivers state-of-the-art multilingual NLP techniques to production environments.',
'2021年、HanLPv2.1は次世代の最先端多言語NLP技術を本番環境に導入します。',
'2021年 HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。'], tasks='tok', language='mul').pretty_print()
"""
Explanation: fine holds the fine-grained result and coarse the coarse-grained one.
Multilingual tokenization
Thanks to its language-agnostic design, HanLP supports tokenization in 104 languages, including Simplified and Traditional Chinese, English, Japanese, Russian, French, and German. All it takes is specifying language='mul'.
End of explanation
"""
|
pinga-lab/magnetic-ellipsoid | code/lambda_oblate_ellipsoids.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: $\lambda$ variable for oblate ellipsoids
End of explanation
"""
a = 11.
b = 20.
x = 21.
y = 23.
z = 30.
"""
Explanation: Here, we follow the reasoning presented by Webster (1904) for analyzing the ellipsoidal coordinate $\lambda$ describing an oblate ellipsoid. Let's consider an ellipsoid with semi-axes $a$, $b$, $c$ oriented along the $x$-, $y$-, and $z$-axis, respectively, where $0 < a < b = c$. This ellipsoid is defined by the following equation:
<a id='eq1'></a>
$$
\frac{x^{2}}{a^{2}} + \frac{y^{2} + z^{2}}{b^{2}} = 1 \: . \tag{1}
$$
A quadric surface which is confocal with the ellipsoid defined in equation 1 can be described as follows:
<a id='eq2'></a>
$$
\frac{x^{2}}{a^{2} + \rho} + \frac{y^{2} + z^{2}}{b^{2} + \rho}= 1 \: , \tag{2}
$$
where $\rho$ is a real number. We know that equation 2 represents an ellipsoid for $\rho$ satisfying the condition
<a id='eq3'></a>
$$
\rho + a^{2} > 0 \: . \tag{3}
$$
Given $a$, $b$, and a $\rho$ satisfying equation 3, we may use equation 2 for determining a set of points $(x, y, z)$ lying on the surface of an ellipsoid confocal with that one defined in equation 1. Now, consider the problem of determining the ellipsoid which is confocal with that one defined in equation 1 and pass through a particular point $(x, y, z)$. This problem consists in determining the real number $\rho$ that, given $a$, $b$, $x$, $y$, and $z$, satisfies the equation 2.
By rearranging equation 2, we obtain the following quadratic equation for $\rho$:
$$
f(\rho) = (a^{2} + \rho)(b^{2} + \rho) - (b^{2} + \rho) \, x^{2}
- (a^{2} + \rho) \, (y^{2} + z^{2}) \: .
$$
This equation shows that:
$$
f(\rho) \begin{cases}
> 0 \: &, \quad \rho \to \infty \\
< 0 \: &, \quad \rho = -a^{2} \\
> 0 \: &, \quad \rho = -b^{2}
\end{cases} \: .
$$
By expanding the products in $f(\rho)$, we obtain a simpler expression given by:
<a id='eq4'></a>
$$
f(\rho) = p_{2} \, \rho^{2} + p_{1} \, \rho + p_{0} \: , \tag{4}
$$
where
<a id='eq5'></a>
$$
p_{2} = 1 \: , \tag{5}
$$
<a id='eq6'></a>
$$
p_{1} = a^{2} + b^{2} - x^{2} - y^{2} - z^{2} \tag{6}
$$
and
<a id='eq7'></a>
$$
p_{0} = a^{2} \, b^{2} - b^{2} \, x^{2} - a^{2} \, y^{2} - a^{2} \, z^{2} \: . \tag{7}
$$
Note that a particular $\rho$ satisfying equation 2 results in $f(\rho) = 0$ (equation 4).
In order to illustrate the parameter $\rho$, consider the constants $a$, $b$, $x$, $y$, and $z$ given in the cell below:
End of explanation
"""
p2 = 1.
p1 = a**2 + b**2 - (x**2) - (y**2) - (z**2)
p0 = (a*b)**2 - (b*x)**2 - (a*y)**2 - (a*z)**2
"""
Explanation: By using these constants, we calculate the coefficients $p_{2}$ (equation 5), $p_{1}$ (equation 6) and $p_{0}$ (equation 7) as follows:
End of explanation
"""
rho_min = -b**2 - 500.
rho_max = -a**2 + 2500.
rho = np.linspace(rho_min, rho_max, 100)
f = p2*(rho**2) + p1*rho + p0
"""
Explanation: In the sequence, we define a set of values for the variable $\rho$ in an interval $\left[ \rho_{min} \, , \rho_{max} \right]$ and evaluate the quadratic equation $f(\rho)$ (equation 4).
End of explanation
"""
ymin = np.min(f) - 0.1*(np.max(f) - np.min(f))
ymax = np.max(f) + 0.1*(np.max(f) - np.min(f))
plt.close('all')
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(rho_min, rho_max)
plt.ylim(ymin, ymax)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.subplot(1,2,2)
plt.plot([rho_min, rho_max], [0., 0.], 'k-')
plt.plot([-a**2, -a**2], [ymin, ymax], 'r--', label = '$-a^{2}$')
plt.plot([-b**2, -b**2], [ymin, ymax], 'g--', label = '$-b^{2}$')
plt.plot(rho, f, 'k-', linewidth=2.)
plt.xlim(-600., 100.)
plt.ylim(-0.3*10**6, 10**6)
plt.legend(loc = 'best')
plt.xlabel('$\\rho$', fontsize = 20)
#plt.ylabel('$f(\\rho)$', fontsize = 20)
plt.tight_layout()
plt.show()
"""
Explanation: Finally, the cell below shows the quadratic equation $f(\rho)$ (equation 4) evaluated in the range $\left[ \rho_{min} \, , \rho_{max} \right]$ defined above.
End of explanation
"""
delta = p1**2 - 4.*p2*p0
lamb = (-p1 + np.sqrt(delta))/(2.*p2)
print('lambda = %.5f' % lamb)
"""
Explanation: Remember that we are interested in a $\rho$ satisfying equation 3. Consequently, according to the figures shown above, we are interested in the largest root $\lambda$ of the quadratic equation $f(\rho)$ (equation 4).
The largest root $\lambda$ of $f(\rho)$ (equation 4) is given by:
<a id='eq8'></a>
$$
\lambda = \frac{-p_{1} + \sqrt{\Delta}}{2} \: , \tag{8}
$$
where
<a id='eq9'></a>
$$
\Delta = p_{1}^{2} - 4 \, p_{0} \: . \tag{9}
$$
The cells below use the equations 8 and 9 to compute the root $\lambda$.
End of explanation
"""
f_lamb = p2*(lamb**2) + p1*lamb + p0
print('f(lambda) = %.5f' % f_lamb)
"""
Explanation: By substituting $\lambda$ in equation 4, we can verify that it is a root of $f(\rho)$.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/3292f41d8bad9e2a2bc48714f7f39668/plot_30_epochs_metadata.ipynb | bsd-3-clause | import os
import numpy as np
import pandas as pd
import mne
kiloword_data_folder = mne.datasets.kiloword.data_path()
kiloword_data_file = os.path.join(kiloword_data_folder,
'kword_metadata-epo.fif')
epochs = mne.read_epochs(kiloword_data_file)
"""
Explanation: Working with Epoch metadata
This tutorial shows how to add metadata to :class:~mne.Epochs objects, and
how to use Pandas query strings <pandas:indexing.query> to select and
plot epochs based on metadata properties.
For this tutorial we'll use a different dataset than usual: the
kiloword-dataset, which contains EEG data averaged across 75 subjects
who were performing a lexical decision (word/non-word) task. The data is in
:class:~mne.Epochs format, with each epoch representing the response to a
different stimulus (word). As usual we'll start by importing the modules we
need and loading the data:
End of explanation
"""
epochs.metadata
"""
Explanation: Viewing Epochs metadata
.. sidebar:: Restrictions on metadata DataFrames
Metadata dataframes are less flexible than typical
:class:Pandas DataFrames <pandas.DataFrame>. For example, the allowed
data types are restricted to strings, floats, integers, or booleans;
and the row labels are always integers corresponding to epoch numbers.
Other capabilities of :class:DataFrames <pandas.DataFrame> such as
:class:hierarchical indexing <pandas.MultiIndex> are possible while the
:class:~mne.Epochs object is in memory, but will not survive saving and
reloading the :class:~mne.Epochs object to/from disk.
The metadata attached to :class:~mne.Epochs objects is stored as a
:class:pandas.DataFrame containing one row for each epoch. The columns of
this :class:~pandas.DataFrame can contain just about any information you
want to store about each epoch; in this case, the metadata encodes
information about the stimulus seen on each trial, including properties of
the visual word form itself (e.g., NumberOfLetters, VisualComplexity)
as well as properties of what the word means (e.g., its Concreteness) and
its prominence in the English lexicon (e.g., WordFrequency). Here are all
the variables; note that in a Jupyter notebook, viewing a
:class:pandas.DataFrame gets rendered as an HTML table instead of the
normal Python output block:
End of explanation
"""
print('Name-based selection with .loc')
print(epochs.metadata.loc[2:4])
print('\nIndex-based selection with .iloc')
print(epochs.metadata.iloc[2:4])
"""
Explanation: Viewing the metadata values for a given epoch and metadata variable is done
using any of the Pandas indexing <pandas:/reference/indexing.rst>
methods such as :obj:~pandas.DataFrame.loc,
:obj:~pandas.DataFrame.iloc, :obj:~pandas.DataFrame.at,
and :obj:~pandas.DataFrame.iat. Because the
index of the dataframe is the integer epoch number, the name- and index-based
selection methods will work similarly for selecting rows, except that
name-based selection (with :obj:~pandas.DataFrame.loc) is inclusive of the
endpoint:
End of explanation
"""
epochs.metadata['NumberOfLetters'] = \
epochs.metadata['NumberOfLetters'].map(int)
epochs.metadata['HighComplexity'] = epochs.metadata['VisualComplexity'] > 65
epochs.metadata.head()
"""
Explanation: Modifying the metadata
Like any :class:pandas.DataFrame, you can modify the data or add columns as
needed. Here we convert the NumberOfLetters column from :class:float to
:class:integer <int> data type, and add a :class:boolean <bool> column
that arbitrarily divides the variable VisualComplexity into high and low
groups.
End of explanation
"""
print(epochs['WORD.str.startswith("dis")'])
"""
Explanation: Selecting epochs using metadata queries
All :class:~mne.Epochs objects can be subselected by event name, index, or
:term:slice (see tut-section-subselect-epochs). But
:class:~mne.Epochs objects with metadata can also be queried using
Pandas query strings <pandas:indexing.query> by passing the query
string just as you would normally pass an event name. For example:
End of explanation
"""
print(epochs['Concreteness > 6 and WordFrequency < 1'])
"""
Explanation: This capability uses the :meth:pandas.DataFrame.query method under the
hood, so you can check out the documentation of that method to learn how to
format query strings. Here's another example:
End of explanation
"""
epochs['solenoid'].plot_psd()
"""
Explanation: Note also that traditional epochs subselection by condition name still works;
MNE-Python will try the traditional method first before falling back on rich
metadata querying.
End of explanation
"""
words = ['typhoon', 'bungalow', 'colossus', 'drudgery', 'linguist', 'solenoid']
epochs['WORD in {}'.format(words)].plot(n_channels=29)
"""
Explanation: One use of the Pandas query string approach is to select specific words for
plotting:
End of explanation
"""
evokeds = dict()
query = 'NumberOfLetters == {}'
for n_letters in epochs.metadata['NumberOfLetters'].unique():
evokeds[str(n_letters)] = epochs[query.format(n_letters)].average()
mne.viz.plot_compare_evokeds(evokeds, cmap=('word length', 'viridis'),
picks='Pz')
"""
Explanation: Notice that in this dataset, each "condition" (A.K.A., each word) occurs only
once, whereas with the sample-dataset dataset each condition (e.g.,
"auditory/left", "visual/right", etc) occurred dozens of times. This makes
the Pandas querying methods especially useful when you want to aggregate
epochs that have different condition names but that share similar stimulus
properties. For example, here we group epochs based on the number of letters
in the stimulus word, and compare the average signal at electrode Pz for
each group:
End of explanation
"""
sort_order = np.argsort(epochs.metadata['WordFrequency'])
epochs.plot_image(order=sort_order, picks='Pz')
"""
Explanation: Metadata can also be useful for sorting the epochs in an image plot. For
example, here we order the epochs based on word frequency to see if there's a
pattern to the latency or intensity of the response:
End of explanation
"""
new_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'],
index=range(len(epochs)))
epochs.metadata = new_metadata
epochs.metadata.head()
"""
Explanation: Although there's no obvious relationship in this case, such analyses may be
useful for metadata variables that more directly index the time course of
stimulus processing (such as reaction time).
Adding metadata to an Epochs object
You can add a metadata :class:~pandas.DataFrame to any
:class:~mne.Epochs object (or replace existing metadata) simply by
assigning to the :attr:~mne.Epochs.metadata attribute:
End of explanation
"""
epochs.metadata = None
"""
Explanation: You can remove metadata from an :class:~mne.Epochs object by setting its
metadata to None:
End of explanation
"""
|
juanshishido/text-classification | notebooks/kaggle-writeup.ipynb | mit | %matplotlib inline
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import regexp_tokenize
from nltk.stem.porter import PorterStemmer
from sklearn import cross_validation
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
plt.style.use('ggplot')
"""
Explanation: ANLP 2015 Text Classification Assignment
Emily Scharff and Juan Shishido
Write Up
Introduction
This notebook contains the code and documentation that we used to obtain our score of 0.58541 on the public leaderboard for the ANLP 2015 Classification Assignment. We describe our text processing, feature engineering, and model selection approaches. We both worked on feature engineering and model selection. Juan spent time at the beginning setting up tfidf and Emily created and tested many features. Juan also experimented with tweaking the model parameters. Both Juan and Emily contributed to setting up the workflow.
Text Processing
The data were loaded into pandas DataFrames. We began by plotting the frequency of each category in the training set and noticed that the distribution was not uniform. Category 1, for example, was the best represented, with 769 questions. Category 6, on the other hand, had the fewest questions—232. This would prove to be a good insight and we'll describe how we used it to our advantage.
In terms of processing the data, our approach was not to modify the original text. Rather, we created a new column, text_clean, that reflected our changes.
While examining the plain-text training data, we noticed sequences of HTML escaped characters, such as &#xd;&lt;br&gt;, which we removed with a regular expression. We also remove non-alphanumeric characters and replace whitespace with single spaces.
Features and Models
In terms of features, we started simple, using a term-document matrix that only included word frequencies. We also decided to get familiar with a handful of algorithms. We used our word features to train logistic regression and multinomial naive Bayes models. Using Scikit-Learn's cross_validation function, we were surprised to find initial scores of around 50% accuracy.
From here, we deviated somewhat and tried document similarity. Using the training data, we combined questions, by category. Our thought was to create seven "documents," one for each category, that represented the words used for the corresponding questions. This resulted in a $7 \times w$ matrix, where $w$ represents the number of unique words across documents. This was created using Scikit-Learn's TfidfVectorizer. For the test data, the matrix was of dimension $w \times q$, where $q$ represents the number of questions. Note that $w$ is the same in each of our matrices. This is so that it's possible to perform matrix multiplication. Of course, the cosine_similarity function, the metric we decided to use, takes care of some of the implementation details. Our first submission was based on this approach. We then stemmed the words in our corpus, using the Porter Stemmer, and that increased our score slightly.
Before proceeding, we decided to use Scikit-Learn's train_test_split function to create a development set—20% of the training data—on which to test our models. To fit our models, we used the remaining 80% of the original training data.
In our next iteration, we went back to experimenting with logistic regression and naive Bayes, but also added a linear support vector classifier. Here, we also started to add features. Because we were fitting a model, we did not combine questions by category. Rather, our tfidf feature matrix had a row for each question.
We tried many features. We ended up with the following list:
number of question marks
number of periods
number of apostrophes
number of "the"s
number of words
number of stop words
number of first person words
number of second person words
number of third person words
indicators for whether the first word was in ['what', 'how', 'why', 'is']
Other features we tried
Unigrams: This feature was used to check for the occurrence of certain unigrams, just as in John's Scikit-Learn notebook. We used it to check for the most frequent words in each category. Using the 500 most frequent words in each category performed the best. However, this performance was outstripped by a simple tfidf and, when combined, only lowered the score.
Numeric: The goal of this feature was to check if a certain question used numbers. The idea was that crtain categories, such as math, would use number more frequently than others, such as entertainment. In practice it did not work out that well.
Similarity: Here we used WordNet's similarity to see how similar the words in the question were to the question's category. This performed quite poorly. We believe this was due to the fact that the similarity function is not that accurate.
POS: We added a feature to count the number of occurrences of a particular part of speech. We tested it with nouns, verbs, and adjectives. Interestingly, the verbs performed the best. However, in combination with the other features we chose, it seemed to hurt performance.
Median length: Without tfidf, including the length of the median word of a question greatly increased the categorization accuracy. However, after using tfidf, the median length only detracted from the score. Because tfidf performed better, we did not include it in the final set of features.
Names: This feature checked if a particular question contained a name. This worked better than counting the number of names. This is likely due to a lack of data. Overall, the number of questions with names in the training set is small, so you can get better classification by only making the feature return a binary indicator rather than a count.
Other processing
We also stemmed the words prior to passing them through the TfidfVectorizer.
When we noticed some misspelled words, we tried using Peter Norvig's correct function, but it did not improve our accuracy scores.
One thing that was helpful was the plots we created when assessing the various models. We plotted the predicted labels against the ground truth. (An example of this in included below.) This helped us see, right away, that the linear SVC was performing best across all the permutations of features we tried. This is how we eventually decided to stick with that algorithm.
During one of the iterations, we noticed that the naive Bayes model was incorrectly predicting category 1 for a majority of the data. We remembered the distribution of categories mentioned earlier and decided to sample the other categories at higher frequencies. We took the original training data, and then drew a random sample of questions from categories 2 through 7. After some experimentation, we decided to sample an extra 1,200 observations. This strategy helped improve our score.
We also spend time examining and analyzing the confidence scores using the decision_function() method. The idea here was to see if we could identify patterns in how the classifier was incorrectly labeling the development set. Unfortunately, we were not able to use this information to improve our scores.
Finally, because of all the testing we had done, we had several results files, which included results we did not submit. With this data, we used a bagging approach—majority vote—to get a "final" classification on the 1,874 test examples. This, unfortunately, did not improve our score.
Our best result on the public leaderboard was from a single linear support vector classifier using tfidf and the features listed above.
Code
Imports
End of explanation
"""
def sample(df, n=1000, include_cats=[2, 3, 4, 5, 6, 7], random_state=1868):
"""Take a random sample of size `n` for categories
in `include_cats`.
"""
df = df.copy()
subset = df[df.Category.isin(include_cats)]
sample = subset.sample(n, random_state=random_state)
return sample
def clean_text(df, col):
"""A function for keeping only alpha-numeric
characters and replacing all white space with
a single space.
"""
df = df.copy()
porter_stemmer = PorterStemmer()
return df[col].apply(lambda x: re.sub(';br&', ';&', x))\
.apply(lambda x: re.sub('&.+?;', '', x))\
.apply(lambda x: re.sub('[^A-Za-z0-9]+', ' ', x.lower()))\
.apply(lambda x: re.sub('\s+', ' ', x).strip())\
.apply(lambda x: ' '.join([porter_stemmer.stem(w)
for w in x.split()]))
def count_pattern(df, col, pattern):
"""Count the occurrences of `pattern`
in df[col].
"""
df = df.copy()
return df[col].str.count(pattern)
def split_on_sentence(text):
"""Tokenize the text on sentences.
Returns a list of strings (sentences).
"""
sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
return sent_tokenizer.tokenize(text)
def split_on_word(text):
"""Use regular expression tokenizer.
Keep apostrophes.
Returns a list of lists, one list for each sentence:
[[word, word], [word, word, ..., word], ...].
"""
if type(text) is list:
return [regexp_tokenize(sentence, pattern="\w+(?:[-']\w+)*")
for sentence in text]
else:
return regexp_tokenize(text, pattern="\w+(?:[-']\w+)*")
def features(df):
"""Create the features in the specified DataFrame."""
stop_words = stopwords.words('english')
df = df.copy()
df['n_questionmarks'] = count_pattern(df, 'Text', '\?')
df['n_periods'] = count_pattern(df, 'Text', '\.')
df['n_apostrophes'] = count_pattern(df, 'Text', '\'')
df['n_the'] = count_pattern(df, 'Text', 'the ')
df['first_word'] = df.text_clean.apply(lambda x: split_on_word(x)[0])
question_words = ['what', 'how', 'why', 'is']
for w in question_words:
        col_fw = 'fw_' + w
df[col_fw] = (df.first_word == w) * 1
del df['first_word']
df['n_words'] = df.text_clean.apply(lambda x: len(split_on_word(x)))
df['n_stopwords'] = df.text_clean.apply(lambda x:
len([w for w in split_on_word(x)
if w not in stop_words]))
df['n_first_person'] = df.text_clean.apply(lambda x:
sum([w in person_first
for w in x.split()]))
df['n_second_person'] = df.text_clean.apply(lambda x:
sum([w in person_second
for w in x.split()]))
df['n_third_person'] = df.text_clean.apply(lambda x:
sum([w in person_third
for w in x.split()]))
return df
def flatten_words(list1d, get_unique=False):
qa = [s.split() for s in list1d]
if get_unique:
return sorted(list(set([w for sent in qa for w in sent])))
else:
return [w for sent in qa for w in sent]
def tfidf_matrices(tr, te, col='text_clean'):
"""Returns tfidf matrices for both the
training and test DataFrames.
The matrices will have the same number of
columns, which represent unique words, but
not the same number of rows, which represent
samples.
"""
tr = tr.copy()
te = te.copy()
text = tr[col].values.tolist() + te[col].values.tolist()
vocab = flatten_words(text, get_unique=True)
tfidf = TfidfVectorizer(stop_words='english', vocabulary=vocab)
tr_matrix = tfidf.fit_transform(tr.text_clean)
te_matrix = tfidf.fit_transform(te.text_clean)
return tr_matrix, te_matrix
def concat_tfidf(df, matrix):
df = df.copy()
df = pd.concat([df, pd.DataFrame(matrix.todense())], axis=1)
return df
def jitter(values, sd=0.25):
"""Jitter points for use in a scatterplot."""
return [np.random.normal(v, sd) for v in values]
person_first = ['i', 'we', 'me', 'us', 'my', 'mine', 'our', 'ours']
person_second = ['you', 'your', 'yours']
person_third = ['he', 'she', 'it', 'him', 'her', 'his', 'hers', 'its']
"""
Explanation: Functions
End of explanation
"""
training = pd.read_csv('../data/newtrain.csv')
test = pd.read_csv('../data/newtest.csv')
"""
Explanation: Data
Load
End of explanation
"""
training['text_clean'] = clean_text(training, 'Text')
test['text_clean'] = clean_text(test, 'Text')
"""
Explanation: Clean
End of explanation
"""
training = features(training)
test = features(test)
"""
Explanation: Features
End of explanation
"""
train, dev = cross_validation.train_test_split(training, test_size=0.2, random_state=1868)
train = train.append(sample(train, n=800))
train.reset_index(drop=True, inplace=True)
dev.reset_index(drop=True, inplace=True)
"""
Explanation: Split the training data
End of explanation
"""
train_matrix, dev_matrix = tfidf_matrices(train, dev)
"""
Explanation: tfidf
End of explanation
"""
train = concat_tfidf(train, train_matrix)
dev = concat_tfidf(dev, dev_matrix)
"""
Explanation: Combine
End of explanation
"""
svm = LinearSVC(dual=False, max_iter=5000)
features = train.columns[3:]
X = train[features].values
y = train['Category'].values
features_dev = dev[features].values
"""
Explanation: Training
End of explanation
"""
svm.fit(X, y)
dev_predicted = svm.predict(features_dev)
accuracy_score(dev.Category, dev_predicted)
plt.figure(figsize=(6, 5))
plt.scatter(jitter(dev.Category, 0.15),
jitter(dev_predicted, 0.15),
color='#348ABD', alpha=0.25)
plt.title('Support Vector Classifier\n')
plt.xlabel('Ground Truth')
plt.ylabel('Predicted')
"""
Explanation: Testing on dev
End of explanation
"""
training = training.append(sample(training, n=1200))
training.reset_index(drop=True, inplace=True)
training_matrix, test_matrix = tfidf_matrices(training, test)
training = concat_tfidf(training, training_matrix)
test = concat_tfidf(test, test_matrix)
features = training.columns[3:]
X = training[features].values
y = training['Category'].values
features_test = test[features].values
svm.fit(X, y)
test_predicted = svm.predict(features_test)
test['Category'] = test_predicted
output = test[['Id', 'Category']]
"""
Explanation: Test Data
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion | notebooks/recommendation_systems/labs/3_als_bqml_hybrid.ipynb | apache-2.0 | PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
"""
Explanation: Hybrid Recommendations with the Movie Lens Dataset
Note: It is recommended that you complete the companion als_bqml.ipynb notebook before continuing with this als_bqml_hybrid.ipynb notebook. This is, however, not a requirement for this lab as you have the option to bring over the dataset + trained model. If you already have the movielens dataset and trained model you can skip the "Import the dataset and trained model" section.
Learning Objectives
Know how to extract user and product factors from a BigQuery Matrix Factorization Model
Know how to format inputs for a BigQuery Hybrid Recommendation Model
End of explanation
"""
!bq mk movielens
%%bash
rm -r bqml_data
mkdir bqml_data
cd bqml_data
curl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'
unzip ml-20m.zip
yes | bq rm -r $PROJECT:movielens
bq --location=US mk --dataset \
--description 'Movie Recommendations' \
$PROJECT:movielens
bq --location=US load --source_format=CSV \
--autodetect movielens.ratings ml-20m/ratings.csv
bq --location=US load --source_format=CSV \
--autodetect movielens.movies_raw ml-20m/movies.csv
"""
Explanation: Import the dataset and trained model
In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML.
We are going to use the same tables, but if this is a new environment, please run the below commands to copy over the clean data.
First create the BigQuery dataset and copy over the data
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.movies AS
SELECT * REPLACE(SPLIT(genres, "|") AS genres)
FROM movielens.movies_raw
"""
Explanation: And create a cleaned movielens.movies table.
End of explanation
"""
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender \
movielens.recommender
"""
Explanation: Next, copy over the trained recommendation model. Note that if your project is in the EU, you will need to change the location from US to EU below. Also note that, as of the time of writing, you cannot copy models across regions with bq cp.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `movielens.recommender`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
"""
Explanation: Next, ensure the model still works by invoking predictions for movie recommendations:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
processed_input,
feature,
TO_JSON_STRING(factor_weights) AS factor_weights,
intercept
FROM ML.WEIGHTS(MODEL `movielens.recommender`)
WHERE
(processed_input = 'movieId' AND feature = '96481')
OR (processed_input = 'userId' AND feature = '54192')
"""
Explanation: Incorporating user and movie information
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live in, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.
Obtaining user and product factors
We can get the user factors or product factors from ML.WEIGHTS. For example to get the product factors for movieId=96481 and user factors for userId=54192, we would do:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
userId,
RAND() * COUNT(rating) AS loyalty,
CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
movielens.ratings
GROUP BY userId
"""
Explanation: Multiplying these weights together and adding the intercepts is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.
Creating input features
The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let’s create some synthetic information about users:
End of explanation
"""
%%bigquery --project $PROJECT
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatures
LIMIT 5
"""
Explanation: Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
End of explanation
"""
%%bigquery --project $PROJECT
WITH productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT * FROM productFeatures
LIMIT 5
"""
Explanation: Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features using the following query.
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
# TODO: Place the user features query here
),
productFeatures AS (
# TODO: Place the product features query here
)
SELECT
p.* EXCEPT(movieId),
u.* EXCEPT(userId),
rating
FROM productFeatures p, userFeatures u
JOIN movielens.ratings r
ON r.movieId = p.movieId AND r.userId = u.userId
"""
Explanation: Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
TODO 1: Combine the above two queries to get the user factors and product factor for each rating.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT *
FROM movielens.hybrid_dataset
LIMIT 1
"""
Explanation: One of the rows of this table looks like this:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64,
u16 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)],
u[OFFSET(15)]
));
"""
Explanation: Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Training hybrid recommendation model
At the time of writing, BigQuery ML can not handle arrays as inputs to a regression model. Let’s, therefore, define a function to convert arrays to a struct where the array elements are its fields:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT
[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u)
"""
Explanation: which gives:
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS
STRUCT<
p1 FLOAT64,
p2 FLOAT64,
p3 FLOAT64,
p4 FLOAT64,
p5 FLOAT64,
p6 FLOAT64,
p7 FLOAT64,
p8 FLOAT64,
p9 FLOAT64,
p10 FLOAT64,
p11 FLOAT64,
p12 FLOAT64,
# TODO: Finish building this struct
> AS (STRUCT(
p[OFFSET(0)],
p[OFFSET(1)],
p[OFFSET(2)],
p[OFFSET(3)],
p[OFFSET(4)],
p[OFFSET(5)],
p[OFFSET(6)],
p[OFFSET(7)],
p[OFFSET(8)],
p[OFFSET(9)],
p[OFFSET(10)],
p[OFFSET(11)],
p[OFFSET(12)],
# TODO: Finish building this struct
));
"""
Explanation: We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
TODO 2: Create a function that returns named columns from a size 16 product factor array.
End of explanation
"""
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
* EXCEPT(user_factors, product_factors),
movielens.arr_to_input_16_users(user_factors).*,
movielens.arr_to_input_16_products(product_factors).*
FROM
movielens.hybrid_dataset
"""
Explanation: Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating:
End of explanation
"""
|
feststelltaste/software-analytics | prototypes/Reading Git logs with Pandas 2.0 with bonus.ipynb | gpl-3.0 | import git
GIT_REPO_PATH = r'../../spring-petclinic/'
repo = git.Repo(GIT_REPO_PATH)
git_bin = repo.git
git_bin
"""
Explanation: Introduction
There are multiple reasons for analyzing a version control system like your Git repository. See for example Adam Tornhill's book "Your Code as a Crime Scene" or his upcoming book "Software Design X-Rays" for plenty of inspirations:
You can
- analyze knowledge islands
- distinguish often changing code from stable code parts
- identify code that is temporal coupled
Having the necessary data for those analyses in a Pandas <tt>DataFrame</tt> gives you many many possibilities to quickly gain insights about the evolution of your software system.
The idea
In another blog post I showed you a way to read in Git log data with Pandas's DataFrame and GitPython. Looking back, this was really complicated and tedious. So with a few tricks we can do it much more better this time:
We use GitPython's feature to directly access the underlying Git installation. This is much faster than using GitPython's object representation of the repository and makes it possible to have everything we need in one notebook.
We use in-memory reading with StringIO to avoid unnecessary file access. This avoids storing the Git output on disk and reading it back from disk again, which is much faster.
We also exploit Pandas's <tt>read_csv</tt> method even more. This makes the transformation of the Git log into a <tt>DataFrame</tt> as easy as pie.
Reading the Git log
The first step is to connect GitPython with the Git repo. If we have an instance of the repo, we can gain access to the underlying Git installation of the operation system via <tt>repo.git</tt>.
In this case, again, we tap the Spring Pet Clinic project, a small sample application for the Spring framework.
End of explanation
"""
git_log = git_bin.execute('git log --numstat --pretty=format:"\t\t\t%h\t%at\t%aN"')
git_log[:80]
"""
Explanation: With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
We use a little trick to make sure, that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions, deletions and the affected file name in one line, separated by the tabulator character <tt>\t</tt>:
<p>
<tt>1<b>\t</b>1<b>\t</b>some/file/name.ext</tt>
</p>
We use the same tabular separator <tt>\t</tt> for the format string:
<p>
<tt>%h<b>\t</b>%at<b>\t</b>%aN</tt>
</p>
And here is the trick: Additionally, we add the amount of tabulators of the file's statistics plus an additional tabulator in front of the format string to pretend that there are empty file statistics' information in front of the format string.
The results looks like this:
<p>
<tt>\t\t\t%h\t%at\t%aN</tt>
</p>
Note: If you want to export the Git log on the command line into a file to read that file later, you need to use the tabulator escape <tt>%x09</tt> as separator instead of <tt>\t</tt> in the format string. Otherwise, the trick doesn't work.
OK, let's first execute the Git log export:
End of explanation
"""
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author']
)
commits_raw.head()
"""
Explanation: We now read in the complete files' history in the <tt>git_log</tt> variable. Don't be confused by all the <tt>\t</tt> characters.
Let's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. Because we can't provide a file path to the CSV data, we have to use StringIO to read in our in-memory buffered content.
Pandas will read the first line of the tab-separated "file", see the many tab-separated columns, and parse all other lines in the same format / column layout. Additionally, we set the <tt>header</tt> to <tt>None</tt> because we don't have one and provide nice names for all the columns that we read in.
End of explanation
"""
commits = commits_raw.fillna(method='ffill')
commits.head()
"""
Explanation: We got two different kinds of content in the rows: commit metadata rows and file statistics rows.
For the latter kind of row, we got some statistics about the modified files:
<pre>
2 0 src/main/asciidoc/appendices/bibliography.adoc
</pre>
It contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.
The last steps are easy. We fill the empty metadata cells of the file statistics rows with the corresponding commit's metadata.
End of explanation
"""
commits = commits.dropna()
commits.head()
"""
Explanation: And drop all the commit metadata rows that don't contain file statistics.
End of explanation
"""
pd.read_csv("../../spring-petclinic/git.log",
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author']).fillna(method='ffill').dropna().head()
"""
Explanation: We are finished! This is it.
In summary, you'll need a "one-liner" for converting a Git log file output that was exported with
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git.log
into a <tt>DataFrame</tt>:
End of explanation
"""
commits['additions'] = pd.to_numeric(commits['additions'], errors='coerce')
commits['deletions'] = pd.to_numeric(commits['deletions'], errors='coerce')
commits = commits.dropna()
commits['timestamp'] = pd.to_datetime(commits['timestamp'], unit="s")
commits.head()
%matplotlib inline
commits[commits['filename'].str.endswith(".java")]\
.groupby('filename')\
.count()['additions']\
.hist()
"""
Explanation: Bonus section
We can now convert some columns to their correct data types. The columns <tt>additions</tt> and <tt>deletions</tt> columns are representing the added or deleted lines of code respectively. But there are also a few exceptions for binary files like images. We skip these lines with the <tt>errors='coerce'</tt> option. This will lead to <tt>Nan</tt> in the rows that will be dropped after the conversion.
The <tt>timestamp</tt> column is a UNIX timestamp with the past seconds since January 1st 1970 we can easily convert with Pandas' <tt>to_datetime</tt> method.
End of explanation
"""
|
arruda/bgarena_analysis | notebooks/gametables_finished_games_by_elo_rating.ipynb | mit | %pylab inline
import os
from pandas import read_csv
csv_files_location = "../bgarena_gatherer/db_backup/"
game_tables_df = read_csv(os.path.join(csv_files_location, 'gametables.csv'))
game_tables_df = game_tables_df.set_index("id")
# remove non existing tables
non_error_game_table_df = game_tables_df[game_tables_df['game'] != "ERROR"]
"""
Explanation: Data frame info
game: Name of the game;
table_link: link to access the table;
creation_time: time that the table was created (not the time that the game started);
is_elo_rating: -1 if the option was not present (old tables), 0 if the option was present but set to false, and 1 if present and set to true;
game_speed: if present (the values of this field are in Portuguese);
game_status: 'open': 0,'finished': 1,'abandonned': 2,'cancelled': 3
End of explanation
"""
game_tables_with_elo_rating_present = non_error_game_table_df[non_error_game_table_df['is_elo_rating'] > -1]
game_tables_with_elo_rating_present.head()
grouped_by_elo = game_tables_with_elo_rating_present.groupby('is_elo_rating')
"""
Explanation: Considering only tables where the Elo rating option is present
End of explanation
"""
grouped_by_elo.get_group(1)['game_status'].plot.hist(bins=20, color='Green')
grouped_by_elo.get_group(0)['game_status'].plot.hist(bins=20, color='Blue')
elo_rated_finished = grouped_by_elo.get_group(1)[grouped_by_elo.get_group(1)['game_status']==1]
non_elo_rated_finished = grouped_by_elo.get_group(0)[grouped_by_elo.get_group(0)['game_status']==1]
"""
Explanation: Elo Rating On (Green) vs Elo Rating Off (Blue)
End of explanation
"""
perc = elo_rated_finished.size/float(grouped_by_elo.get_group(1).size) * 100
print "{} %".format(perc)
"""
Explanation: % of Finished games by Elo Rate Option (On/Off)
Finished Games With Elo Rate On
End of explanation
"""
perc = non_elo_rated_finished.size/float(grouped_by_elo.get_group(0).size) * 100
print "{} %".format(perc)
"""
Explanation: Finished Games With Elo Rate Off
End of explanation
"""
|
sys-bio/tellurium | examples/notebooks/core/tellurium_utility.ipynb | apache-2.0 | %matplotlib inline
from __future__ import print_function
import tellurium as te
# to get the tellurium version use
print('te.__version__')
print(te.__version__)
# or
print('te.getTelluriumVersion()')
print(te.getTelluriumVersion())
# to print the full version info use
print('-' * 80)
te.printVersionInfo()
print('-' * 80)
"""
Explanation: Back to the main Index
Version information
Tellurium's version can be obtained via te.__version__. .printVersionInfo() also returns information from certain constituent packages.
End of explanation
"""
from builtins import range
# Load SBML file
r = te.loada("""
model test
J0: X0 -> X1; k1*X0;
X0 = 10; X1=0;
k1 = 0.2
end
""")
import matplotlib.pyplot as plt
# Turn off notices so they don't clutter the output
te.noticesOff()
for i in range(0, 20):
result = r.simulate (0, 10)
r.reset()
r.plot(result, loc=None, show=False,
linewidth=2.0, linestyle='-', color='black', alpha=0.8)
r.k1 = r.k1 + 0.2
# Turn the notices back on
te.noticesOn()
"""
Explanation: Repeat simulation without notification
End of explanation
"""
import tellurium as te
# create tmp file
import tempfile
ftmp = tempfile.NamedTemporaryFile(suffix=".xml")
# load model
r = te.loada('S1 -> S2; k1*S1; k1 = 0.1; S1 = 10')
# save to file
te.saveToFile(ftmp.name, r.getMatlab())
# or easier via
r.exportToMatlab(ftmp.name)
# load file
matlabstr = te.readFromFile(ftmp.name)
print('%' + '*'*80)
print('Converted MATLAB code')
print('%' + '*'*80)
print(matlabstr[1531:2000])
print('...')
"""
Explanation: File helpers for reading and writing
End of explanation
"""
|
abulbasar/machine-learning | Spacy.ipynb | apache-2.0 | sentence = """European authorities fined Google a record $5.1 billion on Wednesday for
abusing its power in the mobile phone market and ordered the company to alter its practices"""
"""
Install spacy
$ pip install spacy
Download en_core_web_sm module
$ python -m spacy download en_core_web_sm
"""
import spacy
from spacy import displacy
from collections import Counter
import en_core_web_sm
nlp = en_core_web_sm.load()
doc = nlp(sentence)
[(X.text, X.label_) for X in doc.ents]
"""
Explanation: Spacy
spaCy is an open-source software library for advanced Natural Language Processing, written in the programming languages Python and Cython. It is geared slightly more toward enterprise use cases. Its out-of-the-box POS tagger and NER analyzers are very popular.
End of explanation
"""
displacy.render(nlp(str(sentence)), style='dep', jupyter = True, options = {'distance': 120})
[(x.orth_,x.pos_, x.lemma_)
for x in [y for y in nlp(sentence)
if not y.is_stop and y.pos_ != 'PUNCT']]
"""
Explanation: Display dependency graph
End of explanation
"""
|
pvalienteverde/ElCuadernillo | ElCuadernillo/20160214_TensorFlowTutorialBasico/TensorFlow_Interactivo_IPython.ipynb | mit | import tensorflow as tf
"""
Explanation: Example of how to run the TensorFlow library interactively in IPython
End of explanation
"""
sess = tf.InteractiveSession()
x = tf.Variable([[2.0, 3.0],[4.0, 12.0]])
"""
Explanation: This is how we launch an interactive session, which is useful when we want to try out methods
End of explanation
"""
x.initializer.run()
tf.reduce_mean(x).eval()
tf.reduce_mean(x,1).eval()
tf.reduce_mean(x,0).eval()
"""
Explanation: We try out the function that reduces a tensor by taking means
End of explanation
"""
sess.close()
"""
Explanation: We close the session to release resources
End of explanation
"""
|
robertoalotufo/ia898 | src/h2stats.ipynb | mit | def h2stats(h):
import numpy as np
import ia898.src as ia
hn = 1.0*h/h.sum() # compute the normalized image histogram
v = np.zeros(11) # number of statistics
# compute statistics
n = len(h) # number of gray values
v[0] = np.sum((np.arange(n)*hn)) # mean
v[1] = np.sum(np.power((np.arange(n)-v[0]),2)*hn) # variance
v[2] = np.sum(np.power((np.arange(n)-v[0]),3)*hn)/(np.power(v[1],1.5))# skewness
v[3] = np.sum(np.power((np.arange(n)-v[0]),4)*hn)/(np.power(v[1],2))-3# kurtosis
v[4] = -(hn[hn>0]*np.log(hn[hn>0])).sum() # entropy
v[5] = np.argmax(h) # mode
v[6:] = ia.h2percentile(h,np.array([1,10,50,90,99])) # 1,10,50,90,99% percentile
return v
"""
Explanation: Function h2stats
Synopse
The h2stats function computes several statistics given an image histogram.
g = h2stats(h)
Output
g: unidimensional array. Array containing the statistics from
the histogram
Input
h: 1-D ndarray: histogram
Description
The h2stats function extracts some relevant statistics of the image from which the histogram was computed:
[0] Mean (mean grayscale value)
[1] Variance (variance of grayscale values)
[2] Skewness
[3] Kurtosis
[4] entropy
[5] mode (gray scale value with largest occurrence)
[6] Percentile 1%
[7] Percentile 10%
[8] Percentile 50% (This is the median gray scale value)
[9] Percentile 90%
[10] Percentile 99%
Function Code
End of explanation
"""
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python h2stats.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Examples
End of explanation
"""
if testing:
f = np.array([1,1,1,0,1,2,2,2,1])
h = ia.histogram(f)
print('statistics =', ia.h2stats(h))
"""
Explanation: Numeric Example
End of explanation
"""
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy.stats import mode, kurtosis, skew, entropy
f = mpimg.imread('../data/cameraman.tif')
plt.imshow(f,cmap='gray')
h = ia.histogram(f)
v = ia.h2stats(h)
print('mean =',v[0])
print('variance =',v[1])
print('skewness =',v[2])
print('kurtosis = ',v[3])
print('entropy = ',v[4])
print('mode = ',v[5])
print('percentil 1% = ',v[6])
print('percentil 10% = ',v[7])
print('percentil 50% = ',v[8])
print('percentil 90% = ',v[9])
print('percentil 99% = ',v[10])
"""
Explanation: Image Example
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.3/tutorials/general_concepts.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.3,<2.4"
"""
Explanation: General Concepts: The PHOEBE Bundle
HOW TO RUN THIS FILE: if you're running this in a Jupyter notebook or Google Colab session, you can click on a cell and then shift+Enter to run the cell and automatically select the next cell. Alt+Enter will run a cell and create a new cell below it. Ctrl+Enter will run a cell but keep it selected. To restart from scratch, restart the kernel/runtime.
All of these tutorials assume basic comfort with Python in general - particularly with the concepts of lists, dictionaries, and objects as well as basic comfort with using the numpy and matplotlib packages. This tutorial introduces all the general concepts of accessing parameters within the Bundle.
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
"""
Explanation: Let's get started with some basic imports:
End of explanation
"""
logger = phoebe.logger(clevel='WARNING')
"""
Explanation: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel). A convenience function is provided at the top-level via phoebe.logger to initialize the logger with any desired level.
The levels from most to least information are:
DEBUG
INFO
WARNING
ERROR
CRITICAL
End of explanation
"""
b = phoebe.default_binary()
"""
Explanation: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with WARNING, ERROR, or CRITICAL levels will be printed to the screen. Since we did not provide flevel or a filename, nothing will be written to a log file.
Note: the logger messages are not included in the outputs shown below.
Overview
As a quick overview of what's to come, here is a quick preview of some of the steps used when modeling a binary system with PHOEBE. Each of these steps will be explained in more detail throughout these tutorials.
First we need to create our binary system. For the sake of most of these tutorials, we'll use the default detached binary available through the phoebe.default_binary constructor.
End of explanation
"""
b.set_value(qualifier='teff', component='primary', value=6500)
"""
Explanation: This object holds all the parameters and their respective values. We'll see in this tutorial and the next tutorial on constraints how to search through these parameters and set their values.
End of explanation
"""
b.add_dataset('lc', compute_times=phoebe.linspace(0,1,101))
"""
Explanation: Next, we need to define our datasets via b.add_dataset. This will be the topic of the following tutorial on datasets.
End of explanation
"""
b.run_compute()
"""
Explanation: We'll then want to run our forward model to create a synthetic model of the observables defined by these datasets using b.run_compute, which will be the topic of the computing observables tutorial.
End of explanation
"""
print(b.get_value(qualifier='fluxes', context='model'))
"""
Explanation: We can access the value of any parameter, including the arrays in the synthetic model just generated. To export arrays to a file, we could call b.export_arrays.
End of explanation
"""
afig, mplfig = b.plot(show=True)
"""
Explanation: We can then plot the resulting model with b.plot, which will be covered in the plotting tutorial.
End of explanation
"""
b = phoebe.default_binary()
"""
Explanation: And then lastly, if we wanted to solve the inverse problem and "fit" parameters to observational data, we may want to add distributions to our system so that we can run estimators, optimizers, or samplers.
Default Binary Bundle
For this tutorial, let's start over and discuss this b object in more detail and how to access and change the values of the input parameters.
Everything for our system will be stored in this single Python object that we call the Bundle which we'll call b (short for bundle).
End of explanation
"""
b
"""
Explanation: The Bundle is just a collection of Parameter objects along with some callable methods. Here we can see that the default binary Bundle consists of over 100 individual parameters.
End of explanation
"""
b.filter(context='compute')
"""
Explanation: If we want to view or edit a Parameter in the Bundle, we first need to know how to access it. Each Parameter object has a number of tags which can be used to filter (similar to a database query). When filtering the Bundle, a ParameterSet is returned - this is essentially just a subset of the Parameters in the Bundle and can be further filtered until eventually accessing a single Parameter.
End of explanation
"""
b.contexts
"""
Explanation: Here we filtered on the context tag for all Parameters with context='compute' (i.e. the options for computing a model). If we want to see all the available options for this tag in the Bundle, we can use the plural form of the tag as a property on the Bundle or any ParameterSet.
End of explanation
"""
b.filter(context='compute').components
"""
Explanation: Although there is no strict hierarchy or order to the tags, it can be helpful to think of the context tag as the top-level tag and is often very helpful to filter by the appropriate context first.
Other tags currently include:
* kind
* figure
* component
* feature
* dataset
* distribution
* compute
* model
* solver
* solution
* time
* qualifier
Accessing the plural form of the tag as an attribute also works on a filtered ParameterSet
End of explanation
"""
b.filter(context='compute').filter(component='primary')
"""
Explanation: This then tells us what can be used to filter further.
End of explanation
"""
b.filter(context='compute', component='primary').qualifiers
"""
Explanation: The qualifier tag is the shorthand name of the Parameter itself. If you don't know what you're looking for, it is often useful to list all the qualifiers of the Bundle or a given ParameterSet.
End of explanation
"""
b.filter(context='compute', component='primary', qualifier='ntriangles')
"""
Explanation: Now that we know the options for the qualifier within this filter, we can choose to filter on one of those. Let's filter by the 'ntriangles' qualifier.
End of explanation
"""
b.filter(context='compute', component='primary', qualifier='ntriangles').get_parameter()
"""
Explanation: Once we filter far enough to get to a single Parameter, we can use get_parameter to return the Parameter object itself (instead of a ParameterSet).
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
"""
Explanation: As a shortcut, get_parameter also takes filtering keywords. So the above line is also equivalent to the following:
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_value()
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_description()
"""
Explanation: Each Parameter object contains several keys that provide information about that Parameter. The keys "description" and "value" are always included, with additional keys available depending on the type of Parameter.
End of explanation
"""
print(b.filter(context='compute', component='primary').info)
"""
Explanation: We can also see a top-level view of the filtered parameters and descriptions (note: the syntax with @ symbols will be explained further in the section on twigs below).
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').get_limits()
"""
Explanation: Since the Parameter for ntriangles is a FloatParameter, it also includes a key for the allowable limits.
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='ntriangles').set_value(2000)
b.get_parameter(context='compute', component='primary', qualifier='ntriangles')
"""
Explanation: In this case, we're looking at the Parameter called ntriangles with the component tag set to 'primary'. This Parameter therefore defines how many triangles should be created when creating the mesh for the star named 'primary'. By default, this is set to 1500 triangles, with allowable values above 100.
If we wanted a finer mesh, we could change the value.
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='distortion_method')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_description()
"""
Explanation: If we choose the distortion_method qualifier from that same ParameterSet, we'll see that it has a few different keys in addition to description and value.
End of explanation
"""
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_choices()
"""
Explanation: Since the distortion_method Parameter is a ChoiceParameter, it contains a key for the allowable choices.
End of explanation
"""
try:
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('blah')
except Exception as e:
print(e)
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').set_value('rotstar')
b.get_parameter(context='compute', component='primary', qualifier='distortion_method').get_value()
"""
Explanation: We can only set a value if it is contained within this list - if you attempt to set a non-valid value, an error will be raised.
End of explanation
"""
b.filter(context='compute', component='primary')
b['primary@compute']
b['compute@primary']
"""
Explanation: Parameter types include:
* IntParameter
* FloatParameter
* FloatArrayParameter
* BoolParameter
* StringParameter
* ChoiceParameter
* SelectParameter
* DictParameter
* ConstraintParameter
* DistributionParameter
* HierarchyParameter
* UnitParameter
* JobParameter
These Parameter types and their available options are all described in great detail in Advanced: Parameter Types.
Twigs
As a shortcut to needing to filter by all these tags, the Bundle and ParameterSets can be filtered through what we call "twigs" (as in a Bundle of twigs). These are essentially a single string-representation of the tags, separated by @ symbols.
This is very useful as a shorthand when working in an interactive Python console, but somewhat obfuscates the names of the tags and can make it difficult if you use them in a script and make changes earlier in the script.
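A twig is just the tags joined by @ symbols; this toy sketch (assumed behavior for illustration, not PHOEBE's implementation) shows why the order of the pieces does not matter:

```python
# Hypothetical sketch of twig matching: split the twig on '@' and check
# that every piece appears among a parameter's tag values, order-free.

def twig_matches(twig, tag_values):
    """True if every '@'-separated piece of the twig is one of the tags."""
    return all(piece in tag_values for piece in twig.split('@'))

tags = {'compute', 'primary', 'distortion_method'}
```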
For example, the following lines give identical results:
End of explanation
"""
b.filter(context='compute', component='primary', qualifier='distortion_method')
b['distortion_method@primary@compute']
"""
Explanation: However, this dictionary-style twig access will never return a ParameterSet with a single Parameter; instead, it will return the Parameter itself. This can be seen in the different output between the following two lines:
End of explanation
"""
b['distortion_method@primary@compute'] = 'roche'
print(b['distortion_method@primary@compute'])
"""
Explanation: Because of this, this dictionary-style twig access can also set the value directly:
End of explanation
"""
print(b['value@distortion_method@primary@compute'])
print(b['description@distortion_method@primary@compute'])
"""
Explanation: And can even provide direct access to the keys/attributes of the Parameter (value, description, limits, etc)
End of explanation
"""
b['compute'].twigs
"""
Explanation: As with the tags, you can call .twigs on any ParameterSet to see the "smallest unique twigs" of the contained Parameters
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.17/_downloads/f12d4c974f9dc78c910e072ebccba291/plot_rereference_eeg.ipynb | bsd-3-clause | # Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from matplotlib import pyplot as plt
print(__doc__)
# Setup for reading the raw data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.read_events(event_fname)
# The EEG channels will be plotted to visualize the difference in referencing
# schemes.
picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, exclude='bads')
"""
Explanation: Re-referencing the EEG signal
This example shows how to load raw data and apply some EEG referencing schemes.
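Re-referencing is plain linear algebra under the hood; here is a minimal NumPy sketch (independent of MNE) of two of the schemes below — the common average reference subtracts the instantaneous mean of all channels, and a custom reference subtracts the mean of selected channels:

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 100))  # 4 channels x 100 time samples

# Common average reference: subtract the mean across channels
avg_ref = eeg - eeg.mean(axis=0, keepdims=True)

# Custom reference: subtract the mean of channels 0 and 1
custom_ref = eeg - eeg[[0, 1]].mean(axis=0, keepdims=True)
```

After average-referencing, the channels sum to zero at every time point, which is the property MNE's average-reference projector enforces.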
End of explanation
"""
reject = dict(eog=150e-6)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, reject=reject, proj=True)
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True)
# No reference. This assumes that the EEG has already been referenced properly.
# This explicitly prevents MNE from adding a default EEG reference. Any average
# reference projector is automatically removed.
raw.set_eeg_reference([])
evoked_no_ref = mne.Epochs(raw, **epochs_params).average()
evoked_no_ref.plot(axes=ax1, titles=dict(eeg='Original reference'), show=False,
time_unit='s')
# Average reference. This is normally added by default, but can also be added
# explicitly.
raw.set_eeg_reference('average', projection=True)
evoked_car = mne.Epochs(raw, **epochs_params).average()
evoked_car.plot(axes=ax2, titles=dict(eeg='Average reference'), show=False,
time_unit='s')
# Re-reference from an average reference to the mean of channels EEG 001 and
# EEG 002.
raw.set_eeg_reference(['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw, **epochs_params).average()
evoked_custom.plot(axes=ax3, titles=dict(eeg='Custom reference'),
time_unit='s')
"""
Explanation: We will now apply different EEG referencing schemes and plot the resulting
evoked potentials. Note that when we construct epochs with mne.Epochs, we
supply the proj=True argument. This means that any available projectors
are applied automatically. Specifically, if there is an average reference
projector set by raw.set_eeg_reference('average', projection=True), MNE
applies this projector when creating epochs.
End of explanation
"""
|
fivetentaylor/rpyca | TGA_Testing.ipynb | mit | %matplotlib inline
"""
Explanation: Robust PCA Example
Robust PCA is an awesome, relatively new method for factoring a matrix into a low-rank component and a sparse component. This enables really neat applications for outlier detection, or models that are robust to outliers.
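A small NumPy sketch (illustrative only) makes the "low rank plus sparse" idea concrete: build M = L + S from a rank-1 matrix and a handful of large outliers, which is exactly the structure robust PCA tries to recover from M alone.

```python
import numpy as np

rng = np.random.default_rng(42)

# Low-rank component: rank 1 by construction
u = rng.standard_normal((50, 1))
v = rng.standard_normal((1, 30))
L = u @ v

# Sparse component: 15 large outliers scattered at random positions
S = np.zeros((50, 30))
idx = rng.choice(50 * 30, size=15, replace=False)
S.flat[idx] = 10.0

M = L + S  # robust PCA seeks L and S given only M
```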
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def mk_rot_mat(rad=np.pi / 4):
rot = np.array([[np.cos(rad),-np.sin(rad)], [np.sin(rad), np.cos(rad)]])
return rot
rot_mat = mk_rot_mat( np.pi / 4)
x = np.random.randn(100) * 5
y = np.random.randn(100)
points = np.vstack([y,x])
rotated = np.dot(points.T, rot_mat).T
"""
Explanation: Make Some Toy Data
End of explanation
"""
outliers = np.tile([15,-10], 10).reshape((-1,2))
pts = np.vstack([rotated.T, outliers]).T
"""
Explanation: Add Some Outliers to Make Life Difficult
End of explanation
"""
U,s,Vt = np.linalg.svd(rotated)
U_n,s_n,Vt_n = np.linalg.svd(pts)
"""
Explanation: Compute SVD on both the clean data and the outliery data
End of explanation
"""
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
pca_line = np.dot(U[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*pca_line)
rpca_line = np.dot(U_n[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*rpca_line, c='r')
"""
Explanation: Just 10 outliers can really screw up our line fit!
End of explanation
"""
import tga
from importlib import reload  # needed on Python 3; reload is a builtin on Python 2
reload(tga)
import logging
logger = logging.getLogger(tga.__name__)
logger.setLevel(logging.INFO)
"""
Explanation: Now the robust pca version!
End of explanation
"""
X = pts.copy()
v = tga.tga(X.T, eps=1e-5, k=1, p=0.0)
"""
Explanation: Robustly estimate the principal direction with TGA (the trimmed Grassmann average)
End of explanation
"""
plt.ylim([-20,20])
plt.xlim([-20,20])
plt.scatter(*pts)
tga_line = np.dot(v[0].reshape((2,1)), np.array([-20,20]).reshape((1,2)))
plt.plot(*tga_line)
#plt.scatter(*L, c='red')
"""
Explanation: And have a look at this!
End of explanation
"""
|
google/making_with_ml | instafashion/scripts/getMatches.ipynb | apache-2.0 | # For each fashion inspiration pic, check to make sure that it's
# a "fashion" picture. Ignore all other pics
storage_client = storage.Client()
blobs = list(storage_client.list_blobs(INSPO_BUCKET, prefix=INSPO_SUBFOLDER))
uris = [os.path.join("gs://", blobs[0].bucket.name, x.name)
for x in blobs if '.jpg' in x.name]
urls = [x.public_url for x in blobs if '.jpg' in x.name]
fashionPics = []
for uri, url in tqdm(list(zip(uris, urls))):
labels = detectLabels(image_uri=uri)
if any([x.description == "Fashion" for x in labels]):
fashionPics.append((uri, url))
fashion_pics = pd.DataFrame(fashionPics, columns=["uri", "url"])
# Run this line to verify you can actually search your product set using a picture
productSet.search("apparel", image_uri=fashion_pics['uri'].iloc[0])
"""
Explanation: Download fashion inspiration pics and keep only the "Fashion" images
End of explanation
"""
# The API sometimes uses different names for similar items, so this
# function tells you whether two labels are roughly equivalent
def isTypeMatch(label1, label2):
# everything in a single match group are more or less synonymous
matchGroups = [("skirt", "miniskirt"),
("jeans", "pants"),
("shorts"),
("jacket", "vest", "outerwear", "coat", "suit"),
("top", "shirt"),
("dress"),
("swimwear", "underpants"),
("footwear", "sandal", "boot", "high heels"),
("handbag", "suitcase", "satchel", "backpack", "briefcase"),
("sunglasses", "glasses"),
("bracelet"),
("scarf", "bowtie", "tie"),
("earrings"),
("necklace"),
("sock"),
("hat", "cowboy hat", "straw hat", "fedora", "sun hat", "sombrero")]
for group in matchGroups:
if label1.lower() in group and label2.lower() in group:
return True
return False
def getBestMatch(searchResponse):
label = searchResponse['label']
matches = searchResponse['matches']
viableMatches = [match for match in matches if any([isTypeMatch(label, match['product'].labels['type'])])]
return max(viableMatches, key= lambda x: x['score']) if len(viableMatches) else None
"""
Explanation: Example Response:
{'score': 0.7648860812187195,
'label': 'Shoe',
'matches': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14992d2e0>,
'score': 0.35719582438468933,
'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_jeans_pants/referenceImages/6550f579-6b26-433a-8fa6-56e5bbca95c1'},
{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14992d5b0>,
'score': 0.32596680521965027,
'image': 'projects/yourprojectid/locations/us-west1/products/white_boot_shoe/referenceImages/56248bb2-9d5e-4004-b397-6c3b2fb0edc3'},
{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x14a423850>,
'score': 0.26240724325180054,
'image': 'projects/yourprojectid/locations/us-west1/products/tan_strap_sandal_shoe/referenceImages/f970af65-c51e-42e8-873c-d18080f00430'}],
'boundingBox': [x: 0.6475263833999634
y: 0.8726409077644348
, x: 0.7815263271331787
y: 0.8726409077644348
, x: 0.7815263271331787
y: 0.9934644103050232
, x: 0.6475263833999634
y: 0.9934644103050232
]},
{'score': 0.8066604733467102,
'label': 'Shorts',
'matches': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4fa60>,
'score': 0.27552375197410583,
'image': 'projects/yourprojectid/locations/us-west1/products/white_sneaker_shoe_*/referenceImages/a109b530-56ff-42bc-ac73-d60578b7f363'},
{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4f400>,
'score': 0.2667400538921356,
'image': 'projects/yourprojectid/locations/us-west1/products/grey_vneck_tee_top_*/referenceImages/cc6f873c-328e-481a-86fb-a2116614ce80'},
{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4f8e0>,
'score': 0.2606571912765503,
'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_jeans_pants_*/referenceImages/360b26d8-a844-4a83-bf97-ef80f2243fdb'},
{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x106a4fb80>],
'boundingBox': [x: 0.4181176424026489
y: 0.40305882692337036
, x: 0.6837647557258606
y: 0.40305882692337036
, x: 0.6837647557258606
y: 0.64000004529953
, x: 0.4181176424026489
y: 0.64000004529953
]}]
The response above returns a set of matches for each item identified in your inspiration photo.
In the example above, "Shorts" and "Shoes" were recognized. For each of those items, a bounding box is returned that indicates where the item is in the picture.
For each matched item in your closet, a Product object is returned along with its image id and a confidence score.
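The bounding box vertices in the response are normalized to the range [0, 1]; a small helper like this (a sketch, not part of the Vision Product Search client) converts them to pixel coordinates for cropping:

```python
def bbox_to_pixels(vertices, img_w, img_h):
    """Convert normalized (x, y) vertices to integer pixel coordinates."""
    return [(int(round(x * img_w)), int(round(y * img_h)))
            for x, y in vertices]

# A normalized box like the ones above, on a hypothetical 1000x800 image
norm_box = [(0.25, 0.40), (0.75, 0.40), (0.75, 0.90), (0.25, 0.90)]
pix_box = bbox_to_pixels(norm_box, img_w=1000, img_h=800)
```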
Get clothing matches
We want to make sure that when we recommend similar items to users, we respect clothing type.
For example, the Product Search API might (accidentally) return a dress as a match for a shirt, but we wouldn't want to expose that to the end user. So this function--getBestMatch--sorts through the results returned by the API and makes sure that a. only the highest confidence match for each item is returned and b. that the item types match.
End of explanation
"""
def canAddItem(existingArray, newType):
bottoms = {"pants", "skirt", "shorts", "dress"}
newType = newType.lower()
# Don't add the same item type twice
if newType in existingArray:
return False
if newType == "shoe":
return True
# Only add one type of bottom (pants, skirt, etc)
if newType in bottoms and len(bottoms.intersection(existingArray)):
return False
# You can't wear both a top and a dress
if newType == "top" and "dress" in existingArray:
return False
return True
"""
Explanation: After we run getBestMatch above, we're left with a bunch of items from our own closet that match our inspiration picture. But the next step is to transform those matches into an "outfit," and outfits have rules: you can't wear a dress and pants at the same time (probably). You usually only wear one type of shoe. This next function, canAddItem, allows us to add clothing items to an outfit one at a time without breaking any of the "rules" of fashion.
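The greedy "fill the outfit" loop this describes can be sketched independently of the API objects (the names and simplified rules here are illustrative, not the project's full logic): sort candidate items by confidence, then add each one only if the rules allow it.

```python
# Self-contained sketch of the outfit-assembly loop, with a simplified
# rule check (the real canAddItem handles more cases).

BOTTOMS = {"pants", "skirt", "shorts", "dress"}

def can_add(existing_types, new_type):
    if new_type in existing_types:          # no duplicate item types
        return False
    if new_type in BOTTOMS and BOTTOMS & set(existing_types):
        return False                        # only one kind of bottom
    return True

candidates = [
    {"type": "shorts", "score": 0.9},
    {"type": "skirt", "score": 0.8},   # rejected: already have a bottom
    {"type": "top", "score": 0.7},
    {"type": "top", "score": 0.6},     # rejected: duplicate type
]

outfit = []
for item in sorted(candidates, key=lambda x: x["score"], reverse=True):
    if can_add([i["type"] for i in outfit], item["type"]):
        outfit.append(item)
```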
End of explanation
"""
# Option 1: sum up the confidence scores for each closet item matched to the inspo photo
def scoreOutfit1(matches):
if not matches:
return 0
return sum([match['score'] for match in matches]) / len(matches)
# Option 2: Sum up the confidence scores only of items that matched with the inspo photo
# with confidence > 0.3. Also, because shoes will match most images _twice_
# (because people have two feet), only count the shoe confidence score once
def scoreOutfit2(matches):
if not len(matches):
return 0
noShoeSum = sum([x['score'] for x in matches if (x['score'] > 0.3 and not isTypeMatch("shoe", x["label"]))])
shoeScore = 0
try:
shoeScore = max([x['score'] for x in matches if isTypeMatch("shoe", x["label"])])
except:
pass
return noShoeSum + shoeScore * 0.5 # half the weight for shoes
"""
Explanation: Finally, we need a function that allows us to evaluate how "good" an outfit recommendation is. We'll do this by creating a score function. This part is creative, and you can do it however you like. Here are some example score functions:
End of explanation
"""
def getOutfit(imgUri, verbose=False):
# 1. Search for matching items
response = productSet.search("apparel", image_uri=imgUri)
if verbose:
print("Found matching " + ", ".join([x['label'] for x in response]) + " in closet.")
clothes = []
# 2. For each item in the inspo pic, find the best match in our closet and add it to
# the outfit array
for item in response:
bestMatch = getBestMatch(item)
if not bestMatch:
if verbose:
print(f"No good match found for {item['label']}")
continue
if verbose:
print(f"Best match for {item['label']} was {bestMatch['product'].displayName}")
clothes.append(bestMatch)
# 3. Sort the items by highest confidence score first
clothes.sort(key=lambda x: x['score'], reverse=True)
# 4. Add as many items as possible to the outfit while still
# maintaining a logical outfit
outfit = []
addedTypes = []
for item in clothes:
itemType = item['product'].labels['type'] # i.e. shorts, top, etc
if canAddItem(addedTypes, itemType):
addedTypes.append(itemType)
outfit.append(item)
if verbose:
print(f"Added a {itemType} to the outfit")
# 5. Now that we have a whole outfit, compute its score!
score1 = scoreOutfit1(outfit)
score2 = scoreOutfit2(outfit)
if verbose:
print("Algorithm 1 score: %0.3f" % score1)
print("Algorithm 2 score: %0.3f" % score2)
return (outfit, score1, score2)
getOutfit(fashion_pics.iloc[0]['uri'], verbose=True)
"""
Explanation: Great--now that we have all our helper functions written, let's combine them into one big function for
constructing an outfit and computing its score!
End of explanation
"""
db = firestore.Client()
userid = u"youruserd" # I like to store all data in Firestore as users, in case I decide to add more in the future!
thisUser = db.collection(u'users').document(userid)
outfits = thisUser.collection(u'outfitsDEMO')
# Go through all of the inspo pics and compute matches.
for row in fashion_pics.iterrows():
srcUrl = row[1]['url']
srcUri = row[1]['uri']
(outfit, score1, score2) = getOutfit(srcUri, verbose=False)
# Construct a name for the source image--a key we can use to store it in the database
srcId = srcUri[len("gs://"):].replace("/","-")
# Firestore writes json to the database, so let's construct an object and fill it with data
fsMatch = {
"srcUrl": srcUrl,
"srcUri": srcUri,
"score1": score1,
"score2": score2,
}
# Go through all of the outfit matches and put them into json that can be
# written to firestore
theseMatches = []
for match in outfit:
image = match['image']
imgName = match['image'].split('/')[-1]
name = match['image'].split('/')[-3]
# The storage api makes these images publicly accessible through url
imageUrl = f"https://storage.googleapis.com/{BUCKET}/" + imgName
label = match['product'].labels['type']
score = match['score']
theseMatches.append({
"score": score,
"image": image,
"imageUrl": imageUrl,
"label": label
})
fsMatch["matches"] = theseMatches
# Add the outfit to firestore!
outfits.document(srcId).set(fsMatch)
"""
Explanation: Output:
Found matching Shorts, Shoe in closet.
Best match for Shorts was high_rise_white_shorts_*
No good match found for Shoe
Added a shorts to the outfit
Algorithm 1 score: 0.247
Algorithm 2 score: 0.000
{'outfit': [{'product': <pyvisionproductsearch.ProductSearch.ProductSearch.Product at 0x149fa6760>,
'score': 0.24715223908424377,
'image': 'projects/yourprojectid/locations/us-west1/products/high_rise_white_shorts_*/referenceImages/71cc9936-2a35-4a81-8f43-75e1bf50fc22'}],
'score1': 0.24715223908424377,
'score2': 0.0}
Add Data to Firestore
Now that we have a way of constructing and scoring outfits, let's add them to Firestore
so we can later use them in our app.
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.13.0/examples/notebooks/generated/plots_boxplots.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
"""
Explanation: Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violinplot and beanplot.
End of explanation
"""
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = [
"Strong Democrat",
"Weak Democrat",
"Independent-Democrat",
"Independent-Independent",
"Independent-Republican",
"Weak Republican",
"Strong Republican",
]
"""
Explanation: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
End of explanation
"""
plt.rcParams["figure.subplot.bottom"] = 0.23 # keep labels visible
plt.rcParams["figure.figsize"] = (10.0, 8.0) # make plot larger in notebook
age = [data.exog["age"][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts = {
"cutoff_val": 5,
"cutoff_type": "abs",
"label_fontsize": "small",
"label_rotation": 30,
}
sm.graphics.beanplot(age, ax=ax, labels=labels, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
# plt.show()
def beanplot(data, plot_opts={}, jitter=False):
"""helper function to try out different plot options"""
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts_ = {
"cutoff_val": 5,
"cutoff_type": "abs",
"label_fontsize": "small",
"label_rotation": 30,
}
plot_opts_.update(plot_opts)
sm.graphics.beanplot(
data, ax=ax, labels=labels, jitter=jitter, plot_opts=plot_opts_
)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={"violin_width": 0.5, "violin_fc": "#66c2a5"})
fig = beanplot(age, plot_opts={"violin_fc": "#66c2a5"})
fig = beanplot(
age, plot_opts={"bean_size": 0.2, "violin_width": 0.75, "violin_fc": "#66c2a5"}
)
fig = beanplot(age, jitter=True, plot_opts={"violin_fc": "#66c2a5"})
fig = beanplot(
age, jitter=True, plot_opts={"violin_width": 0.5, "violin_fc": "#66c2a5"}
)
"""
Explanation: Group age by party ID, and create bean plots with it:
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Necessary to make horizontal axis labels fit
plt.rcParams["figure.subplot.bottom"] = 0.23
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = [
"Strong Democrat",
"Weak Democrat",
"Independent-Democrat",
"Independent-Independent",
"Independent-Republican",
"Weak Republican",
"Strong Republican",
]
# Group age by party ID.
age = [data.exog["age"][data.endog == id] for id in party_ID]
# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(
age,
ax=ax,
labels=labels,
plot_opts={
"cutoff_val": 5,
"cutoff_type": "abs",
"label_fontsize": "small",
"label_rotation": 30,
},
)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(
age,
ax=ax,
labels=labels,
plot_opts={
"cutoff_val": 5,
"cutoff_type": "abs",
"label_fontsize": "small",
"label_rotation": 30,
},
)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts = {
"cutoff_val": 5,
"cutoff_type": "abs",
"label_fontsize": "small",
"label_rotation": 30,
"violin_fc": (0.8, 0.8, 0.8),
"jitter_marker": ".",
"jitter_marker_size": 3,
"bean_color": "#FF6F00",
"bean_mean_color": "#009D91",
}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create an asymmetrical jitter plot.
ix = data.exog["income"] < 16 # incomes < $30k
age = data.exog["age"][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]
ix = data.exog["income"] >= 20 # incomes > $50k
age = data.exog["age"][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts["violin_fc"] = (0.5, 0.5, 0.5)
plot_opts["bean_show_mean"] = False
plot_opts["bean_show_median"] = False
plot_opts["bean_legend_text"] = "Income < \$30k"
plot_opts["cutoff_val"] = 10
sm.graphics.beanplot(
age_lower_income,
ax=ax,
labels=labels,
side="left",
jitter=True,
plot_opts=plot_opts,
)
plot_opts["violin_fc"] = (0.7, 0.7, 0.7)
plot_opts["bean_color"] = "#009D91"
plot_opts["bean_legend_text"] = "Income > \$50k"
sm.graphics.beanplot(
age_higher_income,
ax=ax,
labels=labels,
side="right",
jitter=True,
plot_opts=plot_opts,
)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Show all plots.
# plt.show()
"""
Explanation: Advanced Box Plots
Based on the example script example_enhanced_boxplots.py (by Ralf Gommers)
End of explanation
"""
|
olgaliak/cntk-cyclegan | simpleGan/CNTK_206B_DCGAN.ipynb | mit | import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import os
import cntk as C
from cntk import Trainer
from cntk.layers import default_options
from cntk.device import set_default_device, gpu, cpu
from cntk.initializer import normal
from cntk.io import (MinibatchSource, CTFDeserializer, StreamDef, StreamDefs,
INFINITELY_REPEAT)
from cntk.layers import Dense, Convolution2D, ConvolutionTranspose2D, BatchNormalization
from cntk.learners import (adam, UnitType, learning_rate_schedule,
momentum_as_time_constant_schedule, momentum_schedule)
from cntk.logging import ProgressPrinter
%matplotlib inline
"""
Explanation: CNTK 206 Part B: Deep Convolutional GAN with MNIST data
Prerequisites: We assume that you have successfully downloaded the MNIST data by completing the tutorial titled CNTK_103A_MNIST_DataLoader.ipynb.
Introduction
Generative models have gained a lot of attention in the deep learning community, which has traditionally leveraged discriminative models for (semi-)supervised and unsupervised learning.
Overview
In the previous tutorial we introduced the original GAN implementation by Goodfellow et al. at NIPS 2014. This pioneering work has since been extended, and many techniques have been published, amongst which the Deep Convolutional Generative Adversarial Network, a.k.a. DCGAN, has become the recommended launch pad in the community.
In this tutorial, we introduce an implementation of the DCGAN with some well tested architectural constraints that improve stability in the GAN training:
We use strided convolutions in the discriminator and fractional-strided convolutions in the generator.
We use batch normalization in both the generator and the discriminator.
We remove fully connected hidden layers for deeper architectures.
We use ReLU activation in the generator for all layers except for the output, which uses Tanh.
We use LeakyReLU activation in the discriminator for all layers.
End of explanation
"""
# Select the right target device when this notebook is being tested:
if 'TEST_DEVICE' in os.environ:
import cntk
if os.environ['TEST_DEVICE'] == 'cpu':
C.device.set_default_device(C.device.cpu())
else:
C.device.set_default_device(C.device.gpu(0))
C.device.set_default_device(C.device.gpu(0))
"""
Explanation: Select the notebook runtime environment devices / settings
Set the device to cpu / gpu for the test environment. If you have both CPU and GPU on your machine, you can optionally switch the devices. By default, we choose the best available device.
End of explanation
"""
isFast = True
"""
Explanation: There are two run modes:
- Fast mode: isFast is set to True. This is the default mode for the notebooks, which means we train for fewer iterations or train / test on limited data. This ensures functional correctness of the notebook though the models produced are far from what a completed training would produce.
- Slow mode: We recommend setting this flag to False once you have gained familiarity with the notebook content and want to gain insight from running the notebooks for a longer period with different parameters for training.
Note
If the isFast flag is set to False, the notebook will take a few hours on a GPU enabled machine. You can try fewer iterations by setting num_minibatches to a smaller number, say 20,000, which comes at the expense of the quality of the generated images.
End of explanation
"""
# Ensure the training data is generated and available for this tutorial
# We search in two locations in the toolkit for the cached MNIST data set.
data_found = False
for data_dir in [os.path.join("..", "Examples", "Image", "DataSets", "MNIST"),
os.path.join("data", "MNIST")]:
train_file = os.path.join(data_dir, "Train-28x28_cntk_text.txt")
if os.path.isfile(train_file):
data_found = True
break
if not data_found:
raise ValueError("Please generate the data by completing CNTK 103 Part A")
print("Data directory is {0}".format(data_dir))
def create_reader(path, is_training, input_dim, label_dim):
deserializer = CTFDeserializer(
filename = path,
streams = StreamDefs(
labels_unused = StreamDef(field = 'labels', shape = label_dim, is_sparse = False),
features = StreamDef(field = 'features', shape = input_dim, is_sparse = False
)
)
)
return MinibatchSource(
deserializers = deserializer,
randomize = is_training,
max_sweeps = INFINITELY_REPEAT if is_training else 1
)
"""
Explanation: Data Reading
The input to the GAN will be a vector of random numbers. At the end of the training, the GAN "learns" to generate images of hand written digits drawn from the MNIST database. We will be using the same MNIST data generated in tutorial 103A. A more in-depth discussion of the data format and reading methods can be seen in previous tutorials. For our purposes, just know that the following function returns an object that will be used to generate images from the MNIST dataset. Since we are building an unsupervised model, we only need to read in features and ignore the labels.
End of explanation
"""
np.random.seed(123)
def noise_sample(num_samples):
return np.random.uniform(
low = -1.0,
high = 1.0,
size = [num_samples, g_input_dim]
).astype(np.float32)
"""
Explanation: The random noise we will use to train the GAN is provided by the noise_sample function to generate random noise samples from a uniform distribution within the interval [-1, 1].
End of explanation
"""
# architectural parameters
img_h, img_w = 28, 28
kernel_h, kernel_w = 5, 5
stride_h, stride_w = 2, 2
# Input / Output parameter of Generator and Discriminator
g_input_dim = 100
g_output_dim = d_input_dim = img_h * img_w
# We expect the kernel shapes to be square in this tutorial and
# the strides to be of the same length along each data dimension
if kernel_h == kernel_w:
gkernel = dkernel = kernel_h
else:
raise ValueError('This tutorial needs square shaped kernel')
if stride_h == stride_w:
gstride = dstride = stride_h
else:
raise ValueError('This tutorial needs same stride in all dims')
# Helper functions
def bn_with_relu(x, activation=C.relu):
    h = BatchNormalization(map_rank=1)(x)
    return activation(h)  # apply the passed-in activation (was hard-coded to C.relu)
# We use param-relu function to use a leak=0.2 since CNTK implementation
# of Leaky ReLU is fixed to 0.01
def bn_with_leaky_relu(x, leak=0.2):
h = BatchNormalization(map_rank=1)(x)
r = C.param_relu(C.constant((np.ones(h.shape)*leak).astype(np.float32)), h)
return r
"""
Explanation: Model Creation
First we provide a brief recap of the basics of GAN. You may skip this block if you are familiar with CNTK 206A.
A GAN network is composed of two sub-networks, one called the Generator ($G$) and the other the Discriminator ($D$).
- The Generator takes a random noise vector ($z$) as input and strives to output a synthetic (fake) image ($x^*$) that is indistinguishable from a real image ($x$) from the MNIST dataset.
- The Discriminator strives to differentiate between the real image ($x$) and the fake ($x^*$) image.
In each training iteration, the Generator produces more realistic fake images (in other words, it minimizes the difference between the real and generated counterparts) and the Discriminator maximizes the probability of assigning the correct label (real vs. fake) to both real examples (from the training set) and the generated fake ones. The two conflicting objectives of the sub-networks ($G$ and $D$) lead the GAN network (when trained) to converge to an equilibrium, where the Generator produces realistic-looking fake MNIST images and the Discriminator can at best randomly guess whether images are real or fake. The resulting Generator model, once trained, produces a realistic MNIST image from a random-number input.
Model config
First, we establish some of the architectural and training hyper-parameters for our model.
The generator network is a fractionally strided convolutional network. Its input is a 100-dimensional random vector and its output is a flattened version of a 28 x 28 fake image. The discriminator is a strided convolution network. It takes as input the 784-dimensional output of the generator or a real MNIST image, reshapes it into a 28 x 28 image format and outputs a single scalar - the estimated probability that the input image is a real MNIST image.
Model components
We build a computational graph for our model, one each for the generator and the discriminator. First, we establish some of the architectural parameters of our model.
End of explanation
"""
def convolutional_generator(z):
with default_options(init=C.normal(scale=0.02)):
print('Generator input shape: ', z.shape)
s_h2, s_w2 = img_h//2, img_w//2 #Input shape (14,14)
s_h4, s_w4 = img_h//4, img_w//4 # Input shape (7,7)
gfc_dim = 1024
gf_dim = 64
h0 = Dense(gfc_dim, activation=None)(z)
h0 = bn_with_relu(h0)
print('h0 shape', h0.shape)
h1 = Dense([gf_dim * 2, s_h4, s_w4], activation=None)(h0)
h1 = bn_with_relu(h1)
print('h1 shape', h1.shape)
h2 = ConvolutionTranspose2D(gkernel,
num_filters=gf_dim*2,
strides=gstride,
pad=True,
output_shape=(s_h2, s_w2),
activation=None)(h1)
h2 = bn_with_relu(h2)
print('h2 shape', h2.shape)
h3 = ConvolutionTranspose2D(gkernel,
num_filters=1,
strides=gstride,
pad=True,
output_shape=(img_h, img_w),
activation=C.sigmoid)(h2)
print('h3 shape :', h3.shape)
return C.reshape(h3, img_h * img_w)
"""
Explanation: Generator
The generator takes a 100-dimensional random vector (for starters) as input ($z$) and outputs a 784-dimensional vector, corresponding to a flattened version of a 28 x 28 fake (synthetic) image ($x^*$). In this tutorial, we use fractionally strided convolutions (a.k.a. ConvolutionTranspose) with ReLU activations except for the last layer. As the code above shows, the last layer uses a sigmoid activation, which confines the output of the generator function to the interval [0, 1] and matches the scaled pixel values of the real images. The choice of activation functions is key, in addition to the use of fractionally strided convolutions.
End of explanation
"""
def convolutional_discriminator(x):
with default_options(init=C.normal(scale=0.02)):
dfc_dim = 1024
df_dim = 64
print('Discriminator convolution input shape', x.shape)
x = C.reshape(x, (1, img_h, img_w))
h0 = Convolution2D(dkernel, 1, strides=dstride)(x)
h0 = bn_with_leaky_relu(h0, leak=0.2)
print('h0 shape :', h0.shape)
h1 = Convolution2D(dkernel, df_dim, strides=dstride)(h0)
h1 = bn_with_leaky_relu(h1, leak=0.2)
print('h1 shape :', h1.shape)
h2 = Dense(dfc_dim, activation=None)(h1)
h2 = bn_with_leaky_relu(h2, leak=0.2)
print('h2 shape :', h2.shape)
h3 = Dense(1, activation=C.sigmoid)(h2)
print('h3 shape :', h3.shape)
return h3
"""
Explanation: Discriminator
The discriminator takes as input ($x^*$) the 784-dimensional output of the generator or a real MNIST image, reshapes the input to a 28 x 28 image and outputs the estimated probability that the input image is a real MNIST image. The network is modeled using strided convolutions with Leaky ReLU activation except for the last layer. We use a sigmoid activation on the last layer to ensure the discriminator output lies in the interval [0, 1].
End of explanation
"""
# training config
minibatch_size = 128
num_minibatches = 5000 if isFast else 10000
lr = 0.0002
momentum = 0.5 #equivalent to beta1
"""
Explanation: We use a minibatch size of 128 and a fixed learning rate of 0.0002 for training. In the fast mode (isFast = True) we verify only functional correctness with 5000 iterations.
Note: In the slow mode, the results look a lot better but training requires on the order of 10 minutes depending on your hardware. In general, the more minibatches one trains on, the better the fidelity of the generated images.
End of explanation
"""
def build_graph(noise_shape, image_shape, generator, discriminator):
input_dynamic_axes = [C.Axis.default_batch_axis()]
Z = C.input(noise_shape, dynamic_axes=input_dynamic_axes)
X_real = C.input(image_shape, dynamic_axes=input_dynamic_axes)
X_real_scaled = X_real / 255.0
# Create the model function for the generator and discriminator models
X_fake = generator(Z)
D_real = discriminator(X_real_scaled)
D_fake = D_real.clone(
method = 'share',
substitutions = {X_real_scaled.output: X_fake.output}
)
# Create loss functions and configure optimization algorithms
G_loss = 1.0 - C.log(D_fake)
D_loss = -(C.log(D_real) + C.log(1.0 - D_fake))
G_learner = adam(
parameters = X_fake.parameters,
lr = learning_rate_schedule(lr, UnitType.sample),
momentum = momentum_schedule(0.5)
)
D_learner = adam(
parameters = D_real.parameters,
lr = learning_rate_schedule(lr, UnitType.sample),
momentum = momentum_schedule(0.5)
)
# Instantiate the trainers
G_trainer = Trainer(
X_fake,
(G_loss, None),
G_learner
)
D_trainer = Trainer(
D_real,
(D_loss, None),
D_learner
)
return X_real, X_fake, Z, G_trainer, D_trainer
"""
Explanation: Build the graph
The rest of the computational graph is mostly responsible for coordinating the training algorithms and parameter updates, which is particularly tricky with GANs for a couple of reasons. GANs are sensitive to the choice of learner and its parameters. Many of the parameters chosen here are based on hard-learned lessons from the community. You may go directly to the code if you have read the basic GAN tutorial.
First, the discriminator must be used on both the real MNIST images and the fake images generated by the generator function. One way to represent this in the computational graph is to create a clone of the output of the discriminator function, but with substituted inputs. Setting method='share' in the clone function ensures that both paths through the discriminator model use the same set of parameters.
Second, we need to update the parameters of the generator and discriminator models separately using the gradients from the different loss functions. We can get the parameters for a Function in the graph with the parameters attribute. However, when updating the model parameters, we update only the parameters of the respective model while keeping the other parameters unchanged. In other words, when updating the generator we update only the parameters of the $G$ function while keeping the parameters of the $D$ function fixed, and vice versa.
Training the Model
The code for training the GAN very closely follows the algorithm as presented in the original NIPS 2014 paper. In this implementation, we train $D$ to maximize the probability of assigning the correct label (fake vs. real) to both training examples and the samples from $G$. In other words, $D$ and $G$ play the following two-player minimax game with the value function $V(G,D)$:
$$
\min_G \max_D V(D,G)= \mathbb{E}_{x}[ \log D(x) ] + \mathbb{E}_{z}[ \log(1 - D(G(z))) ]
$$
At the optimal point of this game the generator will produce realistic-looking data while the discriminator will predict that the generated image is indeed fake with a probability of 0.5. The algorithm referred to above is implemented in this tutorial.
End of explanation
"""
def train(reader_train, generator, discriminator):
X_real, X_fake, Z, G_trainer, D_trainer = \
build_graph(g_input_dim, d_input_dim, generator, discriminator)
# print out loss for each model up to 25 times
print_frequency_mbsize = num_minibatches // 25
print("First row is Generator loss, second row is Discriminator loss")
pp_G = ProgressPrinter(print_frequency_mbsize)
pp_D = ProgressPrinter(print_frequency_mbsize)
k = 2
input_map = {X_real: reader_train.streams.features}
for train_step in range(num_minibatches):
# train the discriminator model for k steps
for gen_train_step in range(k):
Z_data = noise_sample(minibatch_size)
X_data = reader_train.next_minibatch(minibatch_size, input_map)
if X_data[X_real].num_samples == Z_data.shape[0]:
batch_inputs = {X_real: X_data[X_real].data, Z: Z_data}
D_trainer.train_minibatch(batch_inputs)
# train the generator model for a single step
Z_data = noise_sample(minibatch_size)
batch_inputs = {Z: Z_data}
G_trainer.train_minibatch(batch_inputs)
G_trainer.train_minibatch(batch_inputs)
pp_G.update_with_trainer(G_trainer)
pp_D.update_with_trainer(D_trainer)
G_trainer_loss = G_trainer.previous_minibatch_loss_average
return Z, X_fake, G_trainer_loss
reader_train = create_reader(train_file, True, d_input_dim, label_dim=10)
# G_input, G_output, G_trainer_loss = train(reader_train, dense_generator, dense_discriminator)
G_input, G_output, G_trainer_loss = train(reader_train,
convolutional_generator,
convolutional_discriminator)
# Print the generator loss
print("Training loss of the generator is: {0:.2f}".format(G_trainer_loss))
"""
Explanation: With the value functions defined, we proceed to iteratively train the GAN model. The training of the model can take significantly long depending on the hardware, especially if the isFast flag is turned off.
End of explanation
"""
def plot_images(images, subplot_shape):
plt.style.use('ggplot')
fig, axes = plt.subplots(*subplot_shape)
for image, ax in zip(images, axes.flatten()):
ax.imshow(image.reshape(28, 28), vmin=0, vmax=1.0, cmap='gray')
ax.axis('off')
plt.show()
noise = noise_sample(36)
images = G_output.eval({G_input: noise})
plot_images(images, subplot_shape=[6, 6])
"""
Explanation: Generating Fake (Synthetic) Images
Now that we have trained the model, we can create fake images simply by feeding random noise into the generator and displaying the outputs. Below are a few images generated from random samples. To get a new set of samples, you can re-run the last cell.
End of explanation
"""
|
DistrictDataLabs/tribe | notebooks/Introduction to Networkx.ipynb | mit | %matplotlib inline
import os
import random
import community
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from tribe.utils import *
from tribe.stats import *
from operator import itemgetter
## Some Helper constants
FIXTURES = os.path.join(os.getcwd(), "fixtures")
# GRAPHML = os.path.join(FIXTURES, "benjamin@bengfort.com.graphml")
GRAPHML = os.path.join(FIXTURES, "/Users/benjamin/Desktop/20150814T212153Z.graphml")
"""
Explanation: Analyses with NetworkX
Social networks have become a fixture of modern life thanks to social networking sites like Facebook and Twitter. Social networks themselves are not new, however. The study of such networks dates back to the early twentieth century, particularly in the fields of sociology and anthropology. It is their prevalence in mainstream applications that has moved these types of studies to the purview of data science.
The basis for the analyses in this notebook comes from Graph Theory - the mathematical study of the application and properties of graphs, originally motivated by the study of games of chance. Generally speaking, this involves the study of network encoding and the measurement of graph properties. Graph theory can be traced back to Euler's work on the Konigsberg Bridges problem (1735). However, in recent decades the rise of the social network has influenced the discipline, particularly through Computer Science graph data structures and databases.
A Graph, then, can be defined as G = (V, E), consisting of a finite set of nodes denoted by V or V(G) and a collection E or E(G) of unordered pairs {u, v} where u, v ∈ V. Less formally, this is a symbolic representation of a network and its relationships - a set of linked nodes.
Graphs can be either directed or undirected. Directed graphs simply have ordered relationships, undirected graphs can be seen as bidirectional directed graphs. A directed graph in a social network tends to have directional semantic relationships, e.g. "friends" - Abe might be friends with Jane, but Jane might not reciprocate. Undirected social networks have more general semantic relationships, e.g. "knows". Any directed graph can easily be converted to the more general undirected graph. In this case, the adjacency matrix becomes symmetric.
A few final terms will help us in our discussion. The cardinality of the vertex set is called the order of the graph, whereas the cardinality of the edge set is called the size. In the above graph, the order is 7 and the size is 10. Two nodes are adjacent if they share an edge, in which case they are also called neighbors, and the neighborhood of a vertex is the set of all vertices that the vertex is connected to. The number of nodes in a vertex's neighborhood is that vertex's degree.
Required Python Libraries
The required external libraries for the tasks in this notebook are as follows:
networkx
matplotlib
python-louvain
NetworkX is a well maintained Python library for the creation, manipulation, and study of the structure of complex networks. Its tools allow for the quick creation of graphs, and the library also contains many common graph algorithms. In particular NetworkX complements Python's scientific computing suite of SciPy/NumPy, Matplotlib, and Graphviz and can handle graphs in memory of 10M's of nodes and 100M's of links. NetworkX should be part of every data scientist's toolkit.
NetworkX and Python are the perfect combination for doing social network analysis. NetworkX is designed to handle data at scale, data that is relevant to modern social networks. The core algorithms that are included are implemented in extremely fast legacy code. Graphs are hugely flexible (nodes can be any hashable type), and there is an extensive set of native IO formats. Finally, with Python you'll be able to access or use a myriad of data sources, from databases to the Internet.
End of explanation
"""
H = nx.Graph(name="Hello World Graph")
# Also nx.DiGraph, nx.MultiGraph, etc
# Add nodes manually, label can be anything hashable
H.add_node(1, name="Ben", email="benjamin@bengfort.com")
H.add_node(2, name="Tony", email="ojedatony1616@gmail.com")
# Can also add an iterable of nodes: H.add_nodes_from
print nx.info(H)
H.add_edge(1,2, label="friends", weight=0.832)
# Can also add an iterable of edges: H.add_edges_from
print nx.info(H)
# Clearing a graph is easy
H.remove_node(1)
H.clear()
"""
Explanation: The basics of creating a NetworkX Graph:
End of explanation
"""
H = nx.erdos_renyi_graph(100, 0.20)
"""
Explanation: For testing and diagnostics it's useful to generate a random Graph. NetworkX comes with several graph models including:
Complete Graph G=nx.complete_graph(100)
Star Graph G=nx.star_graph(100)
Erdős-Rényi graph, binomial graph G=nx.erdos_renyi_graph(100, 0.20)
Watts-Strogatz small-world graph G=nx.watts_strogatz_graph(100, 0.20)
Holme and Kim power law G=nx.powerlaw_cluster_graph(100, 0.20)
But there are so many more; see Graph generators for more information on all the types of graph generators NetworkX provides. These, however, are the best ones for doing research on social networks.
End of explanation
"""
print H.nodes()[1:10]
print H.edges()[1:5]
print H.neighbors(3)
# For fast, memory safe iteration, use the `_iter` methods
edges, nodes = 0,0
for e in H.edges_iter(): edges += 1
for n in H.nodes_iter(): nodes += 1
print "%i edges, %i nodes" % (edges, nodes)
# Accessing the properties of a graph
print H.graph['name']
H.graph['created'] = strfnow()
print H.graph
# Accessing the properties of nodes and edges
H.node[1]['color'] = 'red'
H.node[43]['color'] = 'blue'
print H.node[43]
print H.nodes(data=True)[:3]
# The weight property is special and should be numeric
H.edge[0][34]['weight'] = 0.432
H.edge[0][36]['weight'] = 0.123
print H.edge[34][0]
# Accessing the highest degree node
center, degree = sorted(H.degree().items(), key=itemgetter(1), reverse=True)[0]
# A special type of subgraph
ego = nx.ego_graph(H, center)
pos = nx.spring_layout(H)
nx.draw(H, pos, node_color='#0080C9', edge_color='#cccccc', node_size=50)
nx.draw_networkx_nodes(H, pos, nodelist=[center], node_size=100, node_color="r")
plt.show()
# Other subgraphs can be extracted with nx.subgraph
# Finding the shortest path
H = nx.star_graph(100)
print nx.shortest_path(H, random.choice(H.nodes()), random.choice(H.nodes()))
pos = nx.spring_layout(H)
nx.draw(H, pos)
plt.show()
# Preparing for Data Science Analysis
print nx.to_numpy_matrix(H)
# print nx.to_scipy_sparse_matrix(G)
"""
Explanation: Accessing Nodes and Edges:
End of explanation
"""
G = nx.read_graphml(GRAPHML) # opposite of nx.write_graphml
print nx.info(G)
"""
Explanation: Serialization of Graphs
Most Graphs won't be constructed in memory, but rather saved to disk. Serialize and deserialize Graphs as follows:
End of explanation
"""
# Generate a list of connected components
# See also nx.strongly_connected_components
for component in nx.connected_components(G):
print len(component)
len([c for c in nx.connected_components(G)])
# Get a list of the degree frequencies
dist = FreqDist(nx.degree(G).values())
dist.plot()
# Compute Power log sequence
degree_sequence=sorted(nx.degree(G).values(),reverse=True) # degree sequence
plt.loglog(degree_sequence,'b-',marker='.')
plt.title("Degree rank plot")
plt.ylabel("degree")
plt.xlabel("rank")
# Graph Properties
print "Order: %i" % G.number_of_nodes()
print "Size: %i" % G.number_of_edges()
print "Clustering: %0.5f" % nx.average_clustering(G)
print "Transitivity: %0.5f" % nx.transitivity(G)
hairball = nx.subgraph(G, [x for x in nx.connected_components(G)][0])
print "Average shortest path: %0.4f" % nx.average_shortest_path_length(hairball)
# Node Properties
node = 'benjamin@bengfort.com' # Change to an email in your graph
print "Degree of node: %i" % nx.degree(G, node)
print "Local clustering: %0.4f" % nx.clustering(G, node)
"""
Explanation: NetworkX has a ton of Graph serialization methods, and for a given serialization format format, most follow the naming scheme below:
Read Graph from disk: read_format
Write Graph to disk: write_format
Parse a Graph string: parse_format
Generate a random Graph in format: generate_format
The list of formats is pretty impressive:
Adjacency List
Multiline Adjacency List
Edge List
GEXF
GML
Pickle
GraphML
JSON
LEDA
YAML
SparseGraph6
Pajek
GIS Shapefile
The JSON and GraphML formats are most noteworthy (for use in D3 and Gephi/Neo4j, respectively)
Initial Analysis of Email Network
We can do some initial analyses on our network using built in NetworkX methods.
End of explanation
"""
def nbest_centrality(graph, metric, n=10, attribute="centrality", **kwargs):
centrality = metric(graph, **kwargs)
nx.set_node_attributes(graph, attribute, centrality)
degrees = sorted(centrality.items(), key=itemgetter(1), reverse=True)
for idx, item in enumerate(degrees[0:n]):
item = (idx+1,) + item
print "%i. %s: %0.4f" % item
return degrees
degrees = nbest_centrality(G, nx.degree_centrality, n=15)
"""
Explanation: Computing Key Players
In the previous graph, we began exploring ego networks and strong ties between individuals in our social network. We started to see that actors with strong ties to other actors created clusters that centered around themselves. This leads to the obvious question: who are the key figures in the graph, and what kind of pull do they have? We'll look at a few measures of "centrality" to try to discover this: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality.
Degree Centrality
The most common and perhaps simplest technique for finding the key actors of a graph is to measure the degree of each vertex. Degree is a signal that determines how connected a node is, which could be a metaphor for influence or popularity. At the very least, the most connected nodes are the ones that spread information the fastest, or have the greatest effect on their community. Measures of degree tend to suffer from dilution, and benefit from statistical techniques to normalize data sets.
End of explanation
"""
# centrality = nx.betweenness_centrality(G)
# normalized = nx.betweenness_centrality(G, normalized=True)
# weighted = nx.betweenness_centrality(G, weight="weight")
degrees = nbest_centrality(G, nx.betweenness_centrality, n=15)
"""
Explanation: Betweenness Centrality
A path is a sequence of nodes between a start node and an end node where no node appears twice on the path, and its length is measured by the number of edges included (also called hops). The most interesting path to compute for two given nodes is the shortest path, i.e. the minimum number of edges required to reach another node; this is also called the node distance. Note that paths can be of length 0, the distance from a node to itself.
End of explanation
"""
# centrality = nx.closeness_centrality(graph)
# normalied = nx.closeness_centrality(graph, normalized=True)
# weighted = nx.closeness_centrality(graph, distance="weight")
degrees = nbest_centrality(G, nx.closeness_centrality, n=15)
"""
Explanation: Closeness Centrality
Another centrality measure, closeness, takes a statistical look at the outgoing paths for a particular node, v. That is, what is the average number of hops it takes to reach any other node in the network from v? This is simply computed as the reciprocal of the mean distance to all other nodes in the graph, which can be normalized to (n-1) / (size(G)-1) if all nodes in the graph are connected. The reciprocal ensures that nodes that are closer (e.g. fewer hops) score "better", e.g. closer to one, as in other centrality scores.
End of explanation
"""
# centrality = nx.eigenvector_centality(graph)
# centrality = nx.eigenvector_centrality_numpy(graph)
degrees = nbest_centrality(G, nx.eigenvector_centrality_numpy, n=15)
"""
Explanation: Eigenvector Centrality
The eigenvector centrality of a node v is proportional to the sum of the centrality scores of its neighbors. E.g. the more important the people you are connected to, the more important you are. This centrality measure is very interesting, because an actor with a small number of hugely influential contacts may outrank one with many more mediocre contacts. For our social network, hopefully it will allow us to get underneath the celebrity structure of heroic teams and see who actually is holding the social graph together.
End of explanation
"""
print nx.density(G)
"""
Explanation: Clustering and Cohesion
In this next section, we're going to characterize our social network as a whole, rather than from the perspective of individual actors. This task is usually secondary to getting a feel for the most important nodes; but it is a chicken-and-egg problem: determining the techniques to analyze and split the whole graph can be informed by key-player analyses, and vice versa.
The density of a network is the ratio of the number of edges in the network to the total number of possible edges in the network. The possible number of edges for a graph of n vertices is n(n-1)/2 for an undirected graph (remove the division for a directed graph). Perfectly connected networks (every node shares an edge with every other node) have a density of 1, and are often called cliques.
End of explanation
"""
for subgraph in nx.connected_component_subgraphs(G):
print nx.diameter(subgraph)
print nx.average_shortest_path_length(subgraph)
"""
Explanation: Graphs can also be analyzed in terms of distance (the shortest path between two nodes). The longest distance in a graph is called the diameter of the social graph, and represents the longest information flow along the graph. Typically less dense (sparse) social networks will have a larger diameter than more dense networks. Additionally, the average distance is an interesting metric as it can give you information about how close nodes are to each other.
End of explanation
"""
partition = community.best_partition(G)
print "%i partitions" % len(set(partition.values()))
nx.set_node_attributes(G, 'partition', partition)
pos = nx.spring_layout(G)
plt.figure(figsize=(12,12))
plt.axis('off')
nx.draw_networkx_nodes(G, pos, node_size=200, cmap=plt.cm.RdYlBu, node_color=partition.values())
nx.draw_networkx_edges(G,pos, alpha=0.5)
"""
Explanation: Let's actually get into some clustering. The python-louvain library uses NetworkX to perform community detection with the Louvain method. Here is a simple example of cluster partitioning on our email social network.
End of explanation
"""
nx.draw(nx.erdos_renyi_graph(20, 0.20))
plt.show()
"""
Explanation: Visualizing Graphs
NetworkX wraps matplotlib or graphviz to draw simple graphs using the same charting library we saw in the previous chapter. This is effective for smaller size graphs, but with larger graphs memory can quickly be consumed. To draw a graph, simply use the networkx.draw function, and then use pyplot.show to display it.
End of explanation
"""
# Generate the Graph
G=nx.davis_southern_women_graph()
# Create a Spring Layout
pos=nx.spring_layout(G)
# Find the center Node
dmin=1
ncenter=0
for n in pos:
x,y=pos[n]
d=(x-0.5)**2+(y-0.5)**2
if d<dmin:
ncenter=n
dmin=d
# color by path length from node near center
p=nx.single_source_shortest_path_length(G,ncenter)
# Draw the graph
plt.figure(figsize=(8,8))
nx.draw_networkx_edges(G,pos,nodelist=[ncenter],alpha=0.4)
nx.draw_networkx_nodes(G,pos,nodelist=p.keys(),
node_size=90,
node_color=p.values(),
cmap=plt.cm.Reds_r)
"""
Explanation: There is, however, a rich drawing library underneath that lets you customize how the Graph looks and is laid out with many different layout algorithms. Let's take a look at an example using one of the built-in Social Graphs: The Davis Women's Social Club.
End of explanation
"""
|
saketkc/notebooks | python/CrossValidated-222556.ipynb | bsd-2-clause | %pylab inline
from __future__ import division
import scipy as sp
import seaborn as sns
sns.set_style('whitegrid')
sns.set_context('paper', font_scale=2)
np.random.seed(42)
def calculate_shannon_entropy(p):
'''
Parameters
----------
p: list
list of probability values such that sum(p) = 1
Returns
-------
entropy: float
Shannon Entropy
'''
assert np.allclose(sum(p), 1)
entropy = -np.nansum(np.array(p) * np.log2(np.array(p)).T)
return entropy
def calculate_uniformity_index(x, a, b):
    '''
    Uniformity index of samples ``x`` drawn from [a, b].
    Approaches 1 for a perfectly uniform density; this is the
    discretized form of the inline computation used below.
    '''
    count, boundaries = np.histogram(x, bins=1000, density=True)
    denom = np.sum(np.diff(boundaries) * np.sqrt(1 + count**2))
    return np.sqrt(1 + (b - a)**2) / denom
"""
Explanation: How do I quantify the uniformity of sampling time?
https://stats.stackexchange.com/questions/how-do-i-quantify-the-uniformity-of-sampling-time
End of explanation
"""
plt.plot(np.arange(0, 1.1, 0.1), [calculate_shannon_entropy([x, 1-x]) for x in np.arange(0, 1.1, 0.1)])
"""
Explanation: Entropy is maximal for a uniform distribution
End of explanation
"""
entropies = []
for n in range(1, 100):
p_vec = [1/n] * n
entropies.append(calculate_shannon_entropy(p_vec))
plt.plot(entropies)
## Uniform index
hist_bins = 20
s = np.random.uniform(1,10,1000)
count, bins, ignored = plt.hist(s, 10, normed=True)
count, boundaries = np.histogram(s, bins=1000)
normalized_count = count/np.sum(count)
U = np.sqrt(1+(10-1)**2)/np.sum(np.sqrt(1+normalized_count**2))
(U-0.707)/(1-0.707)
"""
Explanation: Entropy increases monotonically with the number of outcomes for a uniform distribution
End of explanation
"""
count, boundaries = np.histogram(s, bins=1000, density=True)
sum(np.diff(boundaries)*count)
#denom = np.trapz(np.sqrt(1+count**2), np.diff(boundaries))
denom = sum(np.diff(boundaries)*np.sqrt(1+count**2))
denom
U = np.sqrt(1+(10-1)**2)/denom
(U-0.707)/(1-0.707)
s[np.where( (s>7) & (s<9) )] = 0
#count = np.histogram(s)
#normalized_count = count[0]/np.sum(count[0])
count, boundaries = np.histogram(s, bins=1000, density=True)
U = np.sqrt(1+(10-1)**2)/np.sum(np.sqrt(1+normalized_count**2))
denom = sum(np.diff(boundaries)*np.sqrt(1+count**2))
U = np.sqrt(1+(10-1)**2)/denom
(U-0.707)/(1-0.707)
len(count)
s[np.where( (s>1) )] = 0
count, bins, ignored = plt.hist(s, 10, normed=True)
count, boundaries = np.histogram(s, bins=10, density=True)
denom = sum(np.diff(boundaries)*np.sqrt(1+count**2))
U = np.sqrt(1+(2-1)**2)/denom
(U-0.707)/(1-0.707)
denom
count, bins, ignored = plt.hist(s, 10, normed=True)
"""
Explanation: Uniformity index
https://arxiv.org/pdf/1508.01146.pdf defines the uniformity index as:
$$
\mathcal{U}(f_X) = \frac{\sqrt{1+(b-a)^2}}{\int_a^b \sqrt{1+[f_X(t)]^2}\,dt}
$$
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/automaton.is_accessible.ipynb | gpl-3.0 | import vcsn
"""
Explanation: automaton.is_accessible
Whether all its states are accessible, i.e., all its states can be reached from an initial state.
Preconditions:
- None
See also:
- automaton.accessible
- automaton.is_coaccessible
- automaton.trim
Examples
End of explanation
"""
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_accessible()
"""
Explanation: The following automaton has states that cannot be reached from the initial state(s):
End of explanation
"""
a.accessible()
a.accessible().is_accessible()
"""
Explanation: Calling accessible returns a copy of the automaton without non-accessible states:
End of explanation
"""
|
geektoni/shogun | doc/ipython-notebooks/multiclass/Tree/DecisionTrees.ipynb | bsd-3-clause | import os
import numpy as np
import shogun as sg
import matplotlib.pyplot as plt
%matplotlib inline
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')
# training data
train_income=['Low','Medium','Low','High','Low','High','Medium','Medium','High','Low','Medium',
'Medium','High','Low','Medium']
train_age = ['Old','Young','Old','Young','Old','Young','Young','Old','Old','Old','Young','Old',
'Old','Old','Young']
train_education = ['University','College','University','University','University','College','College',
'High School','University','High School','College','High School','University','High School','College']
train_marital = ['Married','Single','Married','Single','Married','Single','Married','Single','Single',
'Married','Married','Single','Single','Married','Married']
train_usage = ['Low','Medium','Low','High','Low','Medium','Medium','Low','High','Low','Medium','Low',
'High','Low','Medium']
# print data
print('Training Data Table : \n')
print('Income \t\t Age \t\t Education \t\t Marital Status \t Usage')
for i in range(len(train_income)):
print(train_income[i]+' \t\t '+train_age[i]+' \t\t '+train_education[i]+' \t\t '+train_marital[i]+' \t\t '+train_usage[i])
"""
Explanation: Decision Trees
By Parijat Mazumdar (GitHub ID: mazumdarparijat)
This notebook illustrates the use of decision trees in Shogun for classification and regression. Various decision tree learning algorithms like ID3, C4.5, CART, CHAID have been discussed in detail using both intuitive toy datasets as well as real-world datasets.
Decision Tree Basics
Decision Trees are a non-parametric supervised learning method that can be used for both classification and regression. Decision trees essentially encode a set of if-then-else rules which can be used to predict the target variable given the data features. These if-then-else rules are formed using the training dataset with the aim of satisfying as many training data instances as possible. The formation of these rules (aka the decision tree) from training data is called decision tree learning. Various decision tree learning algorithms have been developed and they work best in different situations. An advantage of decision trees is that they can model any type of function for classification or regression, which other techniques cannot. But a decision tree is highly prone to overfitting and bias towards the training data. So, decision trees are used for very large datasets which are assumed to represent the ground truth well. Additionally, certain tree pruning algorithms are also used to tackle overfitting.
ID3 (Iterative Dichotomiser 3)
ID3 is a straightforward decision tree learning algorithm developed by Ross Quinlan. ID3 is applicable only in cases where the attributes (or features) defining the data examples are categorical in nature and the data examples belong to pre-defined, clearly distinguishable (i.e. well-defined) classes. ID3 is an iterative greedy algorithm which starts with the root node and eventually builds the entire tree. At each node, the "best" attribute to classify the data is chosen. The "best" attribute is chosen using the information gain metric. Once an attribute is chosen at a node, the data examples in the node are categorized into sub-groups based on the attribute values that they have. Basically, all data examples having the same attribute value are put together in the same sub-group. These sub-groups form the children of the present node and the algorithm is repeated for each of the newly formed child nodes. This goes on until all the data members of a node belong to the same class or all the attributes are exhausted. In the latter case, the class predicted may be erroneous and generally the mode of the classes appearing in the node is chosen as the predictive class.
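The information gain metric used to pick the "best" attribute can be sketched in a few lines of NumPy. This is a simplified illustration of the criterion itself, not Shogun's internal implementation:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(attribute, labels):
    # entropy at the parent node minus the weighted average entropy
    # of the children obtained by splitting on each attribute value
    parent = entropy(labels)
    children = 0.0
    for v in np.unique(attribute):
        mask = attribute == v
        children += mask.mean() * entropy(labels[mask])
    return parent - children

# an attribute that separates the classes perfectly recovers the
# full parent entropy; an uninformative one gains nothing
labels = np.array([0, 0, 1, 1])
print(information_gain(np.array(['a', 'a', 'b', 'b']), labels))  # 1.0
print(information_gain(np.array(['a', 'b', 'a', 'b']), labels))  # 0.0
```

At each node, ID3 evaluates this gain for every remaining attribute and splits on the one with the largest value.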
Pseudocode for ID3 Algorithm
Example using a Simple dataset
In this section, we create a simple example where we try to predict the usage of mobile phones by individuals based on their income, age, education and marital status. Each of the attributes has been categorized into 2 or 3 types. Let us create the training dataset and tabulate it first.
End of explanation
"""
# encoding dictionary
income = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
age = {'Young' : 1.0, 'Old' : 2.0}
education = {'High School' : 1.0, 'College' : 2.0, 'University' : 3.0}
marital_status = {'Married' : 1.0, 'Single' : 2.0}
usage = {'Low' : 1.0, 'Medium' : 2.0, 'High' : 3.0}
# encode training data
for i in range(len(train_income)):
train_income[i] = income[train_income[i]]
train_age[i] = age[train_age[i]]
train_education[i] = education[train_education[i]]
train_marital[i] = marital_status[train_marital[i]]
train_usage[i] = usage[train_usage[i]]
# form Shogun feature matrix
train_data = np.array([train_income, train_age, train_education, train_marital])
train_feats = sg.create_features(train_data)
# form Shogun multiclass labels
labels = sg.create_labels(np.array(train_usage))
"""
Explanation: We want to create a decision tree from the above training dataset. The first step for that is to encode the data into numeric values and bind them to Shogun's features and multiclass labels.
End of explanation
"""
# create ID3ClassifierTree object
id3 = sg.create_machine("ID3ClassifierTree", labels=labels)
# learn the tree from training features
is_successful = id3.train(train_feats)
"""
Explanation: Next, we learn our decision tree using the features and labels created.
End of explanation
"""
# test data
test_income = ['Medium','Medium','Low','High','High']
test_age = ['Old','Young','Old','Young','Old']
test_education = ['University','College','High School','University','College']
test_marital = ['Married','Single','Married','Single','Married']
test_usage = ['Low','Medium','Low','High','High']
# tabulate test data
print('Test Data Table : \n')
print('Income \t\t Age \t\t Education \t\t Marital Status \t Usage')
for i in range(len(test_income)):
print(test_income[i]+' \t\t '+test_age[i]+' \t\t '+test_education[i]+' \t\t '+test_marital[i]+' \t\t ?')
"""
Explanation: Our decision tree is ready now and we want to use it to make some predictions over test data. So, let us create some test data examples first.
End of explanation
"""
# encode test data
for i in range(len(test_income)):
test_income[i] = income[test_income[i]]
test_age[i] = age[test_age[i]]
test_education[i] = education[test_education[i]]
test_marital[i] = marital_status[test_marital[i]]
# bind to shogun features
test_data = np.array([test_income, test_age, test_education, test_marital])
test_feats = sg.create_features(test_data)
# apply decision tree classification
test_labels = id3.apply(test_feats)
"""
Explanation: Next, as with training data, we encode our test dataset and bind it to Shogun features. Then, we apply our decision tree to the test examples to obtain the predicted labels.
End of explanation
"""
output = test_labels.get("labels")
output_labels = [0]*len(output)
# decode back test data for printing
for i in range(len(test_income)):
test_income[i]=list(income.keys())[list(income.values()).index(test_income[i])]
test_age[i]=list(age.keys())[list(age.values()).index(test_age[i])]
test_education[i]=list(education.keys())[list(education.values()).index(test_education[i])]
test_marital[i]=list(marital_status.keys())[list(marital_status.values()).index(test_marital[i])]
output_labels[i]=list(usage.keys())[list(usage.values()).index(output[i])]
# print output data
print('Final Test Data Table : \n')
print('Income \t Age \t Education \t Marital Status \t Usage(predicted)')
for i in range(len(test_income)):
print(test_income[i]+' \t '+test_age[i]+' \t '+test_education[i]+' \t '+test_marital[i]+' \t\t '+output_labels[i])
"""
Explanation: Finally let us tabulate the results obtained and compare them with our intuitive predictions.
End of explanation
"""
# class attribute
evaluation = {'unacc' : 1.0, 'acc' : 2.0, 'good' : 3.0, 'vgood' : 4.0}
# non-class attributes
buying = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
maint = {'vhigh' : 1.0, 'high' : 2.0, 'med' : 3.0, 'low' : 4.0}
doors = {'2' : 1.0, '3' : 2.0, '4' : 3.0, '5more' : 4.0}
persons = {'2' : 1.0, '4' : 2.0, 'more' : 3.0}
lug_boot = {'small' : 1.0, 'med' : 2.0, 'big' : 3.0}
safety = {'low' : 1.0, 'med' : 2.0, 'high' : 3.0}
"""
Explanation: So, do the predictions made by our decision tree match our inferences from the training set? Yes! For example, from the training set we infer that an individual having low income has low usage and that all individuals going to college have medium usage. The decision tree predicts the same for both cases.
Example using a real dataset
We choose the car evaluation dataset from the UCI Machine Learning Repository as our real-world dataset. The car.names file of the dataset enumerates the class categories as well as the non-class attributes. Each car is categorized into one of 4 classes : unacc, acc, good, vgood. Each car is judged using 6 attributes : buying, maint, doors, persons, lug_boot, safety. Each of these attributes can take 3-4 values. Let us first make a dictionary to encode strings to numeric values using information from the car.names file.
End of explanation
"""
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/car/car.data'), 'r') as f:
feats = []
labels = []
# read data from file and encode
for line in f:
words = line.rstrip().split(',')
words[0] = buying[words[0]]
words[1] = maint[words[1]]
words[2] = doors[words[2]]
words[3] = persons[words[3]]
words[4] = lug_boot[words[4]]
words[5] = safety[words[5]]
words[6] = evaluation[words[6]]
feats.append(words[0:6])
labels.append(words[6])
"""
Explanation: Next, let us read the file and form Shogun features and labels.
End of explanation
"""
feats = np.array(feats)
labels = np.array(labels)
# number of test vectors
num_test_vectors = 200
# sample test indices without replacement so that no vector is picked twice
test_indices = np.random.choice(feats.shape[0], size = num_test_vectors, replace = False)
test_features = feats[test_indices]
test_labels = labels[test_indices]
# remove test vectors from training set
feats = np.delete(feats,test_indices,0)
labels = np.delete(labels,test_indices,0)
"""
Explanation: From the entire dataset, let us choose some test vectors to form our test dataset.
End of explanation
"""
# shogun test features and labels
test_feats = sg.create_features(test_features.T)
test_labels = sg.create_labels(test_labels)
# method for ID3 training and testing
def ID3_routine(feats, labels):
# Shogun train features and labels
train_feats = sg.create_features(feats.T)
train_lab = sg.create_labels(labels)
# create ID3ClassifierTree object
id3 = sg.create_machine("ID3ClassifierTree", labels=train_lab)
# learn the tree from training features
id3.train(train_feats)
# apply to test dataset
output = id3.apply(test_feats)
return output
output = ID3_routine(feats, labels)
"""
Explanation: The next step is to train our decision tree using the training features and to apply it to our test dataset to get the predicted output classes.
End of explanation
"""
# Shogun object for calculating multiclass accuracy
accuracy = sg.create_evaluation("MulticlassAccuracy")
print('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))
"""
Explanation: Finally, let us compare our predicted labels with the test labels to find out the accuracy of our classification model.
End of explanation
"""
# list of error rates for all training dataset sizes
error_rate = []
# number of error rate readings taken for each value of dataset size
num_repetitions = 3
# loop over training dataset size
for i in range(500,1600,200):
indices = np.random.randint(feats.shape[0], size = i)
train_features = feats[indices]
train_labels = labels[indices]
average_error = 0
for i in range(num_repetitions):
output = ID3_routine(train_features, train_labels)
average_error = average_error + (1-accuracy.evaluate(output, test_labels))
error_rate.append(average_error/num_repetitions)
# plot the error rates
from scipy.interpolate import interp1d
fig,axis = plt.subplots(1,1)
x = np.arange(500,1600,200)
f = interp1d(x, error_rate)
xnew = np.linspace(500,1500,100)
plt.plot(x,error_rate,'o',xnew,f(xnew),'-')
plt.xlim([400,1600])
plt.xlabel('training dataset size')
plt.ylabel('Classification Error')
plt.title('Decision Tree Performance')
plt.show()
"""
Explanation: We see that the accuracy is moderately high. Thus our decision tree can evaluate any car given its features with a high success rate. As a final exercise, let us examine the effect of training dataset size on the accuracy of the decision tree.
End of explanation
"""
def create_toy_classification_dataset(ncat,do_plot):
# create attribute values and labels for class 1
x = np.ones((1,ncat))
y = 1+np.random.rand(1,ncat)*4
lab = np.zeros(ncat)
# add attribute values and labels for class 2
x = np.concatenate((x,np.ones((1,ncat))),1)
y = np.concatenate((y,5+np.random.rand(1,ncat)*4),1)
lab = np.concatenate((lab,np.ones(ncat)))
# add attribute values and labels for class 3
x = np.concatenate((x,2*np.ones((1,ncat))),1)
y = np.concatenate((y,1+np.random.rand(1,ncat)*8),1)
lab = np.concatenate((lab,2*np.ones(ncat)))
# create test data
ntest = 20
x_t = np.concatenate((np.ones((1,int(3*ntest/4))),2*np.ones((1,int(ntest/4)))),1)
y_t = 1+np.random.rand(1,ntest)*8
if do_plot:
# plot training data
c = ['r','g','b']
for i in range(3):
plt.scatter(x[0,lab==i],y[0,lab==i],color=c[i],marker='x',s=50)
# plot test data
plt.scatter(x_t[0,:],y_t[0,:],color='k',s=10,alpha=0.8)
plt.xlabel('attribute X')
plt.ylabel('attribute Y')
plt.show()
# form training feature matrix
train_feats = sg.create_features(np.concatenate((x,y),0))
# form training labels
train_labels = sg.create_labels(lab)
# form test feature matrix
test_feats = sg.create_features(np.concatenate((x_t,y_t),0))
return (train_feats,train_labels,test_feats);
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
"""
Explanation: NOTE : The above code snippet takes about half a minute to execute. Please wait patiently.
From the above plot, we see that the error rate decreases steadily as we increase the training dataset size. Although in this case the decrease is not very significant, in many datasets it can be substantial.
C4.5
The C4.5 algorithm is essentially an extension of the ID3 algorithm for decision tree learning. It has the additional capability of handling continuous attributes and attributes with missing values. The tree growing process in C4.5 is the same as that of ID3, i.e. finding the best split at each node using the information gain metric. But in the case of a continuous attribute, the C4.5 algorithm has to perform the additional step of converting it to a two-value categorical attribute by splitting about a suitable threshold. This threshold is chosen in such a way that the resultant split produces maximum information gain. Let us start exploring Shogun's C4.5 algorithm API with a toy example.
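The threshold selection step for a continuous attribute can be sketched as follows. This is a simplified illustration assuming the information-gain criterion on midpoints between consecutive sorted values; it is not Shogun's internal code:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label vector
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_threshold(values, labels):
    # try midpoints between consecutive distinct sorted values and
    # keep the one giving maximum information gain for a binary split
    order = np.argsort(values)
    v, y = values[order], labels[order]
    parent = entropy(y)
    best_gain, best_t = -1.0, None
    for i in range(len(v) - 1):
        if v[i] == v[i + 1]:
            continue
        t = float((v[i] + v[i + 1]) / 2.0)
        left, right = y[v <= t], y[v > t]
        gain = parent - (len(left) * entropy(left)
                         + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

# labels change cleanly between value 3 and value 7,
# so the best threshold lies at their midpoint
vals = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])
labs = np.array([0, 0, 0, 1, 1, 1])
t, gain = best_threshold(vals, labs)
print(t, gain)  # 5.0 1.0
```

Once the threshold is chosen, the continuous attribute behaves exactly like a two-value categorical attribute for the rest of the tree-growing process.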
Example using toy dataset
Let us consider a 3-class classification using 2 attributes. One of the attributes (say attribute X) is a 2-class categorical attribute depicted by values 1 and 2. The other attribute (say attribute Y) is a continuous attribute having values between 1 and 9. The simple rules of classification are as follows : if X=1 and Y $\in$ [1,5), the data point belongs to class 1; if X=1 and Y $\in$ [5,9), the data point belongs to class 2; and if X=2, the data point belongs to class 3. Let us realize the toy dataset by plotting it.
End of explanation
"""
# steps in C4.5 Tree training bundled together in a python method
def train_tree(feats,types,labels):
# C4.5 Tree object
tree = sg.create_machine("C45ClassifierTree", labels=labels, m_nominal=types)
# supply training matrix and train
tree.train(feats)
return tree
# specify attribute types: X is categorical hence True, Y is continuous hence False
feat_types = np.array([True,False])
# get back trained tree
C45Tree = train_tree(train_feats,feat_types,train_labels)
"""
Explanation: In the above plot the training data points are marked with crosses of different colours, where each colour corresponds to a particular label. The test data points are marked by black circles. For us it is a trivial task to assign correct colours (i.e. labels) to the black points. Let us see how accurately C4.5 assigns colours to these test points.
Now let us train a decision tree using the C4.5 algorithm. We need to create a Shogun C4.5 tree object and supply training features and training labels to it. We also need to specify which attribute is categorical and which is continuous. The attribute types are specified through the m_nominal parameter, which is set to True for categorical attributes and False for continuous ones.
End of explanation
"""
def classify_data(tree,data):
# get classification labels
output = tree.apply(data)
# get classification certainty
output_certainty=tree.get('m_certainty')
return output,output_certainty
out_labels,out_certainty = classify_data(C45Tree,test_feats)
"""
Explanation: Now that we have trained the decision tree, we can use it to classify our test vectors.
End of explanation
"""
# plot results
def plot_toy_classification_results(train_feats,train_labels,test_feats,test_labels):
train = train_feats.get('feature_matrix')
lab = train_labels.get("labels")
test = test_feats.get('feature_matrix')
out_labels = test_labels.get("labels")
c = ['r','g','b']
for i in range(out_labels.size):
plt.scatter(test[0,i],test[1,i],color=c[np.int32(out_labels[i])],s=50)
# plot training dataset for visual comparison
for i in range(3):
plt.scatter(train[0,lab==i],train[1,lab==i],color=c[i],marker='x',s=30,alpha=0.7)
plt.show()
plot_toy_classification_results(train_feats,train_labels,test_feats,out_labels)
"""
Explanation: Let us use the output labels to colour our test data points to qualitatively judge the performance of the decision tree.
End of explanation
"""
import csv
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = np.array(lab)
feat = np.array(feat).T
"""
Explanation: We see that the decision tree trained using the C4.5 algorithm works almost perfectly in this toy dataset. Now let us try this algorithm on a real world dataset.
Example using a real dataset
In this section we will investigate how accurately we can predict the species of an Iris flower using a C4.5-trained decision tree. In this example we will use petal length, petal width, sepal length and sepal width as our attributes to decide among 3 classes of Iris : Iris Setosa, Iris Versicolor and Iris Virginica. Let us start by suitably reading the dataset.
End of explanation
"""
# no.of vectors in test dataset
ntest = 25
# no. of vectors in train dataset
ntrain = 150-ntest
# randomize the order of vectors
subset = np.int32(np.random.permutation(150))
# choose 1st ntrain from randomized set as training vectors
feats_train = feat[:,subset[0:ntrain]]
# form training labels correspondingly
train_labels = lab[subset[0:ntrain]]
# form test features and labels (for accuracy evaluations)
feats_test = feat[:,subset[ntrain:ntrain+ntest]]
test_labels = lab[subset[ntrain:ntrain+ntest]]
"""
Explanation: Because there is no separate test dataset, we first divide the given dataset into training and testing subsets.
End of explanation
"""
# plot training features
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feats_train[2,train_labels==i],feats_train[3,train_labels==i],color=c[i],marker='x')
# plot test data points in black
plt.scatter(feats_test[2,:],feats_test[3,:],color='k',marker='o')
plt.show()
"""
Explanation: Before marching forward with applying C4.5, let us plot the data to get a better understanding. The given data points are 4-D and hence cannot be conveniently plotted. We need to reduce the number of dimensions to 2. This reduction can be achieved using any dimension reduction algorithm like PCA. However for the sake of brevity, let us just choose two highly correlated dimensions, petal width and petal length (see summary statistics), right away for plotting.
End of explanation
"""
# training data
feats_train = sg.create_features(feats_train)
train_labels = sg.create_labels(train_labels)
# test data
feats_test = sg.create_features(feats_test)
test_labels = sg.create_labels(test_labels)
"""
Explanation: First, let us create Shogun features and labels from the given data.
End of explanation
"""
# randomize the order of vectors
subset = np.int32(np.random.permutation(ntrain))
nvalidation = 45
# form training subset and validation subset
train_subset = subset[0:ntrain-nvalidation]
validation_subset = subset[ntrain-nvalidation:ntrain]
"""
Explanation: We know for a fact that decision trees tend to overfit, hence pruning becomes a necessary step. In the case of the toy dataset, we skipped the pruning step because the dataset was simple and noise-free. But in the case of a real dataset like the Iris dataset, pruning cannot be skipped. So we have to partition the training dataset into a training subset and a validation subset.
End of explanation
"""
# set attribute types - all continuous
feature_types = np.array([False, False, False, False])
# remove validation subset before training the tree
feats_train.add_subset(train_subset)
train_labels.add_subset(train_subset)
# train tree
C45Tree = train_tree(feats_train,feature_types,train_labels)
# bring back validation subset
feats_train.remove_subset()
train_labels.remove_subset()
# remove data belonging to training subset
feats_train.add_subset(validation_subset)
train_labels.add_subset(validation_subset)
# FIXME: expose C45ClassifierTree::prune_tree
# prune the tree
# C45Tree.prune_tree(feats_train,train_labels)
# bring back training subset
feats_train.remove_subset()
train_labels.remove_subset()
# get results
output, output_certainty = classify_data(C45Tree,feats_test)
"""
Explanation: Now we train the decision tree, prune it (note that the pruning call is commented out above, pending exposure of prune_tree in the API) and finally use it to get output labels for the test vectors.
End of explanation
"""
# Shogun object for calculating multiclass accuracy
accuracy = sg.create_evaluation("MulticlassAccuracy")
print('Accuracy : ' + str(accuracy.evaluate(output, test_labels)))
# convert MulticlassLabels object to labels vector
output = output.get("labels")
test_labels = test_labels.get("labels")
train_labels = train_labels.get("labels")
# convert features object to matrix
feats_test = feats_test.get('feature_matrix')
feats_train = feats_train.get('feature_matrix')
# plot ground truth
for i in range(3):
plt.scatter(feats_test[2,test_labels==i],feats_test[3,test_labels==i],color=c[i],marker='x',s=100)
# plot predicted labels
for i in range(output.size):
plt.scatter(feats_test[2,i],feats_test[3,i],color=c[np.int32(output[i])],marker='o',s=30*output_certainty[i])
plt.show()
"""
Explanation: Let us calculate the accuracy of the classification made by our tree as well as plot the results for qualitative evaluation.
End of explanation
"""
train_feats,train_labels,test_feats=create_toy_classification_dataset(20,True)
"""
Explanation: From the evaluation of results, we infer that, with the help of a C4.5 trained decision tree, we can predict (with high accuracy) the type of Iris plant given its petal and sepal widths and lengths.
Classification and Regression Trees (CART)
The CART algorithm is a popular decision tree learning algorithm introduced by Breiman et al. Unlike ID3 and C4.5, the learnt decision tree in this case can be used for both multiclass classification and regression, depending on the type of the dependent variable. The tree growing process comprises recursive binary splitting of nodes. To find the best split at each node, all possible splits of all available predictive attributes are considered. The best split is the one that maximises some splitting criterion. For classification tasks, i.e. when the dependent attribute is categorical, the Gini index is used as the splitting criterion. For regression tasks, i.e. when the dependent variable is continuous, the least squares deviation is used. Let us learn about Shogun's CART implementation by working on two toy problems, one on classification and the other on regression.
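The two splitting criteria mentioned above are easy to sketch in NumPy. These are simplified illustrations of the node-impurity measures themselves, not Shogun's implementation:

```python
import numpy as np

def gini_index(labels):
    # Gini impurity: 1 minus the sum of squared class proportions;
    # 0 for a pure node, larger for a mixed node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def least_squares_deviation(values):
    # sum of squared deviations from the node mean, the
    # impurity measure used for regression splits
    return float(np.sum((values - values.mean()) ** 2))

print(gini_index(np.array([0, 0, 0, 0])))             # 0.0 (pure node)
print(gini_index(np.array([0, 0, 1, 1])))             # 0.5
print(least_squares_deviation(np.array([1.0, 3.0])))  # 2.0
```

A CART split is chosen so that the weighted impurity of the two resulting children, measured by one of these criteria, decreases as much as possible relative to the parent node.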
Classification example using toy data
Let us consider the same dataset as that in the C4.5 toy example. We re-create the dataset and plot it first.
End of explanation
"""
def train_carttree(feat_types,problem_type,num_folds,use_cv_pruning,labels,feats):
# create CART tree object
c = sg.create_machine("CARTree", nominal=feat_types,
mode=problem_type,
folds=num_folds,
apply_cv_pruning=use_cv_pruning,
labels=labels)
# train using training features
c.train(feats)
return c
# form feature types True for nominal (attribute X), False for ordinal/continuous (attribute Y)
ft = np.array([True, False])
# get back trained tree
cart = train_carttree(ft, "PT_MULTICLASS", 5, True, train_labels, train_feats)
"""
Explanation: Next, we supply the necessary parameters to the CART algorithm and use it to train our decision tree.
End of explanation
"""
# get output labels
output_labels = cart.apply(test_feats)
plot_toy_classification_results(train_feats,train_labels,test_feats,output_labels)
"""
Explanation: In the above code snippet, we see four parameters being supplied to the CART tree object. feat_types supplies knowledge of the attribute types of the training data to the CART algorithm and problem_type specifies whether it is a multiclass classification problem (PT_MULTICLASS) or a regression problem (PT_REGRESSION). The boolean parameter use_cv_pruning switches on cross-validation pruning of the trained tree and num_folds specifies the number of folds of cross-validation to be applied while pruning. At this point, let us divert ourselves briefly towards understanding what kind of pruning strategy is employed by Shogun's CART implementation. The CART algorithm uses the cost-complexity pruning strategy. Cost-complexity pruning yields a list of subtrees of varying depths using the complexity-normalized resubstitution error, $R_\alpha(T)$. The resubstitution error, $R(T)$, measures how well a decision tree fits the training data. But this measure favours larger trees over smaller ones, hence the complexity-normalized resubstitution error metric is used, which adds a penalty for increased complexity and in turn counters overfitting.
$R_\alpha(T) = R(T) + \alpha \times (\text{number of leaf nodes in } T)$
The best subtree among the list of subtrees can be chosen using cross-validation or using the best-fit metric on the validation dataset. Setting use_cv_pruning in the above code snippet basically tells the CART object to use cross-validation to choose the best among the subtrees generated by cost-complexity pruning.
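As a numeric sketch of the cost-complexity trade-off, consider a few candidate subtrees described by (resubstitution error, number of leaves) pairs. The numbers below are invented for illustration only:

```python
# R_alpha(T) = R(T) + alpha * num_leaves
# Candidate subtrees as (resubstitution_error, num_leaves) pairs;
# the numbers are made up for illustration.
subtrees = [(0.02, 12), (0.05, 6), (0.10, 3), (0.20, 1)]

def cost_complexity(subtree, alpha):
    error, leaves = subtree
    return error + alpha * leaves

# a small alpha favours the large, closely-fitting tree; a larger
# alpha penalizes complexity and favours a smaller subtree
for alpha in (0.001, 0.02):
    best = min(subtrees, key=lambda t: cost_complexity(t, alpha))
    print(alpha, '->', best)
```

Sweeping $\alpha$ in this way produces the nested sequence of subtrees from which pruning then picks the winner by cross-validation or on a validation set.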
Let us now get back on track and use the trained tree to classify our test data.
End of explanation
"""
def create_toy_regression_dataset(nsamples,noise_var):
# randomly choose positions in X axis between 0 to 16
samples_x = np.random.rand(1,nsamples)*16
# find out y (=sin(x)) values for the sampled x positions and add noise to it
samples_y = np.sin(samples_x)+(np.random.rand(1,nsamples)-0.5)*noise_var
# plot the samples
plt.scatter(samples_x,samples_y,color='b',marker='x')
# create training features
train_feats = sg.create_features(samples_x)
# training labels
train_labels = sg.create_labels(samples_y[0,:])
return (train_feats,train_labels)
# plot the reference sinusoid
def plot_ref_sinusoid():
plot_x = np.linspace(-2,18,100)
plt.plot(plot_x,np.sin(plot_x),color='y',linewidth=1.5)
plt.xlabel('Feature values')
plt.ylabel('Labels')
plt.xlim([-3,19])
plt.ylim([-1.5,1.5])
# number of samples is 300, noise variance is 0.5
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
"""
Explanation: Regression example using toy data
In this example, we form the training dataset by sampling points from a sinusoidal curve and see how well a decision tree, trained using these samples, re-creates the actual sinusoid.
End of explanation
"""
# feature type - continuous
feat_type = np.array([False])
# get back trained tree
cart = train_carttree(feat_type, "PT_REGRESSION", 5, True, train_labels, train_feats)
"""
Explanation: Next, we train our CART-tree.
End of explanation
"""
def plot_predicted_sinusoid(cart):
# regression range - 0 to 16
x_test = np.array([np.linspace(0,16,100)])
# form Shogun features
test_feats = sg.create_features(x_test)
# apply regression using our previously trained CART-tree
regression_output = cart.apply(test_feats).get("labels")
# plot the result
plt.plot(x_test[0,:],regression_output,linewidth=2.0)
# plot reference sinusoid
plot_ref_sinusoid()
plt.show()
plot_predicted_sinusoid(cart)
"""
Explanation: Now let us use the trained decision tree to regress over the entire range of the previously depicted sinusoid.
End of explanation
"""
# dictionary to encode class names to class labels
to_label = {'Iris-setosa' : 0.0, 'Iris-versicolor' : 1.0, 'Iris-virginica' : 2.0}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([float(i) for i in row[0:4]])
lab.append(to_label[row[4]])
lab = np.array(lab)
feat = np.array(feat).T
# plot the dataset using two highly correlated attributes
c = ['r', 'g', 'b']
for i in range(3):
plt.scatter(feat[2,lab==i],feat[3,lab==i],color=c[i],marker='x')
plt.show()
"""
Explanation: As we can see from the above plot, CART-induced decision tree follows the reference sinusoid quite beautifully!
Classification example using real dataset
In this section, we will apply the CART algorithm to the Iris dataset. Remember that the Iris dataset provides us with just a training dataset and no separate test dataset. In the case of the C4.5 example discussed earlier, we ourselves divided the entire training dataset into a training subset and a test subset. In this section, we will employ a different strategy, i.e. cross-validation. In cross-validation, we divide the training dataset into n subsets, where n is a user-controlled parameter. We perform n iterations of training and testing in which, at each iteration, we choose one of the n subsets as our test dataset and the remaining n-1 subsets as our training dataset. The performance of the model is usually taken as the average of the performances over the various iterations. Shogun's cross-validation class makes it really easy to apply cross-validation to any model of our choice. Let us realize this by applying cross-validation to a CART-tree trained over the Iris dataset. We start by reading the data.
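The splitting scheme just described can be sketched with plain NumPy. This is a simplified illustration of n-fold index splitting; Shogun's splitting-strategy classes handle this (and more) internally:

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=0):
    # shuffle the sample indices and cut them into n_folds nearly
    # equal parts; each part serves once as the test fold while the
    # remaining parts form the training set
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

# every sample appears exactly once across the test folds
all_test = np.concatenate([test for _, test in kfold_indices(10, 3)])
print(sorted(all_test.tolist()))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Averaging a model's score over the folds produced this way gives the cross-validated performance estimate used below.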
End of explanation
"""
# set attribute types - all continuous
feature_types = np.array([False, False, False, False])
# setup CART-tree with cross validation pruning switched off
cart = sg.create_machine("CARTree", nominal=feature_types,
mode="PT_MULTICLASS",
folds=5,
apply_cv_pruning=False)
"""
Explanation: Next, we setup the model which is CART-tree in this case.
End of explanation
"""
# training features
feats_train = sg.create_features(feat)
# training labels
labels_train = sg.create_labels(lab)
# set evaluation criteria - multiclass accuracy
accuracy = sg.create_evaluation("MulticlassAccuracy")
# set splitting criteria - 10 fold cross-validation
split = sg.create_splitting_strategy("CrossValidationSplitting", labels=labels_train, num_subsets=10)
# set cross-validation parameters
cross_val = sg.create_machine_evaluation("CrossValidation",
machine=cart,
features=feats_train,
labels=labels_train,
splitting_strategy=split,
evaluation_criterion=accuracy,
num_runs=10)
# get cross validation result
# CARTree is not x-validatable
result = cross_val.evaluate()
print(result)
print('Mean Accuracy : ' + str(result.get("mean")))
"""
Explanation: Finally we can use Shogun's cross-validation class to get performance.
End of explanation
"""
# dictionary to convert string features to integer values
to_int = {'A' : 1, 'B' : 2, 'C' : 3, 'D' : 4, 'E' : 5}
# read csv file and separate out labels and features
lab = []
feat = []
with open( os.path.join(SHOGUN_DATA_DIR, 'uci/servo/servo.data')) as csvfile:
csvread = csv.reader(csvfile,delimiter=',')
for row in csvread:
feat.append([to_int[row[0]], to_int[row[1]], float(row[2]), float(row[3])])
lab.append(float(row[4]))
lab = np.array(lab)
feat = np.array(feat).T
"""
Explanation: We get a mean accuracy of about 0.93-0.94. This number essentially means that a CART-tree trained using this dataset is expected to classify Iris flowers, given their required attributes, with an accuracy of 93-94% in a real world scenario. The parameters required by Shogun's cross-validation class should be noted in the above code snippet. The class requires the model, training features, training labels, splitting strategy and evaluation method to be specified.
Regression using real dataset
In this section, we evaluate a CART-induced decision tree on the Servo dataset. Using this dataset, we essentially want to train a model which can predict the rise time of a servomechanism given the required parameters, which are the two (integer) gain settings and the two (nominal) choices of mechanical linkages. Let us read the dataset first.
End of explanation
"""
# form training features
feats_train = sg.create_features(feat)
# form training labels
labels_train = sg.create_labels(lab)
def get_cv_error(max_depth):
# set attribute types - 2 nominal and 2 ordinal
feature_types = np.array([True, True, False, False])
# setup CART-tree with cross validation pruning switched off
cart = sg.create_machine("CARTree", nominal=feature_types,
mode="PT_REGRESSION",
folds=5,
apply_cv_pruning=False,
max_depth=max_depth)
# set evaluation criteria - mean squared error
accuracy = sg.create_evaluation("MeanSquaredError")
# set splitting criteria - 10 fold cross-validation
split = sg.create_splitting_strategy("CrossValidationSplitting", labels=labels_train, num_subsets=10)
# set cross-validation parameters
cross_val = sg.create_machine_evaluation("CrossValidation",
machine=cart,
features=feats_train,
labels=labels_train,
splitting_strategy=split,
evaluation_criterion=accuracy,
num_runs=10)
# return cross validation result
return cross_val.evaluate().get("mean")
"""
Explanation: The Servo dataset is a small training dataset (it contains just 167 training vectors) with no separate test dataset, like the Iris dataset. Hence we will apply the same cross-validation strategy we applied in the case of the Iris dataset. However, to make things interesting, let us play around with a yet-untouched parameter of the CART-induced tree, i.e. the maximum allowed tree depth. As the tree depth increases, the tree becomes more complex and hence fits the training data more closely. By setting a maximum allowed tree depth, we restrict the complexity of the trained tree and hence avoid overfitting. But choosing a low value of the maximum allowed tree depth may lead to early stopping, i.e. underfitting. Let us explore how we can decide the appropriate value of the max-allowed-tree-depth parameter. Let us create a method which takes the max-allowed-tree-depth parameter as input and returns the corresponding cross-validated error as output.
End of explanation
"""
import matplotlib.pyplot as plt
cv_errors = [get_cv_error(i) for i in range(1,15)]
plt.plot(range(1,15),cv_errors,'bo',range(1,15),cv_errors,'k')
plt.xlabel('max_allowed_depth')
plt.ylabel('cross-validated error')
plt.ylim(0,1.2)
plt.show()
"""
Explanation: Next, let us supply a range of max_depth values to the above method and plot the returned cross-validated errors.
End of explanation
"""
train_feats,train_labels,test_feats = create_toy_classification_dataset(20,True)
"""
Explanation: The above plot quite clearly gives us the most appropriate value of the maximum allowed depth. We see that the first minimum occurs at a maximum allowed depth of 6-8; hence, one of these should be the desired value. Note that the error metric we are discussing here is the mean squared error. Thus, from the above plot, we can also claim that, given the required parameters, our CART-flavoured decision tree can predict the rise time to within an average error of about $\pm0.5$ (i.e. the square root of 0.25, which is the approximate minimum cross-validated error). The relative error, i.e. average_error/range_of_labels, comes out to be ~30%.
Chi-squared Automatic Interaction Detection (CHAID)
CHAID is an algorithm for decision tree learning proposed by Kass (1980). It is similar in functionality to CART in that both can be used for classification as well as regression. But unlike CART, CHAID internally handles only categorical features. Continuous features are first converted into ordinal categorical features for the CHAID algorithm to be able to use them. This conversion is done by binning the feature values. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get (more or less) the same number of distinct predictor values. The maximum feature value in each bin is used as a breakpoint.
An important parameter in the CHAID tree-growing process is the p-value: the metric used both for deciding which categories of predictor values to merge and for choosing the best attribute during splitting. The p-value is calculated using different hypothesis-testing methods depending on the type of dependent variable (nominal, ordinal or continuous). A more detailed discussion of the CHAID algorithm can be found in the documentation of the CCHAIDTree class in Shogun. Let us move on to a more interesting topic: learning to use CHAID via Shogun's Python API.
Classification example using toy dataset
Let us re-use the toy classification dataset used in C4.5 and CART to see the API usage of CHAID as well as to qualitatively compare the results of the CHAID algorithm with the other two.
End of explanation
"""
def train_chaidtree(dependent_var_type,feature_types,num_bins,feats,labels):
# create CHAID tree object
c = sg.create_machine("CHAIDTree", dependent_vartype=dependent_var_type,
feature_types=feature_types,
num_breakpoints=num_bins,
labels=labels)
# train using training features
c.train(feats)
return c
# form feature types 0 for nominal (attribute X), 2 for continuous (attribute Y)
ft = np.array([0, 2],dtype=np.int32)
# cache training matrix
train_feats_cache=sg.create_features(train_feats.get("feature_matrix"))
# get back trained tree - dependent variable type is nominal (hence 0), number of bins for binning is 10
chaid = train_chaidtree(0,ft,10,train_feats,train_labels)
print('updated_matrix')
print(train_feats.get('feature_matrix'))
print('')
print('original_matrix')
print(train_feats_cache.get('feature_matrix'))
"""
Explanation: Now, we set up our CHAID-tree with appropriate parameters and train over given data.
End of explanation
"""
# get output labels
output_labels = chaid.apply(test_feats)
plot_toy_classification_results(train_feats_cache,train_labels,test_feats,output_labels)
"""
Explanation: An important point to note in the above code snippet is that CHAID training modifies the training data: the actual continuous feature values are replaced by the discrete ordinal values obtained during the continuous-to-ordinal conversion. Notice the difference between the original feature matrix and the updated matrix. The updated matrix contains only 10 distinct values for the feature dimension at row index 1.
With a CHAID-trained decision tree at our disposal, it's time to apply it to colour our test points.
End of explanation
"""
train_feats,train_labels = create_toy_regression_dataset(300,0.5)
plot_ref_sinusoid()
plt.show()
"""
Explanation: Regression example with toy dataset
In this section, we re-work the sinusoid curve fitting example (earlier used in CART toy regression).
End of explanation
"""
# feature type - continuous
feat_type = np.array([2],dtype=np.int32)
# get back trained tree
chaid = train_chaidtree(2,feat_type, 50, train_feats, train_labels)
"""
Explanation: As usual, we start by setting up our decision tree and training it.
End of explanation
"""
plot_predicted_sinusoid(chaid)
"""
Explanation: Next, we use the trained decision tree to follow the reference sinusoid.
End of explanation
"""
train_feats = sg.create_features(sg.read_csv( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))
train_labels = sg.create_labels(sg.read_csv( os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))
"""
Explanation: A distinguishing feature of the predicted curve is the presence of steps. These steps are essentially an artifact of the continuous-to-ordinal conversion. If we decrease the number of bins used for the conversion, the step widths will increase.
Classification example over real dataset
In this section, we will try to estimate the quality of wine based on 13 attributes like alcohol content, malic acid, magnesium content, etc. using the wine dataset. Let us first read the dataset using Shogun's CSV file reader.
End of explanation
"""
# set attribute types - all attributes are continuous(2)
feature_types = np.array([2 for i in range(13)],dtype=np.int32)
# setup CHAID tree - dependent variable is nominal(0), feature types set, number of bins(20)
chaid = sg.create_machine("CHAIDTree", dependent_vartype=0,
feature_types=feature_types,
num_breakpoints=20)
"""
Explanation: As in the case of CART, we are interested in estimating the accuracy with which our CHAID tree, trained on this dataset, will perform in the real world. Hence, we will apply the cross-validation strategy. But first we specify the parameters of the CHAID tree.
End of explanation
"""
# set evaluation criteria - multiclass accuracy
accuracy = sg.create_evaluation("MulticlassAccuracy")
# set splitting criteria - 10 fold cross-validation
split = sg.create_splitting_strategy("CrossValidationSplitting", labels=train_labels, num_subsets=10)
# set cross-validation parameters
cross_val = sg.create_machine_evaluation("CrossValidation",
machine=chaid,
features=train_feats,
labels=train_labels,
splitting_strategy=split,
evaluation_criterion=accuracy,
num_runs=10)
# print the mean classification accuracy obtained via cross-validation
print(f"Mean classification accuracy : {cross_val.evaluate().get('mean')*100} %")
"""
Explanation: Next we set up the cross-validation class and get back the error estimate we want i.e mean classification error.
End of explanation
"""
train_feats=sg.create_features(sg.read_csv( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
train_labels=sg.create_labels(sg.read_csv( os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
# print range of regression labels - this is useful for calculating relative deviation later
print('labels range : '+str(np.ptp(train_labels.get("labels"))))
"""
Explanation: Regression example using real dataset
In this section, we try to predict the value of houses in Boston using 13 attributes, like per capita crime rate in neighborhood, number of rooms, nitrous oxide concentration in air, proportion of non-retail business in the area etc. Out of the 13 attributes 12 are continuous and 1 (the Charles river dummy variable) is binary nominal. Let us load the dataset as our first step. For this, we can directly use Shogun's CSV file reader class.
End of explanation
"""
def get_cv_error(max_depth):
    # set feature types - all continuous(2) except the 4th column, which is nominal(0), and the 9th and 10th columns, which are ordinal(1)
feature_types = np.array([2]*13,dtype=np.int32)
feature_types[3]=0
feature_types[8]=1
feature_types[9]=1
# setup CHAID-tree
chaid = sg.create_machine("CHAIDTree", dependent_vartype=2,
feature_types=feature_types,
num_breakpoints=10,
max_tree_depth=10)
# set evaluation criteria - mean squared error
accuracy = sg.create_evaluation("MeanSquaredError")
# set splitting criteria - 5 fold cross-validation
split = sg.create_splitting_strategy("CrossValidationSplitting",
labels=train_labels,
num_subsets=5)
# set cross-validation parameters
cross_val = sg.create_machine_evaluation("CrossValidation",
machine=chaid,
features=train_feats,
labels=train_labels,
splitting_strategy=split,
evaluation_criterion=accuracy,
num_runs=3)
# return cross validation result
return cross_val.evaluate().get("mean")
cv_errors = [get_cv_error(i) for i in range(1,10)]
plt.plot(range(1,10),cv_errors,'bo',range(1,10),cv_errors,'k')
plt.xlabel('max_allowed_depth')
plt.ylabel('cross-validated error')
plt.show()
"""
Explanation: Next, we set up the parameters for the CHAID tree as well as the cross-validation class.
End of explanation
"""
|
ls-cwi/eXamine | doc/tutorial/eXamineNotebook/eXamineTutorial.ipynb | gpl-2.0 | # HTTP Client for Python
import requests
# Cytoscape port number
PORT_NUMBER = 1234
BASE_URL = "https://raw.githubusercontent.com/ls-cwi/eXamine/master/data/"
# The Base path for the CyRest API
BASE = 'http://localhost:' + str(PORT_NUMBER) + '/v1/'
#Helper command to call a command via HTTP POST
def executeRestCommand(namespace="", command="", args={}):
postString = BASE + "commands/" + namespace + "/" + command
res = requests.post(postString,json=args)
return res
"""
Explanation: eXamine Automation Tutorial
This case study demonstrates how to use the REST API of eXamine to study an annotated module in Cytoscape. The module that we study has 17 nodes and 18 edges and occurs within the KEGG mouse network consisting of 3863 nodes and 29293 edges. The module is annotated with sets from four different categories: (1) KEGG pathways and the GO categories (2) molecular process, (3) biological function and (4) cellular component.
There are three steps for visualizing subnetwork modules with eXamine. In the following, we will describe and perform the steps using the Automation functionality of Cytoscape. We refer to tutorial.pdf for instructions using the Cytoscape GUI.
End of explanation
"""
# First we import our demo network
executeRestCommand("network", "import url", {"indexColumnSourceInteraction":"1",
"indexColumnTargetInteraction":"2",
"url": BASE_URL + "edges.txt"})
"""
Explanation: Importing network and node-specific annotation
We start by importing the KEGG network directly from the eXamine repository on github.
End of explanation
"""
# Next we import node annotations
executeRestCommand("table", "import url",
{"firstRowAsColumnNames":"true",
"keyColumnIndex" : "1",
"startLoadRow" : "1",
"dataTypeList":"s,s,f,f,f,s,s,s,sl,sl,sl,sl",
"url": BASE_URL + "nodes_induced.txt"})
"""
Explanation: We then import node-specific annotation, again directly from the eXamine repository on github. The imported file contains set-membership information for each node. Note that it is important to ensure that the set-membership information is imported as List of String, as indicated by sl. Additionally, note that the default list separator is a pipe character.
End of explanation
"""
executeRestCommand("network", "select", {"nodeList":"Module:small"})
"""
Explanation: Import set-specific annotation
We now describe how to import the set-specific annotations. For this, eXamine needs to generate group nodes for each of the sets present in the module, which first requires selecting the nodes of the module; these nodes have the value small in the column Module, and we select them as follows.
End of explanation
"""
executeRestCommand("examine", "generate groups",
{"selectedGroupColumns" : "Process,Function,Component,Pathway"})
"""
Explanation: Now that we have selected the nodes of the module, we can proceed with generating group nodes for each set (Process, Function, Component and Pathway).
End of explanation
"""
#Ok, time to enrich our newly greated group nodes with some interesting annotations
executeRestCommand("table", "import url",
{"firstRowAsColumnNames":"true",
"keyColumnIndex" : "1",
"startLoadRow" : "1",
"url" : BASE_URL + "sets_induced.txt"})
"""
Explanation: We import set-specific annotation, again directly from github.
End of explanation
"""
# Adjust the visualization settings
executeRestCommand("examine", "update settings",
{"labelColumn" : "Symbol",
"urlColumn" : "URL",
"scoreColumn" : "Score",
"showScore" : "true",
"selectedGroupColumns" : "Function,Pathway"})
"""
Explanation: Set-based visualization using eXamine
We now describe how to visualize the current selection. First, we set the visualization options.
End of explanation
"""
# Select groups for demarcation in the visualization
executeRestCommand("examine", "select groups",
{"selectedGroups":"GO:0008013,GO:0008083,mmu04070,mmu05200,mmu04520"})
"""
Explanation: We then select five groups.
End of explanation
"""
# Launch the interactive eXamine visualization
executeRestCommand("examine", "interact", {})
"""
Explanation: There are two options: either we launch the interactive eXamine visualization, or we directly generate an SVG.
End of explanation
"""
# Export a graphic instead of interacting with it
# use absolute path; writes in Cytoscape directory if not changed
executeRestCommand("examine", "export", {"path": "your-path-here.svg"})
"""
Explanation: The command below launches the eXamine window. If this window is blank, simply resize the window to force a redraw of the scene.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions | Sessions/Session04/Day1/LSSTC-DSFP4-Juric-FrequentistAndBayes-01-Probability.ipynb | mit | # Generating some simple photon count data
import numpy as np
from scipy import stats
np.random.seed(1) # for repeatability
F_true = 1000 # true flux, say number of photons measured in 1 second
N = 50 # number of measurements
F = stats.poisson(F_true).rvs(N) # N measurements of the flux
e = np.sqrt(F) # errors on Poisson counts estimated via square root
"""
Explanation: Frequentism and Bayesianism I: a Practical Introduction
Mario Juric & Jake VanderPlas, University of Washington
e-mail: mjuric@astro.washington.edu, twitter: @mjuric
This lecture is based on a post on the blog Pythonic Perambulations, by Jake VanderPlas. The content is BSD licensed. See also VanderPlas (2014) "Frequentism and Bayesianism: A Python-driven Primer".
Slides built using the excellent RISE Jupyter extension by Damian Avila.
<!-- PELICAN_BEGIN_SUMMARY -->
One of the first things a scientist hears about statistics is that there are two different approaches: Frequentism and Bayesianism. Despite their importance, many scientific researchers never have the opportunity to learn the distinctions between them and the different practical approaches that result.
The purpose of this lecture is to synthesize the philosophical and pragmatic aspects of the frequentist and Bayesian approaches, so that scientists like you might be better prepared to understand the types of data analysis people do.
We'll start by addressing the philosophical distinctions between the views, and from there move to discussion of how these ideas are applied in practice, with some Python code snippets demonstrating the difference between the approaches.
<!-- PELICAN_END_SUMMARY -->
Prerequisites
Python Version 2.7
The "PyData" data science software stack (comes with Anaconda).
emcee -- a pure-Python implementation of Goodman & Weare’s Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler.
AstroML -- a library of statistical and machine learning routines for analyzing, loading, and visualizing astronomical data in Python.
Frequentism vs. Bayesianism: a Philosophical Debate
<br>
<center>Fundamentally, the disagreement between frequentists and Bayesians concerns the definition (interpretation) of probability.</center>
<br>
Frequentist Probability
For frequentists, probability only has meaning in terms of a limiting case of repeated measurements.
That is, if I measure the photon flux $F$ from a given star (we'll assume for now that the star's flux does not vary with time), then measure it again, then again, and so on, each time I will get a slightly different answer due to the statistical error of my measuring device. In the limit of a large number of measurements, the frequency of any given value indicates the probability of measuring that value.
For frequentists probabilities are fundamentally related to frequencies of events. This means, for example, that in a strict frequentist view, it is meaningless to talk about the probability of the true flux of the star: the true flux is (by definition) a single fixed value, and to talk about a frequency distribution for a fixed value is nonsense.
Bayesian Probability
For Bayesians, the concept of probability is extended to cover degrees of certainty about statements. You can think of it as an extension of logic to statements where there's uncertainty.
Say a Bayesian claims to measure the flux $F$ of a star with some probability $P(F)$: that probability can certainly be estimated from frequencies in the limit of a large number of repeated experiments, but this is not fundamental. The probability is a statement of my knowledge of what the measurement result will be.
For Bayesians, probabilities are fundamentally related to our own knowledge about an event. This means, for example, that in a Bayesian view, we can meaningfully talk about the probability that the true flux of a star lies in a given range.
That probability codifies our knowledge of the value based on prior information and/or available data.
The surprising thing is that this arguably subtle difference in philosophy leads, in practice, to vastly different approaches to the statistical analysis of data. Below I will give a few practical examples of the differences in approach, along with associated Python code to demonstrate the practical aspects of the resulting methods.
Frequentist and Bayesian Approaches in Practice: Counting Photons
Here we'll take a look at an extremely simple problem, and compare the frequentist and Bayesian approaches to solving it. There's necessarily a bit of mathematical formalism involved, but we won't go into too much depth or discuss too many of the subtleties.
If you want to go deeper, you might consider taking a look at chapters 4-5 of the Statistics, Data Mining, and Machine Learning in Astronomy textbook.
The Problem: Simple Photon Counts
Imagine that we point our telescope to the sky, and observe the light coming from a single star. For the time being, we'll assume that the star's true flux is constant with time, i.e. that is it has a fixed value $F_{\rm true}$ (we'll also ignore effects like sky noise and other sources of systematic error). We'll assume that we perform a series of $N$ measurements with our telescope, where the $i^{\rm th}$ measurement reports the observed photon flux $F_i$ and error $e_i$.
The question is, given this set of measurements $D = {F_i,e_i}$, what is our best estimate of the true flux $F_{\rm true}$?
Aside on measurement errors
We'll make the (reasonable) assumption that errors are Gaussian:
* In a Frequentist perspective, $e_i$ is the standard deviation of the results of a single measurement event in the limit of repetitions of that event.
* In the Bayesian perspective, $e_i$ is the standard deviation of the (Gaussian) probability distribution describing our knowledge of that particular measurement, given its observed value.
Here we'll use Python to generate some toy data to demonstrate the two approaches to the problem.
Because the measurements are number counts, a Poisson distribution is a good approximation to the measurement process:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.errorbar(F, np.arange(N), xerr=e, fmt='ok', ecolor='gray', alpha=0.5)
ax.vlines([F_true], 0, N, linewidth=5, alpha=0.2)
ax.set_xlabel("Flux");ax.set_ylabel("measurement number");
"""
Explanation: Now let's make a simple visualization of the "measured" data:
End of explanation
"""
w = 1. / e ** 2
print("""
F_true = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))
"""
Explanation: These measurements each have a different error $e_i$ which is estimated from Poisson statistics using the standard square-root rule.
In this toy example we already know the true flux $F_{\rm true}$, but the question is this: given our measurements and errors, what is our best estimate of the true flux?
Let's take a look at the frequentist and Bayesian approaches to solving this.
Frequentist Approach to Simple Photon Counts
We'll start with the classical frequentist maximum likelihood approach. Given a single observation $D_i = (F_i, e_i)$, we can compute the probability distribution of the measurement given the true flux $F_{\rm true}$ given our assumption of Gaussian errors:
$$ P(D_i~|~F_{\rm true}) = \frac{1}{\sqrt{2\pi e_i^2}} \exp{\left[\frac{-(F_i - F_{\rm true})^2}{2 e_i^2}\right]} $$
This should be read "the probability of $D_i$ given $F_{\rm true}$ equals ...". You should recognize this as a normal distribution with mean $F_{\rm true}$ and standard deviation $e_i$.
We construct the likelihood function by computing the product of the probabilities for each data point:
$$\mathcal{L}(D~|~F_{\rm true}) = \prod_{i=1}^N P(D_i~|~F_{\rm true})$$
Here $D = {D_i}$ represents the entire set of measurements. Because the value of the likelihood can become very small, it is often more convenient to instead compute the log-likelihood. Combining the previous two equations and computing the log, we have
$$\log\mathcal{L} = -\frac{1}{2} \sum_{i=1}^N \left[ \log(2\pi e_i^2) + \frac{(F_i - F_{\rm true})^2}{e_i^2} \right]$$
What we'd like to do is determine $F_{\rm true}$ such that the likelihood is maximized. For this simple problem, the maximization can be computed analytically (i.e. by setting $d\log\mathcal{L}/dF_{\rm true} = 0$). This results in the following observed estimate of $F_{\rm true}$:
$$ F_{\rm est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = 1/e_i^2 $$
Notice that in the special case of all errors $e_i$ being equal, this reduces to
$$ F_{\rm est} = \frac{1}{N}\sum_{i=1}^N F_i $$
That is, in agreement with intuition, $F_{\rm est}$ is simply the mean of the observed data when errors are equal.
We can go further and ask what the error of our estimate is? In the frequentist approach, this can be accomplished by fitting a Gaussian approximation to the likelihood curve at maximum; in this simple case this can also be solved analytically.
It can be shown that the standard deviation of this Gaussian approximation is:
$$ \sigma_{\rm est} = \left(\sum_{i=1}^N w_i \right)^{-1/2} $$
These results are fairly simple calculations; let's evaluate them for our toy dataset:
End of explanation
"""
def log_prior(theta):
return 1 # flat prior
def log_likelihood(theta, F, e):
return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)
+ (F - theta[0]) ** 2 / e ** 2)
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
"""
Explanation: We find that for 50 measurements of the flux, our estimate has an error of about 0.4% and is consistent with the input value.
Bayesian Approach to Simple Photon Counts
The Bayesian approach, as you might expect, begins and ends with probabilities. It recognizes that what we fundamentally want to compute is our knowledge of the parameters in question, i.e. in this case,
$$ P(F_{\rm true}~|~D) $$
N.b.: this formulation of the problem is fundamentally contrary to the frequentist philosophy, which says that probabilities have no meaning for model parameters like $F_{\rm true}$. Within the Bayesian interpretation of probability, this is perfectly acceptable.
To compute this result, Bayesians next apply Bayes' Theorem, a fundamental law of probability:
$$ P(F_{\rm true}~|~D) = \frac{P(D~|~F_{\rm true})~P(F_{\rm true})}{P(D)} $$
Though Bayes' theorem is where Bayesians get their name, it is not this law itself that is controversial, but the Bayesian interpretation of probability implied by the term $P(F_{\rm true}~|~D)$.
Let's take a look at each of the terms in this expression:
$P(F_{\rm true}~|~D)$: The posterior, or the probability of the model parameters given the data: this is the result we want to compute.
$P(D~|~F_{\rm true})$: The likelihood, which is proportional to the $\mathcal{L}(D~|~F_{\rm true})$ in the frequentist approach, above.
$P(F_{\rm true})$: The model prior, which encodes what we knew about the model prior to the application of the data $D$.
$P(D)$: The data probability, which in practice amounts to simply a normalization term.
If we set the prior $P(F_{\rm true}) \propto 1$ (a flat prior), we find
$$P(F_{\rm true}|D) \propto \mathcal{L}(D|F_{\rm true})$$
and the Bayesian probability is maximized at precisely the same value as the frequentist result! So despite the philosophical differences, we see that (for this simple problem at least) the Bayesian and frequentist point estimates are equivalent.
But What About the Prior?
We glossed over something here: the prior, $P(F_{\rm true})$.
The prior allows inclusion of other information into the computation, which becomes very useful in cases where multiple measurement strategies are being combined to constrain a single model (as is the case in, e.g. cosmological parameter estimation).
The necessity to specify a prior, however, is one of the more controversial pieces of Bayesian analysis.
But What About the Prior?
A frequentist will point out that the prior is problematic when no true prior information is available. Though it might seem straightforward to use a noninformative prior like the flat prior mentioned above, there are some surprising subtleties involved. It turns out that in many situations, a truly noninformative prior does not exist!
Frequentists point out that the subjective choice of a prior, which necessarily biases your result, has no place in statistical data analysis. A Bayesian would counter that frequentism doesn't solve this problem, but simply skirts the question.
Frequentism can often be viewed as simply a special case of the Bayesian approach for some (implicit) choice of the prior: a Bayesian would say that it's better to make this implicit choice explicit, even if the choice might include some subjectivity.
Photon Counts: the Bayesian approach
Leaving these philosophical debates aside for the time being, let's address how Bayesian results are generally computed in practice.
For a one parameter problem like the one considered here, it's as simple as computing the posterior probability $P(F_{\rm true}~|~D)$ as a function of $F_{\rm true}$: this is the distribution reflecting our knowledge of the parameter $F_{\rm true}$.
But as the dimension of the model grows, this direct approach becomes increasingly intractable. For this reason, Bayesian calculations often depend on sampling methods such as Markov Chain Monte Carlo (MCMC). Here, we'll be using Dan Foreman-Mackey's excellent emcee package. Keep in mind here that the goal is to generate a set of points drawn from the posterior probability distribution, and to use those points to determine the answer we seek.
To perform this MCMC, we start by defining Python functions for the prior $P(F_{\rm true})$, the likelihood $P(D~|~F_{\rm true})$, and the posterior $P(F_{\rm true}~|~D)$, noting that none of these need be properly normalized.
Our model here is one-dimensional, but to handle multi-dimensional models we'll define the model in terms of an array of parameters $\theta$, which in this case is $\theta = [F_{\rm true}]$:
End of explanation
"""
ndim = 1 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 1000 # "burn-in" period to let chains stabilize
nsteps = 2000 # number of MCMC steps to take
# we'll start at random locations between 0 and 2000
starting_guesses = 2000 * np.random.rand(nwalkers, ndim)
import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points
"""
Explanation: Now we set up the problem, including generating some random starting guesses for the multiple chains of points.
End of explanation
"""
# plot a histogram of the sample
plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, density=True)  # density replaces the deprecated normed
# plot a best-fit Gaussian
F_fit = np.linspace(975, 1025)
pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)
plt.plot(F_fit, pdf, '-k')
plt.xlabel("F"); plt.ylabel("P(F)")
"""
Explanation: If this all worked correctly, the array sample should contain a series of 50000 points drawn from the posterior. Let's plot them and check:
End of explanation
"""
print("""
F_true = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, np.mean(sample), np.std(sample), N))
"""
Explanation: We end up with a sample of points drawn from the (normal) posterior distribution. The mean and standard deviation of this posterior are the corollary of the frequentist maximum likelihood estimate above:
End of explanation
"""
np.random.seed(42) # for reproducibility
N = 100 # we'll use more samples for the more complicated model
mu_true, sigma_true = 1000, 15 # stochastic flux model
F_true = stats.norm(mu_true, sigma_true).rvs(N) # (unknown) true flux
F = stats.poisson(F_true).rvs() # observed flux: true flux plus Poisson errors.
e = np.sqrt(F) # root-N error, as above
"""
Explanation: We see that as expected for this simple problem, the Bayesian approach yields the same result as the frequentist approach!
Discussion
Now, you might come away with the impression that the Bayesian method is unnecessarily complicated, and in this case it certainly is. Using an Affine Invariant Markov Chain Monte Carlo Ensemble sampler to characterize a one-dimensional normal distribution is a bit like using the Death Star to destroy a beach ball.
But we did it here because it demonstrates an approach that can scale to complicated posteriors in many, many dimensions, and can provide nice results in more complicated situations where an analytic likelihood approach is not possible.
As a side note, you might also have noticed one little sleight of hand: at the end, we use a frequentist approach to characterize our posterior samples! When we computed the sample mean and standard deviation above, we were employing a distinctly frequentist technique to characterize the posterior distribution. The pure Bayesian result for a problem like this would be to report the posterior distribution itself (i.e. its representative sample), and leave it at that. That is, in pure Bayesianism the answer to a question is not a single number with error bars; the answer is the posterior distribution over the model parameters!
Adding a Dimension: Exploring a more sophisticated model
Let's briefly take a look at a more complicated situation, and compare the frequentist and Bayesian results yet again. Above we assumed that the star was static: now let's assume that we're looking at an object which we suspect has some stochastic variation — that is, it varies with time, but in an unpredictable way (a Quasar is a good example of such an object).
We'll propose a simple 2-parameter Gaussian model for this object: $\theta = [\mu, \sigma]$ where $\mu$ is the mean value, and $\sigma$ is the standard deviation of the variability intrinsic to the object. Thus our model for the probability of the true flux at the time of each observation looks like this:
$$ F_{\rm true} \sim \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[\frac{-(F - \mu)^2}{2\sigma^2}\right]$$
Now, we'll again consider $N$ observations each with their own error. We can generate them this way:
End of explanation
"""
def log_likelihood(theta, F, e):
return -0.5 * np.sum(np.log(2 * np.pi * (theta[1] ** 2 + e ** 2))
+ (F - theta[0]) ** 2 / (theta[1] ** 2 + e ** 2))
# maximize likelihood <--> minimize negative likelihood
def neg_log_likelihood(theta, F, e):
return -log_likelihood(theta, F, e)
from scipy import optimize
theta_guess = [900, 5]
theta_est = optimize.fmin(neg_log_likelihood, theta_guess, args=(F, e))
print("""
Maximum likelihood estimate for {0} data points:
mu={theta[0]:.0f}, sigma={theta[1]:.0f}
""".format(N, theta=theta_est))
"""
Explanation: Varying Photon Counts: The Frequentist Approach
The resulting likelihood is the convolution of the intrinsic distribution with the error distribution, so we have
$$\mathcal{L}(D~|~\theta) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi(\sigma^2 + e_i^2)}}\exp\left[\frac{-(F_i - \mu)^2}{2(\sigma^2 + e_i^2)}\right]$$
Analogously to above, we can analytically maximize this likelihood to find the best estimate for $\mu$:
$$\mu_{est} = \frac{\sum w_i F_i}{\sum w_i};~~w_i = \frac{1}{\sigma^2 + e_i^2} $$
And here we have a problem: the optimal value of $\mu$ depends on the optimal value of $\sigma$. The results are correlated, so we can no longer use straightforward analytic methods to arrive at the frequentist result.
Nevertheless, we can use numerical optimization techniques to determine the maximum likelihood value. Here we'll use the optimization routines available within Scipy's optimize submodule:
End of explanation
"""
from astroML.resample import bootstrap
def fit_samples(sample):
# sample is an array of size [n_bootstraps, n_samples]
# compute the maximum likelihood for each bootstrap.
return np.array([optimize.fmin(neg_log_likelihood, theta_guess,
args=(F, np.sqrt(F)), disp=0)
for F in sample])
samples = bootstrap(F, 1000, fit_samples) # 1000 bootstrap resamplings
"""
Explanation: Error Estimates
This maximum likelihood value gives our best estimate of the parameters $\mu$ and $\sigma$ governing our model of the source. But this is only half the answer: we need to determine how confident we are in this answer, that is, we need to compute the error bars on $\mu$ and $\sigma$.
To see how this is done in the frequentist paradigm, see the sub-slides.
There are several approaches to determining errors in a frequentist paradigm. We could:
* As above, fit a normal approximation to the maximum likelihood and report the covariance matrix (here we'd have to do this numerically rather than analytically).
* Alternatively, we can compute statistics like $\chi^2$ and $\chi^2_{\rm dof}$ and use standard tests to determine confidence limits, an approach that also depends on strong assumptions about the Gaussianity of the likelihood.
* We might alternatively use randomized sampling approaches such as "Jackknife"...
* or "Bootstrap", which maximize the likelihood for randomized samples of the input data in order to explore the degree of certainty in the result.
All of these would be valid techniques to use, but each comes with its own assumptions and subtleties. Here, for simplicity, we'll use the basic bootstrap resampler found in the astroML package:
End of explanation
"""
mu_samp = samples[:, 0]
sig_samp = abs(samples[:, 1])
print " mu = {0:.0f} +/- {1:.0f}".format(mu_samp.mean(), mu_samp.std())
print " sigma = {0:.0f} +/- {1:.0f}".format(sig_samp.mean(), sig_samp.std())
"""
Explanation: Now in a similar manner to what we did above for the MCMC Bayesian posterior, we'll compute the sample mean and standard deviation to determine the errors on the parameters.
End of explanation
"""
def log_prior(theta):
# sigma needs to be positive.
if theta[1] <= 0:
return -np.inf
else:
return 0
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
# same setup as above:
ndim, nwalkers = 2, 50
nsteps, nburn = 2000, 1000
starting_guesses = np.random.rand(nwalkers, ndim)
starting_guesses[:, 0] *= 2000 # start mu between 0 and 2000
starting_guesses[:, 1] *= 20 # start sigma between 0 and 20
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].reshape(-1, 2)
"""
Explanation: I should note that there is a huge literature on the details of bootstrap resampling, and there are definitely some subtleties of the approach that I am glossing over here. One obvious piece is that there is potential for errors to be correlated or non-Gaussian, neither of which is reflected by simply finding the mean and standard deviation of each model parameter. Nevertheless, I trust that this gives the basic idea of the frequentist approach to this problem.
Varying Photon Counts: The Bayesian Approach
The Bayesian approach to this problem is almost exactly the same as it was in the previous problem, and we can set it up by slightly modifying the above code.
End of explanation
"""
from astroML.plotting import plot_mcmc
fig = plt.figure()
ax = plot_mcmc(sample.T, fig=fig, labels=[r'$\mu$', r'$\sigma$'], colors='k')
ax[0].plot(sample[:, 0], sample[:, 1], ',k', alpha=0.1)
ax[0].plot([mu_true], [sigma_true], 'o', color='red', ms=10);
"""
Explanation: Now that we have the samples, we'll use a convenience routine from astroML to plot the traces and the contours representing one and two standard deviations:
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td2a_ml/ml_b_imbalanced.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: 2A.ml - Imbalanced dataset
An imbalanced dataset is one in which a class is under-represented in a classification problem. See 8 Tactics to Combat Imbalanced Classes in Your Machine Learning Dataset.
End of explanation
"""
import numpy.random
import pandas
def generate_data(nb, ratio, noise):
mat = numpy.random.random((nb,2))
noise = numpy.random.random((mat.shape[0],1)) * noise
data = pandas.DataFrame(mat, columns=["X1", "X2"])
data["decision"] = data.X1 + data.X2 + noise.ravel()
vec = list(sorted(data["decision"]))
l = len(vec)- 1 - int(len(vec) * ratio)
seuil = vec[l]
data["cl"] = data["decision"].apply(lambda r: 1 if r > seuil else 0)
from sklearn.utils import shuffle
data = shuffle(data)
return data
data = generate_data(1000, 0.08, 0.1)
data.describe()
ax = data[data.cl==1].plot(x="X1", y="X2", kind="scatter", label="cl1", color="r")
data[data.cl==0].plot(x="X1", y="X2", kind="scatter", label="cl0", ax=ax)
ax.set_title("Random imbalanced data");
from sklearn.model_selection import train_test_split
while True:
X_train, X_test, y_train, y_test = train_test_split(data[["X1", "X2"]], data["cl"])
if sum(y_test) > 0:
break
"""
Explanation: Data generation
We generate a binary classification problem with an under-represented class.
End of explanation
"""
y_test.sum()
"""
Explanation: The train/test split is tricky because there are not many examples of the under-represented class.
End of explanation
"""
from sklearn.metrics import confusion_matrix
def confusion(model, X_train, X_test, y_train, y_test):
model.fit(X_train, y_train)
predt = model.predict(X_train)
c_train = confusion_matrix(y_train, predt)
pred = model.predict(X_test)
c_test = confusion_matrix(y_test, pred)
return pandas.DataFrame(numpy.hstack([c_train, c_test]), index=["y=0", "y=1"],
columns="train:y=0 train:y=1 test:y=0 test:y=1".split())
from sklearn.linear_model import LogisticRegression
confusion(LogisticRegression(solver='lbfgs'),
X_train, X_test, y_train, y_test)
"""
Explanation: Training and testing a model
For this kind of problem, a model that always returns the majority class is already a decent model, since it gives the right answer in the majority of cases.
End of explanation
"""
from sklearn.tree import DecisionTreeClassifier
confusion(DecisionTreeClassifier(), X_train, X_test, y_train, y_test)
from sklearn.ensemble import RandomForestClassifier
confusion(RandomForestClassifier(n_estimators=10),
X_train, X_test, y_train, y_test)
"""
Explanation: A few examples for testing, a few examples for training. That is not much.
End of explanation
"""
ratio = list(_/400.0 for _ in range(1, 101))
rows = []
for r in ratio:
data = generate_data(1000, r, noise=0.0)
while True:
X_train, X_test, y_train, y_test = train_test_split(data[["X1", "X2"]], data["cl"])
if sum(y_test) > 0 and sum(y_train) > 0:
break
c = confusion(LogisticRegression(penalty='l1', solver='liblinear'),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row = dict(ratio=r, precision_l1=c1 / (c0 + c1) )
c = confusion(LogisticRegression(penalty='l2', solver="liblinear"),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row["precision_l2"] = c1 / (c0 + c1)
rows.append(row)
df = pandas.DataFrame(rows)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
df.plot(x="ratio", y=[_ for _ in df.columns if _ !="ratio"], ax=ax)
ax.set_xlabel("Ratio classe mal balancée")
ax.set_ylabel("Précision pour la classe 1 (petite classe)");
"""
Explanation: Logistic regression converges with more difficulty than decision trees. Let's see how this evolves as a function of the L1 or L2 norm and of the proportion of the imbalanced class.
End of explanation
"""
from sklearn.metrics import balanced_accuracy_score, accuracy_score
ratio = list(_/400.0 for _ in range(1, 101))
rows = []
for r in ratio:
data = generate_data(1000, r, noise=0.0)
while True:
X_train, X_test, y_train, y_test = train_test_split(data[["X1", "X2"]], data["cl"])
if sum(y_test) > 0 and sum(y_train) > 0:
break
model = LogisticRegression(penalty='l2', solver='liblinear')
model.fit(X_train, y_train)
predt = model.predict(X_test)
bacc_l2 = balanced_accuracy_score(y_test, predt)
acc_l2 = accuracy_score(y_test, predt)
model = LogisticRegression(penalty='l1', solver='liblinear')
model.fit(X_train, y_train)
predt = model.predict(X_test)
bacc_l1 = balanced_accuracy_score(y_test, predt)
acc_l1 = accuracy_score(y_test, predt)
row = dict(standard_l2=acc_l2, balanced_l2=bacc_l2, ratio=r,
standard_l1=acc_l1, balanced_l1=bacc_l1)
rows.append(row)
df = pandas.DataFrame(rows)
fig, ax = plt.subplots(1, 1)
df.plot(x="ratio", y=[_ for _ in df.columns if _ !="ratio"], ax=ax)
ax.set_xlabel("Ratio classe mal balancée")
ax.set_ylabel("Accuracy");
"""
Explanation: The L1 norm is more sensitive to small classes. The balanced_accuracy_score metric computes the model's performance giving every class the same weight regardless of its size: it averages the accuracy per class.
End of explanation
"""
from sklearn.ensemble import AdaBoostClassifier
ratio = list(_/400.0 for _ in range(1, 101))
rows = []
for r in ratio:
data = generate_data(1000, r, noise=0.0)
while True:
X_train, X_test, y_train, y_test = train_test_split(data[["X1", "X2"]], data["cl"])
if sum(y_test) > 0 and sum(y_train) > 0:
break
c = confusion(LogisticRegression(penalty='l1', solver="liblinear"),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row = dict(ratio=r, precision_l1=c1 / (c0 + c1) )
c = confusion(LogisticRegression(penalty='l2', solver="liblinear"),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row["precision_l2"] = c1 / (c0 + c1)
c = confusion(AdaBoostClassifier(LogisticRegression(penalty='l2', solver="liblinear"),
algorithm="SAMME.R", n_estimators=50,
learning_rate=3),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row["pre_AdaBoost_l2-50"] = c1 / (c0 + c1)
c = confusion(AdaBoostClassifier(LogisticRegression(penalty='l2', solver="liblinear"),
algorithm="SAMME.R", n_estimators=100,
learning_rate=3),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row["prec_AdaBoost_l2-100"] = c1 / (c0 + c1)
rows.append(row)
df = pandas.DataFrame(rows)
fig, ax = plt.subplots(1, 1)
df.plot(x="ratio", y=[_ for _ in df.columns if _ != "ratio"], ax=ax)
ax.set_xlabel("Ratio classe mal balancée")
ax.set_ylabel("Précision pour la classe 1 (petite classe)");
"""
Explanation: The classic "accuracy" metric cannot detect a classification problem when one class is imbalanced, because every example carries the same weight. The accuracy is therefore very close to the accuracy obtained on the majority class.
End of explanation
"""
from sklearn.ensemble import AdaBoostClassifier
ratio = list(_/400.0 for _ in range(1, 101))
rows = []
for r in ratio:
data = generate_data(1000, r, noise=0.0)
while True:
X_train, X_test, y_train, y_test = train_test_split(data[["X1", "X2"]], data["cl"])
if sum(y_test) > 0 and sum(y_train) > 0:
break
c = confusion(DecisionTreeClassifier(),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row = dict(ratio=r, precision_tree=c1 / (c0 + c1) )
c = confusion(RandomForestClassifier(n_estimators=10),
X_train, X_test, y_train, y_test)
c0, c1 = c.loc["y=1", "test:y=0"], c.loc["y=1", "test:y=1"]
row["precision_rf"] = c1 / (c0 + c1)
rows.append(row)
fig, ax = plt.subplots(1, 1)
df = pandas.DataFrame(rows)
df.plot(x="ratio", y=[_ for _ in df.columns if _ != "ratio"], ax=ax)
ax.set_xlabel("Ratio classe mal balancée")
ax.set_ylabel("Précision pour la classe 1 (petite classe)");
"""
Explanation: We see that the AdaBoost algorithm can favor small classes, but you have to play with the learning rate and the number of estimators.
End of explanation
"""
data = generate_data(100, 0.08, 0.1).values
data.sort(axis=0)
data[:5]
from ensae_teaching_cs.ml.gini import gini
def gini_gain_curve(Y):
"le code n'est pas le plus efficace du monde mais ça suffira"
g = gini(Y)
    curve = numpy.zeros((len(Y),))  # zeros so the endpoints, never assigned below, stay defined
for i in range(1, len(Y)-1):
g1 = gini(Y[:i])
g2 = gini(Y[i:])
curve[i] = g - (g1 + g2) / 2
return curve
gini_gain_curve([0, 1, 0, 1, 1, 1, 1])
from ensae_teaching_cs.ml.gini import gini
gini(data[:, 3])
fig, ax = plt.subplots(1, 1)
for skew in [0.05, 0.1, 0.15, 0.2, 0.25, 0.5]:
data = generate_data(100, skew, 0.1).values
data.sort(axis=0)
ax.plot(gini_gain_curve(data[:, 3]), label="balance=%f" % skew)
ax.legend()
ax.set_title("Gini gain for different class proportion")
ax.set_ylabel("Gini")
ax.set_xlabel("x");
"""
Explanation: Ensemble methods work better in this case because the algorithm looks for the best separation between the two classes, so that each class ends up on one side of the boundary. The proportion of examples matters less for the Gini criterion. In the following example, we sort by the variable $X_1$ and look for the best split
End of explanation
"""
|
luiscruz/udacity_data_analyst | P05/src/Data exploration.ipynb | mit | del data_dict['TOTAL']
df = pandas.DataFrame.from_dict(data_dict, orient='index')
df.head()
print "Dataset size: %d rows x %d columns"%df.shape
df.dtypes
print "Feature | Missing values"
print "---|---"
for column in df.columns:
if column != 'poi':
print "%s | %d"%(column,(df[column] == 'NaN').sum())
df['poi'].hist( figsize=(5,3))
plt.xticks(np.array([0.05,0.95]), ['Non POI', 'POI'], rotation='horizontal')
plt.tick_params(bottom='on',top='off', right='off', left='on')
df['expenses'] = df['expenses'].astype('float')
df['salary'] = df['salary'].astype('float')
df['from_this_person_to_poi'] = df['from_this_person_to_poi'].astype('float')
df['from_poi_to_this_person'] = df['from_poi_to_this_person'].astype('float')
"""
Explanation: There is an outlier: TOTAL. This certainly a column for totals. Let's remove it before proceeding.
End of explanation
"""
df['expenses'] = df['expenses'].astype('float')
df['expenses'].hist(bins=100)
plt.figure()
df['expenses'].hist(bins=50)
plt.figure()
df['expenses'].hist(bins=20)
"""
Explanation: Expenses Data Distribution
End of explanation
"""
df[df['expenses']>150000]
"""
Explanation: We can see that there are three employees who spend considerably more than the others.
End of explanation
"""
df['salary'] = df['salary'].astype('float')
df['salary'].hist(bins=100)
plt.figure()
df['salary'].hist(bins=50)
plt.figure()
df['salary'].hist(bins=20)
"""
Explanation: Salary Data Distribution
End of explanation
"""
df[df['salary']>600000]
"""
Explanation: There are four people with a considerably higher salary than other employees (>\$600K).
End of explanation
"""
df[['from_this_person_to_poi','from_poi_to_this_person']].hist(bins=20)
df[df['from_this_person_to_poi']>280]
df[df['from_poi_to_this_person']>300]
df[['from_this_person_to_poi','from_poi_to_this_person']].sort_values('from_poi_to_this_person').plot()
"""
Explanation: Top salary employees are not the same as top expenses employees.
End of explanation
"""
df['bonus'] = df['bonus'].astype('float')
df[['salary','bonus']].plot(kind='scatter', x='salary', y='bonus')
"""
Explanation: High 'from_this_person_to_poi' does not necessarily mean high 'from_poi_to_this_person', so both variables should be evaluated.
End of explanation
"""
df['salary+bonus']=df['salary']+df['bonus']
list(df.columns)
1+np.nan
for _, obj in data_dict.items():
    salary_bonus_ratio = float(obj['bonus']) / float(obj['salary'])
    if np.isnan(salary_bonus_ratio):
        salary_bonus_ratio = -1
    obj['salary_bonus_ratio'] = salary_bonus_ratio
data_dict.items()
"""
Explanation: Perhaps it makes sense to combine salary with bonus. A new feature could be salary+bonus
End of explanation
"""
import sys
sys.path.append("./tools/")
from feature_format import featureFormat, targetFeatureSplit
import numpy as np
features_all = [
'poi',
'salary',
'to_messages',
'deferral_payments',
'total_payments',
'exercised_stock_options',
'bonus',
'restricted_stock',
'shared_receipt_with_poi',
'restricted_stock_deferred',
'total_stock_value',
'expenses',
'loan_advances',
'from_messages',
'other',
'from_this_person_to_poi',
'director_fees',
'deferred_income',
'long_term_incentive',
# 'email_address',
'from_poi_to_this_person',
'salary_bonus_ratio',
]
features_list=features_all
data = featureFormat(data_dict, features_list, sort_keys = True)
labels, features = targetFeatureSplit(data)
from sklearn.feature_selection import SelectKBest, f_classif
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(features, labels)
# Use the ANOVA F-scores computed by the selector to rank the features
scores = selector.scores_
predictors = features_list[1:]
sort_indices = np.argsort(scores)[::-1]
# Plot the scores for each feature, sorted from most to least informative
X=np.arange(len(predictors))
Y=scores[sort_indices]
X_labels = map(lambda x: x.replace("_"," ").title(),np.array(predictors)[sort_indices])
ax = plt.subplot(111)
ax.bar(X, Y,edgecolor='white',facecolor='#9999ff')
plt.xticks(X+0.4, X_labels, rotation='vertical')
for x,y in zip(X,Y):
ax.text(x+0.4, y+0.05, '%.2f' % y, ha='center', va= 'bottom')
pass
plt.tick_params(bottom='off',top='off', right='off', left='off')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
zip(np.array(predictors)[sort_indices],scores[sort_indices])
list(np.array(predictors)[sort_indices])
"""
Explanation: Feature Selection
End of explanation
"""
|
paris-saclay-cds/python-workshop | Day_2_Software_engineering_best_practices/solutions/01_function_factorization.ipynb | bsd-3-clause | def read_spectra(path_csv):
s = pandas.read_csv(path_csv)
c = s['concentration']
m = s['molecule']
s = s['spectra']
x = []
for spec in s:
x.append(numpy.fromstring(spec[1:-1], sep=','))
s = pandas.DataFrame(x)
return s, c, m
f = pandas.read_csv('data/freq.csv')
filenames = ['data/spectra_{}.csv'.format(i)
for i in range(4)]
stot = []
c = []
m = []
for filename in filenames:
s_tmp, c_tmp, m_tmp = read_spectra(filename)
stot.append(s_tmp)
c.append(c_tmp)
m.append(m_tmp)
stot = pandas.concat(stot)
c = pandas.concat(c)
m = pandas.concat(m)
"""
Explanation: IO: Reading and preprocess the data
We can define a function which will read the data and process them.
End of explanation
"""
def _apply_axis_layout(ax, title):
ax.set_xlabel('Frequency')
ax.set_ylabel('Concentration')
ax.set_title(title)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines['left'].set_position(('outward', 10))
ax.spines['bottom'].set_position(('outward', 10))
def plot_spectra(f, s, title):
fig, ax = pyplot.subplots()
ax.plot(f, s.T)
_apply_axis_layout(ax, title)
def plot_spectra_by_type(f, s, classes, title):
fig, ax = pyplot.subplots()
for c_type in numpy.unique(classes):
i = numpy.nonzero(classes == c_type)[0]
        mean = numpy.mean(s.iloc[i], axis=0)
        std = numpy.std(s.iloc[i], axis=0)
        ax.plot(f, mean, label=c_type)
        ax.fill_between(numpy.ravel(f), mean + std, mean - std, alpha=0.2)
_apply_axis_layout(ax, title)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plot_spectra(f, stot, 'All training spectra')
plot_spectra_by_type(f, stot, m, 'Mean spectra in function of the molecules')
plot_spectra_by_type(f, stot, c, 'Mean spectra in function of the concentrations')
"""
Explanation: Plot helper functions
We can create two functions: (i) to plot all spectra and (ii) plot the mean spectra with the std intervals.
We will make a "private" function which will be used by both plot types.
End of explanation
"""
s4, c4, m4 = read_spectra('data/spectra_4.csv')
plot_spectra(f, stot, 'All training spectra')
plot_spectra_by_type(f, s4, m4, 'Mean spectra in function of the molecules')
plot_spectra_by_type(f, s4, c4, 'Mean spectra in function of the concentrations')
"""
Explanation: Reusability for new data:
End of explanation
"""
def plot_cm(cm, classes, title):
import itertools
fig, ax = pyplot.subplots()
pyplot.imshow(cm, interpolation='nearest', cmap='bwr')
pyplot.title(title)
pyplot.colorbar()
tick_marks = numpy.arange(len(numpy.unique(classes)))
pyplot.xticks(tick_marks, numpy.unique(classes), rotation=45)
pyplot.yticks(tick_marks, numpy.unique(classes))
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
pyplot.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
pyplot.tight_layout()
pyplot.ylabel('True label')
pyplot.xlabel('Predicted label')
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix
for clf in [RandomForestClassifier(random_state=0),
LinearSVC(random_state=0)]:
p = make_pipeline(StandardScaler(), PCA(n_components=100, random_state=0), clf)
p.fit(stot, m)
pred = p.predict(s4)
plot_cm(confusion_matrix(m4, pred), p.classes_, 'Confusion matrix using {}'.format(clf.__class__.__name__))
print('Accuracy score: {0:.2f}'.format(p.score(s4, m4)))
"""
Explanation: Training and testing a machine learning model for classification
End of explanation
"""
def plot_regression(y_true, y_pred, title):
from sklearn.metrics import r2_score, median_absolute_error
fig, ax = pyplot.subplots()
ax.scatter(y_true, y_pred)
ax.plot([0, 25000], [0, 25000], '--k')
ax.set_ylabel('Target predicted')
ax.set_xlabel('True Target')
ax.set_title(title)
ax.text(1000, 20000, r'$R^2$=%.2f, MAE=%.2f' % (
r2_score(y_true, y_pred), median_absolute_error(y_true, y_pred)))
ax.set_xlim([0, 25000])
ax.set_ylim([0, 25000])
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
for reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:
p = make_pipeline(PCA(n_components=100), reg)
p.fit(stot, c)
pred = p.predict(s4)
plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))
def fit_params(data):
median = numpy.median(data, axis=0)
p_25 = numpy.percentile(data, 25, axis=0)
p_75 = numpy.percentile(data, 75, axis=0)
return median, (p_75 - p_25)
def transform(data, median, var_25_75):
return (data - median) / var_25_75
median, var_25_75 = fit_params(stot)
stot_scaled = transform(stot, median, var_25_75)
s4_scaled = transform(s4, median, var_25_75)
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
for reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:
p = make_pipeline(PCA(n_components=100), reg)
p.fit(stot_scaled, c)
pred = p.predict(s4_scaled)
plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))
"""
Explanation: Training and testing a machine learning model for regression
End of explanation
"""
|
tensorflow/probability | tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
%matplotlib inline
import os
from six.moves import urllib
import numpy as np
import pandas as pd
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
from IPython.core.pylabtools import figsize
figsize(11, 9)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
"""
Explanation: Linear Mixed-Effect Regression in {TF Probability, R, Stan}
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/HLM_TFP_R_Stan"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
1 Introduction
In this colab we will fit a linear mixed-effect regression model to a popular, toy dataset. We will make this fit thrice, using R's lme4, Stan's mixed-effects package, and TensorFlow Probability (TFP) primitives. We conclude by showing all three give roughly the same fitted parameters and posterior distributions.
Our main conclusion is that TFP has the general pieces necessary to fit HLM-like models and that it produces results which are consistent with other software packages, i.e., lme4 and rstanarm. This colab is not an accurate reflection of the computational efficiency of any of the packages compared.
End of explanation
"""
def load_and_preprocess_radon_dataset(state='MN'):
"""Preprocess Radon dataset as done in "Bayesian Data Analysis" book.
We filter to Minnesota data (919 examples) and preprocess to obtain the
following features:
- `log_uranium_ppm`: Log of soil uranium measurements.
- `county`: Name of county in which the measurement was taken.
- `floor`: Floor of house (0 for basement, 1 for first floor) on which the
measurement was taken.
The target variable is `log_radon`, the log of the Radon measurement in the
house.
"""
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
# For any missing or invalid activity readings, we'll use a value of `0.1`.
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
county_name = sorted(df.county.unique())
df['county'] = df.county.astype(
pd.api.types.CategoricalDtype(categories=county_name)).cat.codes
county_name = list(map(str.strip, county_name))
df['log_radon'] = df['radon'].apply(np.log)
df['log_uranium_ppm'] = df['Uppm'].apply(np.log)
df = df[['idnum', 'log_radon', 'floor', 'county', 'log_uranium_ppm']]
return df, county_name
radon, county_name = load_and_preprocess_radon_dataset()
# We'll use the following directory to store our preprocessed dataset.
CACHE_DIR = os.path.join(os.sep, 'tmp', 'radon')
# Save processed data. (So we can later read it in R.)
if not tf.gfile.Exists(CACHE_DIR):
tf.gfile.MakeDirs(CACHE_DIR)
with tf.gfile.Open(os.path.join(CACHE_DIR, 'radon.csv'), 'w') as f:
radon.to_csv(f, index=False)
"""
Explanation: 2 Hierarchical Linear Model
For our comparison between R, Stan, and TFP, we will fit a Hierarchical Linear Model (HLM) to the Radon dataset made popular in Bayesian Data Analysis by Gelman, et. al. (page 559, second ed; page 250, third ed.).
We assume the following generative model:
$$\begin{align}
\text{for } & c=1\ldots \text{NumCounties}:\
& \beta_c \sim \text{Normal}\left(\text{loc}=0, \text{scale}=\sigma_C \right) \
\text{for } & i=1\ldots \text{NumSamples}:\
&\eta_i = \underbrace{\omega_0 + \omega_1 \text{Floor}_i}_\text{fixed effects} + \underbrace{\beta_{\text{County}_i} \log(\text{UraniumPPM}_{\text{County}_i})}_\text{random effects} \
&\log(\text{Radon}_i) \sim \text{Normal}(\text{loc}=\eta_i , \text{scale}=\sigma_N)
\end{align}$$
In R's lme4 "tilde notation", this model is equivalent to:
log_radon ~ 1 + floor + (0 + log_uranium_ppm | county)
We will find MLE for $\omega, \sigma_C, \sigma_N$ using the posterior distribution (conditioned on evidence) of ${\beta_c}_{c=1}^\text{NumCounties}$.
For essentially the same model but with a random intercept, see Appendix A.
For a more general specification of HLMs, see Appendix B.
3 Data Munging
In this section we obtain the radon dataset and do some minimal preprocessing to make it comply with our assumed model.
End of explanation
"""
radon.head()
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = radon['county'].value_counts()
county_freq.plot(kind='bar', color='#436bad');
plt.xlabel('County index')
plt.ylabel('Number of radon readings')
plt.title('Number of radon readings per county', fontsize=16)
county_freq = np.array(list(zip(county_freq.index, county_freq.values))) # We'll use this later.
fig, ax = plt.subplots(ncols=2, figsize=[10, 4]);
radon['log_radon'].plot(kind='density', ax=ax[0]);
ax[0].set_xlabel('log(radon)')
radon['floor'].value_counts().plot(kind='bar', ax=ax[1]);
ax[1].set_xlabel('Floor');
ax[1].set_ylabel('Count');
fig.subplots_adjust(wspace=0.25)
"""
Explanation: 3.1 Know Thy Data
In this section we explore the radon dataset to get a better sense of why the proposed model might be reasonable.
End of explanation
"""
suppressMessages({
library('bayesplot')
library('data.table')
library('dplyr')
library('gfile')
library('ggplot2')
library('lattice')
library('lme4')
library('plyr')
library('rstanarm')
library('tidyverse')
RequireInitGoogle()
})
data = read_csv(gfile::GFile('/tmp/radon/radon.csv'))
head(data)
# https://github.com/stan-dev/example-models/wiki/ARM-Models-Sorted-by-Chapter
radon.model <- lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
summary(radon.model)
qqmath(ranef(radon.model, condVar=TRUE))
write.csv(as.data.frame(ranef(radon.model, condVar = TRUE)), '/tmp/radon/lme4_fit.csv')
"""
Explanation: Conclusions:
- There's a long tail of 85 counties. (A common occurrence in GLMMs.)
- Indeed $\log(\text{Radon})$ is unconstrained. (So linear regression might make sense.)
- Readings are mostly made on the $0$-th floor; no reading was made above floor $1$. (So our fixed effects will only have two weights.)
4 HLM In R
In this section we use R's lme4 package to fit the probabilistic model described above.
NOTE: To execute this section, you must switch to an R colab runtime.
End of explanation
"""
fit <- stan_lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
"""
Explanation: 5 HLM In Stan
In this section we use rstanarm to fit a Stan model using the same formula/syntax as the lme4 model above.
Unlike lme4 and the TF model below, rstanarm fits a fully Bayesian model, i.e., all parameters are presumed drawn from a Normal distribution, with the parameters of that distribution themselves drawn from a distribution.
NOTE: To execute this section, you must switch to an R colab runtime.
End of explanation
"""
fit
color_scheme_set("red")
ppc_dens_overlay(y = fit$y,
yrep = posterior_predict(fit, draws = 50))
color_scheme_set("brightblue")
ppc_intervals(
y = data$log_radon,
yrep = posterior_predict(fit),
x = data$county,
prob = 0.8
) +
labs(
x = "County",
y = "log radon",
title = "80% posterior predictive intervals \nvs observed log radon",
subtitle = "by county"
) +
panel_bg(fill = "gray95", color = NA) +
grid_lines(color = "white")
# Write the posterior samples (4000 for each variable) to a CSV.
write.csv(tidy(as.matrix(fit)), "/tmp/radon/stan_fit.csv")
"""
Explanation: Note: The runtimes are from a single CPU core. (This colab is not intended to be a faithful representation of Stan or TFP runtime.)
End of explanation
"""
with tf.gfile.Open('/tmp/radon/lme4_fit.csv', 'r') as f:
lme4_fit = pd.read_csv(f, index_col=0)
lme4_fit.head()
"""
Explanation: Note: Switch back to the Python TF kernel runtime.
End of explanation
"""
posterior_random_weights_lme4 = np.array(lme4_fit.condval, dtype=np.float32)
lme4_prior_scale = np.array(lme4_fit.condsd, dtype=np.float32)
print(posterior_random_weights_lme4.shape, lme4_prior_scale.shape)
"""
Explanation: Retrieve the point estimates and conditional standard deviations for the group random effects from lme4 for visualization later.
End of explanation
"""
with tf.Session() as sess:
lme4_dist = tfp.distributions.Independent(
tfp.distributions.Normal(
loc=posterior_random_weights_lme4,
scale=lme4_prior_scale),
reinterpreted_batch_ndims=1)
posterior_random_weights_lme4_final_ = sess.run(lme4_dist.sample(4000))
posterior_random_weights_lme4_final_.shape
"""
Explanation: Draw samples for the county weights using the lme4 estimated means and standard deviations.
End of explanation
"""
with tf.gfile.Open('/tmp/radon/stan_fit.csv', 'r') as f:
samples = pd.read_csv(f, index_col=0)
samples.head()
posterior_random_weights_cols = [
col for col in samples.columns if 'b.log_uranium_ppm.county' in col
]
posterior_random_weights_final_stan = samples[
posterior_random_weights_cols].values
print(posterior_random_weights_final_stan.shape)
"""
Explanation: We also retrieve the posterior samples of the county weights from the Stan fit.
End of explanation
"""
# Handy snippet to reset the global graph and global session.
with warnings.catch_warnings():
warnings.simplefilter('ignore')
tf.reset_default_graph()
try:
sess.close()
except:
pass
sess = tf.InteractiveSession()
"""
Explanation: This Stan example shows how one would implement LMER in a style closer to TFP, i.e., by directly specifying the probabilistic model.
6 HLM In TF Probability
In this section we will use low-level TensorFlow Probability primitives (Distributions) to specify our Hierarchical Linear Model as well as fit the unknown parameters.
End of explanation
"""
inv_scale_transform = lambda y: np.log(y) # Not using TF here.
fwd_scale_transform = tf.exp
"""
Explanation: 6.1 Specify Model
In this section we specify the radon linear mixed-effect model using TFP primitives. To do this, we specify two functions which produce two TFP distributions:
- make_weights_prior: A multivariate Normal prior for the random weights (which are multiplied by $\log(\text{UraniumPPM}_{c_i})$ to compute the linear predictor).
- make_log_radon_likelihood: A batch of Normal distributions over each observed $\log(\text{Radon}_i)$ dependent variable.
Since we will be fitting the parameters of each of these distributions, we must use TF variables (i.e., tf.get_variable). However, since we wish to use unconstrained optimization, we must find a way to constrain real values to achieve the necessary semantics, e.g., positives which represent standard deviations.
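The standard trick, used below via fwd_scale_transform/inv_scale_transform, is to optimize an unconstrained real value and map it through exp whenever a positive scale is needed; a plain NumPy sketch:

```python
import numpy as np

raw = np.array([-2.0, 0.0, 1.5])   # unconstrained optimization variable
scale = np.exp(raw)                # forward transform: always positive
recovered = np.log(scale)          # inverse transform recovers raw exactly
```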
End of explanation
"""
def _make_weights_prior(num_counties, dtype):
"""Returns a `len(log_uranium_ppm)` batch of univariate Normal."""
raw_prior_scale = tf.get_variable(
name='raw_prior_scale',
initializer=np.array(inv_scale_transform(1.), dtype=dtype))
return tfp.distributions.Independent(
tfp.distributions.Normal(
loc=tf.zeros(num_counties, dtype=dtype),
scale=fwd_scale_transform(raw_prior_scale)),
reinterpreted_batch_ndims=1)
make_weights_prior = tf.make_template(
name_='make_weights_prior', func_=_make_weights_prior)
"""
Explanation: The following function constructs our prior, $p(\beta|\sigma_C)$ where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
We use tf.make_template to ensure that the first call to this function instantiates the TF variables it uses and all subsequent calls reuse the variable's current value.
End of explanation
"""
def _make_log_radon_likelihood(random_effect_weights, floor, county,
log_county_uranium_ppm, init_log_radon_stddev):
raw_likelihood_scale = tf.get_variable(
name='raw_likelihood_scale',
initializer=np.array(
inv_scale_transform(init_log_radon_stddev), dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', initializer=np.array([0., 1.], dtype=dtype))
fixed_effects = fixed_effect_weights[0] + fixed_effect_weights[1] * floor
random_effects = tf.gather(
random_effect_weights * log_county_uranium_ppm,
indices=tf.to_int32(county),
axis=-1)
linear_predictor = fixed_effects + random_effects
return tfp.distributions.Normal(
loc=linear_predictor, scale=fwd_scale_transform(raw_likelihood_scale))
make_log_radon_likelihood = tf.make_template(
name_='make_log_radon_likelihood', func_=_make_log_radon_likelihood)
"""
Explanation: The following function constructs our likelihood, $p(y|x,\omega,\beta,\sigma_N)$ where $y,x$ denote response and evidence, $\omega,\beta$ denote fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Here again we use tf.make_template to ensure the TF variables are reused across calls.
End of explanation
"""
def joint_log_prob(random_effect_weights, log_radon, floor, county,
log_county_uranium_ppm, dtype):
num_counties = len(log_county_uranium_ppm)
rv_weights = make_weights_prior(num_counties, dtype)
rv_radon = make_log_radon_likelihood(
random_effect_weights,
floor,
county,
log_county_uranium_ppm,
init_log_radon_stddev=radon.log_radon.values.std())
return (rv_weights.log_prob(random_effect_weights)
+ tf.reduce_sum(rv_radon.log_prob(log_radon), axis=-1))
"""
Explanation: Finally we use the prior and likelihood generators to construct the joint log-density.
End of explanation
"""
# Specify unnormalized posterior.
dtype = np.float32
log_county_uranium_ppm = radon[
['county', 'log_uranium_ppm']].drop_duplicates().values[:, 1]
log_county_uranium_ppm = log_county_uranium_ppm.astype(dtype)
def unnormalized_posterior_log_prob(random_effect_weights):
return joint_log_prob(
random_effect_weights=random_effect_weights,
log_radon=dtype(radon.log_radon.values),
floor=dtype(radon.floor.values),
county=np.int32(radon.county.values),
log_county_uranium_ppm=log_county_uranium_ppm,
dtype=dtype)
"""
Explanation: 6.2 Training (Stochastic Approximation of Expectation Maximization)
To fit our linear mixed-effect regression model, we will use a stochastic approximation version of the Expectation Maximization algorithm (SAEM). The basic idea is to use samples from the posterior to approximate the expected joint log-density (E-step). Then we find the parameters which maximize this calculation (M-step). Somewhat more concretely, the fixed-point iteration is given by:
$$\begin{align}
\text{E}[ \log p(x, Z | \theta) | \theta_0]
&\approx \frac{1}{M} \sum_{m=1}^M \log p(x, z_m | \theta), \quad Z_m\sim p(Z | x, \theta_0) && \text{E-step}\
&=: Q_M(\theta, \theta_0) \
\theta_0 &= \theta_0 - \eta \left.\nabla_\theta Q_M(\theta, \theta_0)\right|_{\theta=\theta_0} && \text{M-step}
\end{align}$$
where $x$ denotes evidence, $Z$ some latent variable which needs to be marginalized out, and $\theta,\theta_0$ possible parameterizations.
For a more thorough explanation, see Convergence of a stochastic approximation version of the EM algorithm by Bernard Delyon, Marc Lavielle, and Eric Moulines (Ann. Statist., 1999).
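The E-step's core idea, replacing an intractable expectation by an average over posterior samples, can be illustrated on a toy expectation (this is only a sketch of the Monte Carlo principle, not the radon model):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 100_000)  # stand-in for posterior draws of Z
q_hat = np.mean(z**2)              # Monte Carlo estimate of E[Z^2] = 1
```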
To compute the E-step, we need to sample from the posterior. Since our posterior is not easy to sample from, we use Hamiltonian Monte Carlo (HMC). HMC is a Monte Carlo Markov Chain procedure which uses gradients (wrt state, not parameters) of the unnormalized posterior log-density to propose new samples.
Specifying the unnormalized posterior log-density is simple--it is merely the joint log-density "pinned" at whatever we wish to condition on.
End of explanation
"""
# Set-up E-step.
step_size = tf.get_variable(
'step_size',
initializer=np.array(0.2, dtype=dtype),
trainable=False)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size,
step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
num_adaptation_steps=None),
state_gradients_are_stopped=True)
init_random_weights = tf.placeholder(dtype, shape=[len(log_county_uranium_ppm)])
posterior_random_weights, kernel_results = tfp.mcmc.sample_chain(
num_results=3,
num_burnin_steps=0,
num_steps_between_results=0,
current_state=init_random_weights,
kernel=hmc)
"""
Explanation: We now complete the E-step setup by creating an HMC transition kernel.
Notes:
We use state_gradients_are_stopped=True to prevent the M-step from backpropagating through draws from the MCMC. (Recall, we needn't backprop through them because our E-step is intentionally parameterized at the previous best known estimators.)
We use tf.placeholder so that when we eventually execute our TF graph, we can feed the previous iteration's random MCMC sample as the next iteration's chain state.
We use TFP's adaptive step_size heuristic, tfp.mcmc.make_simple_step_size_update_policy.
End of explanation
"""
# Set-up M-step.
loss = -tf.reduce_mean(kernel_results.accepted_results.target_log_prob)
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
learning_rate=0.1,
global_step=global_step,
decay_steps=2,
decay_rate=0.99)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
"""
Explanation: We now set-up the M-step. This is essentially the same as an optimization one might do in TF.
End of explanation
"""
# Initialize all variables.
init_op = tf.initialize_all_variables()
# Grab variable handles for diagnostic purposes.
with tf.variable_scope('make_weights_prior', reuse=True):
prior_scale = fwd_scale_transform(tf.get_variable(
name='raw_prior_scale', dtype=dtype))
with tf.variable_scope('make_log_radon_likelihood', reuse=True):
likelihood_scale = fwd_scale_transform(tf.get_variable(
name='raw_likelihood_scale', dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', dtype=dtype)
"""
Explanation: We conclude with some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to our TF variables so we can print their values at each iteration of the procedure.
End of explanation
"""
init_op.run()
w_ = np.zeros([len(log_county_uranium_ppm)], dtype=dtype)
%%time
maxiter = int(1500)
num_accepted = 0
num_drawn = 0
for i in range(maxiter):
[
_,
global_step_,
loss_,
posterior_random_weights_,
kernel_results_,
step_size_,
prior_scale_,
likelihood_scale_,
fixed_effect_weights_,
] = sess.run([
train_op,
global_step,
loss,
posterior_random_weights,
kernel_results,
step_size,
prior_scale,
likelihood_scale,
fixed_effect_weights,
], feed_dict={init_random_weights: w_})
w_ = posterior_random_weights_[-1, :]
num_accepted += kernel_results_.is_accepted.sum()
num_drawn += kernel_results_.is_accepted.size
acceptance_rate = num_accepted / num_drawn
if i % 100 == 0 or i == maxiter - 1:
print('global_step:{:>4} loss:{: 9.3f} acceptance:{:.4f} '
'step_size:{:.4f} prior_scale:{:.4f} likelihood_scale:{:.4f} '
'fixed_effect_weights:{}'.format(
global_step_, loss_.mean(), acceptance_rate, step_size_,
prior_scale_, likelihood_scale_, fixed_effect_weights_))
"""
Explanation: 6.3 Execute
In this section we execute our SAEM TF graph. The main trick here is to feed our last draw from the HMC kernel into the next iteration. This is achieved through our use of feed_dict in the sess.run call.
End of explanation
"""
%%time
posterior_random_weights_final, kernel_results_final = tfp.mcmc.sample_chain(
num_results=int(15e3),
num_burnin_steps=int(1e3),
current_state=init_random_weights,
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size))
[
posterior_random_weights_final_,
kernel_results_final_,
] = sess.run([
posterior_random_weights_final,
kernel_results_final,
], feed_dict={init_random_weights: w_})
print('prior_scale: ', prior_scale_)
print('likelihood_scale: ', likelihood_scale_)
print('fixed_effect_weights: ', fixed_effect_weights_)
print('acceptance rate final: ', kernel_results_final_.is_accepted.mean())
"""
Explanation: Looks like after ~1500 steps, our estimates of the parameters have stabilized.
6.4 Results
Now that we've fit the parameters, let's generate a large number of posterior samples and study the results.
End of explanation
"""
x = posterior_random_weights_final_ * log_county_uranium_ppm
I = county_freq[:, 0]
x = x[:, I]
cols = np.array(county_name)[I]
pw = pd.DataFrame(x)
pw.columns = cols
fig, ax = plt.subplots(figsize=(25, 4))
ax = pw.boxplot(rot=80, vert=True);
"""
Explanation: We now construct a box and whisker diagram of the $\beta_c \log(\text{UraniumPPM}_c)$ random-effect. We'll order the random-effects by decreasing county frequency.
End of explanation
"""
nrows = 17
ncols = 5
fig, ax = plt.subplots(nrows, ncols, figsize=(18, 21), sharey=True, sharex=True)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
ii = -1
for r in range(nrows):
for c in range(ncols):
ii += 1
idx = county_freq[ii, 0]
sns.kdeplot(
posterior_random_weights_final_[:, idx] * log_county_uranium_ppm[idx],
color='blue',
alpha=.3,
shade=True,
label='TFP',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_final_stan[:, idx] *
log_county_uranium_ppm[idx],
color='red',
alpha=.3,
shade=True,
label='Stan/rstanarm',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_lme4_final_[:, idx] *
log_county_uranium_ppm[idx],
color='#F4B400',
alpha=.7,
shade=False,
label='R/lme4',
ax=ax[r][c])
ax[r][c].vlines(
posterior_random_weights_lme4[idx] * log_county_uranium_ppm[idx],
0,
5,
color='#F4B400',
linestyle='--')
ax[r][c].set_title(county_name[idx] + ' ({})'.format(idx), y=.7)
ax[r][c].set_ylim(0, 5)
ax[r][c].set_xlim(-1., 1.)
ax[r][c].get_yaxis().set_visible(False)
if ii == 2:
ax[r][c].legend(bbox_to_anchor=(1.4, 1.7), fontsize=20, ncol=3)
else:
ax[r][c].legend_.remove()
fig.subplots_adjust(wspace=0.03, hspace=0.1)
"""
Explanation: From this box and whisker diagram, we observe that the variance of the county-level $\log(\text{UraniumPPM})$ random-effect increases as the county is less represented in the dataset. Intutively this makes sense--we should be less certain about the impact of a certain county if we have less evidence for it.
7 Side-by-Side-by-Side Comparison
We now compare the results of all three procedures. To do this, we will compute non-parametric estimates of the posterior samples as generated by Stan and TFP. We will also compare against the parametric (approximate) estimates produced by R's lme4 package.
The following plot depicts the posterior distribution of each weight for each county in Minnesota. We show results for Stan (red), TFP (blue), and R's lme4 (orange). We shade the results from Stan and TFP, so we expect to see purple where the two agree. For simplicity we do not shade results from R. Each subplot represents a single county; subplots are ordered in descending frequency in raster-scan order (i.e., from left-to-right then top-to-bottom).
End of explanation
"""
|
cvxgrp/cvxpylayers | examples/jax/tutorial.ipynb | apache-2.0 | import cvxpy as cp
import numpy as np
import jax
import jax.numpy as jnp
from cvxpylayers.jax import CvxpyLayer
import matplotlib.pyplot as plt
plt.style.use('bmh')
np.set_printoptions(precision=2, suppress=True)
"""
Explanation: Differentiable Convex Optimization Layers: JAX Tutorial
End of explanation
"""
# Implement the problem as usual with CVXPY:
n = 100
_x = cp.Parameter(n)
_y = cp.Variable(n)
obj = cp.Minimize(cp.sum_squares(_y-_x))
cons = [_y >= 0]
prob = cp.Problem(obj, cons)
# Create the JAX layer
relu_layer = CvxpyLayer(prob, parameters=[_x], variables=[_y])
# Differentiate through the JAX layer
def relu_layer_sum(x):
return jnp.sum(relu_layer(x)[0])
drelu_layer = jax.grad(relu_layer_sum)
nrow, ncol = 1, 2
fig, axs = plt.subplots(nrow, ncol, figsize=(6*ncol, 4*nrow))
fig.suptitle('The ReLU')
x = jnp.linspace(-5, 5, num=n)
ax = axs[0]
ax.plot(x, relu_layer(x)[0])
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax = axs[1]
ax.plot(x, drelu_layer(x))
ax.set_xlabel('$x$')
ax.set_ylabel(r'$\nabla f(x)$')
"""
Explanation: This notebook introduces the JAX
library accompanying our
NeurIPS paper
on differentiable convex optimization layers with CVXPY.
A convex optimization layer solves a parametrized convex optimization problem
in the forward pass to produce a solution.
It computes the derivative of the solution with respect to the
parameters in the backward pass.
For more details, see our main repository at
cvxgrp/cvxpylayers
and our
blog post.
Parametrized convex optimization
Express problems of the form
$$
\begin{array}{ll} \mbox{minimize} & f_0(x;\theta)\
\mbox{subject to} & f_i(x;\theta) \leq 0, \quad i=1, \ldots, m\
& A(\theta)x=b(\theta),
\end{array}
$$
with variable $x \in \mathbf{R}^n$ and parameters $\theta\in\Theta\subseteq\mathbf{R}^p$
objective and inequality constraints $f_0, \ldots, f_m$ are convex in $x$ for each $\theta$, i.e., their graphs curve upward
equality constraints are linear
for a given value of $\theta$, find a value for $x$ that minimizes objective, while satisfying constraints
we can efficiently solve these globally with near-total reliability
Solution map
Solution $x^\star$ is an implicit function of $\theta$
When unique, define solution map as function
$x^\star = \mathcal S(\theta)$
Need to call numerical solver to evaluate
This function is often differentiable
We show how to analytically differentiate this function, using the implicit function theorem
Benefits of analytical differentiation: works with nonsmooth objective/constraints, low memory usage, don't compound errors
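To make the solution map concrete, here is a toy parametrized problem whose solution map is $\mathcal S(\theta) = \max(\theta, 0)$; differentiating it numerically recovers the analytic derivative. This is only a sketch of the concept, not how cvxpylayers computes derivatives:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solution_map(theta):
    # x*(theta) = argmin_{0 <= x <= 10} 0.5 * (x - theta)**2
    res = minimize_scalar(lambda x: 0.5 * (x - theta)**2,
                          bounds=(0.0, 10.0), method='bounded',
                          options={'xatol': 1e-10})
    return res.x

eps = 1e-4
num_grad = (solution_map(2.0 + eps) - solution_map(2.0 - eps)) / (2 * eps)
```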
CVXPY
High level domain-specific language (DSL) for convex optimization
Define variables, parameters, objective, and constraints
Synthesize into problem object, then call solve method
We've added derivatives to CVXPY (forward and backward)
More information available here
CVXPYlayers
* Convert CVXPY problems into callable, differentiable PyTorch, TensorFlow, and JAX modules in one line
Applications of differentiable convex optimization
learning convex optimization models (structured prediction)
learning decision-making policies (reinforcement learning)
differentiable model-predictive control
convex optimization control policies
machine learning hyper-parameter tuning and feature engineering
repairing infeasible or unbounded optimization problems
as protection layers in neural networks
learning constraints and rules
custom neural network layers (sparsemax, csoftmax, csparsemax, LML)
meta-learning through differentiable solvers
and many more...
ReLU, sigmoid, and softmax examples
This example follows the setup in
our blog post
and shows how to implement standard activation functions
as differentiable convex optimization layers.
These are presented as an exercise to illustrate the
representational and modeling power of differentiable convex
optimization, and for these examples, provides no implementation
benefit over the original forms of these layers.
The ReLU
As shown in Section 2.4 here,
we can interpret the ReLU as projecting
a point $x\in\mathbb{R}^n$ onto the non-negative orthant as
$$
\hat y = {\rm argmin}_y \;\; \frac{1}{2}||y-x||_2^2 \;\; {\rm s.t.} \;\; y\geq 0.
$$
The usual explicit solution can be obtained by taking the
first-order optimality conditions.
Using cvxpylayers, we are able to easily implement
this optimization problem as a JAX layer.
This is powerful since it does not require that
our optimization problem has an explicit closed-form
solution (even though the ReLU does), and this
also shows how easily we can now take this optimization
problem and tweak it if we wanted to do so, without
having to re-derive the appropriate solution and
backwards pass.
Since this is computing the same function as the ReLU,
we would expect that the derivative looks the same.
This is indeed true and we have made it so our optimization layer
can be differentiated through just like any other JAX layer
using automatic differentiation.
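As a quick check that the projection problem really computes the ReLU, one can solve it with a generic bound-constrained solver and compare against $\max(x, 0)$ (an illustration only, independent of cvxpylayers):

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([-2.0, -0.5, 0.0, 1.5])
# minimize 0.5 * ||y - x||^2  subject to  y >= 0
res = minimize(lambda y: 0.5 * np.sum((y - x)**2),
               x0=np.ones_like(x),
               bounds=[(0.0, None)] * len(x))
```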
End of explanation
"""
n = 100
_x = cp.Parameter(n)
_y = cp.Variable(n)
obj = cp.Minimize(-_x.T@ _y - cp.sum(cp.entr(_y) + cp.entr(1.-_y)))
prob = cp.Problem(obj)
layer = CvxpyLayer(prob, parameters=[_x], variables=[_y])
# Differentiate through the JAX layer
def layer_sum(x):
return jnp.sum(layer(x)[0])
dlayer = jax.grad(layer_sum)
nrow, ncol = 1, 2
fig, axs = plt.subplots(nrow, ncol, figsize=(6*ncol, 4*nrow))
fig.suptitle('The Sigmoid')
x = jnp.linspace(-5, 5, num=n)
ax = axs[0]
ax.plot(x, layer(x)[0])
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x)$')
ax = axs[1]
ax.plot(x, dlayer(x))
ax.set_xlabel('$x$')
ax.set_ylabel(r'$\nabla f(x)$')
"""
Explanation: The sigmoid
Similarly, the sigmoid or logistic activation
function $f(x) = (1+e^{-x})^{-1}$ can be
seen from a convex optimization perspective as discussed in
Section 2.4 here.
The sigmoid
projects a point $x\in\mathbb{R}^n$ onto
the interior of the unit hypercube as
$$
f(x) = {\rm argmin}_{0\lt y\lt 1} \; -x^\top y -H_b(y),
$$
where $H_b(y) = -\sum_i \left(y_i\log y_i + (1-y_i)\log (1-y_i)\right)$ is the
binary entropy function.
This can be proved by looking at the KKT conditions.
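One can also verify the claim numerically for a scalar $x$: minimizing $-xy - H_b(y)$ over $(0,1)$ with a generic solver recovers the sigmoid (an illustration only, independent of cvxpylayers):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigmoid_via_opt(x):
    # minimize -x*y - H_b(y) = -x*y + y*log(y) + (1-y)*log(1-y) over (0, 1)
    obj = lambda y: -x * y + y * np.log(y) + (1.0 - y) * np.log(1.0 - y)
    res = minimize_scalar(obj, bounds=(1e-9, 1.0 - 1e-9), method='bounded',
                          options={'xatol': 1e-10})
    return res.x

vals = [sigmoid_via_opt(x) for x in (-3.0, 0.0, 2.0)]
```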
End of explanation
"""
d = 5
_x = cp.Parameter(d)
_y = cp.Variable(d)
obj = cp.Minimize(-_x.T @ _y - cp.sum(cp.entr(_y)))
cons = [np.ones(d).T @ _y == 1.]
prob = cp.Problem(obj, cons)
layer = CvxpyLayer(prob, parameters=[_x], variables=[_y])
key = jax.random.PRNGKey(0)
key, k1 = jax.random.split(key, 2)
x = jax.random.normal(k1, shape=(d,))
y, = layer(x)
dsoftmax = jax.grad(lambda x: jax.nn.softmax(x)[0])
dlayer = jax.grad(lambda x: layer(x)[0][0])
print('=== Softmax forward pass')
print('jax.nn.softmax: ', jax.nn.softmax(x))
print('Convex optimization layer: ', layer(x)[0])
print('\n=== Softmax backward pass')
print('jax.nn.softmax: ', dsoftmax(x))
print('Convex optimization layer: ', dlayer(x))
"""
Explanation: The softmax
Lastly the softmax activation function
$f(x)_j = e^{x_j} / \sum_i e^{x_i}$ can
be implemented in JAX as follows.
This time we'll consider a single vector
instead of range of values since the softmax is
only interesting in higher dimensions.
We can again interpret the softmax as projecting a point
$x\in\mathbb{R}^n$ onto
the interior of the $(n-1)$-simplex
$$\Delta_{n-1}=\{p\in\mathbb{R}^n\; \vert\; 1^\top p = 1 \; \; {\rm and} \;\; p \geq 0 \}$$
as
$$
f(x) = {\rm argmin}_{0\lt y\lt 1} \;\; -x^\top y - H(y) \;\; {\rm s.t.}\;\; 1^\top y = 1
$$
where $H(y) = -\sum_i y_i \log y_i$ is the entropy function.
This is also proved in Section 2.4 here
by using the KKT conditions.
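The claim can be checked with a generic constrained solver as well: minimizing $-x^\top y - H(y)$ over the simplex recovers the softmax (an illustration only, independent of cvxpylayers):

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, -0.5, 0.3])
# minimize -x'y - H(y)  subject to  1'y = 1, y > 0
res = minimize(lambda y: -x @ y + y @ np.log(y),
               x0=np.full(3, 1.0 / 3.0),
               bounds=[(1e-9, 1.0)] * 3,
               constraints={'type': 'eq', 'fun': lambda y: np.sum(y) - 1.0})
softmax_x = np.exp(x) / np.sum(np.exp(x))
```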
We can implement the variational form of the softmax with:
End of explanation
"""
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
np.random.seed(0)
n = 2
N = 60
X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
jnp.array, [Xtrain, Xtest, ytrain, ytest])
m = Xtrain.shape[0]
a = cp.Variable((n, 1))
b = cp.Variable((1, 1))
X = cp.Parameter((m, n))
Y = ytrain[:, np.newaxis]
log_likelihood = (1. / m) * cp.sum(
cp.multiply(Y, X @ a + b) - cp.logistic(X @ a + b)
)
regularization = - 0.1 * cp.norm(a, 1) - 0.1 * cp.sum_squares(a)
prob = cp.Problem(cp.Maximize(log_likelihood + regularization))
fit_logreg = CvxpyLayer(prob, [X], [a, b])
np.random.seed(0)
n = 1
N = 60
X = np.random.randn(N, n)
theta = np.random.randn(n)
y = X @ theta + .5 * np.random.randn(N)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
jnp.array, [Xtrain, Xtest, ytrain, ytest])
m = Xtrain.shape[0]
# set up variables and parameters
a = cp.Variable(n)
b = cp.Variable()
X = cp.Parameter((m, n))
Y = cp.Parameter(m)
lam = cp.Parameter(nonneg=True)
alpha = cp.Parameter(nonneg=True)
# set up objective
loss = (1/m)*cp.sum(cp.square(X @ a + b - Y))
reg = lam * cp.norm1(a) + alpha * cp.sum_squares(a)
objective = loss + reg
# set up constraints
constraints = []
prob = cp.Problem(cp.Minimize(objective), constraints)
# convert into a JAX layer in one line
fit_lr = CvxpyLayer(prob, [X, Y, lam, alpha], [a, b])
# this object is now callable with JAX arrays
fit_lr(Xtrain, ytrain, jnp.zeros(1), jnp.zeros(1))
# sweep over values of alpha, holding lambda=0, evaluating the gradient along the way
alphas = jnp.logspace(-3, 2, 200)
test_losses = []
grads = []
def loss(alpha):
a_tch, b_tch = fit_lr(Xtrain, ytrain, jnp.zeros(1), alpha)
test_loss = jnp.mean((Xtest @ a_tch.flatten() + b_tch - ytest)**2)
return test_loss
# derivative of the loss with respect to alpha
loss_grad = jax.grad(loss)
for alpha in alphas:
test_losses.append(loss(alpha))
grads.append(loss_grad(alpha))
plt.semilogx()
plt.plot(alphas, test_losses, label='test loss')
plt.plot(alphas, grads, label='analytical gradient')
plt.plot(alphas[:-1], np.diff(test_losses) / np.diff(alphas), label='numerical gradient', linestyle='--')
plt.legend()
plt.xlabel("$\\alpha$")
plt.show()
# sweep over values of lambda, holding alpha=0, evaluating the gradient along the way
lams = jnp.logspace(-3, 2, 200)
test_losses = []
grads = []
def loss(lam):
a_tch, b_tch = fit_lr(Xtrain, ytrain, lam, jnp.zeros(1))
test_loss = jnp.mean((Xtest @ a_tch.flatten() + b_tch - ytest)**2)
return test_loss
# derivative of the loss with respect to lambda
loss_grad = jax.grad(loss)
for lam in lams:
test_losses.append(loss(lam))
grads.append(loss_grad(lam))
plt.semilogx()
plt.plot(lams, test_losses, label='test loss')
plt.plot(lams, grads, label='analytical gradient')
plt.plot(lams[:-1], np.diff(test_losses) / np.diff(lams), label='numerical gradient', linestyle='--')
plt.legend()
plt.xlabel("$\\lambda$")
plt.show()
# compute the gradient of the test loss wrt all the training data points, and plot
plt.figure(figsize=(10, 6))
def loss(X):
a_tch, b_tch = fit_lr(
X, ytrain, jnp.array([.05]), jnp.array([.05]), solver_args={"eps": 1e-8})
test_loss = jnp.mean((Xtest @ a_tch.flatten() + b_tch - ytest)**2)
return test_loss, a_tch, b_tch
# derivative of the loss with respect to lambda
loss_grad = jax.grad(lambda X: loss(X)[0], argnums=0)
test_loss, a_tch, b_tch = loss(Xtrain)
Xtrain_grad = loss_grad(Xtrain)
a_tch_test, b_tch_test = fit_lr(
Xtest, ytest, jnp.array([0.]), jnp.array([0.]), solver_args={"eps": 1e-8})
plt.scatter(Xtrain, ytrain, s=20)
plt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')
plt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')
for i in range(Xtrain.shape[0]):
plt.arrow(Xtrain[i].item(), ytrain[i],
-30.*Xtrain_grad[i][0], 0., color='k')
plt.legend()
plt.show()
# move the training data points in the direction of their gradients, and see the train line get closer to the test line
plt.figure(figsize=(10, 6))
Xtrain_new = Xtrain - 30. * Xtrain_grad
a_tch, b_tch = fit_lr(
Xtrain_new, ytrain, jnp.array([.05]), jnp.array([.05]), solver_args={"eps": 1e-8})
plt.scatter(Xtrain_new, ytrain, s=20)
plt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')
plt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')
plt.legend()
plt.show()
"""
Explanation: Elastic-net regression example
This example has a similar setup to Section 6.1
of our NeurIPS paper.
We are given training data $(x_i, y_i)_{i=1}^{N}$,
where $x_i\in\mathbf{R}$ are inputs and $y_i\in\mathbf{R}$ are outputs.
Suppose we fit a model for this regression problem by solving the elastic-net problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N (ax_i + b - y_i)^2 + \lambda |a| + \alpha a^2,
\end{array}
\label{eq:trainlinear}
\end{equation}
where $\lambda,\alpha>0$ are hyper-parameters.
We hope that the test loss $\mathcal{L}^{\mathrm{test}}(a,b) =
\frac{1}{M}\sum_{i=1}^M (a\tilde x_i + b - \tilde y_i)^2$ is small, where
$(\tilde x_i, \tilde y_i)_{i=1}^{M}$ is our test set.
First, we set up our problem, where $\{x_i, y_i\}_{i=1}^N$, $\lambda$, and $\alpha$ are our parameters.
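For a single-feature toy dataset, the same elastic-net objective can be minimized with a generic optimizer to see the shrinkage effect on $a$ (a hedged sketch, independent of the cvxpylayers code above; the data here are synthetic):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(size=40)
y = 2.0 * x + 0.3 * rng.normal(size=40)   # true slope is 2
lam, alpha = 0.1, 0.1

def objective(p):
    a, b = p
    return np.mean((a * x + b - y)**2) + lam * abs(a) + alpha * a**2

# Nelder-Mead handles the non-smooth |a| term without gradients.
a_hat, b_hat = minimize(objective, np.zeros(2), method='Nelder-Mead').x
```

The regularizers shrink the fitted slope below the true value of 2, as expected.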
End of explanation
"""
|
amkatrutsa/MIPT-Opt | Spring2017-2019/18-LinProgPrimalInterior/Seminar18en.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.optimize as scopt
import scipy.linalg as sclin
USE_COLAB = False
if not USE_COLAB:
plt.rc("text", usetex=True)
def NewtonLinConstraintsFeasible(f, gradf, hessf, A, x0, line_search, linsys_solver, args=(),
disp=False, disp_conv=False, callback=None, tol=1e-6, max_iter=100, **kwargs):
x = x0.copy()
n = x0.shape[0]
iteration = 0
lam = np.random.randn(A.shape[0])
while True:
gradient, hess = gradf(x, *args), hessf(x, *args)
h = linsys_solver(hess, A, gradient)
descent_dir = h[:n]
decrement = descent_dir.dot(hessf(x, *args).dot(descent_dir))
if decrement < tol:
if disp_conv:
print("Tolerance achieved! Decrement = {}".format(decrement))
break
alpha = line_search(x, descent_dir, f, gradf, args, **kwargs)
if alpha < 1e-16:
if disp_conv:
print("Step is too small!")
x = x + alpha * descent_dir
if callback is not None:
callback((descent_dir, x))
iteration += 1
if disp:
print("Current function val = {}".format(f(x, *args)))
print("Newton decrement = {}".format(decrement))
if iteration >= max_iter:
if disp_conv:
print("Maxiter exceeds!")
break
res = {"x": x, "num_iter": iteration, "tol": decrement}
return res
def simple_solver(hess, A, gradient):
n = hess.shape[0]
n_lin_row, n_lin_col = A.shape
modified_hess = np.zeros((n + n_lin_row, n + n_lin_row))
modified_hess[:n, :n] = hess
modified_hess[n:n + n_lin_row, :n_lin_col] = A
modified_hess[:n_lin_col, n:n + n_lin_row] = A.T
rhs = np.zeros(n + n_lin_row)
rhs[:n] = -gradient
h = np.linalg.solve(modified_hess, rhs)
return h
def elimination_solver(hess, A, gradient):
inv_hess_diag = np.divide(1.0, np.diag(hess))
inv_hess_grad = np.multiply(-inv_hess_diag, gradient)
rhs = A.dot(inv_hess_grad)
L_inv_hess = np.sqrt(inv_hess_diag)
AL_inv_hess = A * L_inv_hess
# print(AL_inv_hess.shape)
S = AL_inv_hess.dot(AL_inv_hess.T)
cho_S = sclin.cho_factor(S)
w = sclin.cho_solve(cho_S, rhs)
# w = np.linalg.solve(S, rhs)
v = np.subtract(inv_hess_grad, np.multiply(inv_hess_diag, A.T.dot(w)))
# h = np.zeros(hess.shape[1] + A.shape[0])
# h[:hess.shape[1]] = v
# h[hess.shape[1]:hess.shape[1] + A.shape[0]] = w
return v
def backtracking(x, descent_dir, f, grad_f, args, **kwargs):
beta1 = kwargs["beta1"]
rho = kwargs["rho"]
alpha = 1
while f(x + alpha * descent_dir, *args) >= f(x, *args) + beta1 * alpha * grad_f(x, *args).dot(descent_dir) \
or np.isnan(f(x + alpha * descent_dir, *args)):
alpha *= rho
if alpha < 1e-16:
break
return alpha
def generate_KleeMinty_test_problem(n):
c = np.array([2**i for i in range(n)])
c = -c[::-1]
bounds = [(0, None) for i in range(n)]
b = np.array([5**(i+1) for i in range(n)])
a = np.array([1] + [2**(i+1) for i in range(1, n)])
A = np.zeros((n, n))
for i in range(n):
A[i:, i] = a[:n-i]
return c, A, b, bounds
n = 7
c, A, b, _ = generate_KleeMinty_test_problem(n)
eps = 1e-10
def f(x, c, mu):
n = c.shape[0]
return c.dot(x[:n]) - mu * np.sum(np.log(eps + x))
def gradf(x, c, mu):
grad = np.zeros(len(x))
n = c.shape[0]
grad[:n] = c - mu / (eps + x[:n])
grad[n:] = -mu / (eps + x[n:])
return grad
def hessf(x, c, mu):
return mu * np.diag(1. / (eps + x)**2)
A_lin = np.zeros((n, n + A.shape[0]))
A_lin[:n, :n] = A
A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
mu = 0.1
"""
Explanation: Seminar 18
Linear programming. Primal interior point method
Reminder
Linear programming problem statement
Examples of application
Simplex mathod and its drawbacks
Review of existing packages to solve linear programming problems
Solving linear programming problems is by now a mature technology, so usually you just need to call the appropriate function from a suitable package
A review is available here
A comparison is available here
A bit of history
In 1979, L. Khachiyan proposed the ellipsoid method and proved that it solves any linear programming problem in polynomial time.
An article about this result was published on the front page of the New York Times on November 7, 1979.
However, the complexity of the ellipsoid method is $O(n^6 L^2)$, and it was slower than the simplex method on practical problems
In 1984, N. Karmarkar proposed another polynomial-time method for solving linear programming problems, which was significantly faster than the ellipsoid method, namely $O(n^{3.5} L^2)$.
One of the recent papers on this topic proposes a method that solves a linear programming problem in $\tilde{O}\left(\sqrt{\mathrm{rank}(A)}L\right)$ time.
$f(n)\in\tilde{O}(g(n)) \leftrightarrow \exists k:f(n) \in O(g(n) \log^kg(n))$.
Both the ellipsoid method and Karmarkar's method are primal-dual, or interior point, methods, which we will cover further...
Alternatives to the simplex method
The simplex method searches for a solution among the vertices of the feasible polytope
Another approach is to move from the interior of the feasible set towards the vertex that solves the problem
Therefore, such methods are called interior point methods
Trade-off between the number of iterations and the cost of one iteration
One of the main watersheds in numerical optimization (cf. gradient descent and Newton's method)
Linear programming can be solved in polynomial time by performing a small number of costly iterations
Later we will also study methods that perform a large number of cheap iterations
Duality in linear programming
Theorem.
If the primal (dual) problem has a finite solution, then the dual (primal) problem also has a finite solution, and their optimal objective values are equal
If the primal (dual) problem is unbounded, then the feasible set of the dual (primal) problem is empty.
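A quick numerical illustration of strong duality (a sketch on a small hypothetical instance, not part of the original seminar, using scipy.optimize.linprog):

```python
import numpy as np
from scipy.optimize import linprog

# primal:  min c^T x   s.t.  Ax >= b, x >= 0
# dual:    max b^T y   s.t.  A^T y <= c, y >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([2.0, 3.0])

# linprog expects "<=" constraints, so Ax >= b becomes -Ax <= -b
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# the dual maximization becomes a minimization of -b^T y
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

duality_gap = primal.fun - (-dual.fun)
```

Both problems are feasible and bounded here, so by the theorem above the optimal values coincide and the duality gap is zero.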
Interior point method idea
Original problem
\begin{align}
&\min_x c^{\top}x \
\text{s.t. } & Ax = b\
& x_i \geq 0, \; i = 1,\dots, n
\end{align}
Intermediate problem
\begin{align}
&\min_x c^{\top}x {\color{red}{- \mu \sum\limits_{i=1}^n \ln x_i}} \
\text{s.t. } & Ax = b\
\end{align}
for some $\mu > 0$
Barrier function
Definition. The function $B(x, \mu) = -\mu\ln x$ is called a barrier for the constraint $x \geq 0$.
More details will be given in the seminar on general convex nonlinear optimization problems...
What happened?
We converted the linear problem into a nonlinear one
We moved the inequality constraints into the objective function
We introduced an additional parameter $\mu$
Why is this good?
We obtain a problem with equality constraints only $\to$ the optimality conditions simplify; in particular, we
Exclude the complementary slackness conditions from the optimality conditions
Exclude the non-negativity of the Lagrange multipliers for the inequality constraints
Optimality conditions
Lagrangian: $L = c^{\top}x - \mu\sum\limits_{i=1}^n \ln x_i + \lambda^{\top}(Ax - b)$
Stationary point $L$:
$$
c - \mu X^{-1}e + A^{\top}\lambda = 0,
$$
where $X = \mathrm{diag}(x_1, \dots, x_n)$ and $e = (1, \dots, 1)^{\top}$
- Equality constraints: $Ax = b$
Let $s = \mu X^{-1}e$, then optimality conditions can be re-written as follows:
- $A^{\top}\lambda + c - s = 0 $
- $Xs = {\color{red}{\mu e}}$
- $Ax = b$
Also $x > 0 \Rightarrow s > 0$
Comparison with optimality conditions for original problem
Lagrangian: $L = c^{\top}x + \lambda^{\top}(Ax - b) - s^{\top}x$
First order optimality condition: $c + A^{\top}\lambda - s = 0$
Primal feasibility: $Ax = b, \; x \geq 0$
Dual feasibility: $s \geq 0$
Complementary slackness: $s_ix_i = 0$
After re-arrangement
$A^{\top}\lambda + c - s = 0$
$Ax = b$
$Xs = {\color{red}{0}}$
$x \geq 0, \; s \geq 0$
Summary
Introducing a barrier function with coefficient $\mu$ is equivalent to relaxing, with parameter $\mu$, the complementary slackness conditions of the original problem.
As $\mu \to 0$, the solutions coincide!
Idea: solve the barrier problems iteratively while decreasing $\mu$. The sequence of solutions converges to the solution vertex along a trajectory of points lying in the interior of the feasible polytope
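To see the effect of decreasing $\mu$ concretely, here is a small hypothetical illustration (not from the original seminar): minimize $c^{\top}x - \mu\sum_i \ln x_i$ over the probability simplex $x_1 + x_2 = 1$. Stationarity of the Lagrangian gives $x_i = \mu/(c_i + \lambda)$, and $\lambda$ is found from the equality constraint by one-dimensional root finding:

```python
import numpy as np
from scipy.optimize import brentq

c = np.array([1.0, 2.0])

def central_path_point(mu):
    # stationarity of c^T x - mu*sum(log x) + lam*(sum(x) - 1) gives x_i = mu/(c_i + lam);
    # pick lam so that sum(x) = 1 (the root is bracketed: f -> +inf as lam -> -min(c)+)
    f = lambda lam: np.sum(mu / (c + lam)) - 1.0
    lam = brentq(f, -np.min(c) + 1e-12, 1e6)
    return mu / (c + lam)

for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = central_path_point(mu)
```

As $\mu \to 0$ the point moves along the central path towards the vertex $(1, 0)$, which minimizes $c^{\top}x$ over the simplex.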
General scheme
python
def GeneralInteriorPointLP(c, A, b, x0, mu0, rho, tol):
x = x0
mu = mu0
e = np.ones(c.shape[0])
while True:
primal_var, dual_var = StepInsideFeasibleSet(c, A, b, x, mu)
mu *= rho
if converge(primal_var, dual_var, c, A, b, tol) and mu < tol:
break
return x
How to solve the problem with barrier function?
Primal method - the next slide
Primal-dual method - some weeks later
Primal method
Remind original problem:
\begin{align}
&\min_x c^{\top}x - \mu \sum\limits_{i=1}^n \ln x_i \
\text{s.t. } & Ax = b\
\end{align}
Idea: approximate the objective function up to second order, as in Newton's method.
Implementation
In the $(k+1)$-th iteration one has to solve the following problem:
\begin{align}
&\min_p \frac{1}{2}p^{\top}Hp + g^{\top}p\
\text{s.t. } & A(x_k + p) = b,\
\end{align}
where $H = \mu X^{-2}$ is the Hessian and $g = c - \mu X^{-1}e$ is the gradient of the barrier objective at $x_k$.
KKT again
KKT conditions for this problem
- $Hp + g + A^{\top}\lambda = 0$
- $Ap = 0$
or
$$\begin{bmatrix} H & A^{\top}\ A & 0 \end{bmatrix} \begin{bmatrix} p\ \lambda \end{bmatrix} = \begin{bmatrix} -g \ 0 \end{bmatrix}$$
From the first block row:
$$
\mu X^{-2}p + A^{\top}\lambda = -(c - \mu X^{-1}e)
$$
Multiplying by $AX^{2}$ and using $Ap = 0$:
$$
AX^{2}A^{\top}\lambda = \mu AXe - AX^{2}c = \mu Ax - AX^{2}c
$$
Since $X \in \mathbb{S}^n_{++}$ and $A$ has full row rank, this equation has a unique solution $\lambda^*$.
Find direction $p$
$$
\mu p = \mu Xe - X^{2}(c + A^{\top}\lambda^*)
$$
$$
p = x - \frac{1}{\mu}X^{2}(c + A^{\top}\lambda^*)
$$
How to solve KKT system
Direct method: compose $(n + m) \times (n + m)$ matrix and explicitly solve system - $\frac{1}{3}(n + m)^3$ flops
Sequential variable elimination:
$Hp + A^{\top}\lambda = -g$, $p = -H^{-1}(g + A^{\top}\lambda)$
$Ap = -AH^{-1}(g + A^{\top}\lambda) = -AH^{-1}A^{\top}\lambda - AH^{-1}g = 0$
where the matrix $-AH^{-1}A^{\top}$ is called the Schur complement of the matrix $H$.
3. Method for sequential variable elimination
- Compute $H^{-1}g$ and $H^{-1}A^{\top}$ - $f_H + (m+1)s_H$ flops
- Compute Schur complement $-AH^{-1}A^{\top}$ - $\mathcal{O}(m^2n)$
- Find $\lambda$ - $\frac{1}{3}m^3$ flops
- Find $p$ - $s_H + \mathcal{O}(mn)$ flops
4. Summary: $f_H + ms_H + \frac{m^3}{3} + \mathcal{O}(m^2n)$ flops, which is much cheaper than the direct method
Use structure of $H$
In our case $H = \mu X^{-2}$ is a diagonal matrix!
$f_H$ - $n$ flops
$s_H$ - $n$ flops
Total complexity $\frac{m^3}{3} + \mathcal{O}(m^2n)$ flops, where $m \ll n$
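The two ways of solving the KKT system can be checked against each other on a small random instance. A minimal sketch (hypothetical dimensions, diagonal positive definite $H$ as in our case) mirroring `simple_solver` and `elimination_solver` above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3
H = np.diag(rng.uniform(1.0, 2.0, n))    # diagonal, positive definite
A = rng.standard_normal((m, n))          # full row rank almost surely
g = rng.standard_normal(n)

# direct method: solve the full (n+m) x (n+m) block KKT system
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-g, np.zeros(m)])
sol = np.linalg.solve(K, rhs)
p_direct, lam_direct = sol[:n], sol[n:]

# elimination via the Schur complement S = A H^{-1} A^T (H^{-1} is cheap: H is diagonal)
Hinv_diag = 1.0 / np.diag(H)
S = (A * Hinv_diag) @ A.T
lam = np.linalg.solve(S, -A @ (Hinv_diag * g))
p = -Hinv_diag * (g + A.T @ lam)
```

Both routes return the same Newton direction $p$ (with $Ap = 0$) and the same multipliers, while the elimination route never forms the $(n+m) \times (n+m)$ matrix.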
Step size search
Standard backtracking rules
Constraint $A(x_k + \alpha p) = b$ is satisfied automatically
Pseudocode of the primal barrier method
python
def PrimalBarrierLP(c, A, b, x0, mu0, rho, tol):
x = x0
mu = mu0
e = np.ones(x.shape[0])
while True:
p, lam = ComputeNewtonDirection(c, x, A, mu)
alpha = line_search(p, mu, c, x)
x = x + alpha * p
mu = rho * mu
if mu < tol and np.linalg.norm(x.dot(c - A.T.dot(lam)) - mu * e) < tol:
break
return x
Comparison of the simplex method and the primal barrier method
Klee-Minty's example from the last seminar
\begin{align}
& \max_{x \in \mathbb{R}^n} 2^{n-1}x_1 + 2^{n-2}x_2 + \dots + 2x_{n-1} + x_n\
\text{s.t. } & x_1 \leq 5\
& 4x_1 + x_2 \leq 25\
& 8x_1 + 4x_2 + x_3 \leq 125\
& \ldots\
& 2^n x_1 + 2^{n-1}x_2 + 2^{n-2}x_3 + \ldots + x_n \leq 5^n\
& x \geq 0
\end{align}
What is the complexity of the simplex method on this problem?
Reduction to standard form
$$
\begin{align}
& \min_{x, \; z} -c^{\top}x \
\text{s.t. } & Ax + z = b\
& x \geq 0, \quad z \geq 0
\end{align}
$$
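As a small sketch of this reduction (illustrative values only, taken from the first two Klee-Minty constraints with $n = 2$): appending slack variables $z$ turns $Ax \leq b$ into the equality system $[A \; I]\,[x; z] = b$ with $x, z \geq 0$.

```python
import numpy as np

# max c^T x  s.t.  Ax <= b, x >= 0   (first two Klee-Minty constraints)
A = np.array([[1.0, 0.0],
              [4.0, 1.0]])
b = np.array([5.0, 25.0])
c = np.array([2.0, 1.0])

# standard form: min -c^T x  s.t.  [A I][x; z] = b,  x, z >= 0
A_std = np.hstack([A, np.eye(2)])
c_std = np.concatenate([-c, np.zeros(2)])

x = np.array([1.0, 1.0])        # any interior point with Ax < b
z = b - A @ x                   # slack variables are then non-negative
xz = np.concatenate([x, z])
```

This is exactly how `A_lin` and the initial guess `x0` are built in the code cells of this notebook.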
Compare running time of simplex method and primal barrier method
End of explanation
"""
scopt.check_grad(f, gradf, np.random.rand(n), c, mu)
"""
Explanation: Check gradient
End of explanation
"""
x0 = np.zeros(2*n)
x0[:n] = np.random.rand(n)
x0[n:2*n] = b - A.dot(x0[:n])
print(np.linalg.norm(A_lin.dot(x0) - b))
print(np.sum(x0 <= 1e-6))
"""
Explanation: Select an initial guess that lies in the feasible set and in the domain of the objective
End of explanation
"""
hist_conv = []
def cl(x):
hist_conv.append(x)
res = NewtonLinConstraintsFeasible(f, gradf, hessf, A_lin, x0, backtracking, elimination_solver, (c, mu), callback=cl,
max_iter=2000, beta1=0.1, rho=0.7)
print("Decrement value = {}".format(res["tol"]))
fstar = f(res["x"], c, mu)
hist_conv_f = [np.abs(fstar - f(descdir_x[1], c, mu)) for descdir_x in hist_conv]
plt.figure(figsize=(12, 5))
plt.subplot(1,2,1)
plt.semilogy(hist_conv_f)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$f^* - f_k$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
hist_conv_x = [np.linalg.norm(res["x"] - x[1]) for x in hist_conv]
plt.subplot(1,2,2)
plt.semilogy(hist_conv_x)
plt.xlabel("Number of iteration, $k$", fontsize=18)
plt.ylabel("$\| x_k - x^*\|_2$", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
plt.tight_layout()
"""
Explanation: Check convergence
End of explanation
"""
def BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, linesearch, linsys_solver,
tol=1e-8, max_iter=500, disp_conv=False, **kwargs):
x = x0.copy()
n = x0.shape[0]
mu = mu0
while True:
res = NewtonLinConstraintsFeasible(f, gradf, hessf, A, x, linesearch, linsys_solver, (c, mu),
disp_conv=disp_conv, max_iter=max_iter, beta1=0.01, rho=0.5)
x = res["x"].copy()
if n * mu < tol:
break
mu *= rho_mu
return x
mu0 = 5
rho_mu = 0.5
x = BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, elimination_solver, max_iter=100)
%timeit BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver, max_iter=100)
print(x[:n])
"""
Explanation: Implementation of primal barrier method
End of explanation
"""
mu0 = 2
rho_mu = 0.5
n_list = range(3, 10)
n_iters = np.zeros(len(n_list))
times_simplex = np.zeros(len(n_list))
times_barrier_simple = np.zeros(len(n_list))
for i, n in enumerate(n_list):
print("Current dimension = {}".format(n))
c, A, b, bounds = generate_KleeMinty_test_problem(n)
res = scopt.linprog(c, A, b, bounds=bounds, options={"maxiter": 2**max(n_list) + 1})
time = %timeit -o -q scopt.linprog(c, A, b, bounds=bounds, options={"maxiter": 2**max(n_list) + 1})
times_simplex[i] = time.best
A_lin = np.zeros((n, n + A.shape[0]))
A_lin[:n, :n] = A
A_lin[:n, n:n + A.shape[0]] = np.eye(A.shape[0])
x0 = np.zeros(2*n)
x0[:n] = np.random.rand(n)
x0[n:2*n] = b - A.dot(x0[:n])
time = %timeit -o -q BarrierPrimalLinConstr(f, gradf, hessf, A_lin, c, x0, mu0, rho_mu, backtracking, simple_solver)
times_barrier_simple[i] = time.best
plt.figure(figsize=(8, 5))
plt.semilogy(n_list, times_simplex, label="Simplex")
plt.semilogy(n_list, times_barrier_simple, label="Primal barrier")
plt.legend(fontsize=18)
plt.xlabel("Dimension, $n$", fontsize=18)
plt.ylabel("Computation time, sec.", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
"""
Explanation: Running time comparison
End of explanation
"""
def generate_linprog_problem(n, m=10):
x = np.random.rand(n)
A = np.random.randn(m, n)
b = A.dot(x)
c = np.random.randn(n)
c[c < 0] = 1
return c, A, b, x
n = 20
c, A, b, x0 = generate_linprog_problem(n)
res = scopt.linprog(c, A_eq=A, b_eq=b, bounds=[(0, None) for i in range(n)])
m = A.shape[0]
# x0 = np.random.rand(n)
# resid = b - A[:, m:].dot(x0[m:])
# x0[:m] = np.linalg.solve(A[:m, :m], resid)
print(np.linalg.norm(A.dot(x0) - b))
mu0 = 1
rho_mu = 0.5
x_bar_simple = BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking,
simple_solver, max_iter=100, tol=1e-8)
x_bar_elim = BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking,
elimination_solver, tol=1e-8)
print(np.linalg.norm(res["x"] - x_bar_simple))
print(np.linalg.norm(res["x"] - x_bar_elim))
print(c.dot(res["x"]))
print(c.dot(x_bar_elim))
n_list = [10*i for i in range(2, 63, 10)]
times_simple = np.zeros(len(n_list))
times_elim = np.zeros(len(n_list))
mu0 = 5
rho_mu = 0.5
for i, n in enumerate(n_list):
print("Current dimension = {}".format(n))
c, A, b, x0 = generate_linprog_problem(n)
time = %timeit -o BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, simple_solver)
times_simple[i] = time.average
time = %timeit -o BarrierPrimalLinConstr(f, gradf, hessf, A, c, x0, mu0, rho_mu, backtracking, elimination_solver)
times_elim[i] = time.average
plt.semilogy(n_list, times_elim, label="Elimination solver")
plt.semilogy(n_list, times_simple, label="Simple solver")
# plt.semilogy(n_list, 10**(-6)*np.array(n_list)**3 / 3)
plt.legend(fontsize=18)
plt.xlabel("Dimension, $n$", fontsize=18)
plt.ylabel("Computation time, sec.", fontsize=18)
plt.xticks(fontsize=18)
_ = plt.yticks(fontsize=18)
"""
Explanation: Remarks
It was shown that the primal barrier method is equivalent to Karmarkar's method
It uses information from the primal problem only
The initial guess has to lie inside the feasible set, which is a separate problem
Comparison of methods for solving the linear system inside Newton's method:
Compose the matrix explicitly
Use the block structure and sequential variable elimination
End of explanation
"""
|
regata/dbda2e_py | chapters/10.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import seaborn as sns
import pystan as ps
import numpy as np
model_code = """
data {
int N;
int y[N];
real omega;
real kappa;
}
parameters {
real<lower=0,upper=1> theta;
}
model {
real alpha;
real beta;
alpha = omega*(kappa-2) + 1;
beta = (1-omega)*(kappa-2) + 1;
for (i in 1:N) {
y[i] ~ bernoulli(theta);
}
theta ~ beta(alpha, beta);
}
"""
"""
Explanation: Model Comparison and Hierarchical Modeling
10.3.1. Non-hierarchical MCMC computation of each model’s marginal likelihood
10.3.2. Hierarchical MCMC computation of relative model probability
10.3.2.1 Using pseudo-priors to reduce autocorrelation
Exercise 10.1
Exercise 10.2
Exercise 10.3
10.3.1. Non-hierarchical MCMC computation of each model’s marginal likelihood
End of explanation
"""
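Before estimating the marginal likelihood by MCMC below, note that for a Bernoulli likelihood with a beta prior it has a closed form, $p(D) = B(\alpha+z, \beta+N-z)/B(\alpha, \beta)$, which the MCMC estimate should match. A quick sanity-check sketch (this cell is an addition, using the same $\omega$, $\kappa$, and data as the cells below):

```python
import numpy as np
from scipy.special import betaln

def marginal_likelihood(z, N, omega, kappa):
    # closed-form p(D) for a Bernoulli likelihood with a beta(alpha, beta) prior
    alpha = omega * (kappa - 2) + 1
    beta = (1 - omega) * (kappa - 2) + 1
    return np.exp(betaln(alpha + z, beta + N - z) - betaln(alpha, beta))

# z = 6 heads out of N = 9 flips, as in the cells below
p_head = marginal_likelihood(6, 9, 0.75, 12)   # head-biased model
p_tail = marginal_likelihood(6, 9, 0.25, 12)   # tail-biased model
```

The head-biased model receives the larger marginal likelihood for 6 heads in 9 flips, in agreement with the MCMC values `prob_D1` and `prob_D2` reported below.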
observations = np.repeat([0, 1], [3, 6])
N = len(observations)
omega = 0.75
# omega = 0.25
kappa = 12
data = {
'N': N,
'y': observations,
'omega': omega,
'kappa': kappa
}
fit = ps.stan(model_code=model_code, data=data, iter=11000, warmup=1000, chains=4)
samples = fit.extract(permuted=False, inc_warmup=False)
samples.shape
# Check MCMC convergence
f, ax = plt.subplots(1,1,figsize=(10,5))
for chain_id in range(samples.shape[1]):
smpl = samples[:, chain_id, fit.flatnames.index('theta')]
label = 'chain %d' % chain_id
sns.kdeplot(smpl, label=label, ax=ax)
ax.set_title('theta')
plt.legend()
plt.show()
theta = fit['theta']
from scipy.stats import beta
# Compute mean and standard deviation of MCMC values:
theta_mean = theta.mean()
theta_std = theta.std()
# Convert to a,b shape parameters for use in h(theta) function:
post_a = theta_mean * ( theta_mean*(1-theta_mean)/theta_std**2 - 1 )
post_b = (1-theta_mean) * ( theta_mean*(1-theta_mean)/theta_std**2 - 1 )
h = beta.pdf(theta, post_a, post_b)
N = len(observations)
z = observations.sum()
likelihood = theta**z * (1-theta)**(N-z)
prior_a = omega*(kappa-2)+1
prior_b = (1-omega)*(kappa-2)+1
prior = beta.pdf(theta, prior_a , prior_b)
prob_D_inv = (h / (likelihood*prior)).mean()
prob_D = 1/prob_D_inv
prob_D
prob_D1 = 0.0023318808086998967 # head-biased model
prob_D2 = 0.0004993362445442647 # tail-biased model
"""
Explanation: Sample head-biased model
End of explanation
"""
import pymc3 as pm
from theano import shared
from dbda2e_utils import plotPost
observations = np.repeat([0, 1], [3, 6])
z = observations.sum()
N = len(observations)
kappa = 12
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta'])
axs = pm.traceplot(trace, figsize=(10,4), varnames=['model'])
trace_df = pm.trace_to_dataframe(trace)
model0_mask = trace_df['model'] == 0
theta0 = trace_df[model0_mask]['theta'].values
theta1 = trace_df[~model0_mask]['theta'].values
f, axs = plt.subplots(2,1,figsize=(10,10))
plotPost(theta0, ax=axs[0], title='theta 0')
plotPost(theta1, ax=axs[1], title='theta 1')
"""
Explanation: 10.3.2. Hierarchical MCMC computation of relative model probability
Currently Stan does not support discrete parameters, so we switch to PyMC3
End of explanation
"""
observations = np.repeat([0, 1], [3, 6])
z = observations.sum()
N = len(observations)
kappa = 12
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta0 = pm.Beta('theta0', alpha[0], beta[0], transform=None)
theta1 = pm.Beta('theta1', alpha[1], beta[1], transform=None)
theta = pm.switch(pm.eq(m, 0), theta0, theta1)
theta = pm.Deterministic('theta', theta)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta'])
axs = pm.traceplot(trace, figsize=(10,4), varnames=['model'])
trace_df = pm.trace_to_dataframe(trace)
theta0 = trace_df['theta0'].values
theta1 = trace_df['theta1'].values
f, axs = plt.subplots(2,1,figsize=(10,10))
plotPost(theta0, ax=axs[0], title='theta 0')
plotPost(theta1, ax=axs[1], title='theta 1')
_ = pm.plots.autocorrplot(trace, max_lag=20, figsize=(15, 5), varnames=['model'])
_ = pm.plots.autocorrplot(trace, max_lag=20, figsize=(15, 5), varnames=['theta1'])
"""
Explanation: 10.3.2.1 Using pseudo-priors to reduce autocorrelation
Using switch statement to combine models
End of explanation
"""
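Why pseudo-priors help: when the sampled model index rarely switches, the chain for `model` is highly autocorrelated and mixes poorly. A minimal sketch (hypothetical data, plain NumPy) of measuring lag-1 autocorrelation for a "sticky" 0/1 indicator chain versus an independent one:

```python
import numpy as np

def autocorr(x, lag):
    # sample autocorrelation of a sequence at a given lag
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rng = np.random.default_rng(1)
sticky = np.zeros(5000, dtype=int)
for t in range(1, 5000):
    # the indicator flips only with probability 0.05, mimicking a poorly mixing model index
    sticky[t] = sticky[t - 1] if rng.random() < 0.95 else 1 - sticky[t - 1]
iid = rng.integers(0, 2, 5000)   # a well-mixed reference chain
```

Pseudo-priors make the currently unused model's parameters stay near plausible values, so model switches are accepted more often and the `model` chain behaves more like `iid` than like `sticky`.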
observations = np.repeat([0, 1], [13, 17])
z = observations.sum()
N = len(observations)
omega0 = np.array([0.10, 0.40]) # [true, pseudo] prior values
kappa0 = np.array([20, 50]) # [true, pseudo] prior values
omega1 = np.array([0.70, 0.90]) # [pseudo, true] prior value
kappa1 = np.array([50, 20]) # [pseudo, true] prior value
alpha0 = omega0*(kappa0-2) + 1
beta0 = (1-omega0)*(kappa0-2) + 1
# wrap into theano tensor
alpha0 = shared(alpha0, name='alpha0')
beta0 = shared(beta0, name='beta0')
alpha1 = omega1*(kappa1-2) + 1
beta1 = (1-omega1)*(kappa1-2) + 1
# wrap into theano tensor
alpha1 = shared(alpha1, name='alpha1')
beta1 = shared(beta1, name='beta1')
with pm.Model() as model:
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta0 = pm.Beta('theta0', alpha0[m], beta0[m], transform=None)
theta1 = pm.Beta('theta1', alpha1[m], beta1[m], transform=None)
theta = pm.switch(pm.eq(m, 0), theta0, theta1)
theta = pm.Deterministic('theta', theta)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta0', 'theta1', 'model'])
trace_df = pm.trace_to_dataframe(trace)
model0_mask = trace_df['model'] == 0
theta0_true = trace_df[model0_mask]['theta0'].values
theta0_pseu = trace_df[~model0_mask]['theta0'].values
model1_mask = trace_df['model'] == 1
theta1_true = trace_df[model1_mask]['theta1'].values
theta1_pseu = trace_df[~model1_mask]['theta1'].values
f, axs = plt.subplots(2,2,figsize=(10,10), sharex=True)
plotPost(theta0_true, ax=axs[0,0], title='theta 0 true')
plotPost(theta0_pseu, ax=axs[1,0], title='theta 0 pseudo')
plotPost(theta1_pseu, ax=axs[0,1], title='theta 1 pseudo')
plotPost(theta1_true, ax=axs[1,1], title='theta 1 true')
model0_mask.sum() / (model1_mask.sum() + model0_mask.sum())
"""
Explanation: Using pseudo priors
End of explanation
"""
z = 7
N = 10
from scipy.special import betaln
def p_D(z, N, omega, kappa):
alpha = omega*(kappa - 2) + 1
beta = (1-omega)*(kappa - 2) + 1
postr = betaln(alpha+z, beta+N-z)
prior = betaln(alpha, beta)
return np.exp(postr - prior)
"""
Explanation: Exercise 10.1
Purpose: To illustrate the fact that models with more distinctive predictions can be more easily discriminated.
End of explanation
"""
omega1 = 0.25
omega2 = 0.75
kappa = 6
p_d1 = p_D(z, N, omega1, kappa)
p_d2 = p_D(z, N, omega2, kappa)
BF = p_d1 / p_d2
prior1 = prior2 = 0.5
posterior_odds = BF * prior1 / prior2
p_m1 = posterior_odds / (1 + posterior_odds)
p_m2 = 1 - p_m1
BF, p_m1, p_m2
"""
Explanation: Part A
End of explanation
"""
omega1 = 0.25
omega2 = 0.75
kappa = 202
p_d1 = p_D(z, N, omega1, kappa)
p_d2 = p_D(z, N, omega2, kappa)
BF = p_d1 / p_d2
prior1 = prior2 = 0.5
posterior_odds = BF * prior1 / prior2
p_m1 = posterior_odds / (1 + posterior_odds)
p_m2 = 1 - p_m1
BF, p_m1, p_m2
"""
Explanation: Part B
End of explanation
"""
z = 6
N = 9
kappa = 12
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
# z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta', 'model'])
trace_df = pm.trace_to_dataframe(trace)
model0_mask = trace_df['model'] == 0
theta0 = trace_df[model0_mask]['theta'].values
theta1 = trace_df[~model0_mask]['theta'].values
f, axs = plt.subplots(1,2,figsize=(10,5), sharex=True)
plotPost(theta0, ax=axs[0], title='theta 0')
plotPost(theta1, ax=axs[1], title='theta 1')
z = 6
N = 9
kappa = 12
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta', 'model'])
trace_df = pm.trace_to_dataframe(trace)
model0_mask = trace_df['model'] == 0
theta0 = trace_df[model0_mask]['theta'].values
theta1 = trace_df[~model0_mask]['theta'].values
f, axs = plt.subplots(1,2,figsize=(10,5), sharex=True)
plotPost(theta0, ax=axs[0], title='theta 0')
plotPost(theta1, ax=axs[1], title='theta 1')
"""
Explanation: Exercise 10.2
Purpose: To be sure you really understand the JAGS program for Figure 10.4.
Part A
End of explanation
"""
f, ax = plt.subplots(1,1,figsize=(5,5))
plotPost(trace_df['theta'].values, ax=ax, title='theta')
"""
Explanation: Part B
End of explanation
"""
z = 7
N = 10
kappa = 6
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta', 'model'])
trace_df = pm.trace_to_dataframe(trace)
counts = trace_df['model'].value_counts() / trace_df.shape[0]
counts
"""
Explanation: Part C
kappa = 6
End of explanation
"""
z = 7
N = 10
kappa = 202
omega = np.array([0.75, 0.25])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta', 'model'])
trace_df = pm.trace_to_dataframe(trace)
counts = trace_df['model'].value_counts() / trace_df.shape[0]
counts
"""
Explanation: kappa = 202
End of explanation
"""
z = 7
N = 10
kappa = 52
omega = np.array([0.75, 0.25])
model_priors = np.array([0.05, 0.95])
with pm.Model() as model:
alpha = omega*(kappa-2) + 1
beta = (1-omega)*(kappa-2) + 1
# wrap into theano tensor
alpha = shared(alpha, name='alpha')
beta = shared(beta, name='beta')
m = pm.Categorical('model', p=model_priors, transform=None)
theta = pm.Beta('theta', alpha[m], beta[m], transform=None)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(100000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,4), varnames=['theta', 'model'])
trace_df = pm.trace_to_dataframe(trace)
counts = trace_df['model'].value_counts()
model_posteriors = counts / counts.sum()
model_posteriors
# renormalize model posteriors to 50/50 priors
model_posteriors[0] *= model_priors[1]
model_posteriors[1] *= model_priors[0]
model_posteriors /= model_posteriors.sum()
model_posteriors
"""
Explanation: Use unequal model priors so that both models are visited often enough, then renormalize the posterior back to 50/50 priors
End of explanation
"""
N = 30
z = int(0.55 * N + 0.5) # +0.5 for ceiling
omega0 = np.array([0.10, 0.50]) # [true, pseudo] prior values
kappa0 = np.array([20, 2.1]) # [true, pseudo] prior values
omega1 = np.array([0.50, 0.90]) # [pseudo, true] prior value
kappa1 = np.array([2.1, 20]) # [pseudo, true] prior value
alpha0 = omega0*(kappa0-2) + 1
beta0 = (1-omega0)*(kappa0-2) + 1
# wrap into theano tensor
alpha0 = shared(alpha0, name='alpha0')
beta0 = shared(beta0, name='beta0')
alpha1 = omega1*(kappa1-2) + 1
beta1 = (1-omega1)*(kappa1-2) + 1
# wrap into theano tensor
alpha1 = shared(alpha1, name='alpha1')
beta1 = shared(beta1, name='beta1')
with pm.Model() as model:
m = pm.Categorical('model', p=np.array([0.5, 0.5]), transform=None)
theta0 = pm.Beta('theta0', alpha0[m], beta0[m], transform=None)
theta1 = pm.Beta('theta1', alpha1[m], beta1[m], transform=None)
theta = pm.switch(pm.eq(m, 0), theta0, theta1)
theta = pm.Deterministic('theta', theta)
z = pm.Binomial('z', n=N, p=theta, observed=z)
step = pm.BinaryMetropolis([m])
trace = pm.sample(1000, step=step, njobs=4) # burn in
trace = pm.sample(10000, step=step, start=trace[-1], njobs=4)
axs = pm.traceplot(trace, figsize=(10,8), varnames=[theta, 'theta0', 'theta1', 'model'])
trace_df = pm.trace_to_dataframe(trace)
model0_mask = trace_df['model'] == 0
theta0_true = trace_df[model0_mask]['theta0'].values
theta0_pseu = trace_df[~model0_mask]['theta0'].values
model1_mask = trace_df['model'] == 1
theta1_true = trace_df[model1_mask]['theta1'].values
theta1_pseu = trace_df[~model1_mask]['theta1'].values
f, axs = plt.subplots(2,2,figsize=(10,10), sharex=True)
plotPost(theta0_true, ax=axs[0,0], title='theta 0 true')
plotPost(theta0_pseu, ax=axs[1,0], title='theta 0 pseudo')
plotPost(theta1_pseu, ax=axs[0,1], title='theta 1 pseudo')
plotPost(theta1_true, ax=axs[1,1], title='theta 1 true')
model0_mask.sum() / (model1_mask.sum() + model0_mask.sum())
"""
Explanation: Exercise 10.3
Purpose: To get some hands-on experience with pseudo-priors.
Part A
See 10.3.2.1 Using pseudo-priors to reduce autocorrelation for reference
Part B
End of explanation
"""
|
lily-tian/fanfictionstatistics | jupyter_notebooks/profile_analysis.ipynb | mit | # opens raw data
with open ('../data/clean_data/df_profile', 'rb') as fp:
df = pickle.load(fp)
# creates subset of data of active users
df_active = df.loc[df.status != 'inactive', ].copy()
# sets current year
cyear = datetime.datetime.now().year
# sets stop word list for text parsing
stop_word_list = stopwords.words('english')
"""
Explanation: Analysis of the fanfiction readers and writers
To begin, we will examine the userbase of fanfiction.net. Who are they? Where are they from? How active are they? To do so, we will take a random sample of ~10,000 users from the site and break down some of their characteristics.
End of explanation
"""
# examines status of users
status = df['status'].value_counts()
# plots chart
(status/np.sum(status)).plot.bar()
plt.xticks(rotation=0)
plt.show()
"""
Explanation: Account status and volume
Let's begin by examining the types of profiles that make up the userbase: readers, authors, or inactive users. Inactive users are accounts that no longer exist.
End of explanation
"""
# examines when stories first created
entry = [int(row[2]) for row in df_active['join'] if row != 'NA']
entry = pd.Series(entry).value_counts().sort_index()
# plots chart
(entry/np.sum(entry)).plot()
plt.xlim([np.min(entry.index.values), cyear-1])
plt.show()
"""
Explanation: About ~20% of users on the site are authors! That's much higher than expected. The number of inactive profiles is also notably negligible, meaning that once a profile has been created, it is very unlikely to be ever deleted or pulled off the site.
What about how fast people are joining the site?
End of explanation
"""
# examines distribution of top 10 countries
country = df['country'].value_counts()
# plots chart
(country[1:10]/np.sum(country)).plot.bar()
plt.xticks(rotation=90)
plt.show()
"""
Explanation: Fanfiction has been on an upward trend ever since its establishment.
Note the sharp increase in 2011! In a later chapter, we will examine what could have caused that surge...
Countries
The site fanfiction.net allows tracking of location for users. Let's examine where these users are from.
End of explanation
"""
# counts number with written profiles
hasprofile = [row != '' for row in df_active['profile']]
profiletype = pd.Series(hasprofile).value_counts()
# plots chart
profiletype.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
"""
Explanation: We looked at the top 10 countries. Over ~30% are from the United States!
Profile descriptions
Users have the option of including a profile description. This is typically to allow users to introduce themselves to the community. Let's see what percentage of the active userbase has written something.
End of explanation
"""
# examines word count of profiles
profile_wc = [len(row.split()) for row in df_active['profile']]
# plots chart
pd.Series(profile_wc).plot.hist(normed=True, bins=np.arange(1, 500, 1))
plt.show()
# IMPORTANT NOTE: the 'request' package has an error in which it cannot find the end p tag </p>,
# leading to description duplicates in some profiles. Until this error is addressed, an arbitrary
# cutoff is used.
"""
Explanation: It would appear about one-fourth of the users have written something in their profile! For those who have, let's see how many words they have written.
End of explanation
"""
# extracts mostly used common words
profile_wf = [set(row.lower().translate(str.maketrans('', '', string.punctuation)).split())
for row in df_active.loc[hasprofile, 'profile']]
profile_wf = [item for sublist in profile_wf for item in sublist]
profile_wf = pd.Series(profile_wf).value_counts()
# prints table
stop_word_list.append('im')
stop_word_list.append('dont')
print((profile_wf.loc[[row not in stop_word_list
for row in profile_wf.index.values]][:10]/sum(hasprofile)).to_string())
"""
Explanation: The right tail is much longer than depicted, with ~3% of profiles exceeding our 500-word cutoff. However, it would appear the majority of profiles fall well under 100-200 words.
What do these users typically say? Let's see what are the top 10 words commonly used.
End of explanation
"""
name_count = (profile_wf['name'])/sum(hasprofile)
gender_count = (profile_wf['gender']+profile_wf['sex'])/sum(hasprofile)
age_count = (profile_wf['age']+profile_wf['old'])/sum(hasprofile)
print("name: {:.0%}, gender/sex: {:.0%}, age/old: {:.0%}".format(name_count, gender_count, age_count))
"""
Explanation: Above are the words most likely to be used at least once in a profile description. And of course, this is excluding stop words, such as "the", "is", or "are".
From here, we can guess that most profiles mention the user's likes (or dislikes) and something related to reading and/or writing. Very standard words you'd expect in a community like fanfiction.
End of explanation
"""
# finds gender breakdown
profile_text = [list(set(row.lower().translate(str.maketrans('', '', string.punctuation)).split()))
for row in df_active.loc[hasprofile, 'profile']]
female = ['female' in row and 'male' not in row for row in profile_text]
male = ['male' in row and 'female' not in row for row in profile_text]
gender = pd.Series([sum(female), sum(male)], index = ['female', 'male'])
# plots chart
gender.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
"""
Explanation: The word "name" also made the list. We estimate that ~21% of profiles that have something written will include that word. This prompts the question of what other information we might parse out. We looked into "gender"/"sex" found those words existed in ~6% of profiles. As for "age"/"old", ~20%.
Gender
Now this is where things get tricky. So we've discovered that ~25% of users have written something in their profiles. For those users, we want to figure out 1) whether they have disclosed their gender, and 2) have the machine recognize what that gender is. For this exercise, we will only examine users who have disclosed in English.
We will start with the most basic approach, which is to search for the key words "female" and "male", then count how many profiles have one word but not the other.
End of explanation
"""
# sets gender words
female = ['female', 'girl', 'woman', 'daughter', 'mother', 'sister']
male = ['male', 'boy', 'man', 'son', 'father', 'brother']
other = ['agender', 'queer', 'nonbinary', 'transgender', 'trans', 'bigender', 'intergender']
# finds gender ratio
gender_index = [n+'-'+m for m,n in zip(female, male)]
gender = pd.DataFrame(list(map(list, zip(profile_wf[female]/sum(hasprofile),
(-1 * profile_wf[male])/sum(hasprofile)))),
columns = ['female', 'male'])
gender = gender.set_index(pd.Series(gender_index))
# plots chart
gender.plot.bar(stacked=True)
plt.ylabel('')
plt.show()
"""
Explanation: It should be noted that third-gender and non-binary words were also tested but yielded too few observations for analysis. As such, everything we talk about below is only in reference to the female and male genders.
Assuming a user is either "female" or "male", we estimate a rough 3:1 ratio, favoring female.
This may or may not reflect gender ratio of the actual userbase. Here, we are making a lot of assumptions, including:
* Female and male users are equally likely to write something in their profile.
* Female and male users are equally likely to disclose their gender in their profile.
* Female and male users are equally likely to choose the words "female" or "male" for disclosing their gender.
* The words "female" or "male" are primarily used in the context of gender self-identification.
We decided to do a check against other gender-related words that might be potential identifiers. The below reveal the proportion of written profiles that contain each word. Note that one profile can contain both the female-specific and male-specific word.
End of explanation
"""
# examines distribution of favorited
df_active[['fa', 'fs']].plot.hist(normed=True, bins=np.arange(1, 50, 1), alpha = 0.65)
plt.show()
"""
Explanation: Here we have a much more mixed picture.
The problem with the pairs "girl"/"boy" and "woman"/"man" is that they may refer to third parties (e.g., "I once met a girl who..."). However, when those two columns are added together, there does not seem to be any evidence of gender disparity. The final three pairs are even more likely to refer to third parties, and are there for comparison purposes only.
On a slight digression, the distribution of the word choices is interesting: "girl" is much more widely used than "woman" whereas "man" is more widely used than "boy".
All in all, this is a very rough look into the question of gender. Later on, we will try to readdress this question through more sophisticated classification methods.
Age
Finding the age is an even greater challenge than gender. We will return to this question when we focus on text analysis and natural language processing.
Account activity and favorites
The site fanfiction.net allows users to "favorite" authors and stories. These may also serve as bookmarks to help users re-find old stories they have read, and can be thought of as a metric of how active a user is.
After some calculations, we find that approximately one-fourth of users have favorited at least one author, and approximately one-third of users have favorited at least one story. In comparison, only ~2% are in a community.
Let's look at the distributions of the number of authors vs. stories a user would favorite.
End of explanation
"""
# examines distribution of stories written
df_active['st'].plot.hist(normed=True, bins=np.arange(1, np.max(df_active['st']), 1))
plt.show()
"""
Explanation: The right tail is much longer than is depicted above -- in the thousands. However, we cut it off at 50 to better show the distribution.
For the most part, the distributions between the number of favorited authors and the number of favorited stories are similar, with heavy right skew. The number of authors is slightly more concentrated near 1, whereas the number of stories is more dispersed.
All of this implies that users are more lenient in favoriting stories than authors, which aligns with our intuition.
Stories written
Finally, let's look at how many stories users write. We already know that ~20% of the users are authors. Of those authors, how many stories do they tend to publish?
End of explanation
"""
|
justiceamoh/ENGS108 | vagrant/notebooks/Keras Introduction.ipynb | apache-2.0 | %matplotlib inline
# Standard Libraries
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')
"""
Explanation: An Introduction to Keras
In this notebook, I will provide a brief introduction to Keras, a high-level neural networks Python library; it is a wrapper on top of either Theano or TensorFlow.
In the first part, I will implement the simple perceptron model from the first homework. Then I will try the 'hello world' of neural net applications: handwriting recognition from the MNIST database.
MNIST Database
The MNIST database of handwritten digits consists of a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.
It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting.
From Yann Lecun's Website
First, let's import the necessary basic python libraries:
End of explanation
"""
# Load Dataset from Keras Libraries
from keras.datasets import mnist
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = mnist.load_data()
nb_class = 10
# Visualize Some Examples
idx = np.random.choice(X_train.shape[0],6,replace=False)
for i,ix in enumerate(idx):
plt.subplot(231+i)
plt.title('Label is {0}'.format(y_train[ix]))
plt.imshow(X_train[ix], cmap='gray')
plt.axis('off')
plt.tight_layout()
# Reshape X data from 2D to 1D (28x28 to 784)
X_train = X_train.reshape(60000,784).astype('float32')
X_test = X_test.reshape(10000,784).astype('float32')
# Convert Y labels to Categorical One-Hot Format
y_train = np_utils.to_categorical(y_train, nb_class)
y_test = np_utils.to_categorical(y_test, nb_class)
"""
Explanation: Next, let's load the MNIST dataset. Fortunately for us, this is already available in the Keras module (as well as many other common datasets). So we will import from Keras as follows:
End of explanation
"""
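As an aside to the preprocessing cell above: `np_utils.to_categorical` simply one-hot encodes the integer labels. A minimal NumPy sketch of the same idea (illustrative only; this is not Keras' actual implementation):

```python
import numpy as np

def to_one_hot(labels, nb_class):
    """One-hot encode a 1-D array of integer class labels."""
    encoded = np.zeros((len(labels), nb_class))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

# labels 0, 2, 1 with 3 classes -> a 3x3 matrix with a single 1 per row
print(to_one_hot(np.array([0, 2, 1]), 3))
```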
# Import Keras Libraries
from keras.models import Sequential
from keras.layers import Dense, Activation
# Define Model Parameters
nb_feat = 784   # no. of features/columns of input (28x28 pixels, flattened)
L1_units = 256  # no. of nodes in Layer 1
L2_units = 100  # no. of nodes in Layer 2
L3_units = 50   # no. of nodes in Layer 3
nb_class = 10   # no. of output classes
# Neural Network Model
model = Sequential()                                # Sequential network model description
model.add(Dense(L1_units, input_shape=(nb_feat,)))  # Add 1st Dense Layer
model.add(Activation("relu"))                       # Add activation function
model.add(Dense(L2_units))                          # Add 2nd Dense Layer
model.add(Activation("relu"))                       # Add activation function
model.add(Dense(L3_units))                          # Add 3rd Dense Layer
model.add(Activation("relu"))                       # Add activation function
model.add(Dense(nb_class))                          # Add 4th Dense Layer, also the classification layer
model.add(Activation('softmax'))                    # Add softmax classification
"""
Explanation: Neural Network Model
Now, we are going to create our neural network model using Keras' high-level library. The big advantage of Keras is that it abstracts away the low level code of the basic elements of neural networks such as hidden units, layers, activations, optimizers, etc. In other words, you just call Keras functions to create these things for you, without hand coding the details yourself. Underneath it all, Keras is using Tensorflow or Theano, just like you would yourself.
To demonstrate this, let's first decide on a network architecture for our MNIST dataset. Let's use the same architecture as in the multilayer_perceptron.py example we used in class to test our Tensorflow installation. Here is the network architecture:
Network Architecture
Now let's proceed to build this in Keras.
End of explanation
"""
model.compile(loss='categorical_crossentropy',optimizer='rmsprop', metrics=['accuracy'])
"""
Explanation: Once you've described the model in a sequential model, you then proceed to compile the model. The model needs to be compiled because the actual Theano/Tensorflow backend uses C/C++ internally for faster implementations and runs.
Also, while compiling, we define the loss function (optimization criterion) and the kind of optimizer to use. Here I use the popular RMSprop, an adaptive variant of gradient descent.
End of explanation
"""
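For intuition about the "adaptive" behavior mentioned above: RMSprop scales each parameter's step by a running average of its squared gradients. A toy NumPy sketch of the update rule (illustrative only; Keras' optimizer adds further details such as per-weight state management):

```python
import numpy as np

def rmsprop_step(param, grad, cache, lr=0.001, rho=0.9, eps=1e-7):
    """One RMSprop update: adapt the step size per parameter."""
    cache = rho * cache + (1 - rho) * grad ** 2  # running average of squared gradients
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache

# minimize f(x) = x^2 (gradient 2x) starting from x = 5
x, cache = 5.0, 0.0
for _ in range(200):
    x, cache = rmsprop_step(x, 2 * x, cache, lr=0.05)
print(x)  # settles near the minimum at 0
```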
log = model.fit(X_train, y_train, nb_epoch=30, batch_size=32, verbose=2, validation_data=(X_test, y_test))

# Scikit-Learn Machine Learning Utilities
import itertools
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    # annotate each cell with its value
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], '.2f' if normalize else 'd'),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

## Final Accuracy
score = model.evaluate(X_test, y_test)
print('Model Accuracy: {}%'.format(score[1] * 100))

y_pred = model.predict_classes(X_test)
cm = confusion_matrix(np.argmax(y_test, axis=1), y_pred)  ## Confusion matrix
plot_confusion_matrix(cm, ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], normalize=True)
plt.show()

# summarize history for accuracy
plt.figure()
plt.plot(log.history['acc'])
plt.plot(log.history['val_acc'])
plt.title('Model Accuracy'), plt.ylabel('Accuracy'), plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()

# summarize history for loss
plt.figure()
plt.plot(log.history['loss'])
plt.plot(log.history['val_loss'])
plt.title('Model Loss'), plt.ylabel('Loss'), plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
"""
Explanation: Finally, I proceed to train the model with a simple one line command:
End of explanation
"""
|
svdwulp/da-programming-1 | week_07_oefeningen-uitwerkingen.ipynb | gpl-2.0 | # 1a
pi_times_xi = []
for d1 in range(1, 7):
pi_times_xi.append(d1 / 6)
expected_value = sum(pi_times_xi)
print("Expected value:", expected_value)
# 1b
pi_times_xi = []
for d1 in range(1, 7):
for d2 in range(1, 7):
for d3 in range(1, 7):
pi_times_xi.append((d1 + d2 + d3) / (6**3))
expected_value = sum(pi_times_xi)
print("Expected value:", expected_value)
# 1c
pi_times_xi = []
for d1 in range(1, 6):
for d2 in range(1, 9):
for d3 in range(1, 11):
for d4 in range(1, 13):
pi_times_xi.append((d1 + d2 + d3 + d4) * (1/5 * 1/8 * 1/10 * 1/12))
expected_value = sum(pi_times_xi)
print("Expected value:", expected_value)
# 1d: option 1 - compute per die (easy way out)
dice = [6, 6, 8, 10, 12]
expected_values = []
for die in dice:
total = 0
for value in range(1, die + 1):
total += value
expected_values.append(total / die)
print("Expected value:", sum(expected_values))
# 1d: option 2 - generate all combinations without recursion (the hard way)
dice = [6, 6, 8, 10, 12]
values = dice.copy()
pi = 1
for value in values:
pi *= 1 / value
pi_times_xi = 0
while values[0] > 0:
xi = sum(values)
pi_times_xi += xi * pi
i = len(values) - 1
values[i] -= 1
while i > 0 and values[i] == 0:
values[i] = dice[i]
i -= 1
values[i] -= 1
print("Expected value:", pi_times_xi)
# 1d: option 3 - generate all combinations with recursion (the easy hard way)
def dice_combinations(dice):
result = []
if len(dice) > 0:
for i in range(1, dice[0] + 1):
rest_results = dice_combinations(dice[1:])
if len(rest_results) > 0:
for rest_result in rest_results:
result.append((i,) + rest_result)
else:
result.append((i,))
return result
dice = [6, 6, 8, 10, 12]
combinations = dice_combinations(dice)
expected_value = 0
for combination in combinations:
expected_value += sum(combination) / len(combinations)
print("Expected value:", expected_value)
"""
Explanation: A note on the exercises below
When working through the exercises below, try to write programs that are generic. That is, programs that give the correct answer not only for the example given in the exercise, but in every possible case that resembles the example.
So for exercise 1b it is better if your program works not only for the given list A = [[1, 2, 3], [4, 5, 6], [], [7, 8], [9]], but also for other lists that look roughly the same.
Exercise 1
Write a program to determine ...
<span>a.</span> the expected value of the total number of pips when you roll a 6-sided die.
<span>b.</span> the expected value of the total number of pips when you roll three 6-sided dice.
<span>c.</span> the expected value of the total number of pips when you roll four dice: a 5-sided, an 8-sided, a 10-sided and a 12-sided one.
<span>d.</span> the expected value of the total number of pips when you roll a given number of dice, each with a given number of sides.
python
dice = [5, 8, 10, 12] # as in exercise 1c. or:
dice = [6, 6, 8, 10, 12]
End of explanation
"""
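All the brute-force enumerations above agree with the closed form given by linearity of expectation: a fair d-sided die has expected value (d + 1) / 2, so the expected total is just the sum over the dice. A compact cross-check, using the same `dice` list convention as above:

```python
def expected_sum(dice):
    """Expected total pip count of a roll of fair dice with the given side counts."""
    return sum((d + 1) / 2 for d in dice)

print(expected_sum([6]))                # 3.5  (exercise 1a)
print(expected_sum([6, 6, 6]))          # 10.5 (exercise 1b)
print(expected_sum([5, 8, 10, 12]))     # 19.5 (exercise 1c)
print(expected_sum([6, 6, 8, 10, 12]))  # 23.5 (exercise 1d)
```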
# 2a
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
counter = 0
for elem in A:
if isinstance(elem, list):
counter += 1
print("Number of nested lists:", counter)
# 2b
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
counter = 0
for elem in A:
if isinstance(elem, list):
for e in elem:
counter += 1
else:
counter += 1
print("Number of elements:", counter)
# 2c
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
total = 0
for elem in A:
if isinstance(elem, list):
for e in elem:
total += e
else:
total += elem
print("Sum of the elements:", total)
"""
Explanation: Exercise 2
Write a program that ...
<span>a.</span> gives the number of nested lists in a list.
```python
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
determine the number of nested lists in A (=4)
```
<span>b.</span> gives the total number of elements of a list plus the number of elements of all of its nested lists.
```python
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
determine the total number of elements in A (incl. nested lists) (=9)
```
<span>c.</span> gives the sum of all elements in a list, including the elements in all nested lists.
```python
A = [[1, 2, 3], [4, 5, 6], [], 7, 8, [9]]
determine the sum of all elements in A, including nested lists (=45)
```
End of explanation
"""
# 3a
A = [[1, 2, 3], 4, 5, [6], [7, 8], 9]
result = []
for elem in A:
if isinstance(elem, list):
for e in elem:
result.append(e)
else:
result.append(elem)
print("Flattened list:", result)
# 3b
A = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]
result = []
last_elem = None
for elem in A:
if elem != last_elem:
result.append(elem)
last_elem = elem
print("Duplicates removed:", result)
# 3c
A = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]
result = []
last_elem = None
elem_count = 0
for elem in A:
if elem != last_elem:
if elem_count > 0:
result.append((last_elem, elem_count))
last_elem = elem
elem_count = 1
else:
elem_count += 1
if elem_count > 0:
result.append((last_elem, elem_count))
print("Run-length encoded version:", result)
# 3d
A = [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)]
# unpack the run-length encoded list A
# (= [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1])
result = []
for elem, elem_count in A:
result.extend([elem] * elem_count)
print("Unpacked version of run-length encoded list:", result)
"""
Explanation: Exercise 3
Write a program that ...
<span>a.</span> makes a flat list of all elements of a list, including the elements in nested lists.
```python
A = [[1, 2, 3], 4, 5, [6], [7, 8], 9]
make a 'flat' list of the elements of A
(= [1, 2, 3, 4, 5, 6, 7, 8, 9])
```
<span>b.</span> removes all consecutive duplicates of elements in a list.
```python
A = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]
remove consecutive duplicates of elements in A
(= [1, 2, 3, 4, 1])
```
<span>c.</span> makes a run-length encoded version of a list. To do so, build a new list in which all consecutive duplicates of the list's elements are replaced by a tuple containing the element and the number of duplicates.
```python
A = [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1]
make a run-length encoded version of A
(= [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)])
```
<span>d.</span> unpacks a run-length encoded list (see 3c.) into a flat list.
```python
A = [(1, 4), (2, 2), (3, 1), (4, 3), (1, 2)]
unpack the run-length encoded list A
(= [1, 1, 1, 1, 2, 2, 3, 4, 4, 4, 1, 1])
```
End of explanation
"""
# 4a
A = [1, 2, 3, 4]
for i in range(len(A)):
A.insert(2*i, A[2*i])
print("List with duplicated elements:", A)
# 4b
A = list("abcdefgh")
positions = 3
# rotate the list A by positions elements to the left
# (= ["d", "e", "f", "g", "h", "a", "b", "c"])
result = A[positions:] + A[:positions]
print("Rotated list:", result)
"""
Explanation: Exercise 4
Write a program that ...
<span>a.</span> duplicates all elements of a list within the same list, i.e. without creating a new list.
```python
A = [1, 2, 3, 4]
duplicate the elements in A (without creating a new list)
(A = [1, 1, 2, 2, 3, 3, 4, 4])
```
<span>b.</span> 'rotates' the elements of a list a given number of positions to the left. Elements that would fall off the left end reappear at the right end of the list.
```python
A = list("abcdefgh")
positions = 3
rotate the list A by positions elements to the left
(= ["d", "e", "f", "g", "h", "a", "b", "c"])
```
End of explanation
"""
# 5a
import random
A = [1, 3, 5, 7, 8, 9]
count = 3
result = []
for i in range(count):
index = random.randrange(len(A))
result.append(A[index])
del A[index]
print("Randomly chosen elements:", result)
"""
Explanation: Exercise 5
Write a program that ...
<span>a.</span> draws a given number of randomly chosen elements from a list (lottery).
```python
A = [1, 3, 5, 7, 8, 9]
count = 3
draw count randomly chosen elements from A
(= [7, 3, 5] or [1, 8, 3] etc.)
```
End of explanation
"""
# 6a
A = [[1, 2, 3], [4, 5, 6]]
B = [[9, 8, 7], [6, 5, 4]]
result = []
for i in range(len(A)):
row = []
for j in range(len(A[i])):
row.append(A[i][j] + B[i][j])
result.append(row)
print("Sum matrix:", result)
# 6b
A = [[1, 2, 3], [4, 5, 6]]
B = [[9, 8], [7, 6], [5, 4]]
result = []
for i in range(len(A)):
row = []
    for j in range(len(B[0])):  # number of columns of B (len(A) only worked here by coincidence)
total = 0
for k in range(len(B)):
total += A[i][k] * B[k][j]
row.append(total)
result.append(row)
print("Product matrix:", result)
"""
Explanation: Exercise 6
<span>a.</span> Write a program that adds two matrices.
```python
A = [[1, 2, 3], [4, 5, 6]]
B = [[9, 8, 7], [6, 5, 4]]
determine C = A + B
```
<span>b.</span> Write a program that multiplies two matrices.
```python
A = [[1, 2, 3], [4, 5, 6]]
B = [[9, 8], [7, 6], [5, 4]]
determine C = A x B
```
End of explanation
"""
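The loop-based matrix multiplication above can be sanity-checked against an independent reference (assuming NumPy is available in this environment):

```python
import numpy as np

A = [[1, 2, 3], [4, 5, 6]]
B = [[9, 8], [7, 6], [5, 4]]
# NumPy's dot product as an independent check of the hand-rolled loops
product = np.dot(np.array(A), np.array(B)).tolist()
print(product)  # [[38, 32], [101, 86]]
```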
A = [1, 2, 3, 4, 5, 6, 7, 8, 9]
rows, cols = (3, 3) # matrix dimensions
r, c = (1, 2) # 0-based row and column index
index = r * cols + c
print("A(r,c):", A[index])
"""
Explanation: Exercise 7
If we were to model a matrix not with a nested list,
but with a flat list, how could you then address an
element of the matrix?
The dimensions of the matrix are known.
For example:
$$
A = \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \end{bmatrix}
$$
How do I now find $A_{r,c}$?
In Python:
```python
A = [1, 2, 3, 4, 5, 6, 7, 8, 9]
rows, cols = (3, 3) # matrix dimensions
r, c = (1, 2) # 0-based row and column index
how do I find the element at A(r, c)?
```
End of explanation
"""
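The mapping above also inverts cleanly: `divmod` recovers the row and column indices from a flat index. A small round-trip check:

```python
rows, cols = 3, 3
for index in range(rows * cols):
    r, c = divmod(index, cols)  # inverse of index = r * cols + c
    assert r * cols + c == index
print(divmod(5, cols))  # flat index 5 in a 3x3 matrix -> (1, 2)
```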
# 8a
from IPython.display import display, Latex
from scipy.misc import comb
combinations = comb(64, 8, exact=True)
display(Latex(r"There are 8 queens to distribute over 64 squares:"))
display(Latex(r"$\binom{{64}}{{8}} = {}$".format(combinations)))
# 8b
# We could model the chessboard with a nested list,
# where each sublist represents a row, and then fill it
# with a Q in every square that holds a queen.
# But since queens can never share a column anyway, we
# might as well store only the row numbers of the 8 queens.
board = [0] * 8
while True:
    # check the candidate solution
    # queens may not be on the same row:
rows_valid = True
for col in range(len(board) - 1):
if board[col] in board[col+1:]:
rows_valid = False
break
if rows_valid:
        # queens may not share a diagonal,
        # top-left to bottom-right:
diagonal_tlbr = []
for col in range(len(board)):
# calculate diagonal:
diagonal_tlbr.append(col - board[col])
diagonal_tlbr_valid = True
for d in range(len(diagonal_tlbr) - 1):
if diagonal_tlbr[d] in diagonal_tlbr[d+1:]:
diagonal_tlbr_valid = False
break
if diagonal_tlbr_valid:
            # queens may not share a diagonal,
            # bottom-left to top-right:
diagonal_bltr = []
for col in range(len(board)):
# calculate diagonal:
diagonal_bltr.append(col + board[col])
diagonal_bltr_valid = True
for d in range(len(diagonal_bltr) - 1):
if diagonal_bltr[d] in diagonal_bltr[d+1:]:
diagonal_bltr_valid = False
break
if diagonal_bltr_valid:
print("Valid solution:", board)
break
# set up next board:
col = -1
board[col] += 1
while col >= -len(board) and board[col] >= len(board):
        board[col] = 0  # reset this column to the first row (0)
        col -= 1
        if col >= -len(board):
            board[col] += 1
if col < -len(board):
break
else:
print("No valid solution found!")
"""
Explanation: Exercise 8
In the 8-queens problem you try to place as many queens as possible on a chessboard. The queens must not be able to
capture each other.
On the chessboard a queen can move horizontally, vertically, and diagonally.
An example of a valid solution:
<span>a.</span> In how many ways can you place the 8 queens on the squares of the chessboard if you do not have to take capturing into account?
<span>b.</span> Write a Python program that searches for and finds a valid solution to the problem.
End of explanation
"""
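Because a valid solution places exactly one queen per column and per row, the search space can also be restricted to permutations of row numbers; a compact alternative solver using `itertools.permutations` (a sketch, not necessarily the fastest approach):

```python
from itertools import permutations

def solve_queens(n=8):
    """Return the first n-queens solution as a tuple of row indices, one per column."""
    for rows in permutations(range(n)):
        # rows are distinct by construction; check that both diagonal
        # directions contain no repeats either
        if len({c - r for c, r in enumerate(rows)}) == n and \
           len({c + r for c, r in enumerate(rows)}) == n:
            return rows
    return None

print(solve_queens())
```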
# 9a
A = [
["a", "b", "c"], ["d", "e"],
["f", "g", "h"], ["d", "e"],
["i", "j", "k", "l"], ["m", "n"],
["o"]
]
sort_order = []
for i in range(len(A)):
sort_order.append((len(A[i]), i))
sort_order.sort()
result = []
for length, i in sort_order:
result.append(A[i])
print("Sorted by length of sublist:", result)
"""
Explanation: Exercise 9
Write a program that ...
<span>a.</span> sorts the nested lists in a list by length of the
nested list.
```python
A = [
    ["a", "b", "c"], ["d", "e"],
    ["f", "g", "h"], ["d", "e"],
    ["i", "j", "k", "l"], ["m", "n"],
    ["o"]
]
make a new list in which the elements of A
are sorted by length of the list
(= [["o"], ["d", "e"], ["d", "e"], ["m", "n"],
["a", "b", "c"], ["f", "g", "h"], ["i", "j", "k", "l"]])
```
End of explanation
"""
|
jonathanmorgan/msu_phd_work | analysis/step-1-sourcenet-to-context.ipynb | lgpl-3.0 | me = "sourcenet-to-context"
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#notes-and-questions" data-toc-modified-id="notes-and-questions-1"><span class="toc-item-num">1 </span>notes and questions</a></span></li><li><span><a href="#Setup" data-toc-modified-id="Setup-2"><span class="toc-item-num">2 </span>Setup</a></span><ul class="toc-item"><li><span><a href="#Setup---Debug" data-toc-modified-id="Setup---Debug-2.1"><span class="toc-item-num">2.1 </span>Setup - Debug</a></span></li><li><span><a href="#Setup---Imports" data-toc-modified-id="Setup---Imports-2.2"><span class="toc-item-num">2.2 </span>Setup - Imports</a></span></li><li><span><a href="#Setup---working-folder-paths" data-toc-modified-id="Setup---working-folder-paths-2.3"><span class="toc-item-num">2.3 </span>Setup - working folder paths</a></span></li><li><span><a href="#Setup---logging" data-toc-modified-id="Setup---logging-2.4"><span class="toc-item-num">2.4 </span>Setup - logging</a></span></li><li><span><a href="#Setup---virtualenv-jupyter-kernel" data-toc-modified-id="Setup---virtualenv-jupyter-kernel-2.5"><span class="toc-item-num">2.5 </span>Setup - virtualenv jupyter kernel</a></span></li><li><span><a href="#Setup---Initialize-Django" data-toc-modified-id="Setup---Initialize-Django-2.6"><span class="toc-item-num">2.6 </span>Setup - Initialize Django</a></span></li><li><span><a href="#Setup---ExportToContext-instance" data-toc-modified-id="Setup---ExportToContext-instance-2.7"><span class="toc-item-num">2.7 </span>Setup - ExportToContext instance</a></span></li><li><span><a href="#Setup---Initialize-LoggingHelper" data-toc-modified-id="Setup---Initialize-LoggingHelper-2.8"><span class="toc-item-num">2.8 </span>Setup - Initialize LoggingHelper</a></span></li></ul></li><li><span><a href="#load-sourcenet-data-into-context" data-toc-modified-id="load-sourcenet-data-into-context-3"><span class="toc-item-num">3 </span>load sourcenet data into context</a></span><ul class="toc-item"><li><span><a 
href="#Retrieve-Article-instances" data-toc-modified-id="Retrieve-Article-instances-3.1"><span class="toc-item-num">3.1 </span>Retrieve Article instances</a></span></li><li><span><a href="#build-load-code-and-unit-tests" data-toc-modified-id="build-load-code-and-unit-tests-3.2"><span class="toc-item-num">3.2 </span>build load code and unit tests</a></span><ul class="toc-item"><li><span><a href="#Testing" data-toc-modified-id="Testing-3.2.1"><span class="toc-item-num">3.2.1 </span>Testing</a></span></li></ul></li><li><span><a href="#Load-data-from-Article-instances" data-toc-modified-id="Load-data-from-Article-instances-3.3"><span class="toc-item-num">3.3 </span>Load data from Article instances</a></span></li></ul></li></ul></div>
notes and questions
Back to Table of Contents
Notes:
probably will need a class that is an Article_Data container - reference to Article_Data, and then all the logic to process the entities and relations contained within.
Questions:
Do we want an Identifier Type separate from Entity and Relation identifiers? I think we do, so we can specify the entity type(s) a given identifier should be used on.
Created abstract one, and created a concrete entity identifier type. Will create one for relation identifiers if needed.
Setup
Back to Table of Contents
End of explanation
"""
debug_flag = False
"""
Explanation: Setup - Debug
Back to Table of Contents
End of explanation
"""
import datetime
from django.db.models import Avg, Max, Min
from django.utils.text import slugify
import gc
import json
import logging
import six
"""
Explanation: Setup - Imports
Back to Table of Contents
End of explanation
"""
%pwd
# current working folder
current_working_folder = "/home/jonathanmorgan/work/django/research/work/phd_work/analysis"
current_datetime = datetime.datetime.now()
current_date_string = current_datetime.strftime( "%Y-%m-%d-%H-%M-%S" )
"""
Explanation: Setup - working folder paths
Back to Table of Contents
End of explanation
"""
logging_file_name = "{}/logs/{}-{}.log.txt".format( current_working_folder, me, current_date_string )
logging.basicConfig(
level = logging.INFO,
format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
filename = logging_file_name,
filemode = 'w' # set to 'a' if you want to append, rather than overwrite each time.
)
print( "Logging initialized, to {}".format( logging_file_name ) )
"""
Explanation: Setup - logging
Back to Table of Contents
configure logging for this notebook's kernel (if you do not run this cell, you'll get the django application's logging configuration).
End of explanation
"""
# init django
django_init_folder = "/home/jonathanmorgan/work/django/research/work/phd_work"
django_init_path = "django_init.py"
if( ( django_init_folder is not None ) and ( django_init_folder != "" ) ):
# add folder to front of path.
django_init_path = "{}/{}".format( django_init_folder, django_init_path )
#-- END check to see if django_init folder. --#
%run $django_init_path
# context imports
from context.models import Entity
from context.models import Entity_Identifier_Type
from context.models import Entity_Identifier
from context.models import Entity_Type
# context_text imports
from context_text.article_coding.article_coding import ArticleCoder
from context_text.article_coding.article_coding import ArticleCoding
from context_text.article_coding.open_calais_v2.open_calais_v2_article_coder import OpenCalaisV2ArticleCoder
from context_text.collectors.newsbank.newspapers.GRPB import GRPB
from context_text.collectors.newsbank.newspapers.DTNB import DTNB
from context_text.export.to_context_base.export_to_context import ExportToContext
from context_text.models import Article
from context_text.models import Article_Subject
from context_text.models import Newspaper
from context_text.shared.context_text_base import ContextTextBase
"""
Explanation: Setup - virtualenv jupyter kernel
Back to Table of Contents
If you are using a virtualenv, make sure that you:
have installed your virtualenv as a kernel.
choose the kernel for your virtualenv as the kernel for your notebook (Kernel --> Change kernel).
Since I use a virtualenv, I need to get it activated somehow inside this notebook. One option is to run ../dev/wsgi.py in this notebook, to configure the python environment manually as if you had activated the sourcenet virtualenv. To do this, you'd make a code cell that contains:
%run ../dev/wsgi.py
This is sketchy, however, because of the changes it makes to your Python environment within the context of whatever your current kernel is. I'd worry about collisions with the actual Python 3 kernel. Instead, it is better to install the virtualenv as a separate kernel. Steps:
activate your virtualenv:
workon research
in your virtualenv, install the package ipykernel.
pip install ipykernel
use the ipykernel python program to install the current environment as a kernel:
python -m ipykernel install --user --name <env_name> --display-name "<display_name>"
sourcenet example:
python -m ipykernel install --user --name sourcenet --display-name "research (Python 3)"
More details: http://ipython.readthedocs.io/en/stable/install/kernel_install.html
Setup - Initialize Django
Back to Table of Contents
First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings.
End of explanation
"""
my_exporter = ExportToContext()
# configure which identifier type is used to look up articles by UUID
my_exporter.set_article_uuid_id_type_name( ExportToContext.ENTITY_ID_TYPE_ARTICLE_NEWSBANK_ID )
"""
Explanation: Setup - ExportToContext instance
Back to Table of Contents
Make instance, set instance variables.
End of explanation
"""
# python_utilities
from python_utilities.logging.logging_helper import LoggingHelper
# init
my_logging_helper = LoggingHelper()
my_logging_helper.set_logger_name( me )
log_message = None
"""
Explanation: Setup - Initialize LoggingHelper
Back to Table of Contents
Create a LoggingHelper instance to use to log debug and also print at the same time.
Preconditions: Must be run after Django is initialized, since python_utilities is in the django path.
End of explanation
"""
# look for publications that have article data:
# - coded by automated coder
# - with coder type of "OpenCalais_REST_API_v2"
# get automated coder
automated_coder_user = ContextTextBase.get_automated_coding_user()
log_message = "{} - Loaded automated user: {}, id = {}".format( datetime.datetime.now(), automated_coder_user, automated_coder_user.id )
my_logging_helper.output_message( log_message, do_print_IN = True, log_level_code_IN = logging.INFO )
# find articles with Article_Data created by the automated user...
article_qs = Article.objects.filter( article_data__coder = automated_coder_user )
# ...and specifically coded using OpenCalais V2...
article_qs = article_qs.filter( article_data__coder_type = OpenCalaisV2ArticleCoder.CONFIG_APPLICATION )
# ...and finally, we just want the distinct articles by ID.
article_qs = article_qs.order_by( "id" ).distinct( "id" )
# count?
article_count = article_qs.count()
log_message = "Found {} articles".format( article_count )
my_logging_helper.output_message( log_message, do_print_IN = True, log_level_code_IN = logging.INFO )
"""
Explanation: load sourcenet data into context
Back to Table of Contents
logic:
get all Article_Data created by automated coder, open calais coder type.
for each:
Article processing:
look for the article's entity in context. If one does not exist, create an entity of type "article". Store the sourcenet ID as an identifier of type "article_sourcenet_id".
Store the ID and entity reference for the Article (in object? Should I make a place for the article to hold an Entity ID?).
check to see if the newspaper has an entity (using its sourcenet ID as the lookup identifier). If not, create one.
Person processing:
Add all people who are either reporters/authors or subjects to context as entities of type "person", with identifier of type person_sourcenet_id set to their internal django/database "id", and with identifier of type person_open_calais_uuid set to their OpenCalais ID, if they have one. If they have any other types of IDs, add them too, untyped. Once entity is created, store ForeignKey of entity in Person record.
Author/Reporter processing:
look for each author's person entity. If not found, create one.
Subject/Source processing:
look for a person entity for each subject/source. If not found, create one.
Relations - create the following:
from newspaper
newspaper_reporter - Reporter at a newspaper, evidence of which is byline on an article in that newspaper. FROM newspaper TO person (reporter) THROUGH article.
newspaper_source - Person quoted in an article published by a newspaper. FROM newspaper TO person (source) THROUGH article.
newspaper_subject - Subject of an article published in a given newspaper. FROM newspaper TO person (subject, including sources) THROUGH article.
newspaper_article - Article published in a particular newspaper. FROM newspaper TO article.
from article
author - Author/Reporter of an article - FROM article TO reporter.
subject - Subject of a story. FROM article TO subject person.
source - Source quoted in an article - FROM article TO source person.
through article
article_container - Parent for relations based on entities being mentioned in the same article. To start, just people, but eventually, for example, could also include location.
mentioned - Mentioned in an article. FROM reporter/author TO subject THROUGH article.
quoted - The "from" person quoted the "to" person in a publication. FROM reporter TO source THROUGH article.
same_article_sources - Sources in the same article, FROM source person TO source person THROUGH article.
same_article_subjects - Two people who are in a particular article together (includes subjects and sources).
shared_byline - Shared Byline on an article - joint authors - FROM author TO author THROUGH article.
Relations broken out by person type:
For all reporters/authors:
create a relation of type "author" between the article's entity (FROM) and the entity of the person (TO) for each author.
if multiple authors, for each pair of authors, create a relation of type "shared_byline" between the two (it is undirected), THROUGH the article.
For all subjects, including sources:
create a relation of type "subject" between the article's entity (FROM) and the entity of the subject person (TO).
if multiple subjects, for each pair of subjects, create a relation of type "same_article_subjects" between the two (it is undirected), THROUGH the article.
create a relation of type "mentioned" between each of the article's authors (FROM) and the subject (TO), THROUGH the article.
For all sources
create a relation of type "source" between the article's entity and the entity for the source person.
if multiple sources, for each pair of sources, create a relation of type "same_article_sources" between the two (it is undirected), THROUGH the article.
create a relation of type "quoted" between each of the article's authors (FROM) and the source (TO), THROUGH the article.
Convenience methods:
method to find entity - based on type and identifier (accept all the fields that make sense, including optional identifier type instance).
method to find relation - based on type, etc.
Notes:
all ties are undirected.
relations can have three foreign keys into Entity - FROM, TO, and THROUGH (for a containing relationship, like the article that included two sources that we are relating).
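For the undirected pairwise relations above (shared_byline, same_article_subjects, same_article_sources), itertools.combinations yields each unordered pair exactly once. A minimal sketch of that loop - it just collects (from, to, type, through) tuples, standing in for whatever the eventual relation-creation API call looks like:

```python
import itertools

def build_pairwise_relations(person_entities, relation_type, through_entity):
    """Return one (from, to, type, through) tuple per unordered pair of entities."""
    relations = []
    for entity_a, entity_b in itertools.combinations(person_entities, 2):
        # undirected tie - each pair appears exactly once, regardless of order.
        relations.append((entity_a, entity_b, relation_type, through_entity))
    return relations

# example: three co-authors on one article --> three shared_byline ties
authors = ["author_1", "author_2", "author_3"]
shared_byline_ties = build_pairwise_relations(authors, "shared_byline", "article_entity")
```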
Retrieve Article instances
Back to Table of Contents
Before we do anything else, we need to be able to pull back all the articles whose data we want to load into the context store.
End of explanation
"""
result = my_exporter.process_articles( article_qs )
print( "Result: {}".format( result ) )
# tag all of the articles with export_to_context-YYYYMMDD-HHMMSS
tag_value = ExportToContext.TAG_PREFIX + "20191126-164206"
print( "Tag value: {}".format( tag_value ) )
# declare variables
progress_interval = None
article_counter = None
current_time = None
previous_time = None
elapsed_time = None
elapsed_this_period = None
average_time = None
average_this_period = None
# init
progress_interval = 100
article_counter = 0
start_time = datetime.datetime.now()
current_time = start_time
# loop over articles
for current_article in article_qs:
# increment counter
article_counter += 1
# add tag.
current_article.tags.add( tag_value )
# output an update every progress_interval articles.
if ( ( article_counter % progress_interval ) == 0 ):
previous_time = current_time
current_time = datetime.datetime.now()
elapsed_time = current_time - start_time
elapsed_this_period = current_time - previous_time
average_time = elapsed_time / article_counter
average_this_period = elapsed_this_period / progress_interval
print( "\n----> Processed {} of {} articles at {}".format( article_counter, article_count, current_time ) )
print( "Total elapsed: {} ( average: {} )".format( elapsed_time, average_time ) )
print( "Period elapsed: {} ( average: {} )".format( elapsed_this_period, average_this_period ) )
# also, garbage collect
#gc.collect()
#-- END check to see if we output progress update. --#
#-- END loop over articles to tag. --#
# Count number of articles with tag name starting with
# ExportToContext.TAG_PREFIX
tag_test_qs = Article.objects.filter( tags__name__istartswith = ExportToContext.TAG_PREFIX )
tag_test_count = tag_test_qs.count()
article_count = article_qs.count()
# did all the articles get tagged?
test_value = tag_test_count
should_be = article_count
error_string = "Count of tagged articles: {}, should be {}.".format( test_value, should_be )
assert test_value == should_be, error_string
print( "Found {} Articles with tag starting with {}.".format( tag_test_count, ExportToContext.TAG_PREFIX ) )
"""
Explanation: build load code and unit tests
Back to Table of Contents
Testing
Back to Table of Contents
First, make unit tests to test convenience methods added to the following models, in context/tests/models/:
// Entity_Identifier_Type - test_Entity_Identifier_Type_model.py
// Entity_Identifier - test_Entity_Identifier_model.py
// Entity - test_Entity_model.py
Also, // move instance creation class methods along with their constants over into "TestHelper" from test_Entity_Identifier_model.py, so they can be re-used across test classes.
To run: python manage.py test context.tests
In test data:
article 21925:
Article_Data: SELECT * FROM context_text_article_data WHERE article_id = 21925;
TODO:
test_entity_model.py
test_get_entity_for_identifier
test_export_to_context.py
test_get_article_uuid_id_type
test_set_article_uuid_id_type_name
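The lookup-then-create pattern used throughout the logic above ("look for entity... if not found, create one") follows the shape of Django's get_or_create(). A dict-backed sketch of the idea (a stand-in for the real Django implementation, which queries Entity_Identifier rows):

```python
def get_or_create_entity(registry, id_type, id_value, entity_type):
    """Return (entity, created) for the identifier (id_type, id_value), creating if missing."""
    key = (id_type, id_value)
    if key in registry:
        return registry[key], False
    entity = {"type": entity_type, "identifiers": {id_type: id_value}}
    registry[key] = entity
    return entity, True

# example: the second lookup for the same identifier returns the existing entity
registry = {}
person, created = get_or_create_entity(registry, "person_sourcenet_id", "42", "person")
```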
Load data from Article instances
Back to Table of Contents
Now, we actually load the data...
End of explanation
"""
argmin_epchs_basic = np.array([v[3] for v in experiment1["val_loss"].loc[:,:,False].groupby(("depth", "width", "datetime")).idxmin()])
argmin_epchs_normal = np.array([v[3] for v in experiment1["val_loss"].loc[:,:,True].groupby(("depth", "width", "datetime")).idxmin()])
print(argmin_epchs_basic.mean(), argmin_epchs_basic.std(), np.median(argmin_epchs_basic))
print(argmin_epchs_normal.mean(), argmin_epchs_normal.std(), np.median(argmin_epchs_normal))
ex1_min = experiment1.groupby(("depth", "width", "normalized", "datetime")).min()
ex1_min_basic = ex1_min.loc(axis=0)[:,:,False]
ex1_min_normal = ex1_min.loc(axis=0)[:,:,True]
plot3d_loss(ex1_min_basic["loss"].groupby(("depth", "width")).mean(), rotation=150)
plt.savefig("approach1-ex1-loss-basic.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
plot3d_loss(ex1_min_basic["val_loss"].groupby(("depth", "width")).mean(), rotation=150)
plt.savefig("approach1-ex1-val-basic.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
plot3d_loss(ex1_min_normal["loss"].groupby(("depth", "width")).mean(), rotation=150)
plt.savefig("approach1-ex1-loss-normal.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
plot3d_loss(ex1_min_normal["val_loss"].groupby(("depth", "width")).mean(), rotation=150)
plt.savefig("approach1-ex1-val-normal.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
d1w21_basic = experiment1["val_loss"].loc[1, 24, False]
d1w21_normal = experiment1["val_loss"].loc[1, 24, True]
plt.figure(figsize=(8,3))
ax2 = d1w21_basic.groupby("epoch").std().plot(color="red", linewidth=0.8)
d1w21_normal.groupby("epoch").std().plot(color="darkorange", linewidth=0.8)
ax2.tick_params(axis="y", colors="red")
plt.legend(("std.", "std. (normalized)"), title="")
plt.ylim(0, 0.04)
ax1 = d1w21_basic.groupby("epoch").mean().plot(secondary_y=True, color="blue", linewidth=0.8)
d1w21_normal.groupby("epoch").mean().plot(secondary_y=True, color="violet", linewidth=0.8)
ax1.tick_params(axis="y", colors="blue")
ax1.set_xlabel("")
ax2.set_xlabel("")
plt.legend(("mean", "mean (normalized)"), loc="upper center", title="")
plt.ylim(0.095,0.115)
plt.savefig("approach1-ex1-progression.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
print(experiment1.min(), experiment1.idxmin(), sep="\n")
"""
Explanation: Experiment 1
Basic estimation of width and depth.
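The idxmin-per-group trick in the cell above (taking v[3], the epoch level, from each group's minimizing index label) can be seen on a toy series with the same kind of MultiIndex (values below are made up):

```python
import numpy as np
import pandas as pd

# toy val_loss series indexed by (datetime, epoch)
index = pd.MultiIndex.from_product(
    [["run_a", "run_b"], [0, 1, 2]], names=["datetime", "epoch"])
val_loss = pd.Series([0.5, 0.2, 0.3, 0.6, 0.4, 0.1], index=index)

# idxmin per group returns the full (datetime, epoch) label of each minimum...
best_idx = val_loss.groupby("datetime").idxmin()
# ...so the best epoch for each run is the last element of that label.
best_epochs = np.array([label[-1] for label in best_idx])
```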
End of explanation
"""
ex2_min = experiment2["val_loss"][1,:,False].groupby(("width", "datetime")).min()
ex2_min.groupby("width").mean().plot(figsize=(10,5))
plt.title("Validation loss of best training epoch")
plt.show()
experiment2.loc[1,9,False,:,999]
"""
Explanation: Experiment 2
Using days to expiration as an explicit prior.
End of explanation
"""
ex3_min = experiment3["val_loss"][:,:,False].groupby(("depth", "width", "datetime")).min()
experiment3.groupby(("depth", "width", "normalized", "datetime")).min()
plot3d_loss(ex3_min.groupby(("depth", "width")).mean())
plt.xticks(np.arange(10, 100, 20))
plt.savefig("approach1-ex3-val-selu.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
experiment35 = parse_whole_directory_old("models/experiment03.5/")
ex35_min = experiment35["val_loss"][:,:,False].groupby(("depth", "width", "datetime")).min()
experiment35.groupby(("depth", "width", "normalized", "datetime")).min()
plot3d_loss(ex35_min.groupby(("depth", "width")).mean())
plt.xticks(np.arange(10, 100, 20))
plt.savefig("approach1-ex3-val-dropout.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
"""
Explanation: Experiment 3
Trying out regularization with the SELU activation function.
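SELU (the scaled exponential linear unit) is the activation behind this self-normalizing setup; a quick NumPy version using the published constants from the self-normalizing networks paper:

```python
import numpy as np

# constants from Klambauer et al. (2017), "Self-Normalizing Neural Networks"
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def selu(x):
    """scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    x = np.asarray(x, dtype=float)
    return SELU_SCALE * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1))
```

Positive inputs are scaled by roughly 1.0507, and large negative inputs saturate near -scale*alpha (about -1.758), which is what keeps activations approximately zero-mean and unit-variance through the layers.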
End of explanation
"""
ex4_min = experiment4["val_loss"][1].groupby(("width", "normalized", "datetime")).min()
ex4_min_basic = ex4_min[:,False]
ex4_min_normal = ex4_min[:,True]
ex4_min_basic.groupby("width").mean().plot(figsize=(10,5), label="not normalized")
ex4_min_normal.groupby("width").mean().plot(label="normalized")
plt.legend()
plt.title("Validation loss of best training epoch")
plt.show()
ex4_per_width = pd.DataFrame(ex4_min_basic.values.reshape((11, 10)), index=ex4_min_basic.index.levels[0])
width = ex4_per_width.T.mean()
std = ex4_per_width.T.std()
ax = ex4_per_width.plot(colormap=cm.winter, legend=False, figsize=(8,4), alpha=0.8)
width.plot(linewidth=5)
plt.xticks(np.arange(20, 51, 3))
plt.grid()
plt.ylim(0.080, 0.110)
plt.xlabel("")
x = np.arange(20, 51, 3)
xx = np.concatenate((x, x[::-1]))
yy = np.concatenate((width.values + std.values, (width.values - std.values)[::-1]))
plt.fill(xx, yy, color="cyan", alpha=0.5)
#ax.xaxis.set_ticklabels(())
plt.savefig("approach1-ex4-val-basic.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
ex4_per_width = pd.DataFrame(ex4_min_normal.values.reshape((11, 10)), index=ex4_min_basic.index.levels[0])
width = ex4_per_width.T.mean()
std = ex4_per_width.T.std()
ex4_per_width.plot(colormap=cm.winter, legend=False, figsize=(8,3), alpha=0.8)
width.plot(linewidth=5)
plt.xticks(np.arange(20, 51, 3))
plt.ylim(0.155, 0.185)
plt.grid()
plt.xlabel("")
x = np.arange(20, 51, 3)
xx = np.concatenate((x, x[::-1]))
yy = np.concatenate((width.values + std.values, (width.values - std.values)[::-1]))
plt.fill(xx, yy, color="cyan", alpha=0.5)
plt.savefig("approach1-ex4-val-normal.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
ex4_min_basic
"""
Explanation: Experiment 4
Using more input features in the hope of better performance.
End of explanation
"""
def mse(y1, y2):
return np.mean(np.square(y1 - y2))
def mse_prediction(model, splitted_datasets):
return [mse(model.predict(x), y) for x, y in splitted_datasets]
longprices = LongPricesDataset("data/8_m_settle.csv", "data/expirations.csv")
(x_train, y_train), (x_val, y_val), (x_test, y_test) = longprices.splitted_dataset()
naive_mse = [mse(y_train[:-1], y_train[1:]), mse(y_val[:-1], y_val[1:]), mse(y_test[:-1], y_test[1:])]
comp_data = parse_whole_directory("models/comparison1-4/")
#pd.Series(comp_data.index.droplevel([0,1,2,3]), index=comp_data.index).groupby("datetime").max()
epochs = pd.Series([""]).append(
pd.Series(comp_data.index.droplevel([0,1,2,3]), index=comp_data.index).groupby("datetime").max())
epochs.index = ["Naive", "Basic", "Dropout", "SNN", "AddIn"]
epochs["Minutely"] = 70
epochs = epochs.iloc[[0, 1, 2, 3, 5, 4]]
model1 = term_structure_to_spread_price_v2(1, 30, input_data_length=9)
model1.load_weights("models/comparison1-4/20170826150648_schumpeter_depth1_width30_days1_dropout0e+00_optimAdam_lr1e-03.h5")
model2 = term_structure_to_spread_price_v2(1, 30, input_data_length=9, dropout=0.5)
model2.load_weights("models/comparison1-4/20170826150734_schumpeter_depth1_width30_days1_dropout5e-01_optimAdam_lr1e-03.h5")
model3 = term_structure_to_spread_price_v2(1, 30, input_data_length=9, activation_function="selu")
model3.load_weights("models/comparison1-4/20170826151132_schumpeter_depth1_width30_days1_dropout0e+00_optimAdam_lr1e-03.h5")
model4 = term_structure_to_spread_price_v2(1, 30, input_data_length=11)
model4.load_weights("models/comparison1-4/20170826151354_schumpeter_depth1_width30_days1_dropout0e+00_optimAdam_lr1e-03.h5")
model_minutely = term_structure_to_spread_price_v2(1, 30, input_data_length=9)
model_minutely.load_weights("models/experiment12/20170829131147_tfpool42_depth1_width30_days1_dropout0e+00_optimAdam_lr1e-03.h5")
longprices = LongPricesDataset("data/8_m_settle.csv", "data/expirations.csv")
splitted_dataset = longprices.splitted_dataset()
summary = pd.DataFrame([naive_mse,
mse_prediction(model1, splitted_dataset),
mse_prediction(model2, splitted_dataset),
mse_prediction(model3, splitted_dataset),
mse_prediction(model_minutely, splitted_dataset),
mse_prediction(model4, longprices.splitted_dataset(with_days=True, with_months=True))],
index=["Naive", "Basic", "Dropout", "SNN", "Minutely", "AddIn"],
columns=(["Training MSE", "Validation MSE", "Test MSE"]))
print(pd.DataFrame(epochs, columns=["Epoch"]).join(summary).to_latex(float_format="%.4f"))
fig, axs = plt.subplots(6, 3, sharex=True, figsize=(10,15))
for idx, axh in enumerate(axs):
i1 = random.randint(0, len(x_train))
axh[0].plot(np.arange(6), y_train[i1],
np.arange(6), np.squeeze(model_minutely.predict(np.expand_dims(x_train[i1], axis=0))))
i2 = random.randint(0, len(x_val))
axh[1].plot(np.arange(6), y_val[i2],
np.arange(6), np.squeeze(model_minutely.predict(np.expand_dims(x_val[i2], axis=0))))
i3 = random.randint(0, len(x_test))
a,b = axh[2].plot(np.arange(6), y_test[i3],
np.arange(6), np.squeeze(model_minutely.predict(np.expand_dims(x_test[i3], axis=0))))
if idx == 0:
axh[0].set_title("Training Set")
axh[1].set_title("Validation Set")
axh[2].set_title("Test Set")
plt.tight_layout()
plt.savefig("minutely-prediction-samples.pdf", format="pdf", dpi=300, bbox_inches="tight")
plt.show()
"""
Explanation: General comparison
Looking at the validation loss of the best epoch.
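The "Naive" row in the table is a persistence forecast - each day's spread prices are predicted to equal the previous day's - so its MSE takes one array shift, as in this small sketch (toy numbers):

```python
import numpy as np

def naive_baseline_mse(y):
    """MSE of predicting each sample with the previous one (persistence baseline)."""
    y = np.asarray(y, dtype=float)
    return np.mean(np.square(y[1:] - y[:-1]))

# example: a slowly drifting series has a small persistence error
y_toy = np.array([1.0, 1.1, 1.05, 1.2])
toy_mse = naive_baseline_mse(y_toy)
```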
End of explanation
"""
data13 = parse_whole_directory("models/experiment13/")
loss_min13 = data13.groupby(("depth", "width", "datetime")).min().groupby(("depth", "width")).mean()
plot3d_loss(loss_min13.val_loss, rotation=150)
plt.savefig("approach1-ex3-val-minutely.pdf", format="pdf", bbox_inches="tight", dpi=300)
plt.show()
plot3d_loss(data13.groupby(("depth", "width", "datetime")).min().groupby(("depth", "width")).std().val_loss, rotation=150)
plt.show()
loss_min13.val_loss.sort_values().head(10)
deep_minutely = term_structure_to_spread_price_v2(4, 30, input_data_length=9)
deep_minutely.load_weights("models/experiment13/20170829142957_tfpool36_depth4_width30_days1_dropout0e+00_optimAdam_lr1e-03.h5")
print(*mse_prediction(deep_minutely, splitted_dataset))
"""
Explanation: Minutely
End of explanation
"""
%matplotlib inline
import math
import random
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_boston
import numpy as np
import tensorflow as tf
sns.set(style="ticks", color_codes=True)
"""
Explanation: Lab 4 - Tensorflow ANN for regression
In this lab we will use Tensorflow to build an Artificial Neuron Network (ANN) for a regression task.
As opposed to the low-level implementation from the previous week, here we will use Tensorflow to automate many of the computation tasks in the neural network. Tensorflow is a higher-level open-source machine learning library released by Google last year which is made specifically to optimize and speed up the development and training of neural networks.
At its core, Tensorflow is very similar to numpy and other numerical computation libraries. Like numpy, it's main function is to do very fast computation on multi-dimensional datasets (such as computing the dot product between a vector of input values and a matrix of values representing the weights in a fully connected network). While numpy refers to such multi-dimensional data sets as 'arrays', Tensorflow calls them 'tensors', but fundamentally they are the same thing. The two main advantages of Tensorflow over custom low-level solutions are:
While it has a Python interface, much of the low-level computation is implemented in C/C++, making it run much faster than a native Python solution.
Many common aspects of neural networks such as computation of various losses and a variety of modern optimization techniques are implemented as built in methods, reducing their implementation to a single line of code. This also helps in development and testing of various solutions, as you can easily swap in and try various solutions without having to write all the code by hand.
You can get more details about various popular machine learning libraries in this comparison.
To test our basic network, we will use the Boston Housing Dataset, which represents data on 506 houses in Boston across 14 different features. One of the features is the median value of the house in $1000’s. This is a common data set for testing regression performance of machine learning algorithms. All 14 features are continuous values, making them easy to plug directly into a neural network (after normalizing, of course!). The common goal is to predict the median house value using the other columns as features.
This lab will conclude with two assignments:
Assignment 1 (at bottom of this notebook) asks you to experiment with various regularization parameters to reduce overfitting and improve the results of the model.
Assignment 2 (in the next notebook) asks you to take our regression problem and convert it to a classification problem.
Let's start by importing some of the libraries we will use for this tutorial:
End of explanation
"""
#load data from scikit-learn library
dataset = load_boston()
#load data as DataFrame
houses = pd.DataFrame(dataset.data, columns=dataset.feature_names)
#add target data to DataFrame
houses['target'] = dataset.target
#print first 5 entries of data
print houses.head()
"""
Explanation: Next, let's import the Boston housing prices dataset. This is included with the scikit-learn library, so we can import it directly from there. The data will come in as two numpy arrays, one with all the features, and one with the target (price). We will use pandas to convert this data to a DataFrame so we can visualize it. We will then print the first 5 entries of the dataset to see the kind of data we will be working with.
End of explanation
"""
print dataset['DESCR']
"""
Explanation: You can see that the dataset contains only continuous features, which we can feed directly into the neural network for training. The target is also a continuous variable, so we can use regression to try to predict the exact value of the target. You can see more information about this dataset by printing the 'DESCR' object stored in the data set.
End of explanation
"""
# Create a datset of correlations between house features
corrmat = houses.corr()
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(9, 6))
# Draw the heatmap using seaborn
sns.set_context("notebook", font_scale=0.7, rc={"lines.linewidth": 1.5})
sns.heatmap(corrmat, annot=True, square=True)
f.tight_layout()
"""
Explanation: Next, we will do some exploratory data visualization to get a general sense of the data and how the different features are related to each other and to the target we will try to predict. First, let's plot the correlations between each feature. Larger positive or negative correlation values indicate that the two features are related (large positive or negative correlation), while values closer to zero indicate that the features are not related (no correlation).
End of explanation
"""
sns.jointplot(houses['target'], houses['RM'], kind='hex')
sns.jointplot(houses['target'], houses['LSTAT'], kind='hex')
"""
Explanation: We can get a more detailed picture of the relationship between any two variables in the dataset by using seaborn's jointplot function and passing it two features of our data. This will show a single-dimension histogram distribution for each feature, as well as a two-dimension density scatter plot for how the two features are related. From the correlation matrix above, we can see that the RM feature has a strong positive correlation to the target, while the LSTAT feature has a strong negative correlation to the target. Let's create jointplots for both sets of features to see how they relate in more detail:
End of explanation
"""
# convert housing data to numpy format
houses_array = houses.as_matrix().astype(float)
# split data into feature and target sets
X = houses_array[:, :-1]
y = houses_array[:, -1]
# normalize the data per feature by dividing by the maximum value in each column
X = X / X.max(axis=0)
# split data into training and test sets
trainingSplit = int(.7 * houses_array.shape[0])
X_train = X[:trainingSplit]
y_train = y[:trainingSplit]
X_test = X[trainingSplit:]
y_test = y[trainingSplit:]
print('Training set', X_train.shape, y_train.shape)
print('Test set', X_test.shape, y_test.shape)
"""
Explanation: As expected, the plots show a positive relationship between the RM feature and the target, and a negative relationship between the LSTAT feature and the target.
This type of exploratory visualization is not strictly necessary for using machine learning, but it does help to formulate your solution, and to troubleshoot your implementation in case you are not getting the results you want. For example, if you find that two features have a strong correlation with each other, you might want to include only one of them to speed up the training process. Similarly, you may want to exclude features that show little correlation to the target, since they have little influence over its value.
Now that we know a little bit about the data, let's prepare it for training with our neural network. We will follow a process similar to the previous lab:
We will first re-split the data into a feature set (X) and a target set (y)
Then we will normalize the feature set so that the values range from 0 to 1
Finally, we will split both data sets into a training and test set.
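The per-feature normalization step works by broadcasting a row of column maxima, as this toy check shows (it maps each column's maximum to exactly 1, which is fine here because the features are non-negative):

```python
import numpy as np

X_toy = np.array([[1.0, 200.0],
                  [2.0,  50.0],
                  [4.0, 100.0]])

# broadcasting divides every column by that column's own maximum
X_norm = X_toy / X_toy.max(axis=0)
```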
End of explanation
"""
# helper variables
num_samples = X_train.shape[0]
num_features = X_train.shape[1]
num_outputs = 1
# Hyper-parameters
batch_size = 50
num_hidden_1 = 10
num_hidden_2 = 5
learning_rate = 0.0001
training_epochs = 900
dropout_keep_prob = 0.4 # probability of keeping each neuron; set to 1.0 to disable dropout
# variable to control the resolution at which the training results are stored
display_step = 1
"""
Explanation: Next, we set up some variables that we will use to define our model. The first group are helper variables taken from the dataset which specify the number of samples in our training set, the number of features, and the number of outputs. The second group are the actual hyper-parameters which define how the model is structured and how it performs. In this case we will be building a neural network with two hidden layers, and the size of each hidden layer is controlled by a hyper-parameter. The other hyper-parameters include:
batch size, which sets how many training samples are used at a time
learning rate which controls how quickly the gradient descent algorithm works
training epochs which sets how many rounds of training occurs
dropout keep probability, a regularization technique which controls how many neurons are 'dropped' randomly during each training step (note in Tensorflow this is specified as the 'keep probability' from 0 to 1, with 0 representing all neurons dropped, and 1 representing all neurons kept). You can read more about dropout here.
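tf.nn.dropout implements "inverted" dropout: each activation survives with probability keep_prob, and survivors are scaled by 1/keep_prob so the layer's expected output is unchanged. A NumPy sketch of the same idea:

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    """Zero units with probability (1 - keep_prob); rescale survivors by 1/keep_prob."""
    mask = rng.uniform(size=activations.shape) < keep_prob
    return activations * mask / keep_prob

# with keep_prob = 0.4, roughly 60% of units are zeroed and the rest scaled by 2.5
rng = np.random.RandomState(0)
dropped = dropout(np.ones(10000), 0.4, rng)
```

At test time no mask is applied (equivalent to a keep probability of 1.0), and because of the rescaling no further correction is needed.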
End of explanation
"""
def accuracy(predictions, targets):
error = np.absolute(predictions.reshape(-1) - targets)
return np.mean(error)
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
"""
Explanation: Next, we define a few helper functions which will dictate how error will be measured for our model, and how the weights and biases should be defined.
The accuracy() function defines how we want to measure error in a regression problem. The function will take in two lists of values - predictions which represent predicted values, and targets which represent actual target values. In this case we simply compute the absolute difference between the two (the error) and return the average error using numpy's mean() function.
The weight_variable() and bias_variable() functions help create parameter variables for our neural network model, formatted in the proper type for Tensorflow. Both functions take in a shape parameter and return a variable of that shape using the specified initialization. In this case we are using a 'truncated normal' distribution for the weights, and a constant value for the bias. For more information about various ways to initialize parameters in Tensorflow you can consult the documentation
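As a quick sanity check of the error metric, accuracy() is just mean absolute error; restated here so the snippet runs standalone, it handles the network's (n, 1) column output against flat targets:

```python
import numpy as np

def accuracy(predictions, targets):
    # same definition as above: flatten predictions, then mean absolute error
    error = np.absolute(predictions.reshape(-1) - targets)
    return np.mean(error)

# predictions come out of the network as an (n, 1) column; targets are flat
preds = np.array([[20.0], [30.0], [10.0]])
targets = np.array([22.0, 28.0, 10.0])
mae = accuracy(preds, targets)  # errors 2, 2, 0 --> mean 4/3
```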
End of explanation
"""
'''First we create a variable to store our graph'''
graph = tf.Graph()
'''Next we build our neural network within this graph variable'''
with graph.as_default():
'''Our training data will come in as x feature data and
y target data. We need to create tensorflow placeholders
to capture this data as it comes in'''
x = tf.placeholder(tf.float32, shape=(None, num_features))
_y = tf.placeholder(tf.float32, shape=(None))
'''Another placeholder stores the hyperparameter
that controls dropout'''
keep_prob = tf.placeholder(tf.float32)
'''Finally, we convert the test and train feature data sets
to tensorflow constants so we can use them to generate
predictions on both data sets'''
tf_X_test = tf.constant(X_test, dtype=tf.float32)
tf_X_train = tf.constant(X_train, dtype=tf.float32)
'''Next we create the parameter variables for the model.
Each layer of the neural network needs its own weight
and bias variables which will be tuned during training.
The sizes of the parameter variables are determined by
the number of neurons in each layer.'''
W_fc1 = weight_variable([num_features, num_hidden_1])
b_fc1 = bias_variable([num_hidden_1])
W_fc2 = weight_variable([num_hidden_1, num_hidden_2])
b_fc2 = bias_variable([num_hidden_2])
W_fc3 = weight_variable([num_hidden_2, num_outputs])
b_fc3 = bias_variable([num_outputs])
'''Next, we define the forward computation of the model.
We do this by defining a function model() which takes in
a set of input data, and performs computations through
the network until it generates the output.'''
def model(data, keep):
# computing first hidden layer from input, using sigmoid activation function
fc1 = tf.nn.sigmoid(tf.matmul(data, W_fc1) + b_fc1)
# adding dropout to first hidden layer
fc1_drop = tf.nn.dropout(fc1, keep)
# computing second hidden layer from first hidden layer, using sigmoid activation function
fc2 = tf.nn.sigmoid(tf.matmul(fc1_drop, W_fc2) + b_fc2)
# adding dropout to second hidden layer
fc2_drop = tf.nn.dropout(fc2, keep)
# computing output layer from second hidden layer
# the output is a single neuron which is directly interpreted as the prediction of the target value
fc3 = tf.matmul(fc2_drop, W_fc3) + b_fc3
# the output is returned from the function
return fc3
'''Next we define a few calls to the model() function which
will return predictions for the current batch input data (x),
as well as the entire test and train feature set'''
prediction = model(x, keep_prob)
test_prediction = model(tf_X_test, 1.0)
train_prediction = model(tf_X_train, 1.0)
'''Finally, we define the loss and optimization functions
which control how the model is trained.
For the loss we will use the basic mean square error (MSE) function,
which tries to minimize the MSE between the predicted values and the
real values (_y) of the input dataset.
For the optimization function we will use stochastic gradient descent (SGD)
which will minimize the loss using the specified learning rate.'''
loss = tf.reduce_mean(tf.square(tf.subtract(prediction, _y)))  # tf.sub was renamed tf.subtract in TF 1.0
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
'''We also create a saver variable which will allow us to
save our trained model for later use'''
saver = tf.train.Saver()
"""
Explanation: Now we are ready to build our neural network model in Tensorflow.
Tensorflow operates in a slightly different way than the procedural logic we have been using in Python so far. Instead of telling Tensorflow the exact operations to run line by line, we build the entire neural network within a structure called a Graph. The Graph does several things:
describes the architecture of the network, including how many layers it has and how many neurons are in each layer
initializes all the parameters of the network
describes the 'forward' calculation of the network, or how input data is passed through the network layer by layer until it reaches the result
defines the loss function which describes how well the model is performing
specifies the optimization function which dictates how the parameters are tuned in order to minimize the loss
Once this graph is defined, we can work with it by 'executing' it on sets of training data and 'calling' different parts of the graph to get back results. Every time the graph is executed, Tensorflow will only do the minimum calculations necessary to generate the requested results. This makes Tensorflow very efficient, and allows us to structure very complex models while only testing and using certain portions at a time. In programming language theory, this type of programming is called 'lazy evaluation'.
End of explanation
"""
# create an array to store the results of the optimization at each epoch
results = []
'''First we open a session of Tensorflow using our graph as the base.
While this session is active all the parameter values will be stored,
and each step of training will be using the same model.'''
with tf.Session(graph=graph) as session:
'''After we start a new session we first need to
initialize the values of all the variables.'''
tf.global_variables_initializer().run()
print('Initialized')
'''Now we iterate through each training epoch based on the hyper-parameter set above.
Each epoch represents a single pass through all the training data.
The total number of training steps is determined by the number of epochs and
the size of mini-batches relative to the size of the entire training set.'''
for epoch in range(training_epochs):
'''At the beginning of each epoch, we create a set of shuffled indexes
so that we are using the training data in a different order each time'''
indexes = list(range(num_samples))
random.shuffle(indexes)
'''Next we step through each mini-batch in the training set'''
for step in range(int(math.floor(num_samples/float(batch_size)))):
offset = step * batch_size
'''We subset the feature and target training sets to create each mini-batch'''
batch_data = X_train[indexes[offset:(offset + batch_size)]]
batch_labels = y_train[indexes[offset:(offset + batch_size)]]
'''Then, we create a 'feed dictionary' that will feed this data,
along with any other hyper-parameters such as the dropout probability,
to the model'''
feed_dict = {x : batch_data, _y : batch_labels, keep_prob: dropout_keep_prob}
'''Finally, we call the session's run() function, which will feed in
the current training data, and execute portions of the graph as necessary
to return the data we ask for.
The first argument of the run() function is a list specifying the
model variables we want it to compute and return from the function.
The most important is 'optimizer' which triggers all calculations necessary
to perform one training step. We also include 'loss' and 'prediction'
because we want these as outputs from the function so we can keep
track of the training process.
The second argument specifies the feed dictionary that contains
all the data we want to pass into the model at each training step.'''
_, l, p = session.run([optimizer, loss, prediction], feed_dict=feed_dict)
'''At the end of each epoch, we will calculate the error of predictions
on the full training and test data set. We will then store the epoch number,
along with the mini-batch, training, and test accuracies to the 'results' array
so we can visualize the training process later. How often we save the data to
this array is specified by the display_step variable created above'''
if (epoch % display_step == 0):
batch_acc = accuracy(p, batch_labels)
train_acc = accuracy(train_prediction.eval(session=session), y_train)
test_acc = accuracy(test_prediction.eval(session=session), y_test)
results.append([epoch, batch_acc, train_acc, test_acc])
'''Once training is complete, we will save the trained model so that we can use it later'''
save_path = saver.save(session, "model_houses.ckpt")
print("Model saved in file: %s" % save_path)
"""
Explanation: Now that we have specified our model, we are ready to train it. We do this by iteratively calling the model, with each call representing one training step. At each step, we:
Feed in a new set of training data. Remember that with SGD we only have to feed in a small set of data at a time. The size of each batch of training data is determined by the 'batch_size' hyper-parameter specified above.
Call the optimizer function by asking tensorflow to return the model's 'optimizer' variable. This starts a chain reaction in Tensorflow that executes all the computation necessary to train the model. The optimizer function itself will compute the gradients in the model and modify the weight and bias parameters in a way that minimizes the overall loss. Because it needs this loss to compute the gradients, it will also trigger the loss function, which will in turn trigger the model to compute predictions based on the input data. This sort of chain reaction is at the root of the 'lazy evaluation' model used by Tensorflow.
End of explanation
"""
df = pd.DataFrame(data=results, columns = ["epoch", "batch_acc", "train_acc", "test_acc"])
df.set_index("epoch", drop=True, inplace=True)
fig, ax = plt.subplots(1, 1, figsize=(10, 4))
ax.plot(df)
ax.set(xlabel='Epoch',
ylabel='Error',
title='Training result')
ax.legend(df.columns, loc=1)
print "Minimum test loss:", np.min(df["test_acc"])
"""
Explanation: Now that the model is trained, let's visualize the training process by plotting the error we achieved in the small training batch, the full training set, and the test set at each epoch. We will also print out the minimum loss we were able to achieve in the test set over all the training steps.
End of explanation
"""
|
cloudmesh/book | notebooks/opencv/opencv.ipynb | apache-2.0 | import os, sys
from os.path import expanduser
os.path
home = expanduser("~")
sys.path.append('/usr/local/Cellar/opencv/3.3.1_1/lib/python3.6/site-packages/')
sys.path.append(home + '/.pyenv/versions/OPENCV/lib/python3.6/site-packages/')
import cv2
cv2.__version__
"""
Explanation: Open CV
OpenCV (Open Source Computer Vision Library) is a library of thousands of algorithms for various applications in computer vision and machine learning. It has C++, C, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. In this tutorial, we will explain basic features of this library, including the implementation of a simple example.
Overview
OpenCV has countless functions for image and videos processing. The pipeline starts with reading the
images, low-level operations on pixel values, preprocessing e.g. denoising, and then multiple steps of
higher-level operations which vary depending on the application. OpenCV covers the whole pipeline,
especially providing a large set of library functions for high-level operations.
A simpler library for image processing in Python is
SciPy's multi-dimensional image processing package (scipy.ndimage).
Installation on Linux
OpenCV for Python can be installed on Linux in multiple ways, namely PyPI(Python Package Index),
Linux package manager (apt-get for Ubuntu), Conda package manager, and also building from source.
You are recommended to use PyPI. Here's the command that you need to run:
$ pip install opencv-python
This was tested on Ubuntu 16.04 with a fresh Python 3.6 virtual environment. In order to test,
import the module in Python command line:
>>> import cv2
If it does not raise an error, it is installed correctly. Otherwise, try to solve the error.
Installation on Windows
For installation on Windows, see:
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html#install-opencv-python-in-windows.
Installation on OSX Python 3
Careful this installs some libraries system wide. If you do not like this, do not use OpenCV on OSX
pyenv shell system
brew install opencv
pyenv virtualenv 3.6.2 OPENCV
cd ~/.pyenv/versions/OPENCV/lib/python3.6/site-packages/
ln -fs /usr/local/Cellar/opencv/3.3.1_1/lib/python3.6/site-packages/cv2.cpython-36m-darwin.so cv2.cpython-36m-darwin.so
pyenv activate OPENCV
pip install numpy
pip install matplotlib
pip install jupyter
Next, we need to activate OpenCV by adding it to the path.
End of explanation
"""
! pip install numpy > tmp.log
! pip install matplotlib >> tmp.log
"""
Explanation: Installation on Raspberry Pi
TBD. You can earn discussion points for running it on a Pi 3 and documenting it here.
Install libraries
End of explanation
"""
%matplotlib inline
import cv2
# Load an image
img = cv2.imread('opencv_files/4.2.01.tiff')
# The image was downloaded from USC standard database:
# http://sipi.usc.edu/database/database.php?volume=misc&image=9
"""
Explanation: A Simple Example
In this example, an image is loaded. A simple processing is performed, and the result is written to a new image.
Loading an image
End of explanation
"""
# You can display the image using imshow function:
# cv2.imshow('Original',img)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
# Or you can use Matplotlib
import matplotlib.pyplot as plt
# If you have not installed Matplotlib before, install it using: pip install matplotlib
plt.imshow(img)
"""
Explanation: Displaying the image
The image is saved in a numpy array. Each pixel is represented with 3 values (R,G,B). This provides you with access to manipulate the image at the level of single pixels.
You can display the image using imshow function as well as Matplotlib's imshow function.
End of explanation
"""
res = cv2.resize(img,None,fx=1.2, fy=0.7, interpolation = cv2.INTER_CUBIC)
plt.imshow(res)
"""
Explanation: Scaling and Rotation
Scaling (resizing) the image relative to different axes
End of explanation
"""
rows,cols,_ = img.shape
t = 45
M = cv2.getRotationMatrix2D((cols/2,rows/2),t,1)
dst = cv2.warpAffine(img,M,(cols,rows))
plt.imshow(dst)
"""
Explanation: Rotation of the image for an angle of t
End of explanation
"""
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(img2, cmap='gray')
"""
Explanation: Gray-scaling
End of explanation
"""
plt.hist(img2.ravel(),256,[0,256]); plt.show()
"""
Explanation: Histogram of grey values
End of explanation
"""
color = ('b','g','r')
for i,col in enumerate(color):
histr = cv2.calcHist([img],[i],None,[256],[0,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
plt.show()
"""
Explanation: Histogram of color values
End of explanation
"""
ret,thresh = cv2.threshold(img2,127,255,cv2.THRESH_BINARY)
plt.subplot(1,2,1), plt.imshow(img2, cmap='gray')
plt.subplot(1,2,2), plt.imshow(thresh, cmap='gray')
"""
Explanation: Image Thresholding
End of explanation
"""
edges = cv2.Canny(img2,100,200)
plt.subplot(121),plt.imshow(img2,cmap = 'gray')
plt.subplot(122),plt.imshow(edges,cmap = 'gray')
"""
Explanation: Edge Detection
Edge detection using Canny edge detection algorithm
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb | apache-2.0 | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: AutoML image segmentation model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_segmentation_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create image segmentation models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the TODO. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML image segmentation model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_segmentation_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_segmentation_1.0.0.yaml"
"""
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML image segmentation model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
"""
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("unknown-" + TIMESTAMP, DATA_SCHEMA)
"""
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
"""
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
"""
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
"""
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/isg_data.jsonl"
"""
Explanation: Data preparation
The Vertex Dataset resource for images has some requirements for your data:
Images must be stored in a Cloud Storage bucket.
Each image file must be in an image format (PNG, JPEG, BMP, ...).
There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.
The index file must be either CSV or JSONL.
JSONL
For image segmentation, the JSONL index file has the requirements:
Each data item is a separate JSON object, on a separate line.
The key/value pair image_gcs_uri is the Cloud Storage path to the image.
The key/value pair category_mask_uri is the Cloud Storage path to the mask image in PNG format.
The key/value pair 'annotation_spec_colors' is a list mapping mask colors to a label.
The key/value pair display_name is the label for the pixel color mask.
The key/value pair color contains the RGB normalized pixel values (between 0 and 1) of the mask for the corresponding label.
{ 'image_gcs_uri': image, 'segmentation_annotations': { 'category_mask_uri': mask_image, 'annotation_spec_colors' : [ { 'display_name': label, 'color': {'red': value, 'blue': value, 'green': value} }, ...] } }
Note: The dictionary key fields may alternatively be in camelCase. For example, 'image_gcs_uri' can also be 'imageGcsUri'.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the JSONL index file in Cloud Storage.
End of explanation
"""
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
You will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (wc -l) and then peek at the first few rows.
End of explanation
"""
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
"""
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The human readable name you give to the Dataset resource (e.g., unknown).
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
End of explanation
"""
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
"""
Explanation: Train the model
Now train an AutoML image segmentation model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the pipeline client service's method create_training_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's now look deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
"""
PIPE_NAME = "unknown_pipe-" + TIMESTAMP
MODEL_NAME = "unknown_model-" + TIMESTAMP
task = json_format.ParseDict(
{"budget_milli_node_hours": 2000, "model_type": "CLOUD_LOW_ACCURACY_1"}, Value()
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
"""
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
model_type: The type of deployed model:
CLOUD_HIGH_ACCURACY_1: For deploying to Google Cloud and optimizing for accuracy.
CLOUD_LOW_LATENCY_1: For deploying to Google Cloud and optimizing for latency (response time). (Note that the task constructed below uses CLOUD_LOW_ACCURACY_1.)
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
"""
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
"""
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
"""
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
"""
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
"""
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
"""
Explanation: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
"""
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confidenceMetricsEntries", metrics["confidenceMetricsEntries"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
"""
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one), print all the key names for each metric in the evaluation; for one small set (confidenceMetricsEntries), also print the result.
End of explanation
"""
ENDPOINT_NAME = "unknown_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
"""
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
"""
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
"""
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
"""
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed (and below which to de-provision), and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
"""
DEPLOYED_NAME = "unknown_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
"""
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model deployed on the endpoint. The percentages must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: Disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled while debugging the deployment and then disabled in production.
automatic_resources: The number of redundant compute instances (replicas). For this example, set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary, and it can be a bit confusing at first: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So with a traffic split, you might deploy v2 to the same endpoint as v1, but have it receive only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
"""
import json
test_items = !gsutil cat $IMPORT_FILE | head -n1
test_data = test_items[0].replace("'", '"')
test_data = json.loads(test_data)
try:
test_item = test_data["image_gcs_uri"]
test_label = test_data["segmentation_annotation"]["annotation_spec_colors"]
except:
test_item = test_data["imageGcsUri"]
test_label = test_data["segmentationAnnotation"]["annotationSpecColors"]
print(test_item, test_label)
"""
Explanation: Make a online prediction request
Now make an online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
import base64
import tensorflow as tf
def predict_item(filename, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
with tf.io.gfile.GFile(filename, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": base64.b64encode(content).decode("utf-8")}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(test_item, endpoint_id, None)
"""
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (encoded images) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
Request
Since in this example your test item is in a Cloud Storage bucket, you will open and read the contents of the image using tf.io.gfile.GFile(). To pass the test data to the prediction service, we will encode the bytes into base64 -- this makes binary data safe from modification while it is transferred over the Internet.
The format of each instance is:
{ 'content': { 'b64': [base64_encoded_bytes] } }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in our case there is just one:
confidenceMask: Confidence level in the prediction.
categoryMask: The predicted label per pixel.
End of explanation
"""
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
"""
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_decoding_sensors.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Jean-Remi King <jeanremi.king@gmail.com>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.decoding import TimeDecoding
print(__doc__)
data_path = sample.data_path()
plt.close('all')
"""
Explanation: Decoding sensor space data
Decoding, a.k.a. MVPA or supervised machine learning applied to MEG
data in sensor space. Here the classifier is applied to every time
point.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.Raw(raw_fname, preload=True)
raw.filter(2, None, method='iir') # replace baselining with high-pass
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=True, eog=True,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
epochs_list = [epochs[k] for k in event_id]
mne.epochs.equalize_epoch_counts(epochs_list)
data_picks = mne.pick_types(epochs.info, meg=True, exclude='bads')
"""
Explanation: Set parameters
End of explanation
"""
td = TimeDecoding(predict_mode='cross-validation', n_jobs=1)
# Fit
td.fit(epochs)
# Compute accuracy
td.score(epochs)
# Plot scores across time
td.plot(title='Sensor space decoding')
"""
Explanation: Setup decoding: default is linear SVC
End of explanation
"""
|
amueller/scipy-2017-sklearn | notebooks/22.Unsupervised_learning-anomaly_detection.ipynb | cc0-1.0 | %matplotlib inline
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
"""
Explanation: Anomaly detection
Anomaly detection is a machine learning task that consists of spotting so-called outliers.
“An outlier is an observation in a data set which appears to be inconsistent with the remainder of that set of data.”
Johnson 1992
“An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.”
Hawkins 1980
Types of anomaly detection setups
Supervised AD
Labels available for both normal data and anomalies
Similar to rare class mining / imbalanced classification
Semi-supervised AD (Novelty Detection)
Only normal data available to train
The algorithm learns on normal data only
Unsupervised AD (Outlier Detection)
no labels, training set = normal + abnormal data
Assumption: anomalies are very rare
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(n_features=2, centers=3, n_samples=500,
random_state=42)
X.shape
plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.show()
"""
Explanation: Let's first get familiar with different unsupervised anomaly detection approaches and algorithms. In order to visualise the output of the different algorithms we consider a toy data set consisting of a two-dimensional Gaussian mixture.
Generating the data set
End of explanation
"""
from sklearn.neighbors.kde import KernelDensity
# Estimate density with a Gaussian kernel density estimator
kde = KernelDensity(kernel='gaussian')
kde = kde.fit(X)
kde
kde_X = kde.score_samples(X)
print(kde_X.shape) # contains the log-likelihood of the data. The smaller it is the rarer is the sample
from scipy.stats.mstats import mquantiles
alpha_set = 0.95
tau_kde = mquantiles(kde_X, 1. - alpha_set)
n_samples, n_features = X.shape
X_range = np.zeros((n_features, 2))
X_range[:, 0] = np.min(X, axis=0) - 1.
X_range[:, 1] = np.max(X, axis=0) + 1.
h = 0.1 # step size of the mesh
x_min, x_max = X_range[0]
y_min, y_max = X_range[1]
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
grid = np.c_[xx.ravel(), yy.ravel()]
Z_kde = kde.score_samples(grid)
Z_kde = Z_kde.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_kde, levels=tau_kde, colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={tau_kde[0]: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.legend()
plt.show()
"""
Explanation: Anomaly detection with density estimation
End of explanation
"""
from sklearn.svm import OneClassSVM
nu = 0.05 # theory says it should be an upper bound of the fraction of outliers
ocsvm = OneClassSVM(kernel='rbf', gamma=0.05, nu=nu)
ocsvm.fit(X)
X_outliers = X[ocsvm.predict(X) == -1]
Z_ocsvm = ocsvm.decision_function(grid)
Z_ocsvm = Z_ocsvm.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_ocsvm, levels=[0], colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15, fmt={0: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1])
plt.scatter(X_outliers[:, 0], X_outliers[:, 1], color='red')
plt.legend()
plt.show()
"""
Explanation: now with One-Class SVM
The problem of density based estimation is that they tend to become inefficient when the dimensionality of the data increase. It's the so-called curse of dimensionality that affects particularly density estimation algorithms. The one-class SVM algorithm can be used in such cases.
End of explanation
"""
X_SV = X[ocsvm.support_]
n_SV = len(X_SV)
n_outliers = len(X_outliers)
print('{0:.2f} <= {1:.2f} <= {2:.2f}?'.format(1./n_samples*n_outliers, nu, 1./n_samples*n_SV))
"""
Explanation: Support vectors - Outliers
The so-called support vectors of the one-class SVM form the outliers
End of explanation
"""
plt.figure()
plt.contourf(xx, yy, Z_ocsvm, 10, cmap=plt.cm.Blues_r)
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.scatter(X_SV[:, 0], X_SV[:, 1], color='orange')
plt.show()
"""
Explanation: Only the support vectors are involved in the decision function of the One-Class SVM.
Plot the level sets of the One-Class SVM decision function as we did for the true density.
Emphasize the Support vectors.
End of explanation
"""
# %load solutions/22_A-anomaly_ocsvm_gamma.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
**Change** the `gamma` parameter and see it's influence on the smoothness of the decision function.
</li>
</ul>
</div>
End of explanation
"""
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(n_estimators=300, contamination=0.10)
iforest = iforest.fit(X)
Z_iforest = iforest.decision_function(grid)
Z_iforest = Z_iforest.reshape(xx.shape)
plt.figure()
c_0 = plt.contour(xx, yy, Z_iforest,
levels=[iforest.threshold_],
colors='red', linewidths=3)
plt.clabel(c_0, inline=1, fontsize=15,
fmt={iforest.threshold_: str(alpha_set)})
plt.scatter(X[:, 0], X[:, 1], s=1.)
plt.legend()
plt.show()
"""
Explanation: Isolation Forest
Isolation Forest is an anomaly detection algorithm based on trees. The algorithm builds a number of random trees, and the rationale is that if a sample is isolated, it should end up alone in a leaf after very few random splits. Isolation Forest builds an abnormality score based on the depth of the tree at which samples end up.
End of explanation
"""
# %load solutions/22_B-anomaly_iforest_n_trees.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Illustrate graphically the influence of the number of trees on the smoothness of the decision function?
</li>
</ul>
</div>
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
"""
Explanation: Illustration on Digits data set
We will now apply the IsolationForest algorithm to spot digits written in an unconventional way.
End of explanation
"""
images = digits.images
labels = digits.target
images.shape
i = 102
plt.figure(figsize=(2, 2))
plt.title('{0}'.format(labels[i]))
plt.axis('off')
plt.imshow(images[i], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
"""
Explanation: The digits data set consists of (8 x 8) images of digits.
End of explanation
"""
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
data.shape
X = data
y = digits.target
X.shape
"""
Explanation: To use the images as a training set we need to flatten the images.
End of explanation
"""
X_5 = X[y == 5]
X_5.shape
fig, axes = plt.subplots(1, 5, figsize=(10, 4))
for ax, x in zip(axes, X_5[:5]):
img = x.reshape(8, 8)
ax.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
"""
Explanation: Let's focus on digit 5.
End of explanation
"""
from sklearn.ensemble import IsolationForest
iforest = IsolationForest(contamination=0.05)
iforest = iforest.fit(X_5)
"""
Explanation: Let's use IsolationForest to find the top 5% most abnormal images.
Let's plot them !
End of explanation
"""
iforest_X = iforest.decision_function(X_5)
plt.hist(iforest_X);
"""
Explanation: Compute the level of "abnormality" with iforest.decision_function. The lower, the more abnormal.
End of explanation
"""
X_strong_inliers = X_5[np.argsort(iforest_X)[-10:]]
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
for i, ax in zip(range(len(X_strong_inliers)), axes.ravel()):
ax.imshow(X_strong_inliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
"""
Explanation: Let's plot the strongest inliers
End of explanation
"""
fig, axes = plt.subplots(2, 5, figsize=(10, 5))
X_outliers = X_5[iforest.predict(X_5) == -1]
for i, ax in zip(range(len(X_outliers)), axes.ravel()):
ax.imshow(X_outliers[i].reshape((8, 8)),
cmap=plt.cm.gray_r, interpolation='nearest')
ax.axis('off')
"""
Explanation: Let's plot the strongest outliers
End of explanation
"""
# %load solutions/22_C-anomaly_digits.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Rerun the same analysis with all the other digits
</li>
</ul>
</div>
End of explanation
"""
|
matthiaskoenig/sbmlutils | src/sbmlutils/examples/van_der_pol/van_der_pol.ipynb | lgpl-3.0 | # print settings for notebook
%matplotlib inline
import matplotlib
matplotlib.rcParams; # available global parameters
matplotlib.rcParams['figure.figsize'] = (9.0, 6.0)
matplotlib.rcParams['axes.labelsize'] = 'medium'
font = {'family' : 'sans-serif',
'weight' : 'normal', # bold
'size' : 14}
matplotlib.rc('font', **font)
"""
Explanation: Back to the main Index
Van der Pol oscillator
Matthias König
The Van der Pol oscillator is a non-conservative oscillator with non-linear damping. It is one of the standard models analysed in dynamics. In this tutorial the Van der Pol oscillator is analysed using SBML.
http://en.wikipedia.org/wiki/Van_der_Pol_oscillator
http://www.opencor.ws/user/howToGetStarted.html
History
The Van der Pol oscillator was originally proposed by the Dutch electrical engineer and physicist Balthasar van der Pol while he was working at Philips. Van der Pol found stable oscillations, which he called relaxation-oscillations[2] and are now known as limit cycles, in electrical circuits employing vacuum tubes. When these circuits were driven near the limit cycle they became entrained, i.e. the driving signal pulled the current along with it. Van der Pol and his colleague, van der Mark, reported in the September 1927 issue of Nature that at certain drive frequencies an irregular noise was heard. This irregular noise was always heard near the natural entrainment frequencies. This was one of the first discovered instances of deterministic chaos.
The Van der Pol equation has a long history of being used in both the physical and biological sciences. For instance, in biology, Fitzhugh and Nagumo extended the equation in a planar field as a model for action potentials of neurons.
Equations
The Van der Pol oscillator evolves in time according to the second-order differential equation
$$\frac{d^2x}{dt^2} - \mu(1-x^2)\frac{dx}{dt} + x = 0$$
with initial conditions $x=-2$ and $\frac{dx}{dt}=0$. Here $x$ is the position coordinate, which is a function of time $t$, and $\mu$ is a scalar parameter indicating the nonlinearity and the strength of the damping.
To create a SBML file, we need to convert the second-order equation to two first-order equations by defining the velocity $\frac{dx}{dt}$ as a new variable $y$:
$$\frac{dx}{dt}=y$$
$$\frac{dy}{dt}=\mu(1-x^2)y-x$$
The initial conditions are now $x=-2$ and $y=0$.
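Before turning to SBML, the two first-order equations can be sanity-checked with a hand-rolled Runge-Kutta integrator (a minimal sketch, independent of the antimony/roadrunner tooling used below). With $\mu = 0$ the damping term vanishes and the system reduces to a harmonic oscillator, so the quantity $x^2 + y^2$ should stay constant along the trajectory:

```python
import numpy as np

def vdp_rhs(state, mu):
    # dx/dt = y ; dy/dt = mu*(1 - x^2)*y - x
    x, y = state
    return np.array([y, mu * (1.0 - x * x) * y - x])

def rk4_step(state, mu, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = vdp_rhs(state, mu)
    k2 = vdp_rhs(state + 0.5 * dt * k1, mu)
    k3 = vdp_rhs(state + 0.5 * dt * k2, mu)
    k4 = vdp_rhs(state + dt * k3, mu)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([-2.0, 0.0])  # x = -2, dx/dt = 0
for _ in range(1000):          # integrate to t = 10 with dt = 0.01
    state = rk4_step(state, mu=0.0, dt=0.01)

energy = state[0] ** 2 + state[1] ** 2  # conserved for mu = 0 (equals 4 here)
```

For $\mu = 0$ the exact solution is $x(t) = -2\cos(t)$, which the integrated state reproduces closely.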
Requirements
antimony
roadrunner
matplotlib
Notebook settings
In a first step general settings for the notebook are defined.
End of explanation
"""
# create SBML
import antimony
model_id = 'van_der_pol'
# ----------------------------
model_str = '''
model {}
var species x = 2;
var species y = 0;
const mu = 0;
J1: -> x; y
J2: -> y; mu *(1-x^2)*y - x
end
'''.format(model_id)
# ----------------------------
antimony.setBareNumbersAreDimensionless(True)
antimony.loadAntimonyString(model_str)
model_file = '../models/{}.xml'.format(model_id)
antimony.writeSBMLFile(model_file, model_id)
print(model_file)
# show SBML
print(antimony.getSBMLString(model_id))
"""
Explanation: SBML Model
Now the model description of the Van der Pol oscillator is created in a standard format for computational models, the Systems Biology Markup Language (SBML) [Hucka2003]. This allows the model to be analysed in a multitude of tools supporting SBML, to name a few
COPASI
cy3sbml
The SBML for the above system of ordinary differential equations (ODEs) is created with Antimony: A modular human-readable, human-writeable model definition language [Smith?].
We set the initial conditions $x=-2$ and $y=0$ and the damping parameter $\mu=0$, i.e. no damping.
End of explanation
"""
import multiscale.sbmlutils.validation as validation
reload(validation)
validation.validate_sbml(model_file, ucheck=False); # no unit checks
"""
Explanation: Validation
Validation of the SBML model and display of problems with the file.
End of explanation
"""
import roadrunner
import multiscale.odesim.simulate.roadrunner_tools as rt
reload(rt)
# load model in roadrunner
r = roadrunner.RoadRunner(model_file)
# what are the current selections
print(r.selections)
# make the integrator settings
rt.set_integrator_settings(r)
# Simulation
print('mu = {}'.format(r['mu']))
tend = 50
s = r.simulate(0, tend)
# create plot
import matplotlib.pylab as plt
plt.subplot(121)
plt.plot(s['time'], s['[x]'], color='blue', label='x')
plt.plot(s['time'], s['[y]'], color='black', label='y')
plt.legend()
plt.xlabel('time')
plt.ylabel('x , y');
plt.xlim([0,tend])
plt.ylim([-3,3])
plt.subplot(122)
plt.plot(s['[x]'], s['[y]'], color="black")
plt.xlabel('x')
plt.ylabel('y');
plt.xlim([-3,3])
plt.ylim([-3,3]);
"""
Explanation: Simulation
After the model definition, the model can be simulated.
We can now use the defined model for some simulations. We use roadrunner for the simulations.
End of explanation
"""
# add the additional values of interest to the selection
print(r.selections)
r.selections = ['time'] + ['[{}]'.format(sid) for sid in r.model.getFloatingSpeciesIds()] \
+ r.model.getReactionIds()
print(r.selections)
# change the control parameter mu in the model
import numpy as np
results = []
mu_values = [0, 0.01, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
for mu in mu_values:
print(mu)
s, gp = rt.simulate(r, t_start=0, t_stop=100, parameters={'mu': mu})
results.append(s)
plt.figure(figsize=(5,10))
for k, mu in enumerate(mu_values):
res = results[k]
plt.plot(res['[x]'], res['[y]'], color='black')
plt.title('Phase plane of limit cycle')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim([-2,2])
plt.ylim([-7,7])
# calculating the derivatives for given concentrations
def dxdt(rr, X, Y):
DX = np.zeros_like(X)
DY = np.zeros_like(Y)
for k, _ in np.ndenumerate(X):
# print('X[k], Y[k]', X[k], Y[k])
rr['[x]'], rr['[y]'] = X[k], Y[k]
DX[k], DY[k] = rr['J1'], rr['J2']
return DX, DY
def phase_portrait(r, x=np.linspace(-6, 6, 20), y=np.linspace(-8, 8, 20), figsize = (5,8)):
fig2 = plt.figure(figsize=figsize)
ax2 = fig2.add_subplot(1,1,1)
# quiverplot
# define a grid and compute direction at each point
X1 , Y1 = np.meshgrid(x, y) # create a grid
DX1, DY1 = dxdt(r, X1, Y1) # compute J1 and J2 (use roadrunner as calculator)
M = (np.hypot(DX1, DY1)) # norm the rate
M[ M == 0] = 1. # avoid zero division errors
DX1 /= M # normalize each arrow
DY1 /= M
ax2.quiver(X1, Y1, DX1, DY1, M, pivot='mid')
# ax2.xaxis.label = 'x'
# ax2.yaxis.label = 'y'
# ax2.legend()
ax2.grid()
# single trajectories
def trajectory(mu, x0=2.0, y0=0.0, color="black"):
s, _ = rt.simulate(r, t_start=0, t_stop=100,
parameters = {'mu' : mu}, init_concentrations={'x':x0, 'y':y0})
plt.plot(s['[x]'], s['[y]'], color=color)
for mu in [0.3, 0.0, 0.3, 4.0]:
print(mu)
r['mu'] = mu
phase_portrait(r, figsize=(8,8))
trajectory(mu, x0=2.3, y0=4.0)
"""
Explanation: Model behavior
Evolution of the limit cycle in the phase plane. The limit cycle begins as a circle and, with increasing $\mu$, becomes increasingly sharp, an example of a relaxation oscillator.
The Van der Pol oscillator shows an interesting behavior depending on the damping parameter $\mu$.
End of explanation
"""
mu = 0.3 # 0.0, 0.3, 4.0
# create phase portrait
r['mu'] = mu
phase_portrait(r)
trajectory(mu, x0=2.3, y0=4.0)
trajectory(mu, x0=0.2, y0=0.4)
trajectory(mu, x0=-2.0, y0=6.0)
trajectory(mu, x0=-2.0, y0=-6.0)
# limit cycle
trajectory(mu, color="blue")
"""
Explanation: Phase plane and trajectories
We can now analyse the phase plane and the trajectories for different damping values $\mu$.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_time_frequency_global_field_power.ipynb | bsd-3-clause | # Authors: Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import somato
from mne.baseline import rescale
from mne.stats import _bootstrap_ci
"""
Explanation: Explore event-related dynamics for specific frequency bands
The objective is to show you how to explore spectrally localized
effects. For this purpose we adapt the method described in [1]_ and use it on
the somato dataset. The idea is to track the band-limited temporal evolution
of spatial patterns by using the Global Field Power (GFP).
We first bandpass filter the signals and then apply a Hilbert transform. To
reveal oscillatory activity the evoked response is then subtracted from every
single trial. Finally, we rectify the signals prior to averaging across trials
by taking the magnitude of the Hilbert transform.
Then the GFP is computed as described in [2]_, using the sum of the squares
but without normalization by the rank.
Baselining is subsequently applied to make the GFPs comparable between
frequencies.
The procedure is then repeated for each frequency band of interest and
all GFPs are visualized. To estimate uncertainty, non-parametric confidence
intervals are computed as described in [3]_ across channels.
The advantage of this method over summarizing the Space x Time x Frequency
output of a Morlet Wavelet in frequency bands is relative speed and, more
importantly, the clear-cut comparability of the spectral decomposition (the
same type of filter is used across all bands).
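The heart of this pipeline can be sketched in plain NumPy on toy data (a stand-in for the MEG recordings, not the MNE code itself): the analytic signal is built with the same frequency-domain construction that `scipy.signal.hilbert` uses, and the GFP is the sum of squares across channels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_times = 1000
t = np.linspace(0.0, 1.0, n_times, endpoint=False)
# toy band-limited data: a 10 Hz sinusoid with a random amplitude per channel
data = rng.uniform(0.5, 1.5, size=(5, 1)) * np.sin(2 * np.pi * 10 * t)

# analytic signal via the frequency domain (what scipy.signal.hilbert does for even n)
spec = np.fft.fft(data, axis=-1)
h = np.zeros(n_times)
h[0] = 1.0
h[1:n_times // 2] = 2.0
h[n_times // 2] = 1.0
envelope = np.abs(np.fft.ifft(spec * h, axis=-1))  # rectified band-limited signal

# GFP as the sum of squares across channels, without rank normalization
gfp = np.sum(envelope ** 2, axis=0)
```

For a pure tone the envelope is flat, so this toy GFP is essentially constant; real envelopes fluctuate with the band-limited power.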
References
.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic
view through the skull (1997). Trends in Neuroscience 20 (1),
pp. 44-49.
.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in
covariance estimation and spatial whitening of MEG and EEG signals,
vol. 108, 328-342, NeuroImage.
.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).
Cambridge University Press, Chapter 11.2.
End of explanation
"""
data_path = somato.data_path()
raw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'
# let's explore some frequency bands
iter_freqs = [
('Theta', 4, 7),
('Alpha', 8, 12),
('Beta', 13, 25),
('Gamma', 30, 45)
]
"""
Explanation: Set parameters
End of explanation
"""
# set epoching parameters
event_id, tmin, tmax = 1, -1., 3.
baseline = None
# get the header to extract events
raw = mne.io.read_raw_fif(raw_fname, preload=False)
events = mne.find_events(raw, stim_channel='STI 014')
frequency_map = list()
for band, fmin, fmax in iter_freqs:
# (re)load the data to save memory
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.pick_types(meg='grad', eog=True) # we just look at gradiometers
# bandpass filter and compute Hilbert
raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.
l_trans_bandwidth=1, # make sure filter params are the same
h_trans_bandwidth=1, # in each band and skip "auto" option.
fir_design='firwin')
raw.apply_hilbert(n_jobs=1, envelope=False)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,
reject=dict(grad=4000e-13, eog=350e-6), preload=True)
# remove evoked response and get analytic signal (envelope)
epochs.subtract_evoked() # for this we need to construct new epochs.
epochs = mne.EpochsArray(
data=np.abs(epochs.get_data()), info=epochs.info, tmin=epochs.tmin)
# now average and move on
frequency_map.append(((band, fmin, fmax), epochs.average()))
"""
Explanation: We create average power time courses for each frequency band
End of explanation
"""
fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)
colors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))
for ((freq_name, fmin, fmax), average), color, ax in zip(
frequency_map, colors, axes.ravel()[::-1]):
times = average.times * 1e3
gfp = np.sum(average.data ** 2, axis=0)
gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))
ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)
ax.axhline(0, linestyle='--', color='grey', linewidth=2)
ci_low, ci_up = _bootstrap_ci(average.data, random_state=0,
stat_fun=lambda x: np.sum(x ** 2, axis=0))
ci_low = rescale(ci_low, average.times, baseline=(None, 0))
ci_up = rescale(ci_up, average.times, baseline=(None, 0))
ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)
ax.grid(True)
ax.set_ylabel('GFP')
ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),
xy=(0.95, 0.8),
horizontalalignment='right',
xycoords='axes fraction')
ax.set_xlim(-1000, 3000)
axes.ravel()[-1].set_xlabel('Time [ms]')
"""
Explanation: Now we can compute the Global Field Power
We can track the emergence of spatial patterns compared to baseline
for each frequency band, with a bootstrapped confidence interval.
We see dominant responses in the Alpha and Beta bands.
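The confidence band comes from resampling channels with replacement and recomputing the statistic each time, roughly the following percentile bootstrap (a simplified sketch; MNE's private `_bootstrap_ci` helper differs in details):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=(50, 200))  # hypothetical (n_channels, n_times) envelopes

n_boot = 200
boot = np.empty((n_boot, data.shape[1]))
for b in range(n_boot):
    # resample channels with replacement, keep the time axis intact
    idx = rng.integers(0, data.shape[0], size=data.shape[0])
    boot[b] = np.sum(data[idx] ** 2, axis=0)  # same sum-of-squares statistic as the GFP

ci_low, ci_up = np.percentile(boot, [2.5, 97.5], axis=0)
```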
End of explanation
"""
|
msampathkumar/kaggle-quora-tensorflow | preprocessor.ipynb | apache-2.0 | df_train = pd.read_csv(TRAIN_CSV)
df_test = pd.read_csv(TEST_CSV)
# Train Data
train_feature_1_string = pd.Series(df_train['question1'].tolist()).astype(str)
train_feature_2_string = pd.Series(df_train['question2'].tolist()).astype(str)
target = pd.Series(df_train['is_duplicate'].tolist())
all_train_qs = train_feature_1_string + train_feature_2_string
# Test Data
test_feature_1_string = pd.Series(df_test['question1'].tolist()).astype(str)
test_feature_2_string = pd.Series(df_test['question2'].tolist()).astype(str)
all_test_qs = test_feature_1_string + test_feature_2_string
all_qs = all_train_qs + all_test_qs
print(all_train_qs.tolist()[:10])
df_train.head()
df_test.head()
"""
Explanation: Data Analysis
End of explanation
"""
all_qids = df_train.qid1 + df_train.qid2
train_qs = df_train.question1 + df_train.question2
total_ques_pairs = len(df_train)
print('Total number of question pairs for training: {}'.format(total_ques_pairs))
duplicate_ques_pairs = round(df_train['is_duplicate'].mean()*100, 2)
print('Duplicate pairs: {}%'.format(duplicate_ques_pairs))
unique_qids = len(np.unique(train_qs.fillna("")))
print('Total number of questions in the training data: {}'.format(unique_qids))
print('Number of questions that appear multiple times: {}'.format(np.sum(all_qids.value_counts() > 1)))
print("Total number of questions in Quora dataset: {}".format(len(all_qs)))
"""
Explanation: Text Analysis
End of explanation
"""
import string
import codecs
ACC_ALPHA_NUM_CHARS = set(map(ord, string.ascii_lowercase + string.digits + ' ')) # adding space here
ACC_SPECIAL_CHARS = set(map(ord, '''~!@#$%^&*()_+`-=[]\{}|;':"<>?,./<\/>|?'"+=-*&^%$#@!_.;:{}()[]~!@#$%^&*()_+'''))
# TODO: Split char processor to char_selector & char_procssor
def char_processor(text):
"""String charecters are processed based on globally defined ACC_* variables.
Args:
text(string): input string
Features:
* clears trailing or heading space or tabs
* converts text to lower case
* adds appends and prepends spaces to all special charecters
"""
new_text = []
new_text_append = new_text.append
for char in text.strip().lower():
if ord(char) in ACC_ALPHA_NUM_CHARS:
new_text_append(char)
elif ord(char) in ACC_SPECIAL_CHARS:
new_text_append(" " + char + " ")
return ''.join(new_text)
def word_selector(word):
"""Return True or False, to tell if we should select the word.
Args:
word(string): an array of charecters
Selection Criteria:
* If len == 1, has to be one of accepted ACC_* variables.
* If len > 1, has to be lower case & alpha-numeric (Eg: i3, m5, 1plus, 3g,..)
"""
if (len(word) == 1 and (ord(word) in ACC_ALPHA_NUM_CHARS or ord(word) in ACC_SPECIAL_CHARS))\
or (word.islower() and len(word) > 1 and word.isalnum()):
return True
else:
return False
def glove_embeddings():
print('Indexing word vectors.')
embeddings_index = dict()
with codecs.open(GLOVE_DATA_FILE, encoding='utf-8') as f:
for line in f:
key, *vector = line.split(' ')
if word_selector(key):
embeddings_index[key] = np.asarray(vector, dtype='float32')
return embeddings_index
def text_processor(text):
return [word for word in text.split() if word_selector(word)]
def micro_procesor(text):
return char_processor(text)
## Testing
msg = r'''
How does the Surface Pro himself 4 compare with iPad Pro?Why did Microsoft choose core
m3 and not core i3 home Surface Pro 4? as/df\asdf\ta?sd|asdf.as
'''
print(msg)
print('--' * 25)
msg = micro_procesor(msg)
print(msg)
print('--' * 25)
lows = text_processor(msg)
print(lows)
print('--' * 25)
if ('embeddings_index' not in dir()):
print('Indexing word vectors.')
embeddings_index = glove_embeddings()
print('Found %s word vectors.' % len(embeddings_index))
else:
print('Skipped to save some time!')
PAD_EMB = '<PAD>'
embeddings_index[PAD_EMB] = np.zeros(300, dtype=float)
msg = 'What is the best medication equation erectile dysfunction?How do I out get rid of Erectile Dysfunction?'
msg, text_processor(msg)
l = ["just say a word","of some length"]
ll = list(map(lambda l: l.split(" "), l))
seq_length = 5
ll
e = {"just": [0,0,0,0,1],
"say" : [0,0,0,1,1],
"a" : [0,1,1,1,0],
"word" : [1,0,0,1,0],
"of" : [0,1,0,1,0],
"some" : [0,1,0,1,0],
"length" : [0,0,1,1,0],
'<PAD>': [0,0,0,0,0]
}
# list of words --> lows
from pprint import pprint as pp
from numpy import array
def lows_padding(list_of_words, seq_length=5, append=True):
"""Pads/slices given bag of words for specified length."""
if len(list_of_words) == seq_length:
return list_of_words
elif len(list_of_words) > seq_length:
return list_of_words[:seq_length]
# else:
tmp = ['<PAD>' for i in range(seq_length - len(list_of_words))]
if append:
return list_of_words + tmp
return tmp + list_of_words
def lows_embedding(list_of_words, serializer):
"""To serializer/string Tokenize the list of words."""
alist = []
for word in list_of_words:
try:
alist.append(serializer[word])
except KeyError:
alist.append(serializer[PAD_EMB])
return array(alist)
def lows_transformer(list_of_words, serializer, seq_length, append):
"""To pad the given list of words and serialiase them."""
return lows_embedding(lows_padding(list_of_words, seq_length, append),
serializer)
bag_of_lows_transformer = lambda list_of_words: lows_transformer(list_of_words, serializer=embeddings_index,
seq_length=4, append=True)
game = list(map(bag_of_lows_transformer, ll))
## Testing
msg = r'''
How does the Surface Pro himself 4 compare with iPad Pro?Why did Microsoft choose core
m3 and not core i3 home Surface Pro 4? as/df\asdf\ta?sd|asdf.as
'''
print(msg)
print('--' * 25)
msg = micro_procesor(msg)
print(msg)
print('--' * 25)
lows = text_processor(msg)
print(lows)
print('--' * 25)
lows = lows_padding(lows, append=True, seq_length=50)
print(lows)
print('--' * 25)
lows = lows_embedding(lows, embeddings_index)
print(lows)
print(lows.shape)
print('--' * 25)
question_transformer = lambda msg: lows_embedding(
lows_padding(
text_processor(
micro_procesor(msg)
),
append=True,
seq_length=50),
embeddings_index)
all_qs[5], question_transformer(all_qs[5])
sample = all_qs[:500]
sample.map(question_transformer)
"""
Explanation: Text Processing
TODO: Check the AVG length of english words and filter them out in GLOVE DATA
TODO: Check the AVG length of english words and filter them out in INPUT DATA
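As a rough illustration of these TODOs, the average token length can be computed from a handful of hypothetical tokens (stand-ins for the real question text) to pick a filtering cutoff:

```python
# hypothetical tokens standing in for a tokenized question
words = ["what", "is", "the", "best", "medication", "for", "erectile", "dysfunction"]
avg_len = sum(len(w) for w in words) / float(len(words))  # 45 characters / 8 tokens = 5.625
```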
End of explanation
"""
# df_train = pd.read_csv(TRAIN_CSV)
# df_test = pd.read_csv(TEST_CSV)
df_train.describe()
np.sum(df_train.isnull(), axis=0), np.sum(df_test.isnull(), axis=0)
df_train = pd.read_csv(TRAIN_CSV)
df_test = pd.read_csv(TEST_CSV)
df_train.dropna(inplace=True)
df_test.dropna(inplace=True)
df_train.question1.apply?
def row_mapper(row):
print(str(row))
row.question1 = "a"
row.question2 = 'b'
return row
df_train[:5].applymap?
df_train.apply?
df_train['question1 question2'.split()][:2].applymap(question_transformer)
import copy
# sample = copy.deepcopy(df_train['question1 question2'.split()][:50].applymap(question_transformer))
sample = copy.deepcopy(df_train['question1 question2'.split()][:10]) # .applymap(question_transformer)
sample.shape, question_transformer('0 What is the step by step guide to invest i').shape
row = []
def row_mapper(input_row):
global row
row = input_row
return row
sample.applymap(row_mapper)
features1 = np.array(sample.question1)
new_features1 = array([question_transformer(_) for _ in sample.question1])
new_features2 = array([question_transformer(_) for _ in sample.question2])
df_train = array([array([question_transformer(_) for _ in df_train.question1]),
array([question_transformer(_) for _ in df_train.question2])])
df_train.shape
new = sample.applymap(question_transformer)
np.array(sample.question1.map(lambda x: array([1, 1]))).shape
features1 = np.zeros_like(sample.question1)
for i, row in enumerate(sample.question1):
features1[i] = question_transformer(sample.question1[i])
type(features1[1])
"""
Explanation: Text Transformation
End of explanation
"""
|
Kusdhill/bioapi-examples | python_notebooks/1kg.ipynb | apache-2.0 | import ga4gh.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org/")
"""
Explanation: GA4GH 1000 Genome Examples
Variant data from the Thousand Genomes project is being served by the GA4GH reference server. In this example we show how to use the GA4GH client to access the data.
Initialize the client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL.
End of explanation
"""
dataset = c.searchDatasets().next()
print(dataset)
"""
Explanation: We will continue to refer to this client object for accessing the remote server.
Access the dataset
Here, we issue or first API call to get a listing of datasets hosted by the server. The API call returns an iterator, which is iterated on once to get the 1kgenomes dataset.
End of explanation
"""
referenceSet = c.searchReferenceSets().next()
print(referenceSet)
"""
Explanation: Access the reference set
The GA4GH Genomics API provides methods for accessing the bases of a reference. The Thousand Genomes data presented here are mapped to GRCh37.
To access these data we first list the reference sets.
End of explanation
"""
references = [r for r in c.searchReferences(referenceSetId = referenceSet.id)]
print(', '.join(sorted([reference.name for reference in references])))
"""
Explanation: With the reference set saved to a variable we will now request the available references. This is the list of contigs for which we can request reference bases.
End of explanation
"""
chr1 = filter(lambda x: x.name == "1", references)[0]
bases = c.listReferenceBases(chr1.id, start=10000, end=11000)
print(bases)
print(len(bases))
"""
Explanation: Here, we print the names of the available references. These reference names are used in the variants/search API. By selecting one of the references we can craft a ListBases request. Here, we ask for the 1000 bases between 10,000 and 11,000 on the first chromosome.
List reference bases
End of explanation
"""
release = None
functional = None
for variantSet in c.searchVariantSets(datasetId=dataset.id):
if variantSet.name == "release":
release = variantSet
else:
functional = variantSet
"""
Explanation: List Variant Sets
The Thousand Genomes project sequenced the genome of over 2500 participants and released variation data in VCF format. The GA4GH reference server hosts those variant sets, and in this step we will list the available containers.
End of explanation
"""
exampleVariant = c.searchVariants(variantSetId=release.id, start=10000, end=11000, referenceName=chr1.name).next()
print("Variant name: {}".format(exampleVariant.names[0]))
print("Start: {}, End: {}".format(exampleVariant.start, exampleVariant.end))
print("Reference bases: {}".format(exampleVariant.referenceBases))
print("Alternate bases: {}".format(exampleVariant.alternateBases))
print("Number of calls: {}".format(len(exampleVariant.calls)))
"""
Explanation: There are two variant sets currently being made available by this server instance. release contains the calls for each participant and functional_annotation provides details of the effects of these variants created using the Variant Effect Predictor.
Search variants
Now that we have stored identifiers for the variant sets hosted by this server, we can craft search requests to find the individual variants. The GA4GH genomics API closely follows VCF by providing the calls for each variant in the variant record itself. Let's get a single variant record to examine.
End of explanation
"""
print(exampleVariant.calls[0])
"""
Explanation: Variant calls
Here, we select a few of the important fields of the variant record. Let's now examine the calls returned with this variant. The calls are what tell us which participants were believed to have presented the given variant.
End of explanation
"""
total = 0
count = 0
for call in exampleVariant.calls:
total += 1
count += call.genotype[0] or call.genotype[1]
print("{}/{} participants with this variant".format(count, total))
print(float(count) / float(total))
"""
Explanation: This tells us that for the participant HG00096 the variant in question was observed on the first haplotype ("genotype": [1, 0]). We can perform summary statistics on the calls field to measure allele frequency. For this demonstration we will count the presence of the variant on either haplotype to determine how often it was seen amongst the participants.
End of explanation
"""
annotationSet = c.searchVariantAnnotationSets(variantSetId=functional.id).next()
"""
Explanation: Variant annotations
To get at the detailed variant annotations, we must first find the variant annotation set.
End of explanation
"""
annotation = c.searchVariantAnnotations(
variantAnnotationSetId=annotationSet.id,
start=exampleVariant.start,
end=exampleVariant.end,
referenceName=chr1.name).next()
print(annotation.transcriptEffects[0].effects[0].term)
"""
Explanation: We can now search for the range that includes our example variant to discover relevant annotations.
End of explanation
"""
gencode = c.searchFeatureSets(datasetId=dataset.id).next()
print(gencode)
"""
Explanation: Here we have found the annotation for our example variant and see that it has the upstream_gene_variant consequence.
Sequence annotations
Finally, we can learn more about the site of our example variant by querying the sequence annotations API, which is serving the Gencode Genes data.
End of explanation
"""
gene = c.searchFeatures(
featureSetId=gencode.id,
start=10000,
end=12000,
referenceName="chr1",
featureTypes=['gene']).next()
print("Gene name: {}".format(gene.attributes.vals['gene_name'][0]))
print("Start: {}, End: {}".format(gene.start, gene.end))
"""
Explanation: We can now craft search requests for features to find the nearest gene:
End of explanation
"""
|
minh5/cpsc | reports/api data.ipynb | mit | import pickle
import operator
import numpy as np
import pandas as pd
import gensim.models
data = pickle.load(open('/home/datauser/cpsc/data/processed/cleaned_api_data', 'rb'))
data.head()
"""
Explanation: Introduction
This notebook serves as a reporting tool for the CPSC. In this notebook, I laid out the questions CPSC is interested in answering from their SaferProduct API. The format is that a few questions are presented and each question has a findings section with a quick summary, while Section 4 provides further information on how the analysis was conducted.
Analysis
Given that the API was down during this time of the reporting, I obtained data from Ana Carolina Areias via Dropbox link. Here I cleaned up the pure JSON format and converted it into a dataframe (the cleaning code can be found in the exploratory.ipynb in the /notebook directory). After that I saved the data using pickle where I can easily load it up for analysis.
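The JSON-to-dataframe step amounts to flattening a list of report records into columns, schematically something like the following (the records below are made up for illustration, using field names that appear later in this report; the real cleaning code lives in the exploratory notebook):

```python
import pandas as pd

# two hypothetical API records with fields used later in this report
records = [
    {"GenderDescription": "Female", "VictimAgeInMonths": 540,
     "SeverityTypePublicName": "Incident, No Injury"},
    {"GenderDescription": "Male", "VictimAgeInMonths": 372,
     "SeverityTypePublicName": "Unspecified"},
]
df = pd.DataFrame.from_records(records)
```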
The questions answered here are the result of a conversation between DKDC and the CPSC regarding their priorities and what information is available from the data.
The main takeaways from this analysis are that:
From the self-reported statistics of people who reported their injury to the API, it appears that there is a skew against people who are older. The data shows that people who are reporting are 40-60 years old.
An overwhelming number of reports did not involve bodily harm or require medical attention; many of the reports were just incident reports about a particular product
Out of the reports that did result in some harm, the most reported product was in the footwear category, involving harm and discomfort when walking in the Sketchers Tone-Ups shoes
Although not conclusive, the reports give some indication that there are a lot of fire-related incidents, based on a cursory examination of the most popular words
End of explanation
"""
pd.crosstab(data['GenderDescription'], data['age_range'])
"""
Explanation: Are there certain populations we're not getting reports from?
We can create a basic cross tab between age and gender to see if any patterns emerge.
End of explanation
"""
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :]
damage.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: From the data, it seems that there's not much underrepresentation by gender. There are only around a thousand fewer males than females in a dataset of 28,000. Age seems to be a bigger issue. There appears to be a lack of representation of older people using the API. Older folks may be less likely to self report, or, if they wanted to self report, they may not be tech-savvy enough to use a web interface. My assumption is that people over 70 are probably experiencing product harm at a higher rate and are not reporting it.
If we wanted to raise awareness about a certain tool or item, where should we focus our efforts
To construct this, I removed any incidents that did not cause any bodily harm and took the top ten categories. There were several levels of severity, and we can remove complaints that do not involve any physical harm. After removing these complaints, it is really interesting to see that "Footwear" was the top product category for harm.
End of explanation
"""
data.SeverityTypeDescription.value_counts()
"""
Explanation: This is actually perplexing, so I decided to investigate further by analyzing the complaints filed for the "Footwear" category. To do this, I created a Word2Vec model, which uses a shallow neural network for text analysis. This process maps a word, together with the linguistic context it appears in, to a vector, making it possible to calculate similarity between words. The purpose of this is to find words that are related to each other. Rather than doing a simple cross tab of product categories, I can ingest the complaints and map out their relationships. For instance, using the complaints that resulted in bodily harm, I found that footwear was associated with pain and walking. It seems that there are injuries related to Sketcher sneakers specifically, since it was the only brand that showed up enough to be included in the model's dictionary. In fact, there was a lawsuit regarding Sketchers and their toning shoes
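"Similarity" in a Word2Vec model is cosine similarity between the learned word vectors. With two made-up 3-dimensional embeddings (real Word2Vec vectors are typically 100-300 dimensions), the computation looks like:

```python
import numpy as np

# hypothetical embeddings for two words that occur in similar contexts
v_pain = np.array([0.9, 0.1, 0.2])
v_walking = np.array([0.8, 0.2, 0.3])

# cosine similarity: dot product divided by the product of the norms
cosine = np.dot(v_pain, v_walking) / (np.linalg.norm(v_pain) * np.linalg.norm(v_walking))
```

Vectors pointing in nearly the same direction score close to 1, which is what `most_similar` ranks on.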
Are there certain complaints that people are filing? Quality issues vs injuries?
Looking below, we see that the vast majority are incidents without any bodily harm. Over 60% of all complaints were categorized as Incident, No Injury.
End of explanation
"""
model.most_similar('was')
"""
Explanation: Although these reports are labeled as having no injury, that does not necessarily mean we can't take precautions. What I did was take the same approach as the previous model: I subsetted the data to only complaints that had "no injury" and ran a model to examine the words used. From the analysis, we see that the words to, was, and it were the top three words. At first glance, it may seem that these words are meaningless; however, if we examine words that are similar to them, we can start to see a connection.
For instance, the words most closely related to "to" were "unable" and "trying", which convey a sense of urgency in attempting to turn something on or off. Examining the word "unable," I was able to see it was related to words such as "attempted" and "disconnect." Further investigation led me to find it was dealing with a switch or a plug, possibly an electrical item.
A similar picture is painted when examining the word "was." The words that felt out of place were "emitting", "extinguish," and "smelled." It is no surprise that after a few investigations of these words, words like "sparks" and "smoke" started popping up more. This leads me to believe that these complaints have something to do with encounters closely related to fire.
So while these complaints may only be encounters with danger, it may be worthwhile to review these complaints further with an eye out for fire-related injuries or products that could cause fire.
End of explanation
"""
data.GenderDescription.value_counts()
data['age'] = map(lambda x: x/12, data['VictimAgeInMonths'])
labels = ['under 10', '10-20', '20-30', '30-40', '40-50', '50-60',
'60-70','70-80', '80-90', '90-100', 'over 100']
data['age_range'] = pd.cut(data['age'], bins=np.arange(0,120,10), labels=labels)
data['age_range'][data['age'] > 100] = 'over 100'
counts = data['age_range'].value_counts()
counts.sort()
counts
"""
Explanation: Who are the people who are actually reporting to us?
This question is difficult to answer because of a lack of data on the reporter. From the cross tabulation in Section 3.1, we see that the majority of our respondents are female and the largest age group is 40-60. That is probably our best guess of who is using the API.
Conclusion
This is meant to serve as a starting point on examining the API data. The main findings were that:
From the self-reported statistics of people who reported their injury to the API, there appears to be a skew against people who are older. The data shows that most people who report are 40-60 years old.
An overwhelming number of reports did not involve bodily harm or require medical attention; many were simply incident reports about a particular product
Out of the reports that resulted in some harm, the most reported product was in the footwear category, concerning pain and discomfort while walking in the Sketchers Tone-Ups shoes
Although not conclusive, the reports give some indication that there are a lot of fire-related incidents, based on a cursory examination of the most popular words
While text analysis is helpful, it is often not sufficient. What would really help the analysis process would be to include more information from the user. The following information would be helpful to collect in order to produce more actionable insights.
Ethnicity/Race
Self Reported Income
Geographic information
Region (Mid Atlantic, New England, etc)
Closest Metropolitan Area
State
City
Geolocation of IP address
coordinates can be "jittered" to conserve anonymity
A great next step would be a deeper text analysis on shoes. It may be possible to train a neural network that considers smaller batches of words so we can capture the context better. Another step I would take with more time would be to fix the Unicode issues in some of the complaints (special characters prevented some of them from being converted into strings). I would also look further into the category with the most overall complaints, "Electric Ranges and Stoves," and see what those complaints were.
If we could address these points, there is no doubt we could gain valuable insights on products that are harming Americans. This report serves as a first step. I would like to thank the CPSC for this data set and DKDC for the opportunity to conduct this analysis.
References
Question 2.1
The data that we worked with had limited information regarding the victim's demographics besides age and gender. However, that was enough to draw some basic inferences. Below we can get counts by gender, of which a plurality are female.
Age is a bit trickier: we have the victim's age in months. I converted it into years and broke it down into 10-year age ranges so we can better examine the data.
End of explanation
"""
#Top products affecting people with age 0
data.loc[data['age_range'].isnull(), 'ProductCategoryPublicName'].value_counts()[0:9]
#top products that affect people overall
data.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: However, after doing this we still have around 13,000 people with an age of zero. Whether they did not fill in an age or the incidents involve infants is still unknown, but comparing the distribution of products affecting people with an age of 0 against the overall dataset, it appears that null values in the age range represent people who did not fill out an age when reporting
End of explanation
"""
#overall products listed
data.ProductCategoryPublicName.value_counts()[0:9]
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.loc[~data['SeverityTypePublicName'].isin(no_injuries), :]
damage.ProductCategoryPublicName.value_counts()[0:9]
"""
Explanation: Question 2.2
At first glance, we can look at the products that were reported, as below, and see that Electric Ranges or Ovens is at the top in terms of harm. However, there are levels of severity within the API that need to be filtered before we can assess which products cause the most harm.
End of explanation
"""
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/footwear')
model.most_similar('walking')
model.most_similar('injury')
model.most_similar('instability')
"""
Explanation: This shows that among incidents where there were actual injuries and medical attention was given, the top product category was footwear, which was surprising. To explore this, I created a Word2Vec model that maps out how certain words relate to each other, trained on the comments that were made in the API. The model helps us identify words that are similar: for instance, if you type in "foot", you will get "left" and "right", as these words are most closely related to the word "foot". After some digging around, I found that the word "walking" was associated with "painful". I have some reason to believe that there are orthopedic injuries associated with these shoes: people have been experiencing pain while walking in Sketchers that were supposed to tone up their bodies, and have had some instability or balance issues.
End of explanation
"""
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/severity')
items_dict = {}
for word, vocab_obj in model.vocab.items():
items_dict[word] = vocab_obj.count
sorted_dict = sorted(items_dict.items(), key=operator.itemgetter(1))
sorted_dict.reverse()
sorted_dict[0:5]
"""
Explanation: Question 2.3
End of explanation
"""
|
andre-martini/advanced-comp-2017 | 02-trees/splitting-criteria.ipynb | gpl-3.0 | import numpy as np
from collections import Counter

X = np.array([[0,0,0], [0,0,1], [0,1,0], [0,1,1], [0,1,1], [1,0,0],
[1,0,0], [1,0,0], [1,0,0], [1,1,1]])
# two class problem with classes 0 and 1
y = [0,0,1,1,1,1,1,1,1,1]
def accuracy(a, b):
N = a + b
return 1 - max(a/N, b/N)
def entropy(a, b):
p = b/(a + b) # fraction or probability for class 1
ent = 0
if p > 0:
ent += p * np.log2(p)
if (1-p) > 0:
ent += (1-p) * np.log2(1-p)
return -ent
def gini(a, b):
p = b/(a+b)
return 2* p * (1-p)
def delta_i(impurity, t, tl, tr):
N = sum(t)
print('i(t) =', round(impurity(*t), 5),
' Nl/(Nl+Nr) * i(tL) =', round((sum(tl)/N)*impurity(*tl), 5),
' Nr/(Nl+Nr) * i(tR) =', round((sum(tr)/N)*impurity(*tr), 5))
return impurity(*t) - (sum(tl)/N)*impurity(*tl) - (sum(tr)/N)*impurity(*tr)
def split(impurity):
# for simplicity we iterate over all three variables
for idx in (0, 1, 2):
t = Counter(y)
print('Splitting t on X%i<0.5' % (idx))
tl = [0, 0]
tr = [0, 0]
for x,y_ in zip(X[:, idx], y):
if x < 0.5:
if y_ < 0.5:
tl[0] += 1
else:
tl[1] += 1
else:
if y_ < 0.5:
tr[0] += 1
else:
tr[1] += 1
print('resulting leaf tL:', tl, 'and leaf tR:', tr)
dI = delta_i(impurity, (t[0], t[1]), tl, tr)
print('di(t) = i(t) - Nl/(Nl+Nr)*i(tL) - Nr/(Nl+Nr)*i(tR) =', round(dI, 5))
print('----------')
print('With "accuracy" as splitting criteria:')
split(accuracy)
print()
"""
Explanation: Splitting criteria for tree growing
Why not use accuracy as the splitting criterion for classification trees?
In this problem it is possible to separate class 0 and class 1 perfectly.
End of explanation
"""
# repeat using entropy as splitting criteria
print('With "entropy" as splitting criteria:')
split(entropy)
from utils import draw_tree
from sklearn.tree import DecisionTreeClassifier
# check using a DecisionTreeClassifier from sklearn
# note the tree stops growing after depth 3, even though
# we set no limit on max_depth.
# The terminal leaves are pure.
clf = DecisionTreeClassifier(criterion='entropy', max_depth=None)
clf.fit(X, y)
draw_tree(clf, ['X1', 'X2', 'X3'], filled=True)
# similar for the gini index
split(gini)
"""
Explanation: When using accuracy as splitting criteria we can not find a split to make for the root node as none of the splits would lead to an increase in the impurity. Splits where the majority class remains the same in the child nodes are considered not worth making.
The tree growing procedure proceeds one step at a time, this makes it shortsighted. We can help the procedure out by using a impurity/splitting criteria that can measure the "potential" for a future split. The entropy does this as it looks at the class probabilities. It consideres a split worth making if the split alters the class distributions of the child nodes. Even if the predicted classes for each remain the same as for the parent.
End of explanation
"""
|
rasilab/ferrin_elife_2017 | scripts/fit_simulation_to_experiment.ipynb | gpl-3.0 | import pandas as pd
import re
import os
import numpy as np
import simulation_utils
from scipy.interpolate import interp1d
"""
Explanation: Fit simulation results to experiment
<div id="toc-wrapper"><h3> Table of Contents </h3><div id="toc" style="max-height: 787px;"><ol class="toc-item"><li><a href="#Globals">Globals</a></li><li><a href="#Fit-Run-2-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-for-Run-3-initiation-mutant-simulations">Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 3 initiation mutant simulations</a></li><li><a href="#Fit-Run-2-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-for-Run-4-double-mutant-simulations">Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 4 double mutant simulations</a></li><li><a href="#Fit-Run-2-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-for-Run-5-CTC-distance-mutant-simulations">Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 5 CTC distance mutant simulations</a></li><li><a href="#Fit-Run-2-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-for-Run-11-CTA-distance-mutant-simulations">Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 11 CTA distance mutant simulations</a></li><li><a href="#Fit-Run-13-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-during-serine-starvation-for-Run-14-initiation-mutant-simulations">Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 14 initiation mutant simulations</a></li><li><a href="#Fit-Run-13-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-during-serine-starvation-for-Run-15-double-mutant-simulations">Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 15 double mutant simulations</a></li><li><a href="#Fit-Run-2-stall-strength-to-reproduce-measured-single-mutant-YFP-rates-for-Run-16-multiple-CTA-mutant-simulations">Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 16 multiple CTA mutant simulations</a></li></ol></div></div>
Globals
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_initiation_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
simulationdata.drop(
['files'], axis=1).to_csv(
'../rawdata/simulations/run2_data.tsv', sep='\t', index_label='index')
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_initiation_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 3 initiation mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_double_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant)]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_double_' +
title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 4 double mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_distance_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_ctc_distance_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 5 CTC distance mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_cta_distance_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_cta_distance_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 11 CTA distance mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_serine_initiation_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=13)
simulationdata.drop(
['files'], axis=1).to_csv(
'../rawdata/simulations/run13_data.tsv', sep='\t', index_label='index')
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run13_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run13_serine_fit_stallstrength_for_initiation_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 14 initiation mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_serine_double_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=13)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run13_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
if mutant == 'tcg8': # I did not use TCG8 for double mutants
continue
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant.
lower())]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run13_serine_fit_stallstrength_for_double_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 13 stall strength to reproduce measured single mutant YFP rates during serine starvation for Run 15 double mutant simulations
End of explanation
"""
experimentdata = pd.read_table(
'../processeddata/platereader/measured_yfprates_for_leucine_multiple_simulations.tsv',
sep='\t',
index_col=0)
'''
# Uncomment this region if run_simulations_whole_cell_parameter_sweep.ipynb
# was run to generate new simulation data
simulationdata = simulation_utils.get_simulation_data(runnumber=2)
'''
simulationdata = pd.read_table(
'../rawdata/simulations/run2_data.tsv', index_col=0)
pretermtypes = ['5primepreterm', 'selpreterm']
for pretermtype in pretermtypes:
pretermrates = np.unique(simulationdata[pretermtype])
for pretermrate in pretermrates:
fitresults = dict()
if pretermtype == 'selpreterm' and pretermrate == 0:
continue
for mutant in experimentdata.index:
subset = simulationdata[(simulationdata[pretermtype] == pretermrate
) & (simulationdata['mutant'] == mutant)]
# if pretermrate is 0, make sure all other preterm rates are also 0
if pretermrate == 0:
for innerpretermtype in pretermtypes:
if innerpretermtype == pretermtype:
continue
subset = subset[(subset[innerpretermtype] == 0)]
# to avoid parameter ranges that do not have any effect at pause
# sites
subset = subset[subset['ps_ratio'] < 0.9].sort_values(
by=['stallstrength'])
fit = interp1d(subset['ps_ratio'], subset['stallstrength'])
            temp = fit(experimentdata.loc[mutant, 'measuredRateNormalized'])
fitresults[mutant] = {'stallstrength': temp}
if pretermrate == 0:
title = 'trafficjam'
else:
title = pretermtype
fitresults = pd.DataFrame.from_dict(fitresults, orient='index')
fitresults.to_csv(
'../processeddata/simulations/run2_fit_stallstrength_for_leucine_multiple_'
+ title + '.tsv',
sep='\t',
index_label='mutant')
"""
Explanation: Fit Run 2 stall strength to reproduce measured single mutant YFP rates for Run 16 multiple CTA mutant simulations
End of explanation
"""
|
kit-cel/wt | wt/vorlesung/ch1_3/relative_frequency.ipynb | gpl-2.0 | # importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 6) )
"""
Explanation: Content and Objective
Plotting the relative frequency of 'Kopf' (heads) for multiple sampled coin-toss sequences.
NOTE: results are based on an identical history, with only the most recent value being updated.
Import
End of explanation
"""
# length of sequences to be sampled
len_sequence = 100
# number of sequences to be sampled
N_sequences = 100
"""
Explanation: Parameters
End of explanation
"""
# initialize array for storing sampled sequences
results = np.zeros( ( N_sequences, len_sequence ) )
# vector of lengths in order to perform averaging
lengths = np.arange( 1, len_sequence + 1 )
# loop for sequence length
for _n in np.arange( N_sequences ):
# sample sequence
sequence = np.random.choice( [ 0 , 1 ], size = len_sequence, p = [ .5, .5 ] )
# summing up for averaging and normalize each entry by the according length
results[ _n, :] = np.cumsum( sequence ) / lengths
"""
Explanation: Simulation
End of explanation
"""
# plotting
plt.figure()
for _n in np.arange( N_sequences ):
plt.plot( range( 1, len_sequence+1 ), results[ _n, : ], linewidth = 2.0 )
plt.grid( True )
plt.xlabel('$N$')
plt.ylabel('$H_N($'+'Kopf'+'$)$')
plt.margins(.1)
"""
Explanation: Plotting
End of explanation
"""
|
barjacks/pythonrecherche | Kursteilnehmer/Dominique Strebel/06/Python Functions, 10 Übungen.ipynb | mit | def test(element):
element = element * 2
return element
"""
Explanation: 03 Python Functions, 10 Exercises
Here, once again as a reminder, is how functions are written.
End of explanation
"""
test(5)
"""
Explanation: Multiplies integers or floats by 2
End of explanation
"""
lst = [3,7,14,222,6]
lst.reverse()
print(lst)
def maxi(element):
element.sort()
element.reverse()
return element[0]
maxi(lst)
"""
Explanation: 1. Write a function that pulls the largest number out of a list. Working with "max" is forbidden. :-)
End of explanation
"""
def summe(mylist):
long_elem = 0
for elem in mylist:
long_elem = long_elem + elem
return long_elem
summe(lst)
"""
Explanation: 2. Write a function that adds up all elements of a list. Working with "sum" is forbidden.
End of explanation
"""
def multi(Menge):
multi_elem = 1
for elem in Menge:
multi_elem = multi_elem * elem
return multi_elem
multi(lst)
"""
Explanation: 3. Write a function that multiplies all elements of a list.
End of explanation
"""
elem = input('Bitte geben Sie ein Wort ein ')
wort = list(elem)
print(wort)
def umkehr(wort):
    wort = list(wort)  # copy, so the argument itself is not modified
    wort.reverse()
    print(wort)
umkehr(wort)
"""
Explanation: 4. Write a function that takes a string and mirrors it, i.e. "hallo" becomes "ollah".
End of explanation
"""
liste = [45, 34, 64,45]
def such(zahlen):
zahl = int(input('Bitte geben Sie eine Zahl ein:'))
if zahl in zahlen:
return 'Treffer'
else:
return 'Kein Treffer'
such(liste)
"""
Explanation: 5. Write a function that checks whether a number can be found in a given sequence of numbers.
End of explanation
"""
liste = [5,5,5,5,3,2,11,5]
def lösch(mehrfach):
new_mehrfach = []
for elem in mehrfach:
        if elem not in new_mehrfach:  # keep only the first occurrence
            new_mehrfach.append(elem)
print(new_mehrfach)
lösch(liste)
"""
Explanation: 6. Delete the duplicated elements from the following list.
End of explanation
"""
lst = [34,23,22,443,45,78,23,89,23]
def gerade(summe):
for elem in summe:
if elem % 2 == 0:
print(elem)
else:
continue
gerade(lst)
"""
Explanation: 7. Print the even numbers from the following list:
End of explanation
"""
satz = "In Oesterreich zeichnet sich ein Rechtsrutsch ab. OeVP und FPOe haben stark zugelegt. Gemaess der neusten Hochrechnung ist die Partei von Sebastian Kurz mit 31,6 Prozent der Stimmen Wahlsiegerin, auf Platz zwei folgt die SPÖ (26,9 Prozent) vor der FPOe (26,0 Prozent)."
def counting_caps(satz):
    zaehler = 0
    for zeichen in satz:
        if zeichen.isupper():
            zaehler += 1
    return zaehler
counting_caps(satz)
"""
Explanation: 8. Use a function to check how many capital letters can be found in the following sentence.
End of explanation
"""
|
sassoftware/sas-viya-programming | recommend/bx_recommender.ipynb | apache-2.0 | import html
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import re
import swat
from IPython.core.display import display, HTML, Markdown
from io import StringIO
%matplotlib inline
"""
Explanation: Build a Recommender System for Books
This notebook demonstrates the use of many actions within the Recommender System (recommend)
action set for SAS Cloud Analytic Services (CAS).
This example uses explicit ratings. The data set is the Book-Crossing data set<sup>1</sup>. The data preparation excludes the implicit ratings and also excludes ratings that do not match an ISBN in the books data set.
You must have access to a SAS Viya 3.3 release of CAS. To connect to CAS from Python, you must install the SAS Scripting Wrapper for Analytics Transfer (SWAT).
* For information about SWAT, including installation, see Python-SWAT.
* For information about the CAS actions used in this example, see Recommender System Action Set: Details in the SAS Visual Analytics 8.2: Programming Guide.
Copyright SAS Institute, Inc.
Notebook contents
Initial setup<br/>Import packages, including the SAS Wrapper for Analytic Transfer (SWAT) and open source libraries ○ Connect to CAS and start a session ○ Import the books file ○ Import the ratings file ○ Upload the data frames to the server
Simple exploration<br/>Calculate the sparsity ○ View the ratings distribution
Build the recommender system<br/>Partition the ratings table ○ Calculate average ratings by item and user ○ Explore the item ratings and user ratings
Build a matrix factorization model<br/>Sample the data for a hold-out group ○ Build the model using ALS ○ Make recommendations for one user
Build a KNN model<br/>Calculate similarity between users ○ View rating history for similar users ○ Recommend 10 books for one user
Combine search with recommendations<br/>Build a search index ○ Create a simple filter table ○ Make recommendations from the filter table
Initial Setup
Import Packages: SAS Wrapper for Analytic Transfer (SWAT) and Open Source Libraries<a name="initial_setup"/>
End of explanation
"""
s = swat.CAS("host.example.com", 5570);
s.loadactionset("dataPreprocess")
s.loadactionset("fedSql")
s.loadactionset("recommend")
data_dir = '/path/to/data/'
ratings_file = data_dir + 'BX-Book-Ratings.csv'
books_file = data_dir + 'BX-Books.csv'
"""
Explanation: Connect to CAS and start a session<a name="initial_connect"/>
End of explanation
"""
with open(books_file, 'r', encoding='iso-8859-1') as f:
content = html.unescape(f.read())
books_df = pd.read_csv(StringIO(content),
header=0,
error_bad_lines=False,
sep=';', encoding='iso-8859-1',
names=['ISBN', 'Title', 'Author', 'Year_Of_Publication', 'Publisher'],
usecols=['ISBN', 'Title', 'Author', 'Year_Of_Publication', 'Publisher'],
dtype={'Year_Of_Publication': str})
books_df = books_df[books_df.ISBN.str.len() == 10]
books_df.head()
"""
Explanation: Import the books file<a name="initial_books"/>
This file includes HTML entities such as &amp; that interfere with parsing. Keep only the
books whose ISBN is 10 characters long.
End of explanation
"""
pattern = re.compile('"\d+";"[0-9X]{10}";"[1-9][0]?"')
buffer = StringIO()
with open(ratings_file, 'r', encoding='iso-8859-1') as f:
for line in f:
if pattern.match(line):
buffer.write(line)
buffer.seek(0)
ratings_df = pd.read_csv(buffer,
skiprows=0,
sep=';',
names=['User_ID', 'ISBN', 'Rating'],
dtype={'User_ID': str, 'ISBN': str, 'Rating': int})
buffer.close()
ratings_df.drop_duplicates(inplace=True)
ratings_df.head()
"""
Explanation: Import the ratings file <a name="initial_ratings"/>
The regex is used to skip invalid lines and to skip ratings with a 0. This example
uses explicit ratings only.
End of explanation
"""
ratings = s.upload_frame(ratings_df,
casout=s.CASTable('ratings',
replace=True,
indexVars=['isbn']),
importoptions=dict(filetype="csv",
vars=[
dict(name="User_ID", type="double"),
dict(name="ISBN", type="CHAR", length=10),
dict(name="Rating", type="double")
]))
books = s.upload_frame(books_df,
casout=s.CASTable('books', replace=True),
importoptions=dict(filetype="csv",
vars=[dict(name="ISBN", type="CHAR", length=10)]))
display(Markdown('### Books'))
display(books.table.columninfo())
display(Markdown('### Ratings'))
display(ratings.table.columninfo())
"""
Explanation: Finally, upload the data frames to the server <a name="initial_upload"/>
Because the ISBNs used from the data set are all 10 characters long, a fixed-width CHAR
type is the more efficient choice for them. The author and title strings vary greatly in
length, so VARCHAR is a better choice for those two columns.
End of explanation
"""
original_row_count = len(ratings)
s.dataStep.runCode(code='''
data ratings;
merge ratings(in=ratings) books(in=books keep=isbn);
by isbn;
if books and ratings then output;
run;
''')
final_row_count = len(ratings)
df = pd.DataFrame([[original_row_count], [final_row_count]],
columns=['Ratings Count'],
index=['Original', 'Final'])
df
"""
Explanation: Discard any ratings that do not have a corresponding ISBN in the books table.
End of explanation
"""
ratings['rating'].describe(stats=['mean', 'count', 'nmiss'])
"""
Explanation: Confirm there are no missing values for ratings
Check that the value for the NMiss column in the results is 0.
End of explanation
"""
out = ratings.simple.distinct().Distinct.set_index('Column')
out
# Store the number of rows in a variable
rating_count = len(ratings)
result = ratings.simple.distinct().Distinct.set_index('Column')
# Store the distinct number of users.
user_count = result.loc['User_ID', 'NDistinct']
# Store the distinct number of items.
item_count = result.loc['ISBN', 'NDistinct']
# Finally, here's the sparsity.
sparsity = 1.0 - (rating_count / (user_count * item_count))
df = pd.DataFrame([rating_count, user_count, item_count, sparsity],
index=['Ratings', 'Users', 'Items', 'Sparsity'],
columns=['Value'])
df
"""
Explanation: Simple exploration<a name="simple_explore"/>
Calculate the sparsity
Sparsity of ratings is a common problem with recommender systems.
End of explanation
"""
results = ratings['rating'].value_counts(sort=False)
results
ax = results.plot.bar(
title='Distribution of Ratings',
figsize=(15,5)
)
"""
Explanation: View the distribution of ratings
End of explanation
"""
ratings_by_item = s.CASTable('ratings_by_item', replace=True)
result = ratings.groupby('ISBN').partition(casout=ratings_by_item)
ratings_by_user = s.CASTable('ratings_by_user', replace=True)
result = ratings.groupby('User_ID').partition(casout=ratings_by_user)
"""
Explanation: Build the recommender system<a name="build_recommender"/>
Partition the tables
Subsequent actions are more efficient if the ratings table is partitioned once by the item
and a second table is partitioned by the user.
If you do not perform this step now, many of the subsequent actions will automatically
copy the data and group it by item or by user. For a single action that is convenient,
but this notebook runs several actions, so doing the data transfer and reorganization
once up front makes the subsequent actions more memory and CPU efficient.
End of explanation
"""
ratings.table.droptable()
book_recommend = s.CASTable('bookRecommend', replace=True)
results = s.recommend.recomCreate(
system=book_recommend,
user='User_ID',
item='ISBN',
rate='Rating')
"""
Explanation: Now that we have two instances of the ratings table, partitioned by different variables,
we can drop the original ratings table.
End of explanation
"""
avg_user = s.CASTable('avg_user', replace=True)
results = ratings_by_user.recommend.recomRateinfo(
label='avg_user_model',
system=book_recommend,
id='User_ID',
sparseid='ISBN',
sparseval='rating',
casout=avg_user)
result = avg_user.head()
result
"""
Explanation: Determine average ratings by user and item
For the average ratings by user, specify the ratings table that is partitioned by user. If the table is not already partitioned that way, the action temporarily groups the data for you, but that slows the action down.
End of explanation
"""
firstUser = result.loc[0,'User_ID']
count = result.loc[0,'_NRatings_']
ratings_by_user[ratings_by_user.user_id == firstUser].head(count)
"""
Explanation: You can view the ratings to confirm that the average is shown. If the first user
has a single rating, you can rerun the preceding cell with avg_user.query("_nratings_ = 2").head() or a related query.
End of explanation
"""
avg_item = s.CASTable('avg_item', replace=True)
results = ratings_by_item.recommend.recomRateinfo(
label='avg_item_model',
system=book_recommend,
id='isbn',
sparseid='user_id',
sparseval='rating',
    casout=avg_item)
avg_item.head()
"""
Explanation: Create an average ratings by item table.
End of explanation
"""
avg_user.query('_nratings_ > 3').sort_values('_stat_').head(10)
"""
Explanation: Explore item ratings and user ratings
The tables that are created with the recomrateinfo action can be
used for simple data exploration.
Discerning reviewers
End of explanation
"""
avg_user.sort_values(['_stat_', '_nratings_'], ascending=False).head(10)
"""
Explanation: Generous reviewers
End of explanation
"""
s.fedSql.execDirect(query='''
select t1.isbn, t1._stat_ as "Average Rating",
t1._nratings_ as "Number of Ratings",
t2.author, t2.title from
avg_item as t1 join books as t2
on (t1.isbn = t2.isbn) order by 3 desc limit 10
''')
"""
Explanation: Ten most frequently reviewed books
End of explanation
"""
result = avg_item.query('_nratings_ > 10').sort_values('_stat_').head(10)
result
#Store the ISBN for the first row.
first_isbn = result.loc[0, 'ISBN']
result = ratings_by_item['rating'].query("isbn eq '%s'" % first_isbn).dataPreprocess.histogram()
display(Markdown('#### Ratings Distribution for ISBN %s' % first_isbn))
display(result.BinDetails.loc[:, ['BinLowerBnd', 'NInBin', 'Percent']])
"""
Explanation: Frequently reviewed books with low ratings
End of explanation
"""
holdout_users = s.CASTable('holdout_users', replace=True)
ratings_by_user.recommend.recomSample(
system=book_recommend,
label='holdout_users',
withhold=.2,
hold=1,
seed=1234,
id='user_id',
sparseid='isbn',
casout=holdout_users
)
holdout_users.head(10)
als_u = s.CASTable('als_u', replace=True)
als_i = s.CASTable('als_i', replace=True)
result = s.recommend.recomAls(
system=book_recommend,
tableu=ratings_by_user,
tablei=ratings_by_item,
label='als1',
casoutu=als_u,
casouti=als_i,
rateinfo=avg_user,
maxiter=20,
hold=holdout_users,
seed=1234,
details=True,
k=20,
stagnation=10,
threshold=.1
)
result.ModelInfo.set_index('Descr')
ax = result.IterHistory.plot(
x='Iteration', y='Objective',
title='Objective Function',
figsize=(9,6)
)
result.IterHistory.set_index('Iteration')
"""
Explanation: Build a matrix factorization model<a name="build_matrixfactorization"/>
First, create a hold-out group. From a random selection of 20% of users, hold out 1 rating.
After that, create the model.
End of explanation
"""
users = '104437'
recommendations = s.CASTable('recommendations', replace=True)
s.recommend.recomMfScore(
system=book_recommend,
label='als1',
userlist=users,
n=5,
casout=recommendations
)
s.fedSql.execDirect(query='''
select t1.*, t2.author, t2.title,
t3._stat_ as "Average Rating", t3._nratings_ as "Number of Ratings"
from recommendations as t1
left outer join books as t2 on (t1.isbn = t2.isbn)
left outer join avg_item as t3 on (t1.isbn = t3.isbn)
order by user_id, _rank_
''')
"""
Explanation: Make recommendations for one user
End of explanation
"""
recommend_heldout = s.CASTable('recommend_heldout', replace=True)
s.recommend.recomMfScore(
system=book_recommend,
label='als1',
usertable=holdout_users,
n=5,
casout=recommend_heldout
)
result = s.fedsql.execdirect(query='''
select t1.*, t2.author, t2.title,
t3._stat_ as "Average Rating", t3._nratings_ as "Number of Ratings"
from recommend_heldout as t1
left outer join books as t2 on (t1.isbn = t2.isbn)
left outer join avg_item as t3 on (t1.isbn = t3.isbn)
order by user_id, _rank_
''')
# There are many rows in the results. Print results for the first three users only.
three = result['Result Set'].loc[[0,5,10],:'User_ID'].values
for user in np.nditer(three):
display(Markdown('#### Recommendations for user %s ' % user))
display(result['Result Set'].query('User_ID == %s' % user))
"""
Explanation: Make recommendations for hold-out users
End of explanation
"""
similar_users = s.CASTable("similar_users", replace=True)
ratings_by_user.recommend.recomSim(
label="similar_users",
system=book_recommend,
id="user_id",
sparseId="isbn",
sparseVal="rating",
measure="cos",
casout=similar_users,
threshold=.2)
"""
Explanation: Build a KNN model<a name="build_knn"/>
Calculate the similarity between users
End of explanation
"""
result = similar_users.query("user_id_1 = 104437 and user_id_2 = 199981").head(1)
display(result)
def one_users_ratings(user_id):
result = s.fedSql.execDirect(query='''
select t1.*,
t2.author, t2.title from ratings_by_user as t1
left outer join books as t2 on (t1.isbn = t2.isbn)
where t1.user_id = {}
order by author, isbn;
'''.format(user_id))
display(Markdown('#### Ratings by user %s' % user_id))
display(result)
one_users_ratings(104437)
one_users_ratings(199981)
"""
Explanation: View the similarity for one pair of users
In this case, these two users read three of the same books. They rated the three
at 7 and above.
End of explanation
"""
ratings_by_item.recommend.recomKnnTrain(
label='knn1',
system=book_recommend,
similarity=similar_users,
k=20,
hold=holdout_users,
rateinfo=avg_user,
    user=True  # indicate whether the similarity table is user-based or item-based
)
users = ['104437']
knn_recommended = s.CASTable("knn_recommended", replace=True)
s.recommend.recomKnnScore(
system="bookRecommend",
label="knn1",
userList=users,
n=10,
casout=knn_recommended
)
s.fedSql.execDirect(
query='''
select t1.*, t2.author, t2.title,
t3._stat_ as "Average Rating", t3._nratings_ as "Number of Ratings"
from knn_recommended as t1
left outer join books as t2 on (t1.isbn = t2.isbn)
left outer join avg_item as t3 on (t1.isbn = t3.isbn)
order by user_id, _rank_;
''')
"""
Explanation: Calculate KNN based on user's similar ratings
End of explanation
"""
book_search = s.CASTable("book_search", replace=True)
book_search.table.droptable(quiet=True)
books.recommend.recomSearchIndex(
system=book_recommend,
label='book_search',
id='isbn')
yoga_query = 'yoga fitness'
query_filter = s.CASTable("query_filter", replace=True)
result = book_search.recommend.recomSearchQuery(
system=book_recommend,
label='book_search',
casout=query_filter,
query=yoga_query,
n=100)
query_filter.columnInfo()
yoga_reader = '99955'
filtered_results = s.CASTable('filtered_results', replace=True)
filtered_results = s.recommend.recomMfScore(
system=book_recommend,
label='als1',
filter=query_filter,
userlist=yoga_reader,
n=5,
casout=filtered_results
)
s.fedSql.execDirect(query='''
select t1.*, t2.author, t2.title,
t3._stat_ as "Average Rating", t3._nratings_ as "Number of Ratings"
from filtered_results as t1
left outer join books as t2 on (t1.isbn = t2.isbn)
left outer join avg_item as t3 on (t1.isbn = t3.isbn)
order by user_id, _rank_;
''')
#s.close()
"""
Explanation: Combine search with recommendations<a name="build_searchindex"/>
Search can be included with recommendations
First, build the search index.
* The recomSearchIndex action generates a global-scope table that is named the same as the label parameter.
* The generated index table is always appended when recomSearchIndex is run again with the same label. This can generate duplicate documents in the index table.
* To avoid duplicates, the table.dropTable action is run first. The quiet=True parameter is used to ignore whether the table exists or not.
Afterward, run search queries for terms.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/bnu/cmip6/models/sandbox-3/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, specify the functions that snow albedo depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
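The Cardinality field throughout these property descriptions uses a compact min.max notation: 1.1 means exactly one value is required, 0.1 means an optional single value, 1.N means one or more values, and 0.N means any number. A minimal checker — purely illustrative, not part of the ES-DOC API — makes the semantics concrete:

```python
def cardinality_ok(values, spec):
    """Check a list of entered values against a cardinality spec like '1.1', '0.1', '1.N' or '0.N'."""
    lo, hi = spec.split('.')
    if len(values) < int(lo):
        return False                      # required value(s) missing
    return hi == 'N' or len(values) <= int(hi)

# "1.N" (e.g. water_re_evaporation): at least one choice is required
assert not cardinality_ok([], '1.N')
assert cardinality_ok(['flood plains', 'irrigation'], '1.N')
# "0.1" (e.g. tiling): optional, but at most a single value
assert cardinality_ok([], '0.1')
assert not cardinality_ok(['a', 'b'], '0.1')
```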
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
SHDShim/pytheos | examples/11_pvt-eos_fit-el_anh.ipynb | apache-2.0 | %config InlineBackend.figure_format = 'retina'
"""
Explanation: For high dpi displays.
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
import uncertainties as uct
from uncertainties import unumpy as unp
import pytheos as eos
"""
Explanation: 0. General note
This notebook shows an example of how to include anharmonic effects and/or electronic effects in the P-V-T EOS fitting.
We use data on SiC from Nisr et al. (2017, JGR-Planet).
Note that the code here is for demonstration purposes. In fact, we did not find any evidence supporting the inclusion of anharmonic or electronic effects when fitting the SiC data.
1. Global setup
End of explanation
"""
au_eos = {'Fei2007': eos.gold.Fei2007bm3(), 'Dorogokupets2007': eos.gold.Dorogokupets2007()}
"""
Explanation: 2. Setup for fitting with different gold pressure scales
We use the equations of state of gold from Fei et al. (2007, PNAS) and Dorogokupets and Dewaele (2007, HPR). Both are provided in pytheos as built-in classes.
End of explanation
"""
st_model = {'Fei2007': eos.BM3Model, 'Dorogokupets2007': eos.VinetModel}
"""
Explanation: Because Fei2007 uses the Birch-Murnaghan EOS while Dorogokupets2007 uses the Vinet EOS, we create a dictionary providing the appropriate static compression EOS model for each pressure scale.
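For orientation, the two static models differ in functional form. The textbook third-order Birch-Murnaghan and Vinet pressure equations are sketched below (pytheos's built-in classes implement these internally, possibly with extra details):

```python
from math import exp

def p_bm3(v, v0, k0, k0p):
    """Third-order Birch-Murnaghan pressure (same units as k0)."""
    f = (v0 / v) ** (1. / 3.)
    return 1.5 * k0 * (f**7 - f**5) * (1. + 0.75 * (k0p - 4.) * (f**2 - 1.))

def p_vinet(v, v0, k0, k0p):
    """Vinet pressure (same units as k0)."""
    x = (v / v0) ** (1. / 3.)
    return 3. * k0 * (1. - x) / x**2 * exp(1.5 * (k0p - 1.) * (1. - x))

# Both forms give P = 0 at V = V0 and agree closely at small compression:
v0_ref, k0_ref, k0p_ref = 82.8042, 241.2, 2.84   # 3C-SiC initial values used below
assert abs(p_bm3(v0_ref, v0_ref, k0_ref, k0p_ref)) < 1e-9
assert abs(p_vinet(v0_ref, v0_ref, k0_ref, k0p_ref)) < 1e-9
```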
End of explanation
"""
k0_3c = {'Fei2007': 241.2, 'Dorogokupets2007': 243.0}
k0p_3c = {'Fei2007': 2.84, 'Dorogokupets2007': 2.68}
k0_6h = {'Fei2007': 243.1, 'Dorogokupets2007': 245.5}
k0p_6h = {'Fei2007': 2.79, 'Dorogokupets2007': 2.59}
"""
Explanation: Assign initial values for the EOS parameters.
End of explanation
"""
gamma0 = 1.06
q = 1.
theta0 = 1200.
"""
Explanation: Also for the thermal parameters. In this example, we will use the constant $q$ equation for the thermal part of the EOS.
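In the constant-$q$ formulation the Grüneisen parameter and Debye temperature vary with volume as $\gamma = \gamma_0 (V/V_0)^q$ and $\theta = \theta_0 \exp[(\gamma_0 - \gamma)/q]$. A small sketch of these standard relations, shown only to make the meaning of gamma0, q and theta0 explicit:

```python
from math import exp

def gamma_constq(v, v0, gamma0, q):
    """Grueneisen parameter under the constant-q assumption."""
    return gamma0 * (v / v0) ** q

def theta_constq(v, v0, gamma0, q, theta0):
    """Debye temperature consistent with the constant-q gamma(V)."""
    gamma = gamma_constq(v, v0, gamma0, q)
    return theta0 * exp((gamma0 - gamma) / q)

# At the reference volume the parameters reduce to their "0" values:
assert gamma_constq(82.8042, 82.8042, 1.06, 1.) == 1.06
assert theta_constq(82.8042, 82.8042, 1.06, 1., 1200.) == 1200.
# Compression (V < V0) lowers gamma and raises theta:
assert gamma_constq(70., 82.8042, 1.06, 1.) < 1.06
assert theta_constq(70., 82.8042, 1.06, 1., 1200.) > 1200.
```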
End of explanation
"""
v0 = {'3C': 82.8042, '6H': 124.27}
n_3c = 2.; z_3c = 4.
n_6h = 2.; z_6h = 6.
"""
Explanation: We also setup for the physical constants of two different polymorphs of SiC.
End of explanation
"""
data = np.recfromcsv('./data/3C-HiTEOS-final.csv', case_sensitive=True, deletechars='')
"""
Explanation: 3. Data distribution (3C)
The data set is provided in a csv file.
End of explanation
"""
v_std = unp.uarray( data['V(Au)'], data['sV(Au)'])
temp = unp.uarray(data['T(3C)'], data['sT(3C)'])
v = unp.uarray(data['V(3C)'], data['sV(3C)'])
"""
Explanation: Set up variables for the data.
End of explanation
"""
for key, value in au_eos.items():
p = au_eos[key].cal_p(v_std, temp)
eos.plot.thermal_data({'p': p, 'v': v, 'temp': temp}, title=key)
"""
Explanation: Plot $P$-$V$-$T$ data in the $P$-$V$ and $P$-$T$ spaces.
End of explanation
"""
for key, value in au_eos.items():
# calculate pressure
p = au_eos[key].cal_p(v_std, temp)
# add prefix to the parameters. this is important to distinguish thermal and static parameters
eos_st = st_model[key](prefix='st_')
eos_th = eos.ConstqModel(n_3c, z_3c, prefix='th_')
eos_el = eos.ZharkovElecModel(n_3c, z_3c, prefix='el_')
# define initial values for parameters
params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key])
params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q=q, theta0=theta0)
params += eos_el.make_params(v0=v0['3C'], e0=0.1e-6, g=0.01)
# construct PVT eos
pvteos = eos_st + eos_th + eos_el
# fix static parameters and some other well known parameters
params['th_v0'].vary=False; params['th_gamma0'].vary=False; params['th_theta0'].vary=False
params['th_q'].vary=False
params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False
params['el_v0'].vary=False#; params['el_e0'].vary=False#; params['el_g'].vary=False
# calculate weights. setting it None results in unweighted fitting
weights = 1./unp.std_devs(p) #None
fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v),
temp=unp.nominal_values(temp), weights=weights)
print('********'+key)
print(fit_result.fit_report())
# plot fitting results
eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key)
"""
Explanation: 4. Data fitting with constq equation (3C) with electronic effects
Normally the weight for each data point can be calculated from $\sigma(P)$. In this case, using uncertainties, we can easily propagate the temperature and volume uncertainties to get the value.
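What uncertainties does under the hood is first-order (linear) error propagation, $\sigma_P^2 \approx (\partial P/\partial V)^2\sigma_V^2 + (\partial P/\partial T)^2\sigma_T^2$. A hand-rolled sketch for a generic $P(V,T)$ — illustration only, since the unumpy calls above get this for free:

```python
from math import sqrt

def sigma_p(p_func, v, sv, t, st, h=1e-6):
    """First-order propagated uncertainty of p_func(v, t), via central differences."""
    dp_dv = (p_func(v + h, t) - p_func(v - h, t)) / (2. * h)
    dp_dt = (p_func(v, t + h) - p_func(v, t - h)) / (2. * h)
    return sqrt((dp_dv * sv) ** 2 + (dp_dt * st) ** 2)

# toy P(V, T), linear in both variables, so the propagation is exact
p_toy = lambda v, t: -2.0 * v + 0.003 * t
s = sigma_p(p_toy, 60., 0.01, 1500., 10.)
assert abs(s - sqrt((2.0 * 0.01) ** 2 + (0.003 * 10.) ** 2)) < 1e-6
# the fitting weight for that point is then simply 1/sigma
weight = 1. / s
```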
End of explanation
"""
for key, value in au_eos.items():
# calculate pressure
p = au_eos[key].cal_p(v_std, temp)
# add prefix to the parameters. this is important to distinguish thermal and static parameters
eos_st = st_model[key](prefix='st_')
eos_th = eos.ConstqModel(n_3c, z_3c, prefix='th_')
eos_anh = eos.ZharkovAnhModel(n_3c, z_3c, prefix='anh_')
# define initial values for parameters
params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key])
params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q=q, theta0=theta0)
params += eos_anh.make_params(v0=v0['3C'], a0=0.1e-6, m=0.01)
# construct PVT eos
pvteos = eos_st + eos_th + eos_anh
# fix static parameters and some other well known parameters
params['th_v0'].vary=False; params['th_gamma0'].vary=False; params['th_theta0'].vary=False
params['th_q'].vary=False
params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False
params['anh_v0'].vary=False#; params['el_e0'].vary=False#; params['el_g'].vary=False
# calculate weights. setting it None results in unweighted fitting
weights = 1./unp.std_devs(p) #None
fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v),
temp=unp.nominal_values(temp), weights=weights)
print('********'+key)
print(fit_result.fit_report())
# plot fitting results
eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key)
"""
Explanation: 5. Data fitting with constq equation (3C) with anharmonic effects
End of explanation
"""
|
exe0cdc/PyscesToolbox | example_notebooks/RateChar.ipynb | bsd-3-clause | mod = pysces.model('lin4_fb.psc')
rc = psctb.RateChar(mod)
"""
Explanation: RateChar
RateChar is a tool for performing generalised supply-demand analysis (GSDA) [2,3]. This entails generating the data needed to draw rate characteristic plots for all the variable species of a metabolic model through parameter scans, and the subsequent visualisation of these data in the form of ScanFig objects.
Features
Performs parameter scans for any variable species of a metabolic model
Stores results in a structure similar to Data2D.
Saving of raw parameter scan data, together with metabolic control analysis results to disk.
Saving of RateChar sessions to disk for later use.
Generates rate characteristic plots from parameter scans (using ScanFig).
Can perform parameter scans of any variable species with outputs for relevant response, partial response, elasticity and control coefficients (with data stored as Data2D objects).
Usage and Feature Walkthrough
Workflow
Performing GSDA with RateChar usually requires taking the following steps:
Instantiation of RateChar object (optionally specifying default settings).
Performing a configurable parameter scan of any combination of variable species (or loading previously saved results).
Accessing scan results through RateCharData objects corresponding to the names of the scanned species that can be found as attributes of the instantiated RateChar object.
Plotting results of a particular species using the plot method of the RateCharData object corresponding to that species.
Further analysis using the do_mca_scan method.
Session/Result saving if required.
Further Analysis
.. note:: Parameter scans are performed for a range of concentration values between two set values. By default the minimum and maximum scan range values are calculated relative to the steady-state concentration of the species being scanned, using a division and a multiplication factor respectively. Minimum and maximum values may also be explicitly specified. Furthermore, the number of points for which a scan is performed may also be specified. Details of how to access these options will be discussed below.
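A sketch of how those relative factors translate into an actual scan range (a hypothetical helper, not the RateChar internals; a logarithmic grid is shown here since rate characteristics are plotted on log-log axes, though the exact spacing RateChar uses is an implementation detail):

```python
from math import log10

def scan_range(ss_conc, min_factor=100, max_factor=100, points=256):
    """Build a scan range relative to a steady-state concentration (sketch only)."""
    lo, hi = ss_conc / min_factor, ss_conc * max_factor
    step = (log10(hi) - log10(lo)) / (points - 1)
    # logarithmically spaced points between lo and hi
    return [10 ** (log10(lo) + i * step) for i in range(points)]

rng = scan_range(1.0)                     # hypothetical steady state of 1.0
assert len(rng) == 256
assert abs(rng[0] - 0.01) < 1e-12         # ss / min_factor
assert abs(rng[-1] - 100.0) < 1e-9        # ss * max_factor
```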
Object Instantiation
Like most tools provided in PySCeSToolbox, instantiation of a RateChar object requires a pysces model object (PysMod) as an argument. A RateChar session will typically be initiated as follows (here we will use the included lin4_fb.psc model):
End of explanation
"""
rc = psctb.RateChar(mod,min_concrange_factor=100,
max_concrange_factor=100,
scan_points=255,
auto_load=False)
"""
Explanation: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation:
End of explanation
"""
mod.species
rc.do_ratechar()
"""
Explanation: min_concrange_factor : The steady state division factor for calculating scan range minimums (default: 100).
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: 100).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: 256).
auto_load : If True RateChar will try to load saved data from a previous session during instantiation. Saved data are unaffected by the above options and are only subject to the settings specified during the session in which they were generated. (default: False).
The settings specified with these optional arguments take effect when the corresponding arguments are not specified during a parameter scan.
Parameter Scan
After object instantiation, parameter scans may be performed for any of the variable species using the do_ratechar method. By default do_ratechar will perform parameter scans for all variable metabolites using the settings specified during instantiation. For saving/loading see Saving/Loading Sessions below.
End of explanation
"""
rc.do_ratechar(fixed=['S1','S3'], scan_min=0.02, max_concrange_factor=110, scan_points=200)
"""
Explanation: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan:
fixed : A string or list of strings specifying the species for which to perform a parameter scan. The string 'all' specifies that all variable species should be scanned. (default: all)
scan_min : The minimum value of the scan range, overrides min_concrange_factor (default: None).
scan_max : The maximum value of the scan range, overrides max_concrange_factor (default: None).
min_concrange_factor : The steady state division factor for calculating scan range minimums (default: None)
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: None).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: None).
solver : An integer value that specifies which solver to use (0:Hybrd,1:NLEQ,2:FINTSLV). (default: 0).
.. note:: For details on the different solvers see the PySCeS documentation.
For example, in a scenario where we only wanted to perform parameter scans of 200 points for the metabolites S1 and S3, starting at a value of 0.02 and ending at a value 110 times their respective steady-state values, the method would be called as follows:
End of explanation
"""
# Each key represents a field through which results can be accessed
sorted(rc.S3.scan_results.keys())
"""
Explanation: Accessing Results
Parameter Scan Results
Parameter scan results for any particular species are saved as an attribute of the RateChar object under the name of that species. These RateCharData objects are similar to Data2D objects with parameter scan results being accessible through a scan_results DotDict:
End of explanation
"""
# Single value results
# scan_min value
rc.S3.scan_results.scan_min
# fixed metabolite name
rc.S3.scan_results.fixed
# 1-dimensional ndarray results (only every 10th value of 200 value arrays)
# scan_range values
rc.S3.scan_results.scan_range[::10]
# J_R3 values for scan_range
rc.S3.scan_results.J_R3[::10]
# total_supply values for scan_range
rc.S3.scan_results.total_supply[::10]
# Note that J_R3 and total_supply are equal in this case, because S3
# only has a single supply reaction
"""
Explanation: .. note:: The DotDict data structure is essentially a dictionary with additional functionality for displaying results in table form (when appropriate) and for accessing data using dot notation in addition to the normal dictionary bracket notation.
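The dot-access idea is easy to emulate — a minimal stand-in (the real DotDict in PySCeSToolbox has more functionality, e.g. table display):

```python
class MiniDotDict(dict):
    """Dictionary whose keys are also readable as attributes."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

d = MiniDotDict(scan_min=0.02, fixed='S3')
assert d.scan_min == d['scan_min'] == 0.02   # dot and bracket access agree
assert d.fixed == 'S3'
```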
In the above dictionary-like structure each field can represent different types of data, the most simple of which is a single value, e.g., scan_min and fixed, or a 1-dimensional numpy ndarray which represent input (scan_range) or output (J_R3, J_R4, total_supply):
End of explanation
"""
# Metabolic Control Analysis coefficient line data
# Names of elasticity coefficients related to the 'S3' parameter scan
rc.S3.scan_results.ec_names
# The x, y coordinates for two points that will be used to plot a
# visual representation of ecR3_S3
rc.S3.scan_results.ecR3_S3
# The x,y coordinates for two points that will be used to plot a
# visual representation of ecR4_S3
rc.S3.scan_results.ecR4_S3
# The ecR3_S3 and ecR4_S3 data collected into a single array
# (horizontally stacked).
rc.S3.scan_results.ec_data
"""
Explanation: Finally, the data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. These data are supplied in three different forms: lists of the coefficient names (under ec_names, prc_names, etc.), 2-dimensional arrays with exactly 4 values (representing 2 sets of x,y coordinates) that will be used to plot coefficient lines, and 2-dimensional arrays that collect the coefficient line data for each coefficient type into single arrays (under ec_data, prc_data, etc.).
End of explanation
"""
# Metabolic control analysis coefficient results
rc.S3.mca_results
"""
Explanation: Metabolic Control Analysis Results
In addition to being able to access the data used to draw rate characteristic plots, the user also has access to the values of the metabolic control analysis coefficients at the steady state of any particular species via the mca_results field. This field is a DotDict dictionary-like object (like scan_results); however, as each key maps to exactly one result, the data can be displayed as a table (see Basic Usage):
End of explanation
"""
# Control coefficient ccJR3_R1 value
rc.S3.mca_results.ccJR3_R1
"""
Explanation: Naturally, coefficients can also be accessed individually:
End of explanation
"""
# Rate characteristic plot for 'S3'.
S3_rate_char_plot = rc.S3.plot()
"""
Explanation: Plotting Results
One of the strengths of generalised supply-demand analysis is that it provides an intuitive visual framework for inspecting results through the use of rate characteristic plots; this is therefore the main focus of RateChar. Parameter scan results for any particular species can be visualised as a ScanFig object through the plot method:
End of explanation
"""
# Display plot via `interact` and enable certain lines by clicking category buttons.
# The two method calls below are equivalent to clicking the 'J_R3'
# and 'Partial Response Coefficients' buttons:
# S3_rate_char_plot.toggle_category('J_R3',True)
# S3_rate_char_plot.toggle_category('Partial Response Coefficients',True)
S3_rate_char_plot.interact()
"""
Explanation: Plots generated by RateChar do not have widgets for each individual line; lines are enabled or disabled in batches according to the category they belong to. By default the Fluxes, Demand and Supply categories are enabled when plotting. To display the partial response coefficient lines together with the flux lines for J_R3, for instance, we would click the J_R3 and the Partial Response Coefficients buttons (in addition to those that are enabled by default).
End of explanation
"""
S3_rate_char_plot.toggle_line('prcJR3_S3_R4', False)
S3_rate_char_plot.show()
"""
Explanation: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example, prcJR3_S3_R4 can be disabled as follows:
End of explanation
"""
# This points to a file under the Pysces directory
save_file = '~/Pysces/rc_doc_example.npz'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
save_file = psctb.utils.misc.unix_to_windows_path(save_file)
else:
save_file = path.expanduser(save_file)
rc.save_session(file_name = save_file)
"""
Explanation: .. note:: For more details on saving see the sections Saving and Default Directories and ScanFig under Basic Usage.
Saving
Saving/Loading Sessions
RateChar sessions can be saved for later use. This is especially useful when working with large data sets that take some time to generate. Data sets can be saved to any arbitrary location by supplying a path:
End of explanation
"""
rc.save_session() # to "~/Pysces/lin4_fb/ratechar/save_data.npz"
"""
Explanation: When no path is supplied the dataset will be saved to the default directory (which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case).
End of explanation
"""
rc.load_session(save_file)
# OR
rc.load_session() # from "~/Pysces/lin4_fb/ratechar/save_data.npz"
"""
Explanation: Similarly, results may be loaded using the load_session method, either with or without a specified path:
End of explanation
"""
# This points to a subdirectory under the Pysces directory
save_folder = '~/Pysces/lin4_fb/'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
save_folder = psctb.utils.misc.unix_to_windows_path(save_folder)
else:
save_folder = path.expanduser(save_folder)
rc.save_results(save_folder)
"""
Explanation: Saving Results
Results may also be exported in csv format, either to a specified location or to the default directory. Unlike session saving, the results are spread over multiple files, so here an existing folder must be specified:
End of explanation
"""
# Otherwise results will be saved to the default directory
rc.save_results(save_folder) # to sub folders in "~/Pysces/lin4_fb/ratechar/
"""
Explanation: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files).
End of explanation
"""
|
qinwf-nuan/keras-js | notebooks/layers/wrappers/TimeDistributed.ipynb | mit | data_in_shape = (3, 6)
layer_0 = Input(shape=data_in_shape)
layer_1 = TimeDistributed(Dense(4))(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(4000 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['wrappers.TimeDistributed.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: TimeDistributed
[wrappers.TimeDistributed.0] wrap a Dense layer with units 4 (input: 3 x 6)
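TimeDistributed applies the wrapped layer independently to every temporal slice of the input, sharing the same weights across timesteps — so a (3, 6) input through a Dense(4) becomes (3, 4). A tiny pure-Python sketch of that contract (illustrative; the test data above comes from Keras itself):

```python
def time_distributed_dense(x, w, b):
    """Apply out[t] = x[t] @ w + b to each timestep t (plain lists, no numpy)."""
    return [[sum(xi * w[i][j] for i, xi in enumerate(row)) + b[j]
             for j in range(len(b))]
            for row in x]

x = [[1.0] * 6, [0.0] * 6, [2.0] * 6]          # 3 timesteps x 6 features
w = [[0.5] * 4 for _ in range(6)]              # shared 6x4 kernel
b = [0.1] * 4
y = time_distributed_dense(x, w, b)
assert len(y) == 3 and len(y[0]) == 4          # output shape (3, 4)
assert abs(y[0][0] - (6 * 0.5 + 0.1)) < 1e-12  # all-ones timestep
```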
End of explanation
"""
data_in_shape = (5, 4, 4, 2)
layer_0 = Input(shape=data_in_shape)
layer_1 = TimeDistributed(Conv2D(6, (3,3), data_format='channels_last'))(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(4010 + i)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
weight_names = ['W', 'b']
for w_i, w_name in enumerate(weight_names):
print('{} shape:'.format(w_name), weights[w_i].shape)
print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['wrappers.TimeDistributed.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
"""
Explanation: [wrappers.TimeDistributed.1] wrap a Conv2D layer with 6 3x3 filters (input: 5x4x4x2)
End of explanation
"""
print(json.dumps(DATA))
"""
Explanation: export for Keras.js tests
End of explanation
"""
|
martin-hunt/hublib | hublib/rappture/test/CompositeLaminate.ipynb | mit | import os, sys
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
sys.path.insert(0, os.path.abspath('../../..'))
import hublib.rappture as rappture
io = rappture.Tool("complam")
io.inputs
"""
Explanation: Composite laminate
Starting from a list of properties for the matrix and fibers, we use rule-of-mixtures expressions to obtain the properties of a unidirectional ply. The ply properties are then used to compute the properties of the laminate using the Complam tool in nanoHUB.
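The two workhorse expressions are the Voigt (iso-strain) rule for the longitudinal modulus and the Reuss (iso-stress) rule for the transverse modulus, the same forms used in the computation loop further down. A quick standalone check with the AS4-fiber and first-matrix numbers from this notebook:

```python
def voigt(ef, em, vf):
    """Longitudinal (iso-strain) rule of mixtures: E1 = Ef*Vf + Em*Vm."""
    return ef * vf + em * (1. - vf)

def reuss(ef, em, vf):
    """Transverse (iso-stress) rule of mixtures: 1/E2 = Vf/Ef + Vm/Em."""
    return 1. / (vf / ef + (1. - vf) / em)

e1 = voigt(234.0e9, 3.4e9, 0.6)   # AS4 axial fiber modulus, 3.4 GPa matrix
e2 = reuss(19.5e9, 3.4e9, 0.6)    # AS4 transverse fiber modulus
assert abs(e1 - 141.76e9) < 1e7   # 0.6*234 + 0.4*3.4 = 141.76 GPa
assert e2 < e1                    # transverse is always the softer direction
```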
End of explanation
"""
AS4 = [234.0e9, 19.5e9, 93.0e9, 0.26, 0.70, -0.4e-6, 5.6e-6]
IM6 = [276.0e9, 19.5e9, 109.0e9, 0.26, 0.70, -0.4e-6, 5.6e-6]
IM7 = [276.0e9, 19.5e9, 109.0e9, 0.26, 0.70, -0.4e-6, 5.0e-6]
IM8 = [303.0e9, 25.0e9, 120.0e9, 0.26, 0.70, -0.3e-6, 5.0e-6]
PANEX = [228.0e9, 19.5e9, 80.0e9, 0.25, 0.70, -0.4e-6, 5.6e-6]
T300 = [231.0e9, 19.5e9, 91.0e9, 0.27, 0.70, -0.6e-6, 9.0e-6]
T40 = [283.0e9, 19.5e9, 107.0e9, 0.32, 0.70, -0.4e-6, 5.6e-6]
T50 = [393.0e9, 19.5e9, 159.0e9, 0.24, 0.70, -0.4e-6, 5.6e-6]
T55 = [379.0e9, 19.5e9, 142.0e9, 0.33, 0.70, -0.4e-6, 5.6e-6]
T75 = [517.0e9, 19.5e9, 204.0e9, 0.27, 0.70, -0.4e-6, 5.6e-6]
fiber = [IM6, IM7, IM8, PANEX, T300, T40, T50, T55, T75]
"""
Explanation: Fiber properties: $E_1$, $E_2$, $G_{12}$, $\nu_{12}$, $\nu_{23}$, $\alpha_1$, $\alpha_2$
End of explanation
"""
matrix = [[3.4E9, 0.37, 65.0E-6],
[4.1E9, 0.28, 65.0E-6],
[2.8E9, 0.32, 65.0E-6],
[4.4E9, 0.31, 65.0E-6],
[3.9E9, 0.33, 65.0E-6],
[3.7E9, 0.35, 65.0E-6]]
"""
Explanation: Matrix properties: $E$, $\nu$, $\alpha$
End of explanation
"""
Vf = 0.6
Vm = 1.-Vf
"""
Explanation: Volume fractions of fiber and matrix
End of explanation
"""
Ex = []
nu = []
Gxy = []
for fiber_p in fiber:
for matrix_p in matrix:
#print(fiber_p)
#print(matrix_p)
E1 = fiber_p[0] * Vf + matrix_p[0] * Vm
nu12 = fiber_p[3] * Vf + matrix_p[1] * Vm
E2 = 1.0 / (Vf / fiber_p[1] + Vm / matrix_p[0])
Gm = matrix_p[0] / (2.0 * (1.0 + matrix_p[1]))
G12 = 1.0 / (Vf / fiber_p[2] + Vm / Gm)
alpha1 = (fiber_p[5] * fiber_p[0] * Vf + matrix_p[2] * matrix_p[0] * Vm) / \
(fiber_p[0] * Vf + matrix_p[0] * Vm)
# alpha2 = Vf * (fiber_p[6] - (matrix_p[0] / fiber_p[0]) * fiber_p[4] * \
# (matrix_p[2] - fiber_p[5]) * Vm) + Vm * \
# (matrix_p[2] + (fiber_p[0] / matrix_p[0]) * \
# matrix_p[1] * (matrix_p[2] - fiber_p[5]) * Vf)
alpha2 = fiber_p[6] * Vf + matrix_p[2] * Vm
#print E1/1e9, E2/1e9, nu12, G12/1.e9, alpha1, alpha2
# Set input values for the Complam tool
material = io['input.group(tabs).group(Material)']
material['number(E1).current'] = E1 / 1e9
material['number(E2).current'] = E2 / 1e9
material['number(nu12).current'] = nu12
material['number(G12).current'] = G12 / 1.0e9
material['number(alpha1).current'] = alpha1
material['number(alpha2).current'] = alpha2
# Run the complam tool
io.run()
# Get the output from the tool and save for plotting later
Ex.append(io['output.number(Ex).current'].value.magnitude)
nu.append(io['output.number(nu).current'].value)
Gxy.append(io['output.number(Gxy).current'].value.magnitude)
print('Done running composites laminate tool')
val = io['output.number(Gxy).current'].value
print(val)
print(val.magnitude)
print(val.units)
print(val.to('kPa'))
print(val.to('kPa').magnitude)
'{:~}'.format(val)
"""
Explanation: Loop over pairs of matrix/fiber and compute ply properties
End of explanation
"""
%matplotlib notebook
plt.style.use('ggplot')
plt.figure(figsize=(8,5))
plt.plot(Ex, Gxy, 'ro')
plt.xlabel('Ex', fontsize=18)
plt.ylabel('Gxy', fontsize=18);
plt.figure(figsize=(8,5))
plt.plot(Ex, nu, 'ro')
plt.xlabel('Ex', fontsize=18)
plt.ylabel('nu', fontsize=18);
plt.figure(figsize=(8,5))
plt.plot(Ex, nu, 'ro')
# need to convert fiber list to array so we can extract columns
fiber = np.array(fiber)
plt.plot(fiber[:,0] / 1e9, fiber[:,3], 'bo')
matrix = np.array(matrix)
plt.plot(matrix[:,0] / 1e9, matrix[:,1], 'go')
plt.xlabel('Ex', fontsize=18)
plt.ylabel('nu', fontsize=18);
"""
Explanation: Plot Longitudinal modulus (Ex) vs. Shear modulus (Gxy)
End of explanation
"""
plt.figure(figsize=(8,5))
plt.semilogx(Ex, nu, 'ro', label='Composite')
# need to convert fiber list to array so we can extract columns
fiber = np.array(fiber)
plt.semilogx(fiber[:,0] / 1e9, fiber[:,3], 'bo', label="Fiber")
matrix = np.array(matrix)
plt.semilogx(matrix[:,0] / 1e9, matrix[:,1], 'go', label="Matrix")
plt.legend()
plt.xlabel('Ex', fontsize=18)
plt.ylabel('nu', fontsize=18);
import os
os.environ['HOME']
os.path.join(os.environ['HOME'], 'data/sessions', os.environ['SESSION'])
"""
Explanation: Maybe it looks better with a log axis
End of explanation
"""
|
tanghaibao/goatools | notebooks/cell_cycle.ipynb | bsd-2-clause | # Get http://geneontology.org/ontology/go-basic.obo
from goatools.base import download_go_basic_obo
obo_fname = download_go_basic_obo()
"""
Explanation: Cell Cycle genes
Using Gene Ontologies (GO), create an up-to-date list of all human protein-coding genes that are known to be associated with the cell cycle.
1. Download Ontologies, if necessary
End of explanation
"""
# Get ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene2go.gz
from goatools.base import download_ncbi_associations
gene2go = download_ncbi_associations()
"""
Explanation: 2. Download Associations, if necessary
End of explanation
"""
from goatools.anno.genetogo_reader import Gene2GoReader
objanno = Gene2GoReader("gene2go", taxids=[9606])
go2geneids_human = objanno.get_id2gos(namespace='BP', go2geneids=True)
print("{N:} GO terms associated with human NCBI Entrez GeneIDs".format(N=len(go2geneids_human)))
"""
Explanation: 3. Read associations
Normally, when reading associations, GeneID2GOs are returned. We get the reverse, GO2GeneIDs, by adding the keyword argument go2geneids=True to the call to get_id2gos.
End of explanation
"""
from goatools.go_search import GoSearch
srchhelp = GoSearch("go-basic.obo", go2items=go2geneids_human)
"""
Explanation: 4. Initialize Gene-Search Helper
End of explanation
"""
import re
# Compile search pattern for 'cell cycle'
cell_cycle_all = re.compile(r'cell cycle', flags=re.IGNORECASE)
cell_cycle_not = re.compile(r'cell cycle.independent', flags=re.IGNORECASE)
"""
Explanation: 5. Find human all genes related to "cell cycle"
5a. Prepare "cell cycle" text searches
We will need to search for both cell cycle and cell cycle-independent. Those GOs that contain the text cell cycle-independent are specifically not related to cell cycle and must be removed from our list of cell cycle GO terms.
End of explanation
"""
# Find ALL GOs and GeneIDs associated with 'cell cycle'.
# Details of search are written to a log file
fout_allgos = "cell_cycle_gos_human.log"
with open(fout_allgos, "w") as log:
# Search for 'cell cycle' in GO terms
gos_cc_all = srchhelp.get_matching_gos(cell_cycle_all, prt=log)
# Find any GOs matching 'cell cycle-independent' (e.g., "lysosome")
gos_no_cc = srchhelp.get_matching_gos(cell_cycle_not, gos=gos_cc_all, prt=log)
# Remove GO terms that are not "cell cycle" GOs
gos = gos_cc_all.difference(gos_no_cc)
# Add children GOs of cell cycle GOs
gos_all = srchhelp.add_children_gos(gos)
# Get Entrez GeneIDs for cell cycle GOs
geneids = srchhelp.get_items(gos_all)
print("{N} human NCBI Entrez GeneIDs related to 'cell cycle' found.".format(N=len(geneids)))
"""
Explanation: 5b. Find NCBI Entrez GeneIDs related to "cell cycle"
End of explanation
"""
from genes_ncbi_9606_proteincoding import GENEID2NT
for geneid in geneids: # geneids associated with cell-cycle
nt = GENEID2NT.get(geneid, None)
if nt is not None:
print("{Symbol:<10} {desc}".format(
Symbol = nt.Symbol,
desc = nt.description))
"""
Explanation: 6. Print the "cell cycle" protein-coding gene Symbols
In this example, the background is all human protein-coding genes.
Follow the instructions in the background_genes_ncbi notebook to download a set of background population genes from NCBI.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/cccr-iitm/cmip6/models/sandbox-1/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/9256698a6dc1e67585812dacb4e9f960/plot_elekta_epochs.ipynb | bsd-3-clause | # Author: Jussi Nurminen (jnu@iki.fi)
#
# License: BSD (3-clause)
import mne
import os
from mne.datasets import multimodal
fname_raw = os.path.join(multimodal.data_path(), 'multimodal_raw.fif')
print(__doc__)
"""
Explanation: ======================================
Getting averaging info from .fif files
======================================
Parse averaging information defined in Elekta Vectorview/TRIUX DACQ (data
acquisition). Extract and average epochs accordingly. Modify some
averaging parameters and get epochs.
End of explanation
"""
raw = mne.io.read_raw_fif(fname_raw)
"""
Explanation: Read raw file
End of explanation
"""
print(raw.acqparser)
"""
Explanation: Check DACQ defined averaging categories and other info
End of explanation
"""
cond = raw.acqparser.get_condition(raw, 'Auditory right')
epochs = mne.Epochs(raw, **cond)
epochs.average().plot_topo(background_color='w')
"""
Explanation: Extract epochs corresponding to a category
End of explanation
"""
evokeds = []
for cat in raw.acqparser.categories:
cond = raw.acqparser.get_condition(raw, cat)
# copy (supported) rejection parameters from DACQ settings
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
evoked = epochs.average()
evoked.comment = cat['comment']
evokeds.append(evoked)
# save all averages to an evoked fiff file
# fname_out = 'multimodal-ave.fif'
# mne.write_evokeds(fname_out, evokeds)
"""
Explanation: Get epochs from all conditions, average
End of explanation
"""
newcat = dict()
newcat['comment'] = 'Visual lower left, longer epochs'
newcat['event'] = 3 # reference event
newcat['start'] = -.2 # epoch start rel. to ref. event (in seconds)
newcat['end'] = .7 # epoch end
newcat['reqevent'] = 0 # additional required event; 0 if none
newcat['reqwithin'] = .5 # ...required within .5 sec (before or after)
newcat['reqwhen'] = 2 # ...required before (1) or after (2) ref. event
newcat['index'] = 9 # can be set freely
cond = raw.acqparser.get_condition(raw, newcat)
epochs = mne.Epochs(raw, reject=raw.acqparser.reject,
flat=raw.acqparser.flat, **cond)
epochs.average().plot(time_unit='s')
"""
Explanation: Make a new averaging category
End of explanation
"""
|
visualfabriq/bquery | bquery/benchmarks/bench_groupby.ipynb | bsd-3-clause | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import itertools as itt
import time
import shutil
import os
import contextlib
import pandas as pd
import blaze as blz
import bquery
import cytoolz
from cytoolz.curried import pluck as cytoolz_pluck
from collections import OrderedDict
import copy
from prettyprint import pp
elapsed_times = OrderedDict()
@contextlib.contextmanager
def ctime(message=None):
"Counts the time spent in some context"
assert message is not None
global elapsed_times
t_elapsed = 0.0
print('\n')
t = time.time()
yield
if message:
print message + ": ",
t_elapsed = time.time() - t
print round(t_elapsed, 4), "sec"
elapsed_times[message] = t_elapsed
ga = itt.cycle(['ES', 'NL'])
gb = itt.cycle(['b1', 'b2', 'b3', 'b4', 'b5'])
gx = itt.cycle([1, 2])
gy = itt.cycle([-1, -2])
rootdir = 'bench-data.bcolz'
if os.path.exists(rootdir):
shutil.rmtree(rootdir)
n_rows = 1000000
# -- data
z = np.fromiter(((a, b, x, y) for a, b, x, y in itt.izip(ga, gb, gx, gy)),
dtype='S2,S2,i8,i8', count=n_rows)
ct = bquery.ctable(z, rootdir=rootdir)
ct.flush()
"""
Explanation: Bquery - groupby on-disk
In this notebook we will compare the performance of several groupby solutions over out-of-core (disk-based) bcolz files against Python's golden standard: the fast, in-memory Pandas framework.
Our goal is to show how with bcolz you can approach (and in some cases even exceed) the Pandas in-memory performance while working with solutions that only have the intermediate and/or end result in-memory.
We will have two test cases:
1) a query with a single groupby column and a single aggregated (sum) column
2) a query with five groupby columns and three aggregated (sum) columns
This is a simple example with 1 million rows, but you can experiment with other sizes to see how they affect the outcome.
This performance comparison was run on a 16 GB RAM, 8-core, SSD-based DigitalOcean server on Ubuntu 14.04. System caching might influence the results, but in our experience this very much resembles our real-life experience. We encourage anyone to run their own tests and share them with us.
For the impatient, you can scroll to the end to see a graphical presentation of the performance results.
The bquery framework provides methods to perform query and aggregation operations on bcolz containers, as well as accelerate these operations by pre-processing possible groupby columns.
Bcolz is a lightweight package that provides columnar, chunked data containers that can be stored either in-memory or on-disk, and that are compressed by default, not only to reduce memory/disk storage but also to improve I/O speed. It excels at storing and sequentially accessing large, numerical data sets.
The code you'll find below was inspired by the following nicely written notebooks:
Blaze - http://nbviewer.ipython.org/url/blaze.pydata.org/notebooks/timings-bcolz.ipynb
Bcolz - http://nbviewer.ipython.org/github/Blosc/movielens-bench/blob/master/querying-ep14.ipynb
End of explanation
"""
print('Simple Test Case')
df = pd.DataFrame(z)
with ctime(message='pandas'):
result = df.groupby(['f0'], sort=False, as_index=False)['f2'].sum()
# print(result)
"""
Explanation: pandas
End of explanation
"""
print('Simple Test Case')
blaze_data = blz.Data(ct.rootdir)
expr = blz.by(blaze_data.f0, sum_f2=blaze_data.f2.sum())
with ctime(message='blaze (pandas + bcolz)'):
result = blz.compute(expr)
# print result
"""
Explanation: blaze
End of explanation
"""
print('Simple Test Case')
with ctime(message='bquery + bcolz'):
result = ct.groupby(['f0'], ['f2'])
# print(result)
"""
Explanation: bquery without caching
End of explanation
"""
print('Simple Test Case')
with ctime(message='bquery, create factorization cache'):
ct.cache_factor(['f0'], refresh=True)
with ctime(message='bquery + bcolz (fact. cached)'):
result = ct.groupby(['f0'], ['f2'])
# print(result)
"""
Explanation: bquery with caching
End of explanation
"""
print('Simple Test Case Running Time')
elapsed_times_bak = OrderedDict({ k: v for (k,v) in sorted(elapsed_times.iteritems())})
pp(elapsed_times_bak)
print('Simple Test Case Running Time relative to Pandas')
elapsed_times_bak = OrderedDict({ k: v for (k,v) in sorted(elapsed_times.iteritems())})
pp(elapsed_times_bak)
elapsed_times = elapsed_times_bak
elapsed_times_norm = OrderedDict({ k: v/elapsed_times['pandas'] for (k,v) in sorted(elapsed_times.iteritems())})
print '\nNormalized running time'
pp(elapsed_times_norm)
"""
Explanation: Running Times Summary
End of explanation
"""
if 'bquery, create factorization cache' in elapsed_times_norm:
base_bquery = elapsed_times_norm.pop('bquery, create factorization cache')
labels = []
val = []
for k,v in sorted(elapsed_times_norm.iteritems(), reverse=True):
labels.append(k)
val.append(v)
pos = np.arange(len(elapsed_times_norm))+.5 # the bar centers on the y axis
print elapsed_times_norm.keys()
plt.figure(1, figsize=[15,5])
plt.grid(True)
plt.barh(pos,val, align='center')
plt.barh(pos,[0, base_bquery, 0,0],
left=[0, elapsed_times_norm['bquery + bcolz (fact. cached)'], 0, 0],
align='center', color = '#FFFFCC')
plt.yticks(pos, labels, fontsize=15)
plt.xlabel('X times slower', fontsize=15)
plt.title('Performance compared to pandas', fontsize=25)
"""
Explanation: Graphic Summary
End of explanation
"""
elapsed_times = OrderedDict()
ga = itt.cycle(['ES', 'NL'])
gb = itt.cycle(['b1', 'b2', 'b3', 'b4'])
gc = itt.cycle([1, 2])
gd = itt.cycle([3, 4, 4, 3])
ge = itt.cycle(['c','d','e'])
gx = itt.cycle([1, 2])
gy = itt.cycle([-1, -2])
gz = itt.cycle([1.11, 2.22, 3.33, 4.44, 5.55])
rootdir = 'bench-data.bcolz'
if os.path.exists(rootdir):
shutil.rmtree(rootdir)
n_rows = 1000000
print('Rows: ', n_rows)
z = np.fromiter(((a, b, c, d, e, x, y, z) for a, b, c, d, e, x, y, z
in itt.izip(ga, gb, gc, gd, ge, gx, gy, gz)),
dtype='S2,S2,i4,i8,S1,i4,i8,f8', count=n_rows)
ct = bquery.ctable(z, rootdir=rootdir, )
# -- pandas --
df = pd.DataFrame(z)
with ctime(message='pandas'):
result = df.groupby(['f0','f1','f2','f3','f4'], sort=False, as_index=False)['f5','f6','f7'].sum()
# print(result)
# -- bquery --
with ctime(message='bquery + bcolz'):
result = ct.groupby(['f0','f1','f2','f3','f4'], ['f5','f6','f7'])
# print(result)
with ctime(message='bquery, create factorization cache'):
ct.cache_factor(['f0','f1','f2','f3','f4'], refresh=True)
with ctime(message='bquery over bcolz (factorization cached)'):
result = ct.groupby(['f0','f1','f2','f3','f4'], ['f5','f6','f7'])
# print(result)
print('Complex Test Case Running Time relative to Pandas')
elapsed_times_bak = OrderedDict({ k: v for (k,v) in sorted(elapsed_times.iteritems())})
pp(elapsed_times_bak)
elapsed_times = elapsed_times_bak
elapsed_times_norm = OrderedDict({ k: v/elapsed_times['pandas'] for (k,v) in sorted(elapsed_times.iteritems())})
print '\nNormalized running time'
pp(elapsed_times_norm)
if 'bquery, create factorization cache' in elapsed_times_norm:
base_bquery = elapsed_times_norm.pop('bquery, create factorization cache')
labels = []
val = []
for k,v in sorted(elapsed_times_norm.iteritems(), reverse=True):
labels.append(k)
val.append(v)
pos = np.arange(len(elapsed_times_norm))+.5 # the bar centers on the y axis
print elapsed_times_norm.keys()
plt.figure(1, figsize=[15,5])
plt.grid(True)
plt.barh(pos,val, align='center')
plt.barh(pos,[0, base_bquery, 0],
left=[0, elapsed_times_norm['bquery over bcolz (factorization cached)'], 0],
align='center', color = '#FFFFCC')
plt.yticks(pos, labels, fontsize=15)
plt.xlabel('X times slower', fontsize=15)
plt.title('Performance compared to pandas', fontsize=25)
"""
Explanation: The light yellow shows the one-time factorization caching (which, after the first run, can be left out of future queries).
NB: when data changes, this caching has to be refreshed but it is very useful in most reporting & analytics cases.
Complex scenario
End of explanation
"""
|
csaladenes/csaladenes.github.io | present/mcc2/sklearn_tutorial/04.1-Dimensionality-PCA.ipynb | mit | from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
"""
Explanation: Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License: MIT
Dimensionality Reduction: Principal Component Analysis in-depth
Here we'll explore Principal Component Analysis, which is an extremely useful linear dimensionality reduction technique.
We'll start with our standard set of initial imports:
End of explanation
"""
np.random.seed(1)
X = np.dot(np.random.random(size=(2, 2)), np.random.normal(size=(2, 200))).T
plt.plot(X[:, 0], X[:, 1], 'o')
plt.axis('equal');
"""
Explanation: Introducing Principal Component Analysis
Principal Component Analysis is a very powerful unsupervised method for dimensionality reduction in data. It's easiest to visualize by looking at a two-dimensional dataset:
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_)
print(pca.components_)
"""
Explanation: We can see that there is a definite trend in the data. What PCA seeks to do is to find the Principal Axes in the data, and explain how important those axes are in describing the data distribution:
End of explanation
"""
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([0, v[0]], [0, v[1]], '-k', lw=3)
plt.axis('equal');
"""
Explanation: To see what these numbers mean, let's view them as vectors plotted on top of the data:
End of explanation
"""
clf = PCA(0.95) # keep 95% of variance
X_trans = clf.fit_transform(X)
print(X.shape)
print(X_trans.shape)
"""
Explanation: Notice that one vector is longer than the other. In a sense, this tells us that that direction in the data is somehow more "important" than the other direction.
The explained variance quantifies this measure of "importance" in direction.
Another way to think of it is that the second principal component could be completely ignored without much loss of information! Let's see what our data look like if we only keep 95% of the variance:
End of explanation
"""
X_new = clf.inverse_transform(X_trans)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.2)
plt.plot(X_new[:, 0], X_new[:, 1], 'ob', alpha=0.8)
plt.axis('equal');
"""
Explanation: By specifying that we want to throw away 5% of the variance, the data is now compressed by a factor of 50%! Let's see what the data look like after this compression:
End of explanation
"""
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
print(X[0][:8])
print(X[0][8:16])
print(X[0][16:24])
print(X[0][24:32])
print(X[0][32:40])
print(X[0][40:48])
pca = PCA(2) # project from 64 to 2 dimensions
Xproj = pca.fit_transform(X)
print(X.shape)
print(Xproj.shape)
(1797*2)/(1797*64)
plt.scatter(Xproj[:, 0], Xproj[:, 1], c=y, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.colorbar();
"""
Explanation: The light points are the original data, while the dark points are the projected version. We see that after truncating 5% of the variance of this dataset and then reprojecting it, the "most important" features of the data are maintained, and we've compressed the data by 50%!
This is the sense in which "dimensionality reduction" works: if you can approximate a data set in a lower dimension, you can often have an easier time visualizing it or fitting complicated models to the data.
Application of PCA to Digits
The dimensionality reduction might seem a bit abstract in two dimensions, but the projection and dimensionality reduction can be extremely useful when visualizing high-dimensional data. Let's take a quick look at the application of PCA to the digits data we looked at before:
End of explanation
"""
from fig_code.figures import plot_image_components
with plt.style.context('seaborn-white'):
plot_image_components(digits.data[0])
"""
Explanation: This gives us an idea of the relationship between the digits. Essentially, we have found the optimal stretch and rotation in 64-dimensional space that allows us to see the layout of the digits, without reference to the labels.
What do the Components Mean?
PCA is a very useful dimensionality reduction algorithm, because it has a very intuitive interpretation via eigenvectors.
The input data is represented as a vector: in the case of the digits, our data is
$$
x = [x_1, x_2, x_3 \cdots]
$$
but what this really means is
$$
image(x) = x_1 \cdot{\rm (pixel~1)} + x_2 \cdot{\rm (pixel~2)} + x_3 \cdot{\rm (pixel~3)} \cdots
$$
If we reduce the dimensionality in the pixel space to (say) 6, we recover only a partial image:
End of explanation
"""
from fig_code.figures import plot_pca_interactive
plot_pca_interactive(digits.data)
"""
Explanation: But the pixel-wise representation is not the only choice. We can also use other basis functions, and write something like
$$
image(x) = {\rm mean} + x_1 \cdot{\rm (basis~1)} + x_2 \cdot{\rm (basis~2)} + x_3 \cdot{\rm (basis~3)} \cdots
$$
What PCA does is to choose optimal basis functions so that only a few are needed to get a reasonable approximation.
The low-dimensional representation of our data is the coefficients of this series, and the approximate reconstruction is the result of the sum:
End of explanation
"""
pca = PCA().fit(X)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
"""
Explanation: Here we see that with only six PCA components, we recover a reasonable approximation of the input!
Thus we see that PCA can be viewed from two angles. It can be viewed as dimensionality reduction, or it can be viewed as a form of lossy data compression where the loss favors noise. In this way, PCA can be used as a filtering process as well.
Choosing the Number of Components
But how much information have we thrown away? We can figure this out by looking at the explained variance as a function of the components:
End of explanation
"""
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
pca = PCA(i + 1).fit(X)
im = pca.inverse_transform(pca.transform(X[25:26]))
ax.imshow(im.reshape((8, 8)), cmap='binary')
ax.text(0.95, 0.05, 'n = {0}'.format(i + 1), ha='right',
transform=ax.transAxes, color='green')
ax.set_xticks([])
ax.set_yticks([])
"""
Explanation: Here we see that our two-dimensional projection loses a lot of information (as measured by the explained variance) and that we'd need about 20 components to retain 90% of the variance. Looking at this plot for a high-dimensional dataset can help you understand the level of redundancy present in multiple observations.
PCA as data compression
As we mentioned, PCA can be used for is a sort of data compression. Using a small n_components allows you to represent a high dimensional point as a sum of just a few principal vectors.
Here's what a single digit looks like as you change the number of components:
End of explanation
"""
from ipywidgets import interact
def plot_digits(n_components):
fig = plt.figure(figsize=(8, 8))
plt.subplot(1, 1, 1, frameon=False, xticks=[], yticks=[])
nside = 10
pca = PCA(n_components).fit(X)
Xproj = pca.inverse_transform(pca.transform(X[:nside ** 2]))
Xproj = np.reshape(Xproj, (nside, nside, 8, 8))
total_var = pca.explained_variance_ratio_.sum()
im = np.vstack([np.hstack([Xproj[i, j] for j in range(nside)])
for i in range(nside)])
plt.imshow(im)
plt.grid(False)
plt.title("n = {0}, variance = {1:.2f}".format(n_components, total_var),
size=18)
plt.clim(0, 16)
interact(plot_digits, n_components=[1, 15, 20, 25, 32, 40, 64], nside=[1, 8]);
"""
Explanation: Let's take another look at this by using IPython's interact functionality to view the reconstruction of several images at once:
End of explanation
"""
|
dariox2/CADL | session-5/.ipynb_checkpoints/lecture-5-checkpoint.ipynb | apache-2.0 | import tensorflow as tf
from libs.datasets import CELEB
files = CELEB()
"""
Explanation: Session 5: Generative Models
<p class="lead">
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning with Google's Tensorflow</a><br />
<a href="http://pkmital.com">Parag K. Mital</a><br />
<a href="https://www.kadenze.com">Kadenze, Inc.</a>
</p>
<a name="learning-goals"></a>
Learning Goals
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Generative Adversarial Networks
Input Pipelines
GAN/DCGAN
Extensions
Recurrent Networks
Basic RNN Cell
LSTM RNN Cell
GRU RNN Cell
Character Language Model
Setting up the Data
Creating the Model
Loss
Clipping the Gradient
Training
Extensions
DRAW Network
Future
Homework
Examples
Reading
<!-- /MarkdownTOC -->
<a name="introduction"></a>
Introduction
So far we've seen the basics of neural networks, how they can be used for encoding large datasets, or for predicting labels. We've also seen how to interrogate the deeper representations that networks learn in order to help with their objective, and how amplifying some of these objectives led to creating deep dream. Finally, we saw how the representations in deep nets trained on object recognition are capable of representing both style and content, and how we could independently manipulate a new image to have the style of one image, and the content of another.
In this session we'll start to explore some more generative models. We've already seen how an autoencoder is composed of both an encoder which takes an input and represents it into some hidden state vector. From this hidden state vector, a decoder is capable of resynthesizing the original input, though with some loss. So think back to the decoders that we've already built. A decoder has an internal state, and from that state, it can express the entire distribution of the original data, that is, it can express any possible image that it has seen.
We call that a generative model as it is capable of generating the distribution of the data. Contrast this to the latter half of Session 3 when we saw how to label an image using supervised learning. That model is really trying to discriminate the data distribution based on the extra labels that we have. So this is another helpful distinction between machine learning algorithms: some are generative and others are discriminative.
In this session, we'll explore more generative models, and see how states can be used to generate data in two other very powerful generative networks: one based on game theory, called the generative adversarial network, and another capable of remembering and forgetting over time, allowing us to model dynamic content and sequences, called the recurrent neural network.
<a name="generative-adversarial-networks"></a>
Generative Adversarial Networks
In session 3, we were briefly introduced to the Variational Autoencoder. This network was very powerful because it encompasses a very strong idea. And that idea is measuring distance not necessarily based on pixels, but in some "semantic space". And I mentioned then that we'd see another type of network capable of generating even better images of CelebNet.
So this is where we're heading...
We're now going to see how to do that using what's called the generative adversarial network.
The generative adversarial network is actually two networks. One is called the generator, and the other is called the discriminator. The basic idea is that the generator is trying to create things which look like the training data. So for images, more images that look like the training data. The discriminator has to guess whether what it's given is a real training example, or whether it's the output of the generator. By training one after another, you ensure neither is ever too strong, but both grow stronger together. The discriminator is also learning a distance function! This is pretty cool because we no longer need to measure pixel-based distance; instead, we learn the distance function entirely!
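To make the two competing objectives concrete, here is a toy numpy sketch. The scalar "networks" here are purely illustrative stand-ins (this is not the code in the course's gan.py): the discriminator is just a logistic score, and the generator just scales a latent value.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(x, w):
    return sigmoid(w * x)   # probability that x is a real example

def generator(z, v):
    return v * z            # a generated ("fake") sample from latent z

w, v = 1.0, 0.5             # illustrative parameters of each network
x_real, z = 2.0, 1.0
x_fake = generator(z, v)
d_real = discriminator(x_real, w)
d_fake = discriminator(x_fake, w)

# The discriminator wants D(real) -> 1 and D(fake) -> 0...
d_loss = -(np.log(d_real) + np.log(1.0 - d_fake))
# ...while the generator wants D(fake) -> 1.
g_loss = -np.log(d_fake)
print(d_loss, g_loss)
```

Training alternates between lowering d_loss with respect to the discriminator's parameters and lowering g_loss with respect to the generator's, which is exactly the tug-of-war described above.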
The Generative Adversarial Network, or GAN for short, is in a way very similar to the autoencoder we created in session 3. Or at least the implementation of it is. The discriminator is a lot like the encoder part of that network, except instead of going down to the 64 dimensions we used in our autoencoder, we'll reduce our input down to a single value, yes or no, 0 or 1, denoting yes, it's a true training example, or no, it's a generated one.
And the generator network is exactly like the decoder of the autoencoder. Except, there is nothing feeding into this inner layer. It is just on its own. From whatever vector of hidden values it starts off with, it will generate a new example meant to look just like the training data. One pitfall of this model is there is no explicit encoding of an input. Meaning, you can't take an input and find what would possibly generate it. However, there are recent extensions to this model which make it more like the autoencoder framework, allowing it to do this.
<a name="input-pipelines"></a>
Input Pipelines
Before we get started, we're going to need to work with a very large image dataset, the CelebNet dataset. In session 1, we loaded this dataset but only grabbed the first 1000 images. That's because loading all 200 thousand images would take up a lot of memory which we'd rather not have to do. And in Session 3 we were introduced again to the CelebNet and Sita Sings the Blues which required us to load a lot of images. I glossed over the details of the input pipeline then so we could focus on learning the basics of neural networks. But I think now we're ready to see how to handle some larger datasets.
Tensorflow provides operations for taking a list of files, using that list to load the data they point to, decoding that file's data as an image, and creating shuffled minibatches. All of this is put into a queue and managed by queue runners and coordinators.
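Under the hood, a pipeline like this amounts to reshuffling the example indexes each epoch and handing out fixed-size slices. Here's a plain-Python sketch of just that idea (illustrative only; the real pipeline also decodes and crops the images in background threads):

```python
import numpy as np

def minibatch_indexes(n_examples, batch_size, n_epochs, seed=0):
    """Yield shuffled index arrays, one minibatch at a time,
    mimicking what the queue-based pipeline does for us."""
    rng = np.random.RandomState(seed)
    for epoch_i in range(n_epochs):
        idxs = rng.permutation(n_examples)  # reshuffle every epoch
        for i in range(0, n_examples - batch_size + 1, batch_size):
            yield idxs[i:i + batch_size]

batches = list(minibatch_indexes(n_examples=1000, batch_size=100, n_epochs=2))
print(len(batches))  # 10 minibatches per epoch over 2 epochs = 20
```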
As you may have already seen in the Variational Autoencoder's code, I've provided a simple interface for creating such an input pipeline using image files which will also apply cropping and reshaping of images in the pipeline so you don't have to deal with any of it. Let's see how we can use it to load the CelebNet dataset.
Let's first get the list of all the CelebNet files:
End of explanation
"""
from libs.dataset_utils import create_input_pipeline
batch_size = 100
n_epochs = 10
input_shape = [218, 178, 3]
crop_shape = [64, 64, 3]
crop_factor = 0.8
# `files` is the list of CelebNet image paths we gathered above
batch = create_input_pipeline(
    files=files,
    batch_size=batch_size,
    n_epochs=n_epochs,
    crop_shape=crop_shape,
    crop_factor=crop_factor,
    shape=input_shape)
"""
Explanation: And then create our input pipeline to create shuffled minibatches and crop the images to a standard shape. This will require us to specify the list of files, how large each minibatch is, how many epochs we want to run for, and how we want the images to be cropped.
End of explanation
"""
import tensorflow as tf

sess = tf.Session()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
"""
Explanation: Then when we are ready to use the batch generator, we'll need to create a Coordinator and start the queue runners using tensorflow's start_queue_runners method so that the pipeline begins providing data:
End of explanation
"""
batch_xs = sess.run(batch)
# We get batch_size at a time, so 100
print(batch_xs.shape)
# The datatype is float32 since that is what we use in the tensorflow graph
# And the values still have the original image range from 0-255
print(batch_xs.dtype, np.max(batch_xs))
# So to plot it, we'll need to divide by 255.
plt.imshow(batch_xs[0] / 255.0)
"""
Explanation: We can grab our data using our batch generator like so:
End of explanation
"""
%pylab
import tensorflow as tf
from six.moves import urllib
f, _ = urllib.request.urlretrieve('https://www.gutenberg.org/cache/epub/11/pg11.txt', 'alice.txt')
with open(f, 'r') as fp:
txt = fp.read()
"""
Explanation: Let's see how to make use of this while we train a generative adversarial network!
<a name="gandcgan"></a>
GAN/DCGAN
Inside the libs directory, you'll find gan.py which shows how to create a generative adversarial network with or without convolution, and how to train it using the CelebNet dataset. Let's step through the code and then I'll show you what it's capable of doing.
-- Code demonstration not transcribed. --
<a name="extensions"></a>
Extensions
So it turns out there are a ton of very fun and interesting extensions when you have a model in this space. It turns out that you can perform addition in the latent space. I'll just show you Alec Radford's code base on github to show you what that looks like.
<a name="recurrent-networks"></a>
Recurrent Networks
Up until now, all of the networks that we've learned and worked with really have no sense of time. They are static. They cannot remember sequences, nor can they understand order outside of the spatial dimensions we offer them. Imagine for instance that we wanted a network capable of reading. As input, it is given one letter at a time. So let's say it were given the letters 'n', 'e', 't', 'w', 'o', 'r', and we wanted it to learn to output 'k'. It would need to be able to reason about the inputs it received before the last one, the letters before 'r'. But it's not just letters.
Consider the way we look at the world. We don't simply download a high resolution image of the world in front of us. We move our eyes. Each fixation takes in new information and each of these together in sequence help us perceive and act. That again is a sequential process.
Recurrent neural networks let us reason about information over multiple timesteps. They are able to encode what they have seen in the past as if they have a memory of their own. They do this by basically creating one HUGE network that expands over time. They can reason about the current timestep by conditioning on what they have already seen. By giving them many sequences as batches, they can learn a distribution over sequences which can model the current timestep given the previous timesteps. But in order for this to be practical, we specify that at each timestep, or each time the network views an input, the weights cannot change. We also include a new matrix, H, which reasons about the past timestep, connecting each new timestep to the last. For this reason, we can just think of recurrent networks as ones with loops in them.
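As a numpy sketch of that recurrence (the names W, H, and b are just illustrative; the point is that the same weights are reused at every timestep, with H carrying the hidden state forward):

```python
import numpy as np

n_input, n_hidden = 3, 4
rng = np.random.RandomState(0)
W = rng.randn(n_input, n_hidden) * 0.1   # input -> hidden
H = rng.randn(n_hidden, n_hidden) * 0.1  # hidden -> hidden (the recurrence)
b = np.zeros(n_hidden)

def rnn_step(x_t, h_prev):
    # The very same W, H, b are applied at every timestep
    return np.tanh(x_t @ W + h_prev @ H + b)

h = np.zeros(n_hidden)
xs = rng.randn(5, n_input)   # a toy sequence of 5 timesteps
for x_t in xs:
    h = rnn_step(x_t, h)     # h carries memory from step to step
print(h.shape)  # (4,)
```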
Other than that, they are exactly like every other network we've come across! They will have an input and an output. They'll need a loss or an objective function to optimize which will relate what we want the network to output for some given set of inputs. And they'll be trained with gradient descent and backprop.
<a name="basic-rnn-cell"></a>
Basic RNN Cell
The basic recurrent cell can be used in tensorflow as tf.nn.rnn_cell.BasicRNNCell. Though for most complex sequences, especially longer sequences, this is almost never a good idea. That is because the basic RNN cell does not do very well as time goes on. To understand why this is, we'll have to learn a bit more about how backprop works. When we perform backrprop, we're multiplying gradients from the output back to the input. As the network gets deeper, there are more multiplications along the way from the output to the input.
The same goes for recurrent networks. Remember, they're just like a normal feedforward network, with each new timestep creating a new layer. So if we're effectively creating an infinitely deep network, what will happen to all our multiplications? Well, if the derivatives are all greater than 1, then they will very quickly grow to infinity. And if they are less than 1, then they will very quickly shrink to 0. That makes these networks very difficult to train in practice. The problem is known in the literature as the exploding or vanishing gradient problem. Luckily, we don't have to figure out how to solve it, because some very clever people already came up with a solution, in 1997! Yea, what were you doing in 1997? Probably not coming up with what they called the long short-term memory, or LSTM.
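We can see the effect directly by multiplying 100 per-timestep gradient factors together, which is effectively what backprop through 100 timesteps does:

```python
import numpy as np

n_steps = 100
exploding = np.prod(np.full(n_steps, 1.1))  # factors slightly > 1
vanishing = np.prod(np.full(n_steps, 0.9))  # factors slightly < 1
print(exploding, vanishing)  # huge vs. nearly zero
```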
<a name="lstm-rnn-cell"></a>
LSTM RNN Cell
The mechanics of this are unfortunately far beyond the scope of this course, but put simply, the LSTM uses a combination of gating cells to control its contents, and by having gates, it is able to block the flow of the gradient, avoiding too many multiplications during backprop. For more details, I highly recommend reading: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.
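Here's a rough numpy sketch of a single LSTM step, just to make the gating idea concrete (simplified: no biases or peepholes, and the weight names are made up for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One LSTM step: input, forget, and output gates decide what
    enters, stays in, and leaves the memory cell c."""
    z = np.concatenate([x, h])
    i = sigmoid(z @ params['Wi'])   # input gate
    f = sigmoid(z @ params['Wf'])   # forget gate
    o = sigmoid(z @ params['Wo'])   # output gate
    g = np.tanh(z @ params['Wg'])   # candidate content
    c = f * c + i * g               # gated memory update
    h = o * np.tanh(c)              # gated output
    return h, c

n_in, n_cells = 3, 4
rng = np.random.RandomState(0)
params = {k: rng.randn(n_in + n_cells, n_cells) * 0.1
          for k in ('Wi', 'Wf', 'Wo', 'Wg')}
h = c = np.zeros(n_cells)
h, c = lstm_step(rng.randn(n_in), h, c, params)
```

Notice the memory update `c = f * c + i * g` is additive rather than a pure multiplication, which is what lets the gradient flow back through many timesteps without vanishing.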
In tensorflow, we can make use of this cell using tf.nn.rnn_cell.LSTMCell.
<a name="gru-rnn-cell"></a>
GRU RNN Cell
One last cell type is worth mentioning: the gated recurrent unit, or GRU. Again, beyond the scope of this class. Just think of it as a simplified version of the LSTM with 2 gates instead of 4, though that is not an entirely accurate description. In Tensorflow we can use this with tf.nn.rnn_cell.GRUCell.
<a name="character-langauge-model"></a>
Character Language Model
We'll now try a fun application of recurrent networks where we try to model a corpus of text, one character at a time. The basic idea is to take one character at a time and try to predict the next character in sequence. Given enough sequences, the model is capable of generating entirely new sequences all on its own.
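For instance, with the word 'network' and sequences of 3 characters, the input/target pairs are just shifted copies of each other:

```python
txt = 'network'
sequence_length = 3
# Each target sequence is the input sequence shifted one character ahead
pairs = [(txt[i:i + sequence_length], txt[i + 1:i + sequence_length + 1])
         for i in range(len(txt) - sequence_length)]
print(pairs[0])  # ('net', 'etw')
```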
<a name="setting-up-the-data"></a>
Setting up the Data
For data, we're going to start with text. You can take basically any text file that is sufficiently long, as we'll need a lot of it, and try to use it. This website seems like an interesting place to begin: http://textfiles.com/directory.html, as does Project Gutenberg: https://www.gutenberg.org/browse/scores/top. http://prize.hutter1.net/ also offers a 50k euro reward for compressing Wikipedia. Let's try with Alice's Adventures in Wonderland by Lewis Carroll:
End of explanation
"""
vocab = list(set(txt))
len(txt), len(vocab)
"""
Explanation: And let's find out what's inside this text file by creating a set of all possible characters.
End of explanation
"""
encoder = dict(zip(vocab, range(len(vocab))))
decoder = dict(zip(range(len(vocab)), vocab))
"""
Explanation: Great, so we now have about 164 thousand characters and 85 unique characters in our vocabulary which we can use to help us train a model of language. Rather than use the characters directly, we'll convert each character to a unique integer. We'll later see that when we work with words, we can achieve a similar goal using a very popular model called word2vec: https://www.tensorflow.org/versions/r0.9/tutorials/word2vec/index.html
We'll first create a lookup table which will map a character to an integer:
End of explanation
"""
# Number of sequences in a mini batch
batch_size = 100
# Number of characters in a sequence
sequence_length = 100
# Number of cells in our LSTM layer
n_cells = 256
# Number of LSTM layers
n_layers = 2
# Total number of characters in the one-hot encoding
n_chars = len(vocab)
"""
Explanation: <a name="creating-the-model"></a>
Creating the Model
For our model, we'll need to define a few parameters.
End of explanation
"""
X = tf.placeholder(tf.int32, [None, sequence_length], name='X')
# We'll have a placeholder for our true outputs
Y = tf.placeholder(tf.int32, [None, sequence_length], name='Y')
"""
Explanation: Now create the input and output to the network. Rather than having batch size x number of features, or batch size x height x width x channels, we're going to have batch size x sequence length.
End of explanation
"""
# we first create a variable to take us from our one-hot representation to our LSTM cells
embedding = tf.get_variable("embedding", [n_chars, n_cells])
# And then use tensorflow's embedding lookup to look up the ids in X
Xs = tf.nn.embedding_lookup(embedding, X)
# The resulting lookups are concatenated into a dense tensor
print(Xs.get_shape().as_list())
"""
Explanation: Now remember with MNIST that we used a one-hot vector representation of our numbers. We could transform our input data into such a representation. But instead, we'll use tf.nn.embedding_lookup so that we don't need to compute the encoded vector. Let's see how this works:
End of explanation
"""
# Let's create a name scope for the operations to clean things up in our graph
with tf.name_scope('reslice'):
Xs = [tf.squeeze(seq, [1])
for seq in tf.split(1, sequence_length, Xs)]
"""
Explanation: To create a recurrent network, we're going to need to slice our sequences into individual inputs. That will give us timestep lists which are each batch_size x input_size. Each character will then be connected to a recurrent layer composed of n_cells LSTM units.
End of explanation
"""
cells = tf.nn.rnn_cell.BasicLSTMCell(num_units=n_cells, state_is_tuple=True)
"""
Explanation: Now we'll create our recurrent layer composed of LSTM cells.
End of explanation
"""
initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)
"""
Explanation: We'll initialize our LSTMs using the convenience method provided by tensorflow. We could explicitly define the batch size here or use the tf.shape method to compute it based on whatever X is, letting us feed in different sizes into the graph.
End of explanation
"""
if n_layers > 1:
cells = tf.nn.rnn_cell.MultiRNNCell(
[cells] * n_layers, state_is_tuple=True)
initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)
"""
Explanation: Great now we have a layer of recurrent cells and a way to initialize them. If we wanted to make this a multi-layer recurrent network, we could use the MultiRNNCell like so:
End of explanation
"""
# this will return us a list of outputs of every element in our sequence.
# Each output is `batch_size` x `n_cells` of output.
# It will also return the state as a tuple of the cells' memory and
# their output, to connect to the next time we use the recurrent layer.
outputs, state = tf.nn.rnn(cells, Xs, initial_state=initial_state)
# We'll now stack all our outputs for every cell
outputs_flat = tf.reshape(tf.concat(1, outputs), [-1, n_cells])
"""
Explanation: In either case, a cell's state is composed of its output, as modulated by the LSTM's output gate, and whatever is currently stored in its memory. Now let's connect our input to it.
End of explanation
"""
with tf.variable_scope('prediction'):
W = tf.get_variable(
"W",
shape=[n_cells, n_chars],
initializer=tf.random_normal_initializer(stddev=0.1))
b = tf.get_variable(
"b",
shape=[n_chars],
initializer=tf.random_normal_initializer(stddev=0.1))
# Find the output prediction of every single character in our minibatch
# we denote the pre-activation prediction, logits.
logits = tf.matmul(outputs_flat, W) + b
# We get the probabilistic version by calculating the softmax of this
probs = tf.nn.softmax(logits)
    # And then we can find the index of maximum probability along the last dimension
    Y_pred = tf.argmax(probs, 1)
"""
Explanation: For our output, we'll simply try to predict the very next timestep. So if our input sequence was "networ", our output sequence should be: "etwork". This will give us the same batch size coming out, and the same number of elements as our input sequence.
End of explanation
"""
with tf.variable_scope('loss'):
# Compute mean cross entropy loss for each output.
    Y_true_flat = tf.reshape(Y, [-1])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, Y_true_flat)
mean_loss = tf.reduce_mean(loss)
"""
Explanation: <a name="loss"></a>
Loss
Our loss function will take the reshaped predictions and targets, and compute the softmax cross entropy.
End of explanation
"""
with tf.name_scope('optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
gradients = []
clip = tf.constant(5.0, name="clip")
for grad, var in optimizer.compute_gradients(mean_loss):
gradients.append((tf.clip_by_value(grad, -clip, clip), var))
updates = optimizer.apply_gradients(gradients)
"""
Explanation: <a name="clipping-the-gradient"></a>
Clipping the Gradient
Normally, we would just create an optimizer, give it a learning rate, and tell it to minize our loss. But with recurrent networks, we can help out a bit by telling it to clip gradients. That helps with the exploding gradient problem, ensureing they can't get any bigger than the value we tell it. We can do that in tensorflow by iterating over every gradient and variable, and changing their value before we apply their update to every trainable variable.
End of explanation
"""
sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)
cursor = 0
it_i = 0
while True:
Xs, Ys = [], []
for batch_i in range(batch_size):
if (cursor + sequence_length) >= len(txt) - sequence_length - 1:
cursor = 0
Xs.append([encoder[ch]
for ch in txt[cursor:cursor + sequence_length]])
Ys.append([encoder[ch]
for ch in txt[cursor + 1: cursor + sequence_length + 1]])
cursor = (cursor + sequence_length)
Xs = np.array(Xs).astype(np.int32)
Ys = np.array(Ys).astype(np.int32)
loss_val, _ = sess.run([mean_loss, updates],
feed_dict={X: Xs, Y: Ys})
print(it_i, loss_val)
if it_i % 500 == 0:
        p = np.argmax(sess.run(probs, feed_dict={X: Xs}), axis=1)
preds = [decoder[p_i] for p_i in p]
print("".join(preds).split('\n'))
it_i += 1
"""
Explanation: We could also explore other methods of clipping the gradient, based on a percentile of the norm of activations or other similar methods, like when we explored deep dream regularization. But the LSTM has been built to help regularize the network through its own gating mechanisms, so this may not be the best idea for your problem. Really, the only way to know is to try different approaches and see how they affect the output on your problem.
<a name="training"></a>
Training
End of explanation
"""