| repo_name | path | license | content |
|---|---|---|---|
beyondvalence/biof509_wtl | Wk05-OOP2/Wk05-OOP-Public-interface_wl.ipynb | mit | class Item(object):
def __init__(self, name, description, location):
self.name = name
self.description = description
self.location = location
def update_location(self, new_location):
pass
class Equipment(Item):
pass
class Consumable(Item):
    def __init__(self, name, description, location, initial_quantity, current_quantity, storage_temp, flammability):
        super(Consumable, self).__init__(name, description, location)
        self.initial_quantity = initial_quantity
        self.current_quantity = current_quantity
        self.storage_temp = storage_temp
        self.flammability = flammability
def update_quantity_remaining(self, amount):
pass
"""
Explanation: Week 5 - Crafting the public interface.
Learning Objectives
Explain what a public interface is
Discuss the advantages of defining a public interface
Compare different public interfaces
Design a simple public interface
Inheritance
Last week we looked at inheritance, building a general class that we could then extend with additional functionality for special situations.
Each of the classes we create by inheriting from our general class can be thought of as having an 'is-a' relationship with the general class. For example, looking at our Item example from last week, Equipment is an Item and Consumable is an Item.
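The 'is-a' relationship can be checked directly in code with issubclass and isinstance; a minimal sketch using stripped-down stand-ins for the classes above:

```python
# Minimal stand-ins for the Item hierarchy, just to illustrate 'is-a'
class Item(object):
    pass

class Equipment(Item):
    pass

# issubclass tests the class relationship; isinstance tests an instance
print(issubclass(Equipment, Item))    # True: Equipment 'is-a' Item
print(isinstance(Equipment(), Item))  # True: an Equipment instance is an Item
```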
End of explanation
"""
class Ingredient(object):
"""The ingredient object that contains nutritional information"""
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def get_nutrition(self):
"""Returns the nutritional information for the ingredient"""
return (self.carbs, self.protein, self.fat)
class Recipe(object):
"""The Recipe object containing the ingredients"""
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
"""Returns the nutritional information for the recipe"""
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
"""
Explanation: Composition
In week 3 we took example projects and broke them down into collections of different classes. Many of you chose the cookbook example for the assignment and questioned whether things like ingredients should be attributes on the recipe class or classes in their own right. Often the answer is both. These interactions are what turn a collection of different classes into a functioning program; this is called composition. The Recipe object is a composite object: it has ingredients, it has instructions, etc.
This week we will look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
End of explanation
"""
import requests
r = requests.get('https://api.github.com/repos/streety/biof509/events')
print(r.status_code)
print(r.headers['content-type'])
print(r.text[:1000])
print(r.json()[0]['payload']['commits'][0]['message'])
type(r)
"""
Explanation: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests: HTTP for Humans
Requests is a package used for making HTTP requests. There are options in the Python standard library for making HTTP requests, but they can seem difficult to use.
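For contrast, here is a sketch of assembling the same request with the standard library's urllib; the request object is only constructed, never sent, so the example runs offline:

```python
import urllib.request

url = 'https://api.github.com/repos/streety/biof509/events'

# With urllib, the request object and its headers are assembled by hand
req = urllib.request.Request(url, headers={'Accept': 'application/json'})
print(req.full_url)
print(req.get_header('Accept'))

# With requests, the equivalent is a single call:
# r = requests.get(url, headers={'Accept': 'application/json'})
```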
End of explanation
"""
import pandas as pd
data = pd.DataFrame([[0,1,2,3], [4,5,6,7], [8,9,10,11]], index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3', 'col4'])
data
print(data.shape)
print(data['col1'])
print(data.col1)
import matplotlib.pyplot as plt
%matplotlib inline
data.plot()
data.to_csv('Wk05-temp.csv')
data2 = pd.read_csv('Wk05-temp.csv', index_col=0)
print(data2)
"""
Explanation: The API documentation for requests
The Response class
Some useful features:
property
Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and
data analysis tools for the Python programming language.
End of explanation
"""
class Ingredient(object):
"""The ingredient object that contains nutritional information"""
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
"""Returns the nutritional information for the ingredient"""
return (self.carbs, self.protein, self.fat)
class Recipe(object):
"""The Recipe object containing the ingredients"""
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
"""Returns the nutritional information for the recipe"""
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
"""
Explanation: The API documentation for the DataFrame object.
The actual code.
Some useful features:
* classmethod
* property
* __getitem__
* Public and private attributes/methods
* __getattr__
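A toy sketch (not pandas' actual implementation) of how __getitem__, __getattr__, and property combine to give the data['col1'] / data.col1 / data.shape styles of access seen above:

```python
class Table(object):
    """Toy container illustrating bracket, attribute, and property access."""
    def __init__(self, columns):
        self._columns = columns  # leading underscore: private by convention

    def __getitem__(self, name):
        # Enables table['col1']
        return self._columns[name]

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; enables table.col1
        try:
            return self._columns[name]
        except KeyError:
            raise AttributeError(name)

    @property
    def shape(self):
        # Computed attribute, accessed without parentheses
        return (len(next(iter(self._columns.values()))), len(self._columns))

t = Table({'col1': [0, 4, 8], 'col2': [1, 5, 9]})
print(t['col1'], t.col1, t.shape)
```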
Cookbook
We can now return to our cookbook example.
Displaying the ingredients needs to be improved.
End of explanation
"""
class Ingredient(object):
"""The ingredient object that contains nutritional information"""
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
"""Returns the nutritional information for the ingredient"""
return (self.carbs, self.protein, self.fat)
class Recipe(object):
"""The Recipe object containing the ingredients"""
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
"""Returns the nutritional information for the recipe"""
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
"""
Explanation: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
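One way to address several of these points at once is to return the nutrition as a labeled dictionary from both classes, so the values are named, the return types match, and a Recipe can stand in for an Ingredient. A sketch of that idea (one possible design, not the only one):

```python
class Ingredient(object):
    def __init__(self, name, carbs, protein, fat):
        self.name = name
        self.nutrition = {'carbs': carbs, 'protein': protein, 'fat': fat}

    def get_nutrition(self):
        return self.nutrition

class Recipe(object):
    def __init__(self, name, ingredients):
        self.name = name
        self.ingredients = ingredients

    def get_nutrition(self):
        # Both classes expose get_nutrition, so recipes nest freely
        total = {}
        for amount, ingredient in self.ingredients:
            for nutrient, value in ingredient.get_nutrition().items():
                total[nutrient] = total.get(nutrient, 0) + amount * value
        return total

dough = Recipe('Dough', [(820, Ingredient('Flour', 0.77, 0.10, 0.01))])
loaf = Recipe('Loaf', [(1, dough), (30, Ingredient('Oil', 0, 0, 1))])
print(loaf.get_nutrition())
```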
End of explanation
"""
!cat Wk05-wsgi.py
"""
Explanation: WSGI
The value of building and documenting a interface to our code is not unique to object oriented programming.
Next week we will look at creating websites as an alternative to command line programs and GUIs. Python has a rich ecosystem of web servers and frameworks for creating web applications. Importantly, the vast majority use a common interface called WSGI.
WSGI is based on a simple exchange. The example below uses the wsgiref package for the web server, with the application itself implemented without external packages. Next week, we will look at some of the more commonly used web servers and use a web framework to develop a more substantial web project.
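The core of the exchange is just a callable that receives the request environment and a start_response function, and returns an iterable of bytes. A minimal sketch (a simplified stand-in, not the Wk05-wsgi.py file itself):

```python
def application(environ, start_response):
    # environ is a dict describing the request; start_response sends the
    # status line and headers back to the server
    body = 'Hello from {}'.format(environ.get('PATH_INFO', '/')).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

# Exercise the app directly, without a server, by faking the two arguments
collected = {}
def fake_start_response(status, headers):
    collected['status'] = status
    collected['headers'] = headers

result = b''.join(application({'PATH_INFO': '/demo'}, fake_start_response))
print(collected['status'], result)
```

To actually serve it, the same callable would be handed to wsgiref, e.g. `from wsgiref.simple_server import make_server; make_server('', 8000, application).serve_forever()`.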
End of explanation
"""
class Ingredient(object):
"""The ingredient object that contains nutritional information"""
    def __init__(self, name, *args, **kwargs):
        self.name = name
        self.nums = []
        for a in args:
            if isinstance(a, dict):
                for key in a.keys():
                    setattr(self, key, a[key])
            elif isinstance(a, (int, float)):
                # the int and float branches were identical, so they are merged
                self.nums.append(a)
                if len(self.nums) in [3, 4]:
                    for n, val in zip(['carbs', 'protein', 'fat', 'cholesterol'], self.nums):
                        setattr(self, n, val)
            else:
                print('Need correct nutritional information format')
def __repr__(self):
if getattr(self, 'cholesterol', False):
return 'Ingredient({0}, {1}, {2}, {3}, {4})'.format(self.name,
self.carbs,
self.protein,
self.fat,
self.cholesterol)
else:
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name,
self.carbs,
self.protein,
self.fat)
    def get_nutrition(self):
        """Returns the nutritional information for the ingredient"""
        # default cholesterol to 0 so ingredients without it do not raise
        return (self.carbs, self.protein, self.fat, getattr(self, 'cholesterol', 0))
def get_name(self):
"""Returns the ingredient name"""
return self.name
class Recipe(object):
"""The Recipe object containing the ingredients"""
def __init__(self, name, *ingredients):
self.name = name
        self.ingredients = ingredients[0]
        self.number = len(self.ingredients)
self.nutrition_ = {'carbs': 0, 'protein': 0, 'fat':0, 'cholesterol':0}
def __repr__(self):
return 'Recipe({0}, {1})'.format(self.name, self.ingredients)
def get_nutrition(self):
"""Returns the nutritional information for the recipe"""
#for _ in range(self.number):
nutrition = [0,0,0,0] # need to be length of dict
for amount, ingredient in self.ingredients:
# print(type(ingredient), ingredient) # test
try:
if getattr(ingredient, 'cholesterol', False):
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
nutrition[3] += amount * ingredient.cholesterol
else:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
except AttributeError: # in case another recipe is in the ingredients (nested)
nu = ingredient.get_nutrition()
nu = [amount * x for x in nu]
nutrition[0] += nu[0]
nutrition[1] += nu[1]
nutrition[2] += nu[2]
nutrition[3] += nu[3]
return nutrition
@property
def nutrition(self):
facts = self.get_nutrition()
self.nutrition_['carbs'] = facts[0]
self.nutrition_['protein'] = facts[1]
self.nutrition_['fat'] = facts[2]
self.nutrition_['cholesterol'] = facts[3]
return self.nutrition_
def get_name(self):
return self.name
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
# Should be roughly [(820, Ingredient(Flour, 0.77, 0.1, 0.01)), (30, Ingredient(Oil, 0, 0, 1)),
# (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)), (560, Ingredient(Water, 0, 0, 0))]
print(bread.nutrition)
#Should be roughly {'carbs': 669.5875, 'protein': 85.5, 'fat': 38.6375} the order is not important
eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994, 'cholesterol': 0.00423, 'awesome':100})
#eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994})
#eggs = Ingredient('Egg', 0.0077, 0.1258, 0.0994, 0.00423)
print(eggs)
#Points to note:
# - The different call to Ingredient, you can use isinstance or type to change the
# behaviour depending on the arguments supplied
# - Cholesterol as an extra nutrient, your implementation should accept any nutrient
# - Use of Recipe (bread) as an ingredient
basic_french_toast = Recipe('Basic French Toast', [(300, Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258,
'fat': 0.0994, 'cholesterol': 0.00423})),
(0.25, bread)])
print(basic_french_toast.ingredients)
# Should be roughly:
# [(300, Ingredient(Egg, 0.0077, 0.1258, 0.0994)), (0.25, Recipe(Bread, [(820, Ingredient(Flour, 0.77, 0.1, 0.01)),
# (30, Ingredient(Oil, 0, 0, 1)), (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)),
# (560, Ingredient(Water, 0, 0, 0))]))]
# Note the formatting for the Recipe object, a __repr__ method will be needed
print(basic_french_toast.nutrition)
# Should be roughly {'protein': 59.115, 'carbs': 169.706875, 'cholesterol': 1.2690000000000001, 'fat': 39.479375000000005}
# The order is not important
"""
Explanation: Assignments
Modify the Ingredient and Recipe classes so that the following code works.
End of explanation
"""
|
StingraySoftware/notebooks | Transfer Functions/TransferFunction Tutorial.ipynb | mit | import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
"""
Explanation: Contents
This notebook covers the basics of creating a TransferFunction object, obtaining time- and energy-resolved responses, plotting them, and using the available IO methods. Finally, artificial responses are introduced, which provide a way for quick testing.
Setup
Set up some useful libraries.
End of explanation
"""
from stingray.simulator.transfer import TransferFunction
from stingray.simulator.transfer import simple_ir, relativistic_ir
"""
Explanation: Import relevant stingray libraries.
End of explanation
"""
response = np.loadtxt('intensity.txt')
"""
Explanation: Creating TransferFunction
A transfer function can be initialized by passing a 2-d array with energy across the first dimension and time across the second. For example, if the 2-d array is defined by arr, then arr[1][5] refers to a time of 5 units and an energy of 1 unit.
For the purpose of this tutorial, we have stored a 2-d array in a text file named intensity.txt. The script to generate this file is explained in the Data Preparation notebook.
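As a quick sketch of that indexing convention, using made-up numbers in place of the intensity.txt data:

```python
import numpy as np

arr = np.arange(3 * 8).reshape(3, 8)  # 3 energy bins x 8 time bins

# arr[1][5]: a time of 5 units and an energy of 1 unit, as in the text
print(arr.shape)    # (3, 8)
print(arr[1][5])    # 13
print(arr[1, 5])    # equivalent numpy-style indexing
```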
End of explanation
"""
transfer = TransferFunction(response)
transfer.data.shape
"""
Explanation: Initialize transfer function by passing the array defined above.
End of explanation
"""
transfer.time_response()
"""
Explanation: By default, time and energy spacing across both axes are set to 1. However, they can be changed by supplying additional parameters dt and de.
Obtaining Time-Resolved Response
The 2-d transfer function can be converted into a time-resolved/energy-averaged response.
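Conceptually this is an average of the 2-d array over its energy axis; a rough numpy sketch of that reduction (not the stingray implementation itself):

```python
import numpy as np

response = np.array([[1., 2., 3., 4.],
                     [5., 6., 7., 8.]])   # 2 energy bins x 4 time bins

# Average across energy (axis 0) to get a time-resolved response
time_response = response.mean(axis=0)
print(time_response)   # [3. 4. 5. 6.]
```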
End of explanation
"""
transfer.time[1:10]
"""
Explanation: This sets time parameter which can be accessed by transfer.time
End of explanation
"""
transfer.energy_response()
"""
Explanation: Additionally, the energy interval over which to average can be set with the e0 and e1 parameters.
Obtaining Energy-Resolved Response
An energy-resolved/time-averaged response can also be formed from the 2-d transfer function.
End of explanation
"""
transfer.energy[1:10]
"""
Explanation: This sets energy parameter which can be accessed by transfer.energy
End of explanation
"""
transfer.plot(response='2d')
transfer.plot(response='time')
transfer.plot(response='energy')
"""
Explanation: Plotting Responses
TransferFunction can create plots of the time-resolved, energy-resolved, and 2-d responses.
End of explanation
"""
transfer.write('transfer.pickle')
"""
Explanation: By passing save=True, the plots can also be saved to disk.
IO
A TransferFunction can be saved in pickle format and retrieved later.
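The pickle round-trip itself is plain Python; a generic sketch with the standard pickle module, using a stand-in dictionary rather than a real TransferFunction:

```python
import os
import pickle
import tempfile

data = {'time': [0, 1, 2], 'energy': [0.5, 1.0]}

path = os.path.join(tempfile.mkdtemp(), 'transfer_demo.pickle')
with open(path, 'wb') as fh:
    pickle.dump(data, fh)          # serialize to disk

with open(path, 'rb') as fh:
    restored = pickle.load(fh)     # read it back

print(restored == data)   # True
```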
End of explanation
"""
transfer_new = TransferFunction.read('transfer.pickle')
transfer_new.time[1:10]
"""
Explanation: Saved files can be read using the static read() method.
End of explanation
"""
s_ir = simple_ir(dt=0.125, start=10, width=5, intensity=0.1)
plt.plot(s_ir)
"""
Explanation: Artificial Responses
For quick testing, two helper impulse response models are provided.
1- Simple IR
simple_ir() lets you define an impulse response of constant height. It takes the time resolution, start time, width, and intensity as arguments.
End of explanation
"""
r_ir = relativistic_ir(dt=0.125)
plt.plot(r_ir)
"""
Explanation: 2- Relativistic IR
A more realistic impulse response mimicking black hole dynamics can be created using relativistic_ir(). Its arguments are: time resolution, primary peak time, secondary peak time, end time, primary peak value, secondary peak value, rise slope, and decay slope. These parameters are set to appropriate values by default.
End of explanation
"""
|
tpin3694/tpin3694.github.io | regex/match_a_symbol.ipynb | mit | # Load regex package
import re
"""
Explanation: Title: Match A Symbol
Slug: match_a_symbol
Summary: Match A Symbol
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Based on: Regular Expressions Cookbook
Preliminaries
End of explanation
"""
# Create a variable containing a text string
text = '$100'
"""
Explanation: Create some text
End of explanation
"""
# Find all instances of the exact match '$'
re.findall(r'\$', text)
"""
Explanation: Apply regex
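Escaping metacharacters by hand gets error-prone for longer patterns; re.escape builds the escaped pattern for you (a small supplementary example):

```python
import re

text = '$100'

# re.escape('$') produces the same '\$' pattern used above
print(re.escape('$'))
print(re.findall(re.escape('$'), text))   # ['$']
```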
End of explanation
"""
|
h-mayorquin/time_series_basic | presentations/2016-03-01(Visualizing Data Clusters Nexa Wall Street Columns).ipynb | bsd-3-clause | import h5py
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
%matplotlib inline
from visualization.data_clustering import visualize_data_cluster_text_to_image_columns
"""
Explanation: Here we visualize the data clusters for the Wall Street data in the columns version.
End of explanation
"""
# First we load the file
file_location = '../results_database/text_wall_street_columns.hdf5'
run_name = '/test'
f = h5py.File(file_location, 'r')
# Nexa parameters
Nspatial_clusters = 3
Ntime_clusters = 3
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
"""
Explanation: First let's do it for the version with spaces
Mixed Receptive Fields
End of explanation
"""
matrix = np.zeros((10, 3))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // 3
second_index = index % 3
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
cluster = 0
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
cluster = 1
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
cluster = 2
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
"""
Explanation: Let's see the receptive fields first
End of explanation
"""
# First we load the file
file_location = '../results_database/text_wall_street_columns.hdf5'
run_name = '/indep'
f = h5py.File(file_location, 'r')
# Nexa parameters
Nspatial_clusters = 3
Ntime_clusters = 3
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
matrix = np.zeros((10, 3))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // 3
second_index = index % 3
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
cluster = 0
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
cluster = 1
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
cluster = 2
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
"""
Explanation: Now for independent receptive fields
End of explanation
"""
# First we load the file
file_location = '../results_database/text_wall_street_columns_spaces.hdf5'
run_name = '/test'
f = h5py.File(file_location, 'r')
# Nexa parameters
Nspatial_clusters = 3
Ntime_clusters = 3
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
matrix = np.zeros((10, 3))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // 3
second_index = index % 3
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
for cluster in range(3):
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
"""
Explanation: Now let's do it for the version without spaces
Mixed Receptive Fields
End of explanation
"""
# First we load the file
file_location = '../results_database/text_wall_street_columns_spaces.hdf5'
run_name = '/indep'
f = h5py.File(file_location, 'r')
# Nexa parameters
Nspatial_clusters = 3
Ntime_clusters = 3
Nembedding = 3
parameters_string = '/' + str(Nspatial_clusters)
parameters_string += '-' + str(Ntime_clusters)
parameters_string += '-' + str(Nembedding)
nexa = f[run_name + parameters_string]
cluster_to_index = nexa['cluster_to_index']
matrix = np.zeros((10, 3))
for cluster in cluster_to_index:
cluster_indexes = cluster_to_index[str(cluster)]
for index in cluster_indexes:
first_index = index // 3
second_index = index % 3
matrix[first_index, second_index] = cluster
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(111)
ax.imshow(matrix, origin='lower', interpolation='none')
for cluster in range(3):
for data_center in range(Ntime_clusters):
fig = visualize_data_cluster_text_to_image_columns(nexa, f, run_name,
cluster, data_center, colorbar=True)
"""
Explanation: Now for independent receptive fields
End of explanation
"""
|
fonnesbeck/ngcm_pandas_2016 | notebooks/2.1 - High-level Plotting with pandas and Seaborn.ipynb | cc0-1.0 | %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
normals = pd.Series(np.random.normal(size=10))
normals.plot()
"""
Explanation: High-level Plotting with Pandas and Seaborn
In 2016, there are more options for generating plots in Python than ever before:
matplotlib
Pandas
Seaborn
ggplot
Bokeh
pygal
Plotly
These packages vary with respect to their APIs, output formats, and complexity. A package like matplotlib, while powerful, is a relatively low-level plotting package that makes very few assumptions (by design) about what constitutes good layout, but has a lot of flexibility to allow the user to completely customize the look of the output.
On the other hand, Seaborn and Pandas include methods for DataFrame and Series objects that are relatively high-level, and that make reasonable assumptions about how the plot should look. This allows users to generate publication-quality visualizations in a relatively automated way.
End of explanation
"""
normals.cumsum().plot(grid=True)
"""
Explanation: Notice that by default a line plot is drawn and a light background is included. These decisions were made on your behalf by pandas.
All of this can be changed, however:
End of explanation
"""
variables = pd.DataFrame({'normal': np.random.normal(size=100),
'gamma': np.random.gamma(1, size=100),
'poisson': np.random.poisson(size=100)})
variables.cumsum(0).plot()
"""
Explanation: Similarly, for a DataFrame:
End of explanation
"""
variables.cumsum(0).plot(subplots=True, grid=True)
"""
Explanation: As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot:
End of explanation
"""
variables.cumsum(0).plot(secondary_y='normal', grid=True)
"""
Explanation: Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space:
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
for i,var in enumerate(['normal','gamma','poisson']):
variables[var].cumsum(0).plot(ax=axes[i], title=var)
axes[0].set_ylabel('cumulative sum')
"""
Explanation: If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes:
End of explanation
"""
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.head()
titanic.groupby('pclass').survived.sum().plot.bar()
titanic.groupby(['sex','pclass']).survived.sum().plot.barh()
death_counts = pd.crosstab([titanic.pclass, titanic.sex], titanic.survived.astype(bool))
death_counts.plot.bar(stacked=True, color=['black','gold'], grid=True)
"""
Explanation: Bar plots
Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes. In Pandas, we just use the plot method with a kind='bar' argument.
For this series of examples, let's load up the Titanic dataset:
End of explanation
"""
death_counts.div(death_counts.sum(1).astype(float), axis=0).plot.barh(stacked=True, color=['black','gold'])
"""
Explanation: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group.
End of explanation
"""
titanic.fare.hist(grid=False)
"""
Explanation: Histograms
Frequently it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays the relative frequencies of data values; hence, the y-axis is always some measure of frequency, either raw counts of values or scaled proportions.
For example, we might want to see how the fares were distributed aboard the titanic:
End of explanation
"""
titanic.fare.hist(bins=30)
"""
Explanation: The hist method puts the continuous fare values into bins, trying to make a sensible decision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10):
End of explanation
"""
sturges = lambda n: int(np.log2(n) + 1)
square_root = lambda n: int(np.sqrt(n))
from scipy.stats import kurtosis
doanes = lambda data: int(1 + np.log(len(data)) + np.log(1 + kurtosis(data) * (len(data) / 6.) ** 0.5))
n = len(titanic)
sturges(n), square_root(n), doanes(titanic.fare.dropna())
titanic.fare.hist(bins=doanes(titanic.fare.dropna()))
"""
Explanation: There are algorithms for determining an "optimal" number of bins, each of which scales differently with the number of observations in the data series.
End of explanation
"""
titanic.fare.dropna().plot.kde(xlim=(0,600))
"""
Explanation: A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate.
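Under the hood, pandas builds its kde plot on scipy's gaussian_kde; the smoothing can be sketched directly on synthetic data (made-up numbers, not the fares):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.RandomState(42)
sample = rng.gamma(2.0, 30.0, size=500)  # synthetic skewed 'fares'

kde = gaussian_kde(sample)               # bandwidth chosen automatically
grid = np.linspace(0, sample.max(), 200)
density = kde(grid)

# The estimate is a smooth, non-negative curve that integrates to ~1
print(density.min() >= 0)
print(kde.integrate_box_1d(0, sample.max()))
```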
End of explanation
"""
titanic.fare.hist(bins=doanes(titanic.fare.dropna()), normed=True, color='lightseagreen')
titanic.fare.dropna().plot.kde(xlim=(0,600), style='r--')
"""
Explanation: Often, histograms and density plots are shown together:
End of explanation
"""
titanic.boxplot(column='fare', by='pclass', grid=False)
"""
Explanation: Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution).
We will explore kernel density estimates more in the next section.
Boxplots
A different way of visualizing the distribution of data is the boxplot, which is a display of common quantiles; these are typically the quartiles and the lower and upper 5 percent values.
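The quantities a boxplot displays can be computed directly; a quick numpy sketch on made-up ages:

```python
import numpy as np

ages = np.array([2, 14, 21, 24, 28, 30, 33, 36, 41, 47, 52, 58, 63, 71, 80])

q1, median, q3 = np.percentile(ages, [25, 50, 75])
lower5, upper95 = np.percentile(ages, [5, 95])
iqr = q3 - q1   # box height; whiskers/outliers are defined relative to this
print(q1, median, q3, iqr)
```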
End of explanation
"""
bp = titanic.boxplot(column='age', by='pclass', grid=False)
for i in [1,2,3]:
y = titanic.age[titanic.pclass==i].dropna()
# Add some random "jitter" to the x-axis
x = np.random.normal(i, 0.04, size=len(y))
plt.plot(x, y.values, 'r.', alpha=0.2)
"""
Explanation: You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles.
One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series.
End of explanation
"""
# Write your answer here
"""
Explanation: When data are dense, a couple of tricks used above help the visualization:
reducing the alpha level to make the points partially transparent
adding random "jitter" along the x-axis to avoid overstriking
Exercise
Using the Titanic data, create kernel density estimate plots of the age distributions of survivors and victims.
End of explanation
"""
wine = pd.read_table("../data/wine.dat", sep='\s+')
attributes = ['Grape',
'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']
wine.columns = attributes
"""
Explanation: Scatterplots
To look at how Pandas does scatterplots, let's look at a small dataset in wine chemistry.
End of explanation
"""
wine.plot.scatter('Color intensity', 'Hue')
"""
Explanation: Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. Pandas provides a plot.scatter method for DataFrame objects, which wraps matplotlib's scatter function.
End of explanation
"""
wine.plot.scatter('Color intensity', 'Hue', s=wine.Alcohol*100, alpha=0.5)
wine.plot.scatter('Color intensity', 'Hue', c=wine.Grape)
wine.plot.scatter('Color intensity', 'Hue', c=wine.Alcohol*100, cmap='hot')
"""
Explanation: We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors.
End of explanation
"""
_ = pd.scatter_matrix(wine.loc[:, 'Alcohol':'Flavanoids'], figsize=(14,14), diagonal='kde')
"""
Explanation: To view scatterplots of a large number of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optionally with histograms or kernel density estimates on the diagonal.
End of explanation
"""
normals.plot()
"""
Explanation: Seaborn
Seaborn is a modern data visualization tool for Python, created by Michael Waskom. Seaborn's high-level interface makes it easy to visually explore your data, by being able to easily iterate through different plot types and layouts with minimal hand-coding. In this way, Seaborn complements matplotlib (which we will learn about later) in the data science toolbox.
An easy way to see how Seaborn can immediately improve your data visualization is by setting the plot style using one of its several built-in styles.
Here is a simple pandas plot before Seaborn:
End of explanation
"""
import seaborn as sns
normals.plot()
"""
Explanation: Seaborn is conventionally imported using the sns alias. Simply importing Seaborn invokes the default Seaborn settings. These are generally more muted colors with a light gray background and subtle white grid lines.
End of explanation
"""
sns.set_style('whitegrid')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.set_style('ticks')
sns.boxplot(x='pclass', y='age', data=titanic)
"""
Explanation: Customizing Seaborn Figure Aesthetics
Seaborn manages plotting parameters in two general groups:
setting components of aesthetic style of the plot
scaling elements of the figure
This default theme is called darkgrid; there are a handful of preset themes:
darkgrid
whitegrid
dark
white
ticks
Each is suited to particular applications. For example, in more "data-heavy" situations, one might want a lighter background.
We can apply an alternate theme using set_style:
End of explanation
"""
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine()
"""
Explanation: The figure still looks heavy, with the axes distracting from the lines in the boxplot. We can remove them with despine:
End of explanation
"""
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
"""
Explanation: Finally, we can give the plot yet more space by specifying arguments to despine; specifically, we can move axes away from the figure elements (via offset) and minimize the length of the axes to the lowest and highest major tick value (via trim):
End of explanation
"""
sns.set_context('paper')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
sns.set_context('poster')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
"""
Explanation: The second set of figure aesthetic parameters controls the scale of the plot elements.
There are four default scales that correspond to different contexts that a plot may be intended for use with.
paper
notebook
talk
poster
The default is notebook, which is optimized for use in Jupyter notebooks. We can change the scaling with set_context:
End of explanation
"""
sns.set_context('notebook', font_scale=0.5, rc={'lines.linewidth': 0.5})
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
"""
Explanation: Each of the contexts can be fine-tuned for more specific applications:
End of explanation
"""
sns.plotting_context()
"""
Explanation: The detailed settings are available from plotting_context:
End of explanation
"""
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
data.head()
sns.set()
for col in 'xy':
sns.kdeplot(data[col], shade=True)
"""
Explanation: Seaborn works hand-in-hand with pandas to create publication-quality visualizations quickly and easily from DataFrame and Series data.
For example, we can generate kernel density estimates of two sets of simulated data, via the kdeplot function.
End of explanation
"""
sns.distplot(data['x'])
"""
Explanation: distplot combines a kernel density estimate and a histogram.
End of explanation
"""
sns.kdeplot(data);
cmap = {1:'Reds', 2:'Blues', 3:'Greens'}
for grape in cmap:
alcohol, phenols = wine.loc[wine.Grape==grape, ['Alcohol', 'Total phenols']].T.values
sns.kdeplot(alcohol, phenols,
cmap=cmap[grape], shade=True, shade_lowest=False, alpha=0.3)
"""
Explanation: If kdeplot is provided with two columns of data, it will automatically generate a contour plot of the joint KDE.
End of explanation
"""
with sns.axes_style('white'):
sns.jointplot("Alcohol", "Total phenols", wine, kind='kde');
"""
Explanation: Similarly, jointplot will generate a shaded joint KDE, along with the marginal KDEs of the two variables.
End of explanation
"""
sns.axes_style()
with sns.axes_style('white', {'font.family': ['serif']}):
sns.jointplot("Alcohol", "Total phenols", wine, kind='kde');
"""
Explanation: Notice in the above, we used a context manager to temporarily assign a white axis style to the plot. This is a great way of changing the defaults for just one figure, without having to set and then reset preferences.
You can do this with a number of the seaborn defaults. Here is a dictionary of the style settings:
End of explanation
"""
titanic = titanic[titanic.age.notnull() & titanic.fare.notnull()]
sns.pairplot(titanic, vars=['age', 'fare', 'pclass', 'sibsp'], hue='survived', palette="muted", markers='+')
"""
Explanation: To explore correlations among several variables, the pairplot function generates pairwise plots, with histograms on the diagonal, and supports a fair bit of customization.
End of explanation
"""
sns.FacetGrid(titanic, col="sex", row="pclass")
"""
Explanation: Plotting Small Multiples on Data-aware Grids
The pairplot above is an example of replicating the same visualization on different subsets of a particular dataset. This facilitates easy visual comparisons among groups, making otherwise-hidden patterns in complex data more apparent.
Seaborn affords a flexible means for generating plots on "data-aware grids", provided that your pandas DataFrame is structured appropriately. In particular, you need to organize your variables into columns and your observations (replicates) into rows. Using this baseline pattern of organization, you can take advantage of Seaborn's functions for easily creating lattice plots from your dataset.
FacetGrid is a Seaborn object for plotting multiple variables simultaneously as trellis plots. Variables can be assigned to one of three dimensions of the FacetGrid:
rows
columns
colors (hue)
Let's use the titanic dataset to create a trellis plot that represents 3 variables at a time. This consists of 2 steps:
Create a FacetGrid object that relates two variables in the dataset in a grid of pairwise comparisons.
Add the actual plot (distplot) that will be used to visualize each comparison.
The first step creates a set of axes, according to the dimensions passed as row and col. These axes are empty, however:
End of explanation
"""
g = sns.FacetGrid(titanic, col="sex", row="pclass")
g.map(sns.distplot, 'age')
"""
Explanation: The FacetGrid's map method then allows a third variable to be plotted in each grid cell, according to the plot type passed. For example, a distplot will generate both a histogram and kernel density estimate for age, for each combination of sex and passenger class, as follows:
End of explanation
"""
cdystonia = pd.read_csv('../data/cdystonia.csv')
cdystonia.head()
"""
Explanation: To more fully explore trellis plots in Seaborn, we will use a biomedical dataset. These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
End of explanation
"""
g = sns.FacetGrid(cdystonia[cdystonia.patient<=12], col='patient', col_wrap=4)
g.map(sns.pointplot, 'week', 'twstrs', color='0.5')
"""
Explanation: Notice that this data represents time series of individual patients, comprised of follow-up measurements at 2-4 week intervals following treatment.
As a first pass, we may wish to see how the trajectories of outcomes vary from patient to patient. Using pointplot, we can create a grid of plots to represent the time series for each patient. Let's just look at the first 12 patients:
End of explanation
"""
ordered_treat = ['Placebo', '5000U', '10000U']
g = sns.FacetGrid(cdystonia, col='treat', col_order=ordered_treat)
g.map(sns.pointplot, 'week', 'twstrs', color='0.5')
"""
Explanation: Where pointplot is particularly useful is in representing the central tendency and variance of multiple replicate measurements. Having examined individual responses to treatment, we may now want to look at the average response among treatment groups. Where there are multiple outcomes (y variable) for each predictor (x variable), pointplot will plot the mean, and calculate the 95% confidence interval for the mean, using bootstrapping:
End of explanation
"""
g = sns.FacetGrid(cdystonia, row='treat', col='week')
g.map(sns.distplot, 'twstrs', hist=False, rug=True)
"""
Explanation: Notice that to enforce the desired order of the facets (lowest to highest treatment level), the labels were passed as a col_order argument to FacetGrid.
Let's revisit the distplot function to look at how the distribution of the outcome variables varies by time and treatment. Instead of a histogram, however, we will here include the "rug", which marks the locations of the individual data points that were used to fit the kernel density estimate.
End of explanation
"""
from scipy.stats import norm
g = sns.FacetGrid(cdystonia, row='treat', col='week')
g.map(sns.distplot, 'twstrs', kde=False, fit=norm)
"""
Explanation: distplot can also fit parametric data models (instead of a KDE). For example, we may wish to fit the data to normal distributions. We can use the distributions included in the SciPy package; Seaborn knows how to use these distributions to generate a fit to the data.
End of explanation
"""
g = sns.FacetGrid(cdystonia, col='treat', row='week')
g.map(sns.regplot, 'age', 'twstrs')
"""
Explanation: We can take the statistical analysis a step further, by using regplot to conduct regression analyses.
For example, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a linear relationship between age and twstrs:
End of explanation
"""
segments = pd.read_csv('../data/AIS/transit_segments.csv')
segments.head()
"""
Explanation: Exercise
From the AIS subdirectory of the data directory, import both the vessel_information table and transit_segments table and join them. Use the resulting table to create a faceted scatterplot of segment length (seg_length) and average speed (avg_sog) as a trellis plot by flag and vessel type.
To simplify the plot, first generate a subset of the data that includes only the 5 most common ship types and the 5 most common countries.
End of explanation
"""
|
kubeflow/kfp-tekton-backend | samples/tutorials/mnist/00_Kubeflow_Cluster_Setup.ipynb | apache-2.0 | work_directory_name = 'kubeflow'
! mkdir -p $work_directory_name
%cd $work_directory_name
"""
Explanation: Deploying a Kubeflow Cluster on Google Cloud Platform (GCP)
This notebook provides instructions for setting up a Kubeflow cluster on GCP using the command-line interface (CLI). For additional help, see the guide to deploying Kubeflow using the CLI.
There are two possible alternatives:
- The first alternative is to deploy a Kubeflow cluster using the Kubeflow deployment web app; the instructions can be found here.
- Another alternative is to use the recently launched AI Platform Pipelines. Note, however, that AI Platform Pipelines is a standalone Kubeflow Pipelines deployment, where many of the components of a full Kubeflow deployment are not pre-installed. The instructions can be found here.
The CLI deployment gives you more control over the deployment process and configuration than you get if you use the deployment UI.
Prerequisites
You have a GCP project setup for your Kubeflow Deployment with you having the owner role for the project and with the following APIs enabled:
Compute Engine API
Kubernetes Engine API
Identity and Access Management(IAM) API
Deployment Manager API
Cloud Resource Manager API
AI Platform Training & Prediction API
You have set up OAuth for Cloud IAP
You have installed and setup kubectl
You have installed gcloud-sdk
Running Environment
This notebook helps in creating the Kubeflow cluster on GCP. You must run this notebook in an environment with Cloud SDK installed, such as Cloud Shell. Learn more about installing Cloud SDK.
Setting up a Kubeflow cluster
Download kfctl
Setup environment variables
Create dedicated service account for deployment
Deploy Kubefow
Install Kubeflow Pipelines SDK
Sanity check
Create a working directory
Create a new working directory in your current directory. The default name is kubeflow, but you can change the name.
End of explanation
"""
## Download kfctl v0.7.0
! curl -LO https://github.com/kubeflow/kubeflow/releases/download/v0.7.0/kfctl_v0.7.0_linux.tar.gz
## Unpack the tar ball
! tar -xvf kfctl_v0.7.0_linux.tar.gz
"""
Explanation: Download kfctl
Download kfctl to your working directory. The default version used is v0.7.0, but you can find the latest release here.
End of explanation
"""
## Create user credentials
! gcloud auth application-default login
"""
Explanation: If you are using AI Platform Notebooks, your environment is already authenticated. Skip the following cell.
End of explanation
"""
# Set your GCP project ID and the zone where you want to create the Kubeflow deployment
%env PROJECT=<ADD GCP PROJECT HERE>
%env ZONE=<ADD GCP ZONE TO LAUNCH KUBEFLOW CLUSTER HERE>
# google cloud storage bucket
%env GCP_BUCKET=gs://<ADD STORAGE LOCATION HERE>
# Use the following kfctl configuration file for authentication with
# Cloud IAP (recommended):
uri = "https://raw.githubusercontent.com/kubeflow/manifests/v0.7-branch/kfdef/kfctl_gcp_iap.0.7.0.yaml"
uri = uri.strip()
%env CONFIG_URI=$uri
# For using Cloud IAP for authentication, create environment variables
# from the OAuth client ID and secret that you obtained earlier:
%env CLIENT_ID=<ADD OAuth CLIENT ID HERE>
%env CLIENT_SECRET=<ADD OAuth CLIENT SECRET HERE>
# Set KF_NAME to the name of your Kubeflow deployment. You also use this
# value as directory name when creating your configuration directory.
# For example, your deployment name can be 'my-kubeflow' or 'kf-test'.
%env KF_NAME=<ADD KUBEFLOW DEPLOYMENT NAME HERE>
# Set up name of the service account that should be created and used
# while creating the Kubeflow cluster
%env SA_NAME=<ADD SERVICE ACCOUNT NAME TO BE CREATED HERE>
"""
Explanation: Set up environment variables
Set up environment variables to use while installing Kubeflow. Replace variable placeholders (for example, <VARIABLE NAME>) with the correct values for your environment.
End of explanation
"""
! gcloud config set project ${PROJECT}
! gcloud config set compute/zone ${ZONE}
# Set the path to the base directory where you want to store one or more
# Kubeflow deployments. For example, /opt/.
# Here we use the current working directory as the base directory
# Then set the Kubeflow application directory for this deployment.
import os
base = os.getcwd()
%env BASE_DIR=$base
kf_dir = os.getenv('BASE_DIR') + "/" + os.getenv('KF_NAME')
%env KF_DIR=$kf_dir
# The following command is optional. It adds the kfctl binary to your path.
# If you don't add kfctl to your path, you must use the full path
# each time you run kfctl. In this example, the kfctl file is present in
# the current directory
new_path = os.getenv('PATH') + ":" + os.getenv('BASE_DIR')
%env PATH=$new_path
"""
Explanation: Configure gcloud and add kfctl to your path.
End of explanation
"""
! gcloud iam service-accounts create ${SA_NAME}
! gcloud projects add-iam-policy-binding ${PROJECT} \
--member serviceAccount:${SA_NAME}@${PROJECT}.iam.gserviceaccount.com \
--role 'roles/owner'
! gcloud iam service-accounts keys create key.json \
--iam-account ${SA_NAME}@${PROJECT}.iam.gserviceaccount.com
"""
Explanation: Create service account
End of explanation
"""
key_path = os.getenv('BASE_DIR') + "/" + 'key.json'
%env GOOGLE_APPLICATION_CREDENTIALS=$key_path
"""
Explanation: Set GOOGLE_APPLICATION_CREDENTIALS
End of explanation
"""
! mkdir -p ${KF_DIR}
%cd $kf_dir
! kfctl apply -V -f ${CONFIG_URI}
"""
Explanation: Setup and deploy Kubeflow
End of explanation
"""
%%capture
# Install the SDK (Uncomment the code if the SDK is not installed before)
! pip3 install 'kfp>=0.1.36' --quiet --user
"""
Explanation: Install Kubeflow Pipelines SDK
End of explanation
"""
! kubectl -n istio-system describe ingress
"""
Explanation: Sanity Check: Check the ingress created
End of explanation
"""
|
ValueFromData/reasoning-under-uncertainty | 1-invitation-to-probability.ipynb | cc0-1.0 | %matplotlib inline
import numpy as np
import scipy.stats as stats
import matplotlib.pylab as pylab
from matplotlib import pyplot as plt
"""
Explanation: Reasoning Under Uncertainty Workshop
PyCon 2015
Part 1 : An invitation to probability
Author : Ronojoy Adhikari
Email : rjoy@imsc.res.in | Web : www.imsc.res.in/~rjoy
Github : www.github.com/ronojoy | Twitter: @phyrjoy
End of explanation
"""
# the scipy.stats package has many discrete and continuous random variables
# the coin toss random variable
X = stats.bernoulli(p=0.5)
# bernoulli mean = p, variance = p(1-p)
#X.mean(), X.var()
# entropy = -\sum_x p(x) * ln p(x)
# quantifies the uncertainty in the random variable
#X.entropy()
# lots of throws of the coin
x = X.rvs(1000)
# the empirical frequency distribution - aka the histogram
pylab.rcParams['figure.figsize'] = 6, 4 # medium size figures
plt.hist(x, density = True, color="#348ABD", alpha = 0.4);
# the 'central limit theorem' random variable
X = stats.norm()
# the normal deviate has zero mean and unit variance
#X.mean(), X.var()
# the normal deviate has the greatest entropy amongst all continuous, unbounded
# random variables with given mean and variance
# see Jaynes - Probability theory, the logic of science - for proof and discussion
#X.entropy()
# a million draws from the normal distribution
x = X.rvs(int(1e6))
# the empirical frequency distribution - aka the histogram
pylab.rcParams['figure.figsize'] = 6, 4 # medium size figure
plt.hist(x, bins = 50, density = True, color="#348ABD", alpha = 0.4 );
"""
Explanation: Random variables in python
The probability that a random variable $X$ takes on a value $X=x$ is given by $P(x)$
A random variable is not a number, but rather, the pair consisting of the variable and its probability distribution
Probabilities always sum to one : $\sum_x P(x) = 1$
Random variables can be discrete or continuous, according to the values the random variable can assume.
Discrete random variable - coin tosses, die rolls, ...
Continuous random variable - rainfall, temperature, financial indices ...
End of explanation
"""
# lets add lots independent, identically distributed numbers together
X = stats.uniform() # uniform deviates
X = stats.bernoulli(0.5) # coin tosses
X = stats.poisson(1) # telephone calls
print('The mean and variance of the distribution are', X.mean(), 'and', X.var(), 'respectively.')
N = [1, 2, 4, 6, 8, 16, 32, 64]
for k, n in enumerate(N):
# draw the n random variables
x = X.rvs(size = [n, 100000])
# get their sum
s = x.mean(axis=0)
# plot the distribution of the sum
sx = plt.subplot(len(N) // 2, 2, k + 1)
plt.setp(sx.get_yticklabels(), visible=False)
plt.hist(s,
bins = 50,
label="$\mu = $ %f \n $n\sigma^2 = $ %f " % (s.mean(), n*s.var()),
color="#348ABD",
alpha=0.4)
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
"""
Explanation: The central limit theorem
Intuitive idea : add a lot of independent, identically distributed random variables together and the result will be
normally distributed.
More precisely : Consider i.i.d. random variables $X_i$ with mean $\mu$ and variance $\sigma^2$.
The sum $S_n = \frac{X_1 + X_2 + \ldots X_n}{n}$ approaches a normal distribution with mean $\mu$ and variance $\frac{\sigma^2}{n}$ as $n\rightarrow \infty$.
Let's see this in Python...
End of explanation
"""
# N throws of a coin with parameter theta0
N, theta0 = 500, 0.5
# data contains the outcome of the trial
data = stats.bernoulli.rvs(p = theta0, size = N)
# theta is distributed in the unit interval
theta = np.linspace(0, 1, 128)
# inspiration : Devinder Sivia and Cam Pilon
# compute posterior after the nthrow-th trial
nthrow = [0, 1, 2, 3, 4, 5, 8, 16, 32, N]
for k, n in enumerate(nthrow):
# number of heads
heads = data[:n].sum()
# posterior probability of theta with conjugate prior (see Sivia for derivation)
p = stats.beta.pdf(theta, 1 + heads, 1 + n - heads)
# plot the posterior
sx = plt.subplot(len(nthrow) // 2, 2, k + 1)
plt.setp(sx.get_yticklabels(), visible=False)
plt.plot(theta, p, label="tosses %d \n heads %d " % (n, heads))
plt.fill_between(theta, 0, p, color="#348ABD", alpha=0.4)
plt.vlines(theta0, 0, 4, color="k", linestyles="--", lw=1)
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
"""
Explanation: The central limit theorem
Intuitive idea : add a lot of independent, identically distributed random variables together and the result will be
normally distributed.
More precisely : Consider i.i.d. random variables $X_i$ with mean $\mu$ and variance $\sigma^2$.
The sum $S_n = \frac{X_1 + X_2 + \ldots X_n}{n}$ approaches a normal distribution with mean $\mu$ and variance $\frac{\sigma^2}{n}$ as $n\rightarrow \infty$.
The normal distribution $P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ is important!
Is this a fair coin ? Bayesian inference
End of explanation
"""
X = stats.bernoulli(0.5) # one fair coin
Y = stats.bernoulli(0.5) # another fair coin
# want a new random variable which is their sum
Z = Y + X  # this raises a TypeError: frozen scipy distributions do not support '+'
"""
Explanation: Exercise 1.1
Compute the Bayesian credible intervals which contain 90% of the probability at each iteration and plot the result
Compute the entropy of the posterior distribution at each stage. How do you interpret changing entropy values ?
How would you answer the question "is this a fair coin" after this analysis ?
Why we need probabilistic programming
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nerc/cmip6/models/sandbox-3/atmoschem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-3', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-3
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
mjones01/NEON-Data-Skills | tutorials-in-development/DI-remote-sensing-python/Day2_LiDAR/Day2_Lesson1_Intro_NEON_AOP_LiDAR_Rasters_GDAL/notebook/2018.2.1_GDAL_Read_Classify_LiDAR_Rasters.ipynb | agpl-3.0 | import numpy as np
from osgeo import gdal  # on older GDAL installs, plain 'import gdal' also works
import copy
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: Classify a Raster Using Threshold Values
In this tutorial, we will work with the NEON AOP L3 LiDAR ecoysystem structure (Canopy Height Model) data product. Refer to the links below for more information about NEON data products and the CHM product DP3.30015.001:
http://data.neonscience.org/data-products/explore
http://data.neonscience.org/data-products/DP3.30015.001
Objectives
By the end of this tutorial, you should be able to
Use gdal to read NEON LiDAR Raster Geotifs (eg. CHM, Slope Aspect) into a Python numpy array.
Create a classified array using thresholds.
A useful resource for using gdal in Python is the Python GDAL/OGR cookbook.
https://pcjericks.github.io/py-gdalogr-cookbook/
First, let's import the required packages and set our plot display to be in-line:
End of explanation
"""
chm_filename = r'C:\Users\bhass\Documents\GitHub\NEON_RSDI\RSDI_2018\Day2_LiDAR\data\NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
chm_dataset
"""
Explanation: Open a Geotif with GDAL
Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the gdal.Open function:
End of explanation
"""
#Display the dataset dimensions, number of bands, driver, and geotransform
cols = chm_dataset.RasterXSize; print('# of columns:',cols)
rows = chm_dataset.RasterYSize; print('# of rows:',rows)
print('# of bands:',chm_dataset.RasterCount)
print('driver:',chm_dataset.GetDriver().LongName)
"""
Explanation: Read information from Geotif Tags
The Geotif file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands:
End of explanation
"""
print('projection:',chm_dataset.GetProjection())
"""
Explanation: GetProjection
We can use the gdal GetProjection method to display information about the coordinate system and EPSG code.
End of explanation
"""
print('geotransform:',chm_dataset.GetGeoTransform())
"""
Explanation: GetGeoTransform
The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:
End of explanation
"""
chm_mapinfo = chm_dataset.GetGeoTransform()
xMin = chm_mapinfo[0]
yMax = chm_mapinfo[3]
xMax = xMin + chm_dataset.RasterXSize*chm_mapinfo[1] #multiply by pixel width
yMin = yMax + chm_dataset.RasterYSize*chm_mapinfo[5] #multiply by pixel height (negative for north-up rasters)
chm_ext = (xMin,xMax,yMin,yMax)
print('chm raster extent:',chm_ext)
"""
Explanation: In this case, the geotransform values correspond to:
Left-Most X Coordinate = 367000.0
W-E Pixel Resolution = 1.0
Rotation (0 if Image is North-Up) = 0.0
Upper Y Coordinate = 4307000.0
Rotation (0 if Image is North-Up) = 0.0
N-S Pixel Resolution = -1.0
The negative value for the N-S Pixel resolution reflects that the origin of the image is the upper left corner. We can convert this geotransform information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:
End of explanation
"""
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)
scaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)
chm_stats = chm_raster.GetStatistics(True,True)
print('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' %
(chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))
"""
Explanation: GetRasterBand
We can read in a single raster band with GetRasterBand and access information about this raster band such as the No Data Value, Scale Factor, and Statistics as follows:
End of explanation
"""
#Read Raster Band as an Array
chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(float)
#Assign CHM No Data Values to NaN
chm_array[chm_array==int(noDataVal)]=np.nan
#Apply Scale Factor
chm_array=chm_array/scaleFactor
print('SERC CHM Array:\n',chm_array) #display array values
"""
Explanation: ReadAsArray
Finally we can convert the raster to an array using the gdal ReadAsArray method, casting it to floating point with astype(float). Once we generate the array, we want to set the No Data Values to nan and apply the scale factor (for the CHM the scale factor is just 1.0, so it will not matter here, but it's a good habit to get into):
End of explanation
"""
chm_array.shape
"""
Explanation: Let's look at the dimensions of the array we read in:
End of explanation
"""
pct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)
print('% NaN:',round(pct_nan*100,2))
print('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))
"""
Explanation: We can calculate the % of pixels that are undefined (nan) and non-zero using np.count_nonzero. Typically tiles in the center of a site will have close to 0% NaN, but tiles on the edges of sites may have a large percent of nan values.
End of explanation
"""
def plot_spatial_array(band_array,spatial_extent,colorlimit,ax=None,title='',cmap_title='',colormap=''):
plot = plt.imshow(band_array,extent=spatial_extent,clim=colorlimit);
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20);
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
"""
Explanation: Plot Canopy Height Data
To get a better idea of the dataset, we can use a similar function to plot_aop_refl that we used in the NEON AOP reflectance tutorials:
End of explanation
"""
plt.hist(chm_array[~np.isnan(chm_array)],100);
ax = plt.gca()
ax.set_ylim([0,15000]) #adjust the y limit to zoom in on area of interest
"""
Explanation: Plot Histogram of Data
As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the nan values.
End of explanation
"""
plot_spatial_array(chm_array,
chm_ext,
(0,35),
title='SERC Canopy Height',
cmap_title='Canopy Height, m',
colormap='BuGn')
"""
Explanation: On your own, adjust the number of bins, and range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments in order to avoid another artifact, "pits" (Khosravipour et al., 2014).
From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
End of explanation
"""
chm_reclass = chm_array.copy()
chm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2 # 0m < CHM <= 10m - Class 2
chm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3 # 10m < CHM <= 20m - Class 3
chm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4 # 20m < CHM <= 30m - Class 4
chm_reclass[np.where(chm_array>30)] = 5 # CHM > 30m - Class 5
"""
Explanation: Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:
- Class 1: CHM = 0 m
- Class 2: 0m < CHM <= 10m
- Class 3: 10m < CHM <= 20m
- Class 4: 20m < CHM <= 30m
- Class 5: CHM > 30m
We can use np.where to find the indices where a boolean criterion is met.
End of explanation
"""
import matplotlib.colors as colors
plt.figure();
cmapCHM = colors.ListedColormap(['lightblue','yellow','orange','green','red'])
plt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)
plt.title('SERC CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# Create custom legend to label the four canopy height classes:
import matplotlib.patches as mpatches
class1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')
class2_box = mpatches.Patch(color='yellow', label='0m < CHM <= 10m')
class3_box = mpatches.Patch(color='orange', label='10m < CHM <= 20m')
class4_box = mpatches.Patch(color='green', label='20m < CHM <= 30m')
class5_box = mpatches.Patch(color='red', label='CHM > 30m')
ax.legend(handles=[class1_box,class2_box,class3_box,class4_box,class5_box],
handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)
"""
Explanation: We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes:
End of explanation
"""
sauloal/ipython | probes/gff_reader.ipynb | mit
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#import matplotlib as plt
#plt.use('TkAgg')
import operator
import pylab
pylab.show()
%pylab inline
"""
Explanation: GFF plotter
Helping hands
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
http://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb
Imports
End of explanation
"""
fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.Copia.agp"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.Gypsy.agp"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.Low_complexity.agp"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.LTR.agp"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.Simple_repeat.agp"
#fileUrl = "probes/extraGff/ITAG2.3_repeats.gff3.SINE.agp"
FULL_FIG_W , FULL_FIG_H = 16, 8
CHROM_FIG_W, CHROM_FIG_H = FULL_FIG_W, 20
"""
Explanation: Definitions
End of explanation
"""
class size_controller(object):
def __init__(self, w, h):
self.w = w
self.h = h
def __enter__(self):
self.o = rcParams['figure.figsize']
rcParams['figure.figsize'] = self.w, self.h
return None
def __exit__(self, type, value, traceback):
rcParams['figure.figsize'] = self.o
"""
Explanation: Setup
Figure sizes controller
End of explanation
"""
col_type_int = np.int64
col_type_flo = np.float64
col_type_str = np.str_ #np.object
col_type_char = np.character
col_info =[
[ "chromosome", col_type_str ],
[ "source" , col_type_str ],
[ "type" , col_type_str ],
[ "start" , col_type_int ],
[ "end" , col_type_int ],
[ "qual" , col_type_int ],
[ "strand" , col_type_char ],
[ "frame" , col_type_char ],
[ "info" , col_type_str ],
]
col_names=[cf[0] for cf in col_info]
col_types=dict(zip([c[0] for c in col_info], [c[1] for c in col_info]))
col_types
"""
Explanation: Column type definition
End of explanation
"""
info_keys = set()
def filter_conv(fi):
global info_keys
vs = []
for pair in fi.split(";"):
kv = pair.split("=")
info_keys.add(kv[0])
if len(kv) == 2:
#in case of key/value pairs
vs.append(kv)
else:
#in case of flags such as INDEL
vs.append([kv[0], True])
x = dict(zip([x[0] for x in vs], [x[1] for x in vs]))
#z = pd.Series(x)
#print z
return x
"""
Explanation: Read GFF
Parse INFO column
End of explanation
"""
CONVERTERS = {
'info': filter_conv
}
SKIP_ROWS = 3
NROWS = None
#index_col=['chromosome', 'start'], usecols=col_names,
gffData = pd.read_csv(fileUrl, header=None, names=col_names, dtype=col_types, nrows=NROWS, skiprows=SKIP_ROWS, converters=CONVERTERS, verbose=True, delimiter="\t", comment="#")
print gffData.shape
gffData.head()
"""
Explanation: Read GFF
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
End of explanation
"""
gffData['length'] = gffData['end'] - gffData['start']
gffData.head()
"""
Explanation: Add length column
End of explanation
"""
info_keys = list(info_keys)
info_keys.sort()
info_keys
info_keys_types = {
'score': col_type_int
}
def gen_val_extracter(info_keys_g):
def val_extracter_l(info_row, **kwargs):
vals = [None] * len(info_keys_g)
for k,v in info_row.items():
if k in info_keys_g:
vals[info_keys_g.index(k)] = v
else:
pass
return vals
return val_extracter_l
#gffData[info_keys] = gffData['info'].apply(gen_val_extracter(info_keys), axis=1).apply(pd.Series, 1)
gffData.head()
"""
Explanation: Split INFO column
End of explanation
"""
gffData.dtypes
"""
Explanation: Good part
http://nbviewer.ipython.org/github/jvns/pandas-cookbook/blob/master/cookbook/Chapter%201%20-%20Reading%20from%20a%20CSV.ipynb
http://pandas.pydata.org/pandas-docs/dev/visualization.html
https://bespokeblog.wordpress.com/2011/07/11/basic-data-plotting-with-matplotlib-part-3-histograms/
http://nbviewer.ipython.org/github/mwaskom/seaborn/blob/master/examples/plotting_distributions.ipynb
http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week3/exploratory_graphs.ipynb
http://pandas.pydata.org/pandas-docs/version/0.15.0/visualization.html
http://www.gregreda.com/2013/10/26/working-with-pandas-dataframes/
Column types
End of explanation
"""
gffData.describe()
maxPos = gffData['end' ].max()
print "sum ", gffData['length'].sum()
print "avg ", gffData['length'].sum() * 1.0 / 950000000
chromosomes = np.unique(gffData['chromosome'].values)
chromstats = {}
for chrom in chromosomes:
chromdata = gffData['length'][gffData['chromosome'] == chrom]
chromdatasize = gffData['end' ][gffData['chromosome'] == chrom].max() - gffData['start' ][gffData['chromosome'] == chrom].min()
chromdatasum = chromdata.sum()
#print "chrom", chrom
#print " size ", chromdatasize
#print " count", chromdata.count()
#print " sum ", chromdatasum
#print " avg ", chromdatasum * 1.0 / chromdatasize
chromstats[ chrom ] = { 'size': chromdatasize, 'count': chromdata.count(), 'sum': chromdatasum, 'avg': chromdatasum * 1.0 / chromdatasize }
chromstats = pd.DataFrame.from_dict(chromstats, orient='index')
#print "col types ", chromstats.dtypes
#print "col names ", chromstats.columns
#print "col indexes", chromstats.index
print "median.avg ", chromstats['avg'].median()
print "MAD.avg ", chromstats['avg'].mad()
print chromstats
"""
Explanation: Global statistics
End of explanation
"""
gffData.median()
"""
Explanation: Median
End of explanation
"""
gffData.mad()
"""
Explanation: MAD
End of explanation
"""
chromosomes = np.unique(gffData['chromosome'].values)
chromosomes
"""
Explanation: List of chromosomes
End of explanation
"""
with size_controller(FULL_FIG_W, FULL_FIG_H):
bq = gffData.boxplot(column='qual', return_type='dict')
"""
Explanation: Quality distribution
End of explanation
"""
with size_controller(FULL_FIG_W, FULL_FIG_H):
bqc = gffData.boxplot(column='qual', by='chromosome', return_type='dict')
"""
Explanation: Quality distribution per chromosome
End of explanation
"""
with size_controller(FULL_FIG_W, FULL_FIG_H):
bqc = gffData.boxplot(column='start', by='chromosome', return_type='dict')
"""
Explanation: Start position distribution per chromosome
End of explanation
"""
with size_controller(FULL_FIG_W, FULL_FIG_H):
hs = gffData['start'].hist()
"""
Explanation: Position distribution
End of explanation
"""
hsc = gffData['start'].hist(by=gffData['chromosome'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
"""
Explanation: Position distribution per chromosome
End of explanation
"""
with size_controller(FULL_FIG_W, FULL_FIG_H):
hl = gffData['length'].hist()
"""
Explanation: Length distribution
End of explanation
"""
hlc = gffData['length'].hist(by=gffData['chromosome'], figsize=(CHROM_FIG_W, CHROM_FIG_H), layout=(len(chromosomes),1))
#http://stackoverflow.com/questions/27934885/how-to-hide-code-from-cells-in-ipython-notebook-visualized-with-nbviewer
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
var classes_to_hide = ['div.input', 'div.output_stderr', 'div.output_prompt', 'div.input_prompt', 'div.prompt'];
if (code_show){
for ( var c in classes_to_hide ) {
$(classes_to_hide[c]).hide();
}
} else {
for ( var c in classes_to_hide ) {
$(classes_to_hide[c]).show();
}
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
"""
Explanation: Length distribution per chromosome
End of explanation
"""
robertoalotufo/ia898 | 2S2018/Seminarios/Dithering.ipynb | mit
import numpy as np
from PIL import Image
%matplotlib inline
import matplotlib.pyplot as plt
img = Image.open('../seminario/imagens/man_r2.tif')
img_quant = img.quantize(2,1)
"""
Explanation: Dithering
Dithering is a quantization-reduction process that creates the illusion that little radiometric information has been lost. In other words, it gives the impression that no color levels were lost during quantization. It does this by rearranging the pixels point by point. While creating this illusion, dithering minimizes the harsh transitions between one color and the next, but it also reduces the sharpness of the image and introduces a noticeable grainy pattern.
Nowadays dithering is used on web sites, converting images with many colors into images with few colors, thereby reducing file size (and bandwidth) without hurting quality. Another application is image editing, for example reducing digital photos from 48 or 64 bpp RAW format to 24 bpp RGB. In games, lowering the graphics settings is also done through dithering: the visual quality of the game is reduced with the smallest possible loss of tones so that it can run on a PC with weaker graphics capability.
When applying dithering to an image, we set each pixel's value in the new quantization by comparing its initial value with an adopted threshold. For example, suppose we want to convert an 8-bit image (256 tones) into a 1-bit one (2 tones), and the adopted threshold is the center of the color range (127): we examine each pixel, comparing its value with the threshold. If its initial value is greater than the threshold it is set to white; otherwise, to black. A pixel with value 96, for instance, will be set to black in the final image. That is the basic idea behind dithering, but there are several variations.
In this notebook we will cover some of the many ways of applying dithering to an image, starting with the most basic method, random dithering, and working up to widely used methods such as the Floyd-Steinberg filter. At the end, we make a graphical comparison of the methods presented.
End of explanation
"""
img_array = np.asarray(img)
def function_random_Dither(img_array):
(M,N) = img_array.shape
randon_dither = np.zeros((M,N))
randon_limiar = np.random.randint(0,255,(M,N))
for lin in range(M):
for col in range(N):
if img_array[lin,col]>=randon_limiar[lin,col]:
randon_dither[lin,col] = 255
return randon_dither
randon_dither = function_random_Dither(img_array)
plt.figure(figsize=(15,8))
plt.subplot(131)
plt.imshow(img)
plt.title('Original image - 8 bits')
plt.subplot(132)
plt.imshow(img_quant)
plt.title('Quantized image - 1 bit')
plt.subplot(133)
plt.imshow(randon_dither,cmap='gray',vmin=0,vmax=255)
plt.title('Random dithering - 1 bit');
"""
Explanation: Random Dithering
The most basic way to apply dithering is to generate, for each pixel, a random value between 0 and 255 and compare it with the pixel's initial value. If that value is greater than the randomly chosen threshold, the pixel is set to white; otherwise, to black. This method, called random dithering, is rarely used because it leaves the image very noisy. Depending on the application and the image being processed, however, it can be effective, since it is a very simple and fast method.
Below we show one image quantized by plain quantization and another processed with random dithering. Both take 1 bit per pixel, but the dithered one conveys more information than the quantized one.
End of explanation
"""
img2 = Image.open('../seminario/imagens/cube_gray.png')
img2_quant = img2.quantize(2,1)
img2_array= np.array(img2)
plt.figure(figsize=(15,8))
plt.subplot(121)
plt.imshow(img2_array,cmap='gray',vmin=0,vmax=255)
plt.title('Imagem em escala de cinza')
plt.subplot(122)
plt.imshow(img2_quant,cmap='gray',vmin=0,vmax=255)
plt.title('Imagem quantizada para 1 bit');
"""
Explanation: Ordered Dithering
In this method, a matrix known as the threshold matrix is used for the comparison with each pixel's initial value. The image is normalized to a range equal to the number of elements of the threshold matrix; each pixel is then compared with the threshold at the corresponding position in the matrix. If the pixel value is greater than the threshold, the pixel is set to 255; otherwise, to 0. Threshold matrices come in several sizes, but the best-known algorithm for generating them (the Bayer matrix) only produces sizes that are powers of 2.
End of explanation
"""
m_limiar_2x2 = np.array([[0,2],
[3,1]])
m_limiar_4x4 = np.array([[0,8,2,10],
[12,4,14,6],
[3,11,1,9],
[15,7,13,5]])
m_limiar_8x8 = np.array([[0,48,12,60,3,51,15,63],
[32,16,44,28,35,19,47,31],
[8,56,4,52,11,59,7,55],
[40,24,36,20,43,27,39,23],
[2,50,14,62,1,49,13,61],
[34,18,46,30,33,17,45,29],
[10,58,6,54,9,57,5,53],
[42,26,38,22,41,25,37,21]])
from IPython.display import display
def function_dithering_ordered(img_array,m_limiar):
M = img_array.shape[0]
N = img_array.shape[1]
    ordered_dither = np.zeros((M,N))
mat=m_limiar
n = mat.shape[0]
norm = np.amax(img_array)/n**2
img_norm = img_array/norm
for col in range(N):
for lin in range(M):
if img_norm[lin,col] >=mat[(lin%n),(col%n)]:
ordered_dither[lin,col] = 255
return ordered_dither
ordered_dither = function_dithering_ordered(img2_array,m_limiar_2x2)
ordered_img_8x8 = Image.open('../seminario/imagens/cube_ordered_8x8.png')
ordered_img_4x4 = Image.open('../seminario/imagens/cube_ordered_4x4.png')
ordered_img = Image.fromarray(np.uint8(ordered_dither))
print('Ordered dithering, Bayer 2x2 matrix')
display(ordered_img)
print('Ordered dithering, Bayer 4x4 matrix')
display(ordered_img_4x4)
print('Ordered dithering, Bayer 8x8 matrix')
display(ordered_img_8x8)
"""
Explanation: In this example we use the cube image shown above, since it gives a more didactic result. We show the implementation of ordered dithering using Bayer matrices of sizes 2, 4 and 8. For better visualization of the result, the arrays were converted back into Image objects, because plt.imshow changes the display settings when it renders the figure.
End of explanation
"""
def function_floyde_stein_dither(img_array):
M = img_array.shape[0]
N = img_array.shape[1]
    #rows and columns of zeros appended to the image so that the borders can be processed correctly
array_col = np.zeros((1,M)).reshape(M, 1)
array_lin = np.zeros((1,N+2))
img_fl_e_st = np.hstack((img_array,array_col))
img_fl_e_st = np.hstack((array_col,img_fl_e_st))
img_fl_e_st = np.vstack((img_fl_e_st,array_lin))
floyde_stein_dither = np.zeros((M,N))
limiar = 127
for lin in range(M):
for col in range(1,N+1):
if img_fl_e_st[lin,col] >255:
floyde_stein_dither[lin,col-1]=255
erro=0
else:
var = limiar - img_fl_e_st[lin,col]
if var>=0:
erro = img_fl_e_st[lin,col]
else:
floyde_stein_dither[lin,col-1]=255
erro = img_fl_e_st[lin,col] - 255
img_fl_e_st[lin,col+1] = (7/16)*erro+img_fl_e_st[lin,col+1]
img_fl_e_st[lin+1,col-1] = (3/16)*erro+img_fl_e_st[lin+1,col-1]
img_fl_e_st[lin+1,col] = (5/16)*erro+img_fl_e_st[lin+1,col]
img_fl_e_st[lin+1,col+1] = (1/16)*erro+img_fl_e_st[lin+1,col+1]
return floyde_stein_dither
floyde_stein_dither = function_floyde_stein_dither(img2_array)
floyde_stein_img = Image.fromarray(np.uint8(floyde_stein_dither))
print('Error-diffusion dithering, Floyd-Steinberg')
display(floyde_stein_img)
"""
Explanation: Error-Diffusion Dithering
In error-diffusion dithering algorithms, the difference between a pixel's initial value and the adopted threshold is called the error. A fraction of this error is diffused to the neighboring pixels. The proportion of error going to each neighbor is given by an error matrix, which differs from algorithm to algorithm. Here the adopted threshold (or thresholds) is the center of the color range (in the binary case); with more colors, the thresholds are the centers of the color intervals.
Floyd-Steinberg Algorithm
The best-known error-diffusion dithering algorithm is the one published by Robert Floyd and Louis Steinberg in 1976, known as Floyd-Steinberg dithering, and it is the first one we implement here. The matrix of error fractions to be propagated is:
X 7
3 5 1
(1/16)
where X is the pixel currently being processed. Since the error is diffused to later pixels and to the rows below, the original image matrix must be padded with zeros at the sides and at the bottom; remember that these zeros must not enter the error computation, as they are inserted only to receive the error from the outermost columns and the bottom row.
End of explanation
"""
def function_jjn_dither(img_array):
M = img_array.shape[0]
N = img_array.shape[1]
    #rows and columns of zeros appended to the image so that the borders can be processed correctly
array_col = np.zeros((2,M)).reshape(M, 2)
array_lin = np.zeros((2,N+4))
    img_jjn = np.hstack((img_array,array_col))
    img_jjn = np.hstack((array_col,img_jjn))
img_jjn = np.vstack((img_jjn,array_lin))
jjn_dither = np.zeros((M,N))
limiar = 127
for lin in range(M):
        for col in range(2,N+2):
if img_jjn[lin,col] >255:
jjn_dither[lin,col-2]=255
erro=0
else:
var = limiar - img_jjn[lin,col]
if var>=0:
erro = img_jjn[lin,col]
else:
jjn_dither[lin,col-2]=255
erro = img_jjn[lin,col] - 255
img_jjn[lin,col+1] = (7/48)*erro+img_jjn[lin,col+1]
img_jjn[lin,col+2] = (5/48)*erro+img_jjn[lin,col+2]
img_jjn[lin+1,col-2] = (3/48)*erro+img_jjn[lin+1,col-2]
img_jjn[lin+1,col-1] = (5/48)*erro+img_jjn[lin+1,col-1]
img_jjn[lin+1,col] = (7/48)*erro+img_jjn[lin+1,col]
img_jjn[lin+1,col+1] = (5/48)*erro+img_jjn[lin+1,col+1]
img_jjn[lin+1,col+2] = (3/48)*erro+img_jjn[lin+1,col+2]
img_jjn[lin+2,col-2] = (1/48)*erro+img_jjn[lin+2,col-2]
img_jjn[lin+2,col-1] = (3/48)*erro+img_jjn[lin+2,col-1]
img_jjn[lin+2,col] = (5/48)*erro+img_jjn[lin+2,col]
img_jjn[lin+2,col+1] = (3/48)*erro+img_jjn[lin+2,col+1]
img_jjn[lin+2,col+2] = (1/48)*erro+img_jjn[lin+2,col+2]
return jjn_dither
jjn_dither = function_jjn_dither(img2_array)
jjn_img = Image.fromarray(np.uint8(jjn_dither))
print('Error-diffusion dithering, Jarvis-Judice-Ninke (JJN)')
display(jjn_img)
"""
Explanation: The image generated by the Floyd-Steinberg algorithm is remarkably better than the one quantized to 1 bit: we can see details of the cube that are simply invisible in the quantized image.
Jarvis, Judice, and Ninke (JJN) Algorithm
This algorithm follows the same error-diffusion principle as the previous one. What differs is the fraction of error propagated to each pixel and the number of pixels that receive that error. Here, pixels up to two rows and two columns away, in every direction except upward, receive a share of the error. The error matrix is now:
X 7 5
3 5 7 5 3
1 3 5 3 1
(1/48)
Accordingly, we must now append two columns of zeros at the edges and two rows of zeros at the bottom of the matrix, so that the pixels at the corners of the image can be processed.
End of explanation
"""
def function_stuck_dither(img_array):
M = img_array.shape[0]
N = img_array.shape[1]
    #rows and columns of zeros appended to the image so that the borders can be processed correctly
array_col = np.zeros((2,M)).reshape(M, 2)
array_lin = np.zeros((2,N+4))
    img_stuck = np.hstack((img_array,array_col))
    img_stuck = np.hstack((array_col,img_stuck))
img_stuck = np.vstack((img_stuck,array_lin))
stuck_dither = np.zeros((M,N))
limiar = 127
for lin in range(M):
        for col in range(2,N+2):
if img_stuck[lin,col] >255:
stuck_dither[lin,col-2]=255
erro=0
else:
var = limiar - img_stuck[lin,col]
if var>=0:
erro = img_stuck[lin,col]
else:
stuck_dither[lin,col-2]=255
erro = img_stuck[lin,col] - 255
img_stuck[lin,col+1] = (8/42)*erro+img_stuck[lin,col+1]
img_stuck[lin,col+2] = (4/42)*erro+img_stuck[lin,col+2]
img_stuck[lin+1,col-2] = (2/42)*erro+img_stuck[lin+1,col-2]
img_stuck[lin+1,col-1] = (4/42)*erro+img_stuck[lin+1,col-1]
img_stuck[lin+1,col] = (8/42)*erro+img_stuck[lin+1,col]
img_stuck[lin+1,col+1] = (4/42)*erro+img_stuck[lin+1,col+1]
img_stuck[lin+1,col+2] = (2/42)*erro+img_stuck[lin+1,col+2]
img_stuck[lin+2,col-2] = (1/42)*erro+img_stuck[lin+2,col-2]
img_stuck[lin+2,col-1] = (2/42)*erro+img_stuck[lin+2,col-1]
img_stuck[lin+2,col] = (4/42)*erro+img_stuck[lin+2,col]
img_stuck[lin+2,col+1] = (2/42)*erro+img_stuck[lin+2,col+1]
img_stuck[lin+2,col+2] = (1/42)*erro+img_stuck[lin+2,col+2]
return stuck_dither
stuck_dither = function_stuck_dither(img2_array)
stuck_img = Image.fromarray(np.uint8(stuck_dither))
print('Error-diffusion dithering, Stucki')
display(stuck_img)
"""
Explanation: Looking at the image, we can see an improvement in the result compared with the Floyd-Steinberg method. This is because the propagated-error matrix is larger: the value of one pixel influences more pixels around it, distributing the gray pattern more evenly in the resulting image.
Stucki Algorithm
The algorithm proposed by Peter Stucki was published 5 years after the JJN method, introducing changes that, according to him, improved image quality. The proposed change was to replace each value in the error matrix that is not a power of 2 with one that is: 3 becomes 2, 5 becomes 4, and 7 becomes 8. This also changes the matrix's total, which is its divisor and now becomes 42. The new error matrix is:
X 8 4
2 4 8 4 2
1 2 4 2 1
(1/42)
In terms of code, the only difference is the fraction of error diffused. This change does improve the image, reducing (somewhat) the grainy look.
End of explanation
"""
comp_img = Image.open('../seminario/imagens/comparacao_cinza.png')
comp_img
"""
Explanation: Comparing the Methods
Now let's compare the methods presented here. All of them improve the visibility of the object relative to simple quantization, so any of them may be useful depending on the application (even random dithering).
The resulting image is of course not expected to match the original, but some methods render the object very close to it even with only two color levels. The more color levels used, the closer the dithered image gets to the original. On the other hand, the more gray levels the dithered image has, the larger its file will be, approaching the size of the original file.
For the case studied here, where the images were dithered to 1 bit (2 colors), the files show a significant size reduction, though not as large as the plainly quantized image. The smallest files are the ones produced with ordered dithering. One should therefore evaluate which dithering method is worthwhile for each application.
Original image ---------- 64 KB
Simple quantization ----- 3.54 KB
Random dither ----------- 36.6 KB
Bayer 2x2 dither -------- 5.25 KB
Bayer 4x4 dither -------- 6.97 KB
Bayer 8x8 dither -------- 9.22 KB
Floyd-Steinberg dither -- 34.2 KB
JJN dither -------------- 36.7 KB
Stucki dither ----------- 35 KB
End of explanation
"""
Naereen/notebooks | NetHack's functions Rne, Rn2 and Rnz in Python 3.ipynb | mit
%load_ext watermark
%watermark -v -m -p numpy,matplotlib
import random
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#NetHack's-functions-Rne,-Rn2-and-Rnz-in-Python-3" data-toc-modified-id="NetHack's-functions-Rne,-Rn2-and-Rnz-in-Python-3-1"><span class="toc-item-num">1 </span>NetHack's functions Rne, Rn2 and Rnz in Python 3</a></div><div class="lev2 toc-item"><a href="#Rn2-distribution" data-toc-modified-id="Rn2-distribution-11"><span class="toc-item-num">1.1 </span><code>Rn2</code> distribution</a></div><div class="lev2 toc-item"><a href="#Rne-distribution" data-toc-modified-id="Rne-distribution-12"><span class="toc-item-num">1.2 </span><code>Rne</code> distribution</a></div><div class="lev2 toc-item"><a href="#Rnz-distribution" data-toc-modified-id="Rnz-distribution-13"><span class="toc-item-num">1.3 </span><code>Rnz</code> distribution</a></div><div class="lev2 toc-item"><a href="#Examples" data-toc-modified-id="Examples-14"><span class="toc-item-num">1.4 </span>Examples</a></div><div class="lev3 toc-item"><a href="#For-x=350" data-toc-modified-id="For-x=350-141"><span class="toc-item-num">1.4.1 </span>For <code>x=350</code></a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5 </span>Conclusion</a></div>
# NetHack's functions Rne, Rn2 and Rnz in Python 3
I liked [this blog post](https://eev.ee/blog/2018/01/02/random-with-care/#beware-gauss) by [Eevee](https://eev.ee/blog/).
He wrote about interesting things regarding random distributions, and linked to [this page](https://nethackwiki.com/wiki/Rnz) which describes a weird distribution implemented as `Rnz` in the [NetHack](https://www.nethack.org/) game.
> Note: I never heard of any of those before today.
I wanted to implement and experiment with the `Rnz` distribution myself.
Its code ([see here](https://nethackwiki.com/wiki/Source:NetHack_3.6.0/src/rnd.c#rnz)) uses two other distributions, `Rne` and `Rn2`.
End of explanation
"""
def rn2(x):
return random.randint(0, x-1)
np.asarray([rn2(10) for _ in range(100)])
"""
Explanation: Rn2 distribution
The Rn2 distribution is simply an integer uniform distribution, between $0$ and $x-1$.
End of explanation
"""
from collections import Counter
Counter([rn2(10) == 0 for _ in range(100)])
Counter([rn2(10) == 0 for _ in range(1000)])
Counter([rn2(10) == 0 for _ in range(10000)])
"""
Explanation: Testing for rn2(x) == 0 gives a $1/x$ probability:
End of explanation
"""
def rne(x, truncation=5):
    """Truncated geometric: 1 + the number of consecutive rn2(x) == 0 draws, capped at truncation."""
    truncation = max(truncation, 1)
    tmp = 1
    while tmp < truncation and rn2(x) == 0:
        tmp += 1
    return tmp
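The exact law is easy to write down: each extra loop iteration happens with probability $1/x$, so rne(x) follows a geometric distribution truncated at truncation. A small helper (rne_pmf is my own name, not part of NetHack) makes the expected probabilities explicit:

```python
def rne_pmf(x, truncation=5):
    """Theoretical pmf of rne(x): geometric with success probability 1/x, truncated."""
    p = 1.0 / x                                            # chance that rn2(x) == 0 on one draw
    pmf = {k: (1 - p) * p ** (k - 1) for k in range(1, truncation)}
    pmf[truncation] = p ** (truncation - 1)                # all remaining mass sits at the cap
    return pmf

# For rne(4) this reproduces the wiki table: 3/4, 3/16, 3/64, 3/256, 1/256
print(rne_pmf(4))
```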
"""
Explanation: Rne distribution
The Rne distribution is a truncated geometric distribution.
End of explanation
"""
np.asarray([rne(3) for _ in range(50)])
plt.hist(np.asarray([rne(3) for _ in range(10000)]), bins=5)
np.asarray([rne(4, truncation=10) for _ in range(50)])
plt.hist(np.asarray([rne(4, truncation=10) for _ in range(10000)]), bins=10)
"""
Explanation: In the NetHack game, the player's experience level is used as the default value of the truncation parameter...
End of explanation
"""
ref_table = {1: 3/4, 2: 3/16, 3: 3/64, 4: 3/256, 5: 1/256}
ref_table
N = 100000
table = Counter([rne(4, truncation=5) for _ in range(N)])
for k in table:
table[k] /= N
table = dict(table)
table
rel_diff = lambda x, y: abs(x - y) / x
for k in ref_table:
x, y = ref_table[k], table[k]
r = rel_diff(x, y)
print(f"For k={k}: relative difference is {r:.3g} between {x:.3g} (expectation) and {y:.3g} (with N={N} samples).")
"""
Explanation: Let's check what this page says about rne(4):
The rne(4) call returns an integer from 1 to 5, with the following probabilities:
|Number| Probability |
|:-----|------------:|
| 1 | 3/4 |
| 2 | 3/16 |
| 3 | 3/64 |
| 4 | 3/256 |
| 5 | 1/256 |
End of explanation
"""
def rnz(i, truncation=10):
    """NetHack's rnz: scale i by a random factor tmp/1000, up or down with equal probability."""
    x = i
    tmp = 1000 + rn2(1000)                 # uniform on [1000, 1999]
    tmp *= rne(4, truncation=truncation)   # occasional large multiplier
    if rn2(2):                             # fair coin flip
        x *= tmp
        x /= 1000
    else:
        x *= 1000
        x /= tmp
    return int(x)
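Before sampling, it is worth reading the support off the code: tmp ranges over [1000, 1999 * truncation], so rnz(i) can at most multiply or divide i by roughly 2 * truncation. The helper below (rnz_bounds is my own name, not from NetHack) computes the exact extremes:

```python
def rnz_bounds(i, truncation=10):
    """Smallest and largest values rnz(i) can return, read off the code above."""
    tmp_max = 1999 * truncation          # largest 1000 + rn2(1000), times the largest rne factor
    lo = int(i * 1000 / tmp_max)         # division branch with the largest tmp
    hi = int(i * tmp_max / 1000)         # multiplication branch with the largest tmp
    return lo, hi

print(rnz_bounds(350))                   # support of rnz(350) with the default truncation
```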
"""
Explanation: Seems true!
Rnz distribution
It's not too hard to write.
End of explanation
"""
np.asarray([rnz(3) for _ in range(100)])
np.asarray([rnz(3, truncation=10) for _ in range(100)])
"""
Explanation: Examples
End of explanation
"""
np.asarray([rnz(350) for _ in range(100)])
_ = plt.hist(np.asarray([rnz(350) for _ in range(100000)]), bins=200)
np.asarray([rnz(350, truncation=10) for _ in range(100)])
_ = plt.hist(np.asarray([rnz(350, truncation=10) for _ in range(10000)]), bins=200)
"""
Explanation: For x=350
End of explanation
"""
|
jstac/yale_class_2016 | equilibrium_2.ipynb | bsd-3-clause |
import numpy as np
from scipy.optimize import bisect
"""
Explanation: Equilibrium Price, Take 2
Jan 2016
First we import some functionality from the scientific libraries.
End of explanation
"""
def supply(price, b):
return np.exp(b * price) - 1
def demand(price, a, epsilon):
return a * price**(-epsilon)
"""
Explanation: Now let's write routines to compute supply and demand as functions of price and parameters:
End of explanation
"""
def compute_equilibrium(a, b, epsilon):
    """Compute the market-clearing price by bisection on excess supply."""
    plow = 0.1    # price bracket assumed to contain the equilibrium
    phigh = 10.0
    def excess_supply(price):
        return supply(price, b) - demand(price, a, epsilon)
    pclear = bisect(excess_supply, plow, phigh)
    return pclear
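As a quick sanity check, independent of scipy: excess supply is negative at the lower end of the bracket and positive at the upper end, so bisection is guaranteed a root, and it nearly vanishes at the price 2.9334 quoted below. A math-only sketch with the baseline parameters $a=1$, $b=0.1$, $\epsilon=1$ baked in (baseline_excess_supply is my own helper name):

```python
import math

def baseline_excess_supply(price, a=1, b=0.1, epsilon=1):
    """Supply minus demand for the baseline parameters (standalone restatement)."""
    return (math.exp(b * price) - 1) - a * price ** (-epsilon)

# Sign change across the bracket [0.1, 10], and near-zero at the reported root
print(baseline_excess_supply(0.1), baseline_excess_supply(10.0), baseline_excess_supply(2.9334))
```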
"""
Explanation: Next we'll write a function that takes a parameter set and returns a market clearing price via bisection:
End of explanation
"""
compute_equilibrium(1, 0.1, 1)
"""
Explanation: Let's test it with the original parameter set, for which the market-clearing price was 2.9334. The parameters are
$$ a = 1, \quad b = 0.1, \quad \epsilon = 1 $$
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: Let's see this visually. First we import the plotting library matplotlib in the standard way:
End of explanation
"""
%matplotlib inline
"""
Explanation: The next command is a Jupyter "line magic" that displays figures inline in the notebook:
End of explanation
"""
grid_size = 100
grid = np.linspace(2, 4, grid_size)
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(grid, demand(grid, 1, 1), 'b-', label='demand')
ax.plot(grid, supply(grid, 0.1), 'g-', label='supply')
ax.set_xlabel('price', fontsize=14)
ax.set_ylabel('quantity', fontsize=14)
ax.legend(loc='upper center', frameon=False)
"""
Explanation: Now let's plot supply and demand on a grid of points:
End of explanation
"""
parameter_list = [[1, 0.1, 1],
[2, 0.1, 1],
[1, 0.2, 1],
[1, 0.1, 2]]
for parameters in parameter_list:
print("Price = {}".format(compute_equilibrium(*parameters)))
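The same four prices can be reproduced without scipy, which also makes the comparative statics easy to eyeball: more demand (larger a) raises the price, more supply (larger b) lowers it. bisect_price below is a hand-rolled stand-in for scipy's bisect, a sketch that reuses the bracket [0.1, 10] from compute_equilibrium:

```python
import math

def supply(price, b):
    return math.exp(b * price) - 1

def demand(price, a, epsilon):
    return a * price ** (-epsilon)

def bisect_price(a, b, epsilon, plow=0.1, phigh=10.0, tol=1e-10):
    """Plain bisection on excess supply = supply - demand."""
    f = lambda p: supply(p, b) - demand(p, a, epsilon)
    for _ in range(200):
        pmid = 0.5 * (plow + phigh)
        if f(plow) * f(pmid) <= 0:
            phigh = pmid                 # sign change in the left half
        else:
            plow = pmid                  # root is in the right half
        if phigh - plow < tol:
            break
    return 0.5 * (plow + phigh)

for a, b, epsilon in [(1, 0.1, 1), (2, 0.1, 1), (1, 0.2, 1), (1, 0.1, 2)]:
    print(f"a={a}, b={b}, eps={epsilon}: price = {bisect_price(a, b, epsilon):.4f}")
```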
"""
Explanation: Now let's output market clearing prices for all parameter configurations given in exercise 1 of the homework.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/nicam16-9s/toplevel.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9S
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
from IPython.display import Image
Image('../../../python_for_probability_statistics_and_machine_learning.jpg')
"""
Explanation: Principal Component Analysis
End of explanation
"""
from sklearn import decomposition
import numpy as np
pca = decomposition.PCA()
"""
Explanation: The features from a particular dataset that will ultimately prove important for
machine learning can be difficult to know ahead of time. This is especially
true for problems that do not have a strong physical underpinning. The
row-dimension of the input matrix ($X$) for fitting data in Scikit-learn is the
number of samples and the column dimension is the number of features. There may
be a large number of column dimensions in this matrix, and the purpose of
dimensionality reduction is to somehow reduce these to only those columns that
are important for the machine learning task.
Fortunately, Scikit-learn provides some powerful tools to help uncover the most
relevant features. Principal Component Analysis (PCA) consists of taking
the input $X$ matrix and (1) subtracting the mean, (2) computing the covariance
matrix, and (3) computing the eigenvalue decomposition of the covariance
matrix. For example, if $X$ has more columns than is practicable for a
particular learning method, then PCA can reduce the number of columns to a more
manageable number. PCA is widely used in statistics and other areas beyond
machine learning, so it is worth examining what it does in some detail. First,
we need the decomposition module from Scikit-learn.
End of explanation
"""
x = np.linspace(-1,1,30)
X = np.c_[x,x+1,x+2] # stack as columns
pca.fit(X)
print(pca.explained_variance_ratio_)
"""
Explanation: Let's create some very simple data and apply PCA.
End of explanation
"""
%matplotlib inline
from matplotlib.pylab import subplots
fig,axs = subplots(2,1,sharex=True,sharey=True)
ax = axs[0]
_=ax.plot(x,X[:,0],'-k',lw=3)
_=ax.plot(x,X[:,1],'--k',lw=3)
_=ax.plot(x,X[:,2],':k',lw=3)
ax=axs[1]
_=ax.plot(x,pca.fit_transform(X)[:,0],'-k',lw=3)
#ax.tick_params(labelsize='x-large')
"""
Explanation: Programming Tip.
The np.c_ is a shortcut method for creating stacked column-wise arrays.
In this example, the columns are just constant offsets of the first
column. The explained variance ratio is the percentage of the variance
attributable to the transformed columns of X. You can think of this as the
information that is relatively concentrated in each column of the transformed
matrix X. Figure shows the graph of this dominant
transformed column in the bottom panel.
End of explanation
"""
X = np.c_[x,2*x+1,3*x+2,x] # change slopes of columns
pca.fit(X)
print(pca.explained_variance_ratio_)
"""
Explanation: <!-- dom:FIGURE: [fig-machine_learning/pca_001.png, width=500 frac=0.75] The
top panel shows the columns of the feature matrix and the bottom panel shows the
dominant component that PCA has extracted. <div id="fig:pca_001"></div> -->
<!-- begin figure -->
<div id="fig:pca_001"></div>
<p>The top panel shows the columns of the feature matrix and the bottom panel
shows the dominant component that PCA has extracted.</p>
<img src="fig-machine_learning/pca_001.png" width=500>
<!-- end figure -->
To make this more interesting, let's change the slope of each of the
columns as
in the following,
End of explanation
"""
x = np.linspace(-1,1,30)
X = np.c_[np.sin(2*np.pi*x),
2*np.sin(2*np.pi*x)+1,
3*np.sin(2*np.pi*x)+2]
pca.fit(X)
print(pca.explained_variance_ratio_)
"""
Explanation: However, changing the slope did not impact the explained variance
ratio. Again, there is still only one dominant column. This means that PCA is
invariant to both constant offsets and scale changes. This works for functions
as well as simple lines,
End of explanation
"""
fig,axs = subplots(2,1,sharex=True,sharey=True)
ax = axs[0]
_=ax.plot(x,X[:,0],'-k')
_=ax.plot(x,X[:,1],'--k')
_=ax.plot(x,X[:,2],':k')
ax=axs[1]
_=ax.axis(xmin=-1,xmax=1)
_=ax.plot(x,pca.fit_transform(X)[:,0],'-k')
# ax.tick_params(labelsize='x-large')
fig,axs=subplots(1,2,sharey=True)
fig.set_size_inches((8,5))
ax=axs[0]
ax.set_aspect(1/1.6)
_=ax.axis(xmin=-2,xmax=12)
x1 = np.arange(0, 10, .01/1.2)
x2 = x1+ np.random.normal(loc=0, scale=1, size=len(x1))
X = np.c_[(x1, x2)]
good = (x1>5) | (x2>5)
bad = ~good
_=ax.plot(x1[good],x2[good],'ow',alpha=.3)
_=ax.plot(x1[bad],x2[bad],'ok',alpha=.3)
_=ax.set_title("original data space")
_=ax.set_xlabel("x")
_=ax.set_ylabel("y")
_=pca.fit(X)
Xx=pca.fit_transform(X)
ax=axs[1]
ax.set_aspect(1/1.6)
_=ax.plot(Xx[good,0],Xx[good,1]*0,'ow',alpha=.3)
_=ax.plot(Xx[bad,0],Xx[bad,1]*0,'ok',alpha=.3)
_=ax.set_title("PCA-reduced data space")
_=ax.set_xlabel(r"$\hat{x}$")
_=ax.set_ylabel(r"$\hat{y}$")
"""
Explanation: Once again, there is only one dominant column, which is shown in the
bottom panel of Figure. The top panel shows the individual
columns of the feature matrix. To sum up, PCA is able to identify and eliminate
features that are merely linear transformations of existing features. This also
works when there is additive noise in the features, although more samples are
needed to separate the uncorrelated noise from the features.
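We can check this invariance numerically. The helper below is our own illustrative sketch using plain NumPy rather than Scikit-learn:

```python
import numpy as np

def top_ratio(X):
    """Fraction of total variance captured by the largest eigenvalue."""
    C = np.cov(X - X.mean(axis=0), rowvar=False)
    evals = np.linalg.eigvalsh(C)
    return evals.max() / evals.sum()

x = np.linspace(-1, 1, 30)
X1 = np.c_[x, x + 1, x + 2]          # constant offsets of the first column
X2 = np.c_[x, 2*x + 1, 3*x + 2]      # different slopes and offsets

print(top_ratio(X1), top_ratio(X2))  # both equal one: still a single dominant column
```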
<!-- dom:FIGURE: [fig-machine_learning/pca_002.png, width=500 frac=0.85] The top
panel shows the columns of the feature matrix and the bottom panel shows the
dominant component that PCA has computed. <div id="fig:pca_002"></div> -->
<!-- begin figure -->
<div id="fig:pca_002"></div>
<p>The top panel shows the columns of the feature matrix and the bottom panel
shows the dominant component that PCA has computed.</p>
<img src="fig-machine_learning/pca_002.png" width=500>
<!-- end figure -->
End of explanation
"""
fig,axs=subplots(1,2,sharey=True)
ax=axs[0]
x1 = np.arange(0, 10, .01/1.2)
x2 = x1+np.random.normal(loc=0, scale=1, size=len(x1))
X = np.c_[(x1, x2)]
good = x1>x2
bad = ~good
_=ax.plot(x1[good],x2[good],'ow',alpha=.3)
_=ax.plot(x1[bad],x2[bad],'ok',alpha=.3)
_=ax.set_title("original data space")
_=ax.set_xlabel("x")
_=ax.set_ylabel("y")
_=pca.fit(X)
Xx=pca.fit_transform(X)
ax=axs[1]
_=ax.plot(Xx[good,0],Xx[good,1]*0,'ow',alpha=.3)
_=ax.plot(Xx[bad,0],Xx[bad,1]*0,'ok',alpha=.3)
_=ax.set_title("PCA-reduced data space")
_=ax.set_xlabel(r"\hat{x}")
_=ax.set_ylabel(r"\hat{y}")
"""
Explanation: To see how PCA can simplify machine learning tasks, consider
Figure wherein the two classes are separated along the diagonal.
After PCA, the transformed data lie along a single axis where the two classes
can be split using a one-dimensional interval, which greatly simplifies the
classification task. The class identities are preserved under PCA because the
principal component is along the same direction that the classes are separated.
On the other hand, if the classes are separated along the direction
orthogonal to the principal component, then the two classes become mixed
under PCA and the classification task becomes much harder. Note that in both
cases, the explained_variance_ratio_ is the same because the explained
variance ratio does not account for class membership.
<!-- dom:FIGURE: [fig-machine_learning/pca_003.png, width=500 frac=0.85] The
left panel shows the original two-dimensional data space of two easily
distinguishable classes and the right panel shows the reduced data space
transformed using PCA. Because the two classes are separated along the principal
component discovered by PCA, the classes are preserved under the
transformation. <div id="fig:pca_003"></div> -->
<!-- begin figure -->
<div id="fig:pca_003"></div>
<p>The left panel shows the original two-dimensional data space of two easily
distinguishable classes and the right panel shows the reduced data space
transformed using PCA. Because the two classes are separated along the principal
component discovered by PCA, the classes are preserved under the
transformation.</p>
<img src="fig-machine_learning/pca_003.png" width=500>
<!-- end figure -->
End of explanation
"""
np.random.seed(123456)
from numpy import matrix, c_, sin, cos, pi
t = np.linspace(0,1,250)
s1 = sin(2*pi*t*6)
s2 =np.maximum(cos(2*pi*t*3),0.3)
s2 = s2 - s2.mean()
s3 = np.random.randn(len(t))*.1
# normalize columns
s1=s1/np.linalg.norm(s1)
s2=s2/np.linalg.norm(s2)
s3=s3/np.linalg.norm(s3)
S =c_[s1,s2,s3] # stack as columns
# mixing matrix
A = matrix([[ 1, 1,1],
[0.5, -1,3],
[0.1, -2,8]])
X= S*A.T # do mixing
fig,axs=subplots(3,2,sharex=True)
fig.set_size_inches((8,8))
X = np.array(X)
_=axs[0,1].plot(t,-X[:,0],'k-')
_=axs[1,1].plot(t,-X[:,1],'k-')
_=axs[2,1].plot(t,-X[:,2],'k-')
_=axs[0,0].plot(t,s1,'k-')
_=axs[1,0].plot(t,s2,'k-')
_=axs[2,0].plot(t,s3,'k-')
_=axs[2,0].set_xlabel('$t$',fontsize=18)
_=axs[2,1].set_xlabel('$t$',fontsize=18)
_=axs[0,0].set_ylabel('$s_1(t)$ ',fontsize=18,rotation='horizontal')
_=axs[1,0].set_ylabel('$s_2(t)$ ',fontsize=18,rotation='horizontal')
_=axs[2,0].set_ylabel('$s_3(t)$ ',fontsize=18,rotation='horizontal')
for ax in axs.flatten():
    _=ax.yaxis.set_ticklabels('')
_=axs[0,1].set_ylabel(' $X_1(t)$',fontsize=18,rotation='horizontal')
_=axs[1,1].set_ylabel(' $X_2(t)$',fontsize=18,rotation='horizontal')
_=axs[2,1].set_ylabel(' $X_3(t)$',fontsize=18,rotation='horizontal')
_=axs[0,1].yaxis.set_label_position("right")
_=axs[1,1].yaxis.set_label_position("right")
_=axs[2,1].yaxis.set_label_position("right")
"""
Explanation: <!-- dom:FIGURE: [fig-machine_learning/pca_004.png, width=500 frac=0.85] As
compared with [Figure](#fig:pca_003), the two classes differ along the
coordinate direction that is orthogonal to the principal component. As a result,
the two classes are no longer distinguishable after transformation. <div
id="fig:pca_004"></div> -->
<!-- begin figure -->
<div id="fig:pca_004"></div>
<p>As compared with [Figure](#fig:pca_003), the two classes differ along the
coordinate direction that is orthogonal to the principal component. As a result,
the two classes are no longer distinguishable after transformation.</p>
<img src="fig-machine_learning/pca_004.png" width=500>
<!-- end figure -->
PCA works by decomposing the covariance matrix of the data using the Singular
Value Decomposition (SVD). This decomposition exists for all matrices and
returns the following factorization for an arbitrary matrix $\mathbf{A}$,
$$
\mathbf{A} = \mathbf{U} \mathbf{S} \mathbf{V}^T
$$
Because of the symmetry of the covariance matrix, $\mathbf{U} =
\mathbf{V}$. The elements of the diagonal matrix $\mathbf{S}$ are the singular
values of $\mathbf{A}$ whose squares are the eigenvalues of $\mathbf{A}^T
\mathbf{A}$. The eigenvector matrix $\mathbf{U}$ is orthogonal: $\mathbf{U}^T
\mathbf{U} =\mathbf{I}$. The singular values are in decreasing order so that
the first column of $\mathbf{U}$ is the axis corresponding to the largest
singular value. This is the first dominant column that PCA identifies. The
entries of the covariance matrix are of the form $\mathbb{E}(x_i x_j)$ where
$x_i$ and $x_j$ are different features [^covariance]. This means that the
covariance matrix is filled with entries that attempt to uncover mutually
correlated relationships between all pairs of columns of the feature matrix.
Once these have been tabulated in the covariance matrix, the SVD finds optimal
orthogonal transformations to align the components along the directions most
strongly associated with these correlated relationships. Simultaneously,
because orthogonal matrices have columns of unit-length, the SVD collects the
absolute squared lengths of these components into the $\mathbf{S}$ matrix. In
our example above in Figure, the two feature vectors were
obviously correlated along the
diagonal, meaning that PCA selected that diagonal direction as the principal
component.
[^covariance]: Note that these entries are constructed from the data
using an estimator of the covariance matrix because we do not have
the full probability densities at hand.
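These properties of the SVD factors can be verified directly. The following is an illustrative NumPy check on a made-up covariance matrix of correlated features (our own example, not the book's code):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))  # correlated features
C = np.cov(X - X.mean(axis=0), rowvar=False)             # symmetric covariance matrix

U, S, Vt = np.linalg.svd(C)

# Because C is symmetric, U and V agree (up to column signs)...
assert np.allclose(np.abs(U), np.abs(Vt.T))
# ...U is orthogonal...
assert np.allclose(U.T @ U, np.eye(3))
# ...and the singular values come out in decreasing order.
assert np.all(np.diff(S) <= 0)
print(S)
```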
We have seen that PCA is a powerful dimension reduction method that is
invariant to linear transformations of the original feature space. However,
this method performs poorly with transformations that are nonlinear. In that
case, there are a wide range of extensions to PCA, such as Kernel PCA, that are
available in Scikit-learn, which allow for embedding parameterized
non-linearities into the PCA at the risk of overfitting.
Independent Component Analysis
Independent Component Analysis (ICA) via the FastICA algorithm is also
available in Scikit-learn. This method is fundamentally different from PCA
in that it is the small differences between components that are emphasized,
not the large principal components. This method is adopted from signal
processing. Consider a matrix of signals ($\mathbf{X}$) where the rows are
the samples and the columns are the different signals. For example, these
could be EKG signals from multiple leads on a single patient. The analysis
starts with the following model,
<!-- Equation labels as ordinary links -->
<div id="eq:ICA"></div>
$$
\begin{equation}
\mathbf{X} = \mathbf{S}\mathbf{A}^T
\label{eq:ICA} \tag{1}
\end{equation}
$$
In other words, the observed signal matrix is an unknown mixture
($\mathbf{A}$) of some set of conformable, independent random sources
$\mathbf{S}$,
$$
\mathbf{S}=\left[ \mathbf{s}_1(t),\mathbf{s}_2(t),\ldots,\mathbf{s}_n(t)\right]
$$
The distribution on the random sources is otherwise unknown, except
there can be at most one Gaussian source, otherwise, the mixing matrix
$\mathbf{A}$ cannot be identified because of technical reasons. The problem in
ICA is to find $\mathbf{A}$ in Equation (1) and thereby un-mix the
$s_i(t)$ signals, but this cannot be solved without a strategy to reduce the
inherent arbitrariness in this formulation.
To make this concrete, let us simulate the situation with the following code,
End of explanation
"""
from sklearn.decomposition import FastICA
ica = FastICA()
# estimate unknown S matrix
S_=ica.fit_transform(X)
"""
Explanation: <!-- dom:FIGURE: [fig-machine_learning/pca_008.png, width=500 frac=0.85] The
left column shows the original signals and the right column shows the mixed
signals. The object of ICA is to recover the left column from the right. <div
id="fig:pca_008"></div> -->
<!-- begin figure -->
<div id="fig:pca_008"></div>
<p>The left column shows the original signals and the right column shows the
mixed signals. The object of ICA is to recover the left column from the
right.</p>
<img src="fig-machine_learning/pca_008.png" width=500>
<!-- end figure -->
The individual signals ($s_i(t)$) and their mixtures ($X_i(t)$) are
shown in Figure. To recover the individual signals using ICA,
we use the FastICA object and fit the parameters on the X matrix,
End of explanation
"""
fig,axs=subplots(3,2,sharex=True)
fig.set_size_inches((8,8))
X = np.array(X)
_=axs[0,1].plot(t,-S_[:,2],'k-')
_=axs[1,1].plot(t,-S_[:,1],'k-')
_=axs[2,1].plot(t,-S_[:,0],'k-')
_=axs[0,0].plot(t,s1,'k-')
_=axs[1,0].plot(t,s2,'k-')
_=axs[2,0].plot(t,s3,'k-')
_=axs[2,0].set_xlabel('$t$',fontsize=18)
_=axs[2,1].set_xlabel('$t$',fontsize=18)
_=axs[0,0].set_ylabel('$s_1(t)$ ',fontsize=18,rotation='horizontal')
_=axs[1,0].set_ylabel('$s_2(t)$ ',fontsize=18,rotation='horizontal')
_=axs[2,0].set_ylabel('$s_3(t)$ ',fontsize=18,rotation='horizontal')
for ax in axs.flatten():
    _=ax.yaxis.set_ticklabels('')
_=axs[0,1].set_ylabel(' $s_1^\prime(t)$',fontsize=18,rotation='horizontal')
_=axs[1,1].set_ylabel(' $s_2^\prime(t)$',fontsize=18,rotation='horizontal')
_=axs[2,1].set_ylabel(' $s_3^\prime(t)$',fontsize=18,rotation='horizontal')
_=axs[0,1].yaxis.set_label_position("right")
_=axs[1,1].yaxis.set_label_position("right")
_=axs[2,1].yaxis.set_label_position("right")
"""
Explanation: The results of this estimation are shown in Figure,
showing that ICA is able to recover the original signals from the observed
mixture. Note that ICA is unable to distinguish the signs of the recovered
signals or preserve the order of the input signals.
End of explanation
"""
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
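To make the windowing concrete, here's a toy sketch of the same logic on a small array (the numbers are made up for illustration):

```python
import numpy as np

arr = np.arange(23)                      # pretend this is the encoded text
n_seqs, n_steps = 2, 3
batch_size = n_seqs * n_steps            # characters per batch
n_batches = len(arr) // batch_size       # 3 full batches; 5 characters dropped

arr = arr[:n_batches * batch_size].reshape(n_seqs, -1)   # shape (2, 9)
windows = [arr[:, n:n + n_steps] for n in range(0, arr.shape[1], n_steps)]
```

Each window is one `n_seqs` x `n_steps` input batch; the matching targets are the same window shifted one character to the left.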
End of explanation
"""
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
"""
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
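A quick numpy sketch of that reshape (toy shapes, just to illustrate the row ordering):

```python
import numpy as np

N, M, L = 2, 3, 4                        # sequences, steps, LSTM units
lstm_output = np.arange(N * M * L, dtype=float).reshape(N, M, L)

# One row per step per sequence: shape (N*M, L)
rows = lstm_output.reshape(-1, L)
```

Row 3 of the reshaped array is the first step of the second sequence, so the row order walks through each sequence's steps in turn.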
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
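The same computation can be sketched in plain numpy to show the shapes involved (the logits below are made-up values for illustration; np.eye indexing plays the role of tf.one_hot):

```python
import numpy as np

targets = np.array([2, 0, 1])               # integer-encoded characters
num_classes = 4
y_one_hot = np.eye(num_classes)[targets]    # shape (3, 4), like tf.one_hot

logits = np.array([[0.1, 0.2, 3.0, 0.1],
                   [2.5, 0.3, 0.2, 0.1],
                   [0.2, 1.8, 0.3, 0.4]])

# Softmax cross entropy per row, then the mean, as in build_loss
shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
loss = -np.mean(np.sum(y_one_hot * np.log(probs), axis=1))
```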
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients: if their global norm exceeds some threshold, we scale them all down so the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
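build_optimizer clips via tf.clip_by_global_norm. Here is a small numpy sketch of that operation (the gradient values are hypothetical, chosen so the norms are easy to verify by hand):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Global norm: sqrt of the sum of squared entries across *all* gradients
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    # Rescale only when the global norm exceeds the threshold
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm = 13
clipped, gnorm = clip_by_global_norm(grads, 5.0)
```

All gradients are scaled by the same factor, so their relative directions are preserved — only the overall magnitude is capped.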
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it will take longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss written in the checkpoint filename; low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the sizes of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We can then feed that new character back in to predict the one after it, and keep doing this to generate all-new text. I also included some functionality to prime the network by passing in a string and building up a state from it.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
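The top-N trick itself is just a few lines of numpy; here's a standalone sketch with a made-up probability vector:

```python
import numpy as np

rng = np.random.default_rng(0)
preds = np.array([0.05, 0.4, 0.08, 0.3, 0.02, 0.15])   # softmax output

top_n = 3
p = preds.copy()
p[np.argsort(p)[:-top_n]] = 0    # zero out everything except the top 3
p = p / p.sum()                  # renormalize to a valid distribution
c = rng.choice(len(p), p=p)      # sample a character index
```

Only indices 1, 3, and 5 remain possible here, so the sampler can never pick a very unlikely character.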
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
carthach/essentia | src/examples/tutorial/example_clickdetector.ipynb | agpl-3.0 | import essentia.standard as es
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import Audio
from essentia import array as esarr
plt.rcParams["figure.figsize"] =(12,9)
def compute(x, frame_size=1024, hop_size=512, **kwargs):
clickDetector = es.ClickDetector(frameSize=frame_size,
hopSize=hop_size,
**kwargs)
ends = []
starts = []
for frame in es.FrameGenerator(x, frameSize=frame_size,
hopSize=hop_size, startFromZero=True):
frame_starts, frame_ends = clickDetector(frame)
for s in frame_starts:
starts.append(s)
for e in frame_ends:
ends.append(e)
return starts, ends
"""
Explanation: ClickDetector use example
This algorithm detects the locations of impulsive noises (clicks and pops) on
the input audio frame. It relies on LPC coefficients to inverse-filter the
audio in order to attenuate the stationary part and enhance the prediction
error (or excitation noise)[1]. After this, a matched filter is used to
further enhance the impulsive peaks. The detection threshold is obtained from
a robust estimate of the excitation noise power [2] plus a parametric gain
value.
References:
[1] Vaseghi, S. V., & Rayner, P. J. W. (1990). Detection and suppression of
impulsive noise in speech communication systems. IEE Proceedings I
(Communications, Speech and Vision), 137(1), 38-46.
[2] Vaseghi, S. V. (2008). Advanced digital signal processing and noise
reduction. John Wiley & Sons. Page 355
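As a toy illustration of the underlying idea (not Essentia's actual implementation — here a first-order difference stands in for the LPC inverse filter, and the median absolute deviation gives the robust noise estimate):

```python
import numpy as np

def detect_clicks(x, k=8.0):
    # Crude "inverse filter": first difference attenuates the stationary part
    e = np.diff(x, prepend=x[0])
    # Robust noise scale via the median absolute deviation (MAD)
    mad = np.median(np.abs(e - np.median(e)))
    threshold = k * 1.4826 * mad     # 1.4826 * MAD ~ std for Gaussian noise
    return np.where(np.abs(e) > threshold)[0]

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t)
x[300] += 0.8                        # inject an artificial click
clicks = detect_clicks(x)
```

The click shows up as a large residual at the sample where it starts and the sample just after, while the slowly varying sine stays far below the robust threshold.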
End of explanation
"""
fs = 44100.
audio_dir = '../../audio/'
audio = es.MonoLoader(filename='{}/{}'.format(audio_dir,
'recorded/vignesh.wav'),
sampleRate=fs)()
originalLen = len(audio)
jumpLocation1 = int(originalLen / 4.)
jumpLocation2 = int(originalLen / 2.)
jumpLocation3 = int(originalLen * 3 / 4.)
audio[jumpLocation1] += .5
audio[jumpLocation2] += .15
audio[jumpLocation3] += .05
groundTruth = esarr([jumpLocation1, jumpLocation2, jumpLocation3]) / fs
for point in groundTruth:
l1 = plt.axvline(point, color='g', alpha=.5)
times = np.linspace(0, len(audio) / fs, len(audio))
plt.plot(times, audio)
l1.set_label('Click locations')
plt.legend()
plt.title('Signal with artificial clicks of different amplitudes')
"""
Explanation: Generating a click example
Let's start by degrading an audio file with clicks of different amplitudes.
End of explanation
"""
Audio(audio, rate=fs)
"""
Explanation: Let's listen to the clip to get an idea of how audible the clicks are.
End of explanation
"""
starts, ends = compute(audio)
fig, ax = plt.subplots(len(groundTruth))
plt.subplots_adjust(hspace=.4)
for idx, point in enumerate(groundTruth):
l1 = ax[idx].axvline(starts[idx], color='r', alpha=.5)
ax[idx].axvline(ends[idx], color='r', alpha=.5)
l2 = ax[idx].axvline(point, color='g', alpha=.5)
ax[idx].plot(times, audio)
ax[idx].set_xlim([point-.001, point+.001])
ax[idx].set_title('Click located at {:.2f}s'.format(point))
fig.legend((l1, l2), ('Detected click', 'Ground truth'), 'upper right')
"""
Explanation: The algorithm
This algorithm outputs the start and end timestamps of the clicks. The following plots show how the algorithm performs on the previous example.
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/one-hot_encode_features_with_multiple_labels.ipynb | mit | # Load libraries
from sklearn.preprocessing import MultiLabelBinarizer
import numpy as np
"""
Explanation: Title: One-Hot Encode Features With Multiple Labels
Slug: one-hot_encode_features_with_multiple_labels
Summary: How to one-hot encode nominal categorical features with multiple labels per observation for machine learning in Python.
Date: 2016-09-06 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
"""
# Create list of multilabel observations
y = [('Texas', 'Florida'),
('California', 'Alabama'),
('Texas', 'Florida'),
('Delware', 'Florida'),
('Texas', 'Alabama')]
"""
Explanation: Create Data
End of explanation
"""
# Create MultiLabelBinarizer object
one_hot = MultiLabelBinarizer()
# One-hot encode data
one_hot.fit_transform(y)
"""
Explanation: One-hot Encode Data
End of explanation
"""
# View classes
one_hot.classes_
"""
Explanation: View Column Headers
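The binarizer also supports the reverse mapping via inverse_transform, which turns the 0/1 rows back into label tuples (a small sketch reusing the same kind of toy labels):

```python
from sklearn.preprocessing import MultiLabelBinarizer

y = [('Texas', 'Florida'), ('California', 'Alabama')]
one_hot = MultiLabelBinarizer()
encoded = one_hot.fit_transform(y)

# Map each binary row back to a tuple of labels (classes come back sorted)
recovered = one_hot.inverse_transform(encoded)
```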
End of explanation
"""
|
google/earthengine-api | python/examples/ipynb/authorize_notebook_server.ipynb | apache-2.0 | import ee
"""
Explanation: Overview
This notebook guides you through process of testing if the Jupyter Notebook server is authorized to access the Earth Engine servers, and provides a way to authorize the server, if needed.
Testing if the Jupyter Notebook server is authorized to access Earth Engine
To begin, first verify that you can import the Earth Engine Python API package by running the following cell.
End of explanation
"""
import sys
try:
ee.Initialize()
print('The Earth Engine package initialized successfully!')
except ee.EEException as e:
print('The Earth Engine package failed to initialize!')
except:
print("Unexpected error:", sys.exc_info()[0])
raise
"""
Explanation: Next, try to initialize the ee Python package.
End of explanation
"""
%%bash
earthengine authenticate --quiet
"""
Explanation: If the initialization succeeded, you can stop here. Congratulations! If not, continue on below...
Authenticating to the Earth Engine servers
If the initialization process failed, you will need to authenticate the Jupyter Notebook server so that it can communicate with the Earth Engine servers. You can initiate the authentication process by running the following command.
Note that earthengine authenticate is a system command (rather than a Python command), and the cell uses the %%bash cell magic in the first line of the cell to indicate that the cell contents should be executed using a bash shell.
End of explanation
"""
%%bash
earthengine authenticate --authorization-code=PLACE_AUTH_CODE_HERE
"""
Explanation: Once you have obtained an authorization code from the previous step, paste the code into the following cell and run it.
End of explanation
"""
%%bash
rm ~/.config/earthengine
"""
Explanation: Removing authentication credentials
Authentication credentials are stored as a file in the user's configuration directory. If you need to remove the authentication credentials, run the following cell.
End of explanation
"""
|
arviz-devs/arviz | doc/source/getting_started/XarrayforArviZ.ipynb | apache-2.0 | # Load the centered eight schools model
import arviz as az
data = az.load_arviz_data("centered_eight")
data
"""
Explanation: (xarray_for_arviz)=
Introduction to xarray, InferenceData, and netCDF for ArviZ
While ArviZ supports plotting from familiar data types, such as dictionaries and NumPy arrays, there are a couple of data structures central to ArviZ that are useful to know when using the library.
They are
{class}xarray:xarray.Dataset
{class}arviz.InferenceData
{ref}netCDF <netcdf>
Why more than one data structure?
Bayesian inference generates numerous datasets that represent different aspects of the model. For example, in a single analysis, a Bayesian practitioner could end up with any of the following data.
Prior Distribution for N number of variables
Posterior Distribution for N number of variables
Prior Predictive Distribution
Posterior Predictive Distribution
Trace data for each of the above
Sample statistics for each inference run
Any other array like data source
For more detail, see the InferenceData structure specification {ref}here <schema>.
Why not Pandas Dataframes or NumPy Arrays?
Data from probabilistic programming is naturally high dimensional. Adding to the complexity, ArviZ must handle data generated by multiple Bayesian modeling libraries, such as PyMC3 and PyStan. The xarray package handles this use case quite well: it lets users manage high dimensional data with human readable dimensions and coordinates.
Above is a visual representation of the data structures and their relationships. Although it seems more complex at a glance, the ArviZ devs believe that the usage of xarray, InferenceData, and netCDF will simplify the handling, referencing, and serialization of data generated during Bayesian analysis.
An introduction to each
To help get familiar with each, ArviZ includes some toy datasets. You can check the different ways to create an InferenceData {ref}here <creating_InferenceData>. For illustration purposes, we show only one of the examples provided by the library: an az.InferenceData object whose sample data is loaded from disk.
End of explanation
"""
# Get the posterior dataset
posterior = data.posterior
posterior
"""
Explanation: In this case the az.InferenceData object contains both a posterior predictive distribution and the observed data, among other datasets. Each group in InferenceData is both an attribute on InferenceData and itself an xarray.Dataset object.
End of explanation
"""
# Get the observed xarray
observed_data = data.observed_data
observed_data
"""
Explanation: In our eight schools model example, the posterior trace consists of 3 variables, sampled over 4 chains. In addition, it is a hierarchical model where values for the variable theta are associated with a particular school.
According to xarray's terminology:
* Data variables are the actual values generated from the MCMC draws
* Dimensions are the axes of the data variables
* Coordinates are pointers to specific slices or points in the xarray.Dataset
Observed data from the eight schools model can be accessed through the same method.
End of explanation
"""
data = az.load_arviz_data("centered_eight")
"""
Explanation: It should be noted that the observed dataset contains only 8 data variables and, unlike posterior, has no chain or draw dimensions or coordinates. This difference in sizes is the motivating reason behind InferenceData. Rather than force multiple differently sized arrays into one array, or have users manage multiple objects corresponding to different datasets, it is easier to hold references to each xarray.Dataset in an InferenceData object.
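A toy illustration of this motivation, using plain NumPy and a hypothetical minimal container (not ArviZ's actual implementation): the posterior and observed arrays have incompatible shapes, so a grouped container with one attribute per dataset is more natural than a single big array.

```python
import numpy as np

# Hypothetical shapes from an eight-schools-style model
posterior_theta = np.zeros((4, 500, 8))  # (chain, draw, school)
observed = np.zeros(8)                   # one value per school

class ToyInferenceData:
    """A minimal stand-in: each group is just an attribute."""
    def __init__(self, **groups):
        for name, value in groups.items():
            setattr(self, name, value)

idata = ToyInferenceData(posterior=posterior_theta, observed_data=observed)
print(idata.posterior.shape, idata.observed_data.shape)  # (4, 500, 8) (8,)
```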
(netcdf)=
NetCDF
NetCDF is a standard for referencing array oriented files. In other words, while xarray.Datasets, and by extension InferenceData, are convenient for accessing arrays in Python memory, netCDF provides a convenient mechanism for persistence of model data on disk. In fact, the netCDF dataset was the inspiration for InferenceData as netCDF4 supports the concept of groups. InferenceData merely wraps xarray.Dataset with the same functionality.
Most users will not have to concern themselves with the netCDF standard but for completeness it is good to make its usage transparent. It is also worth noting that the netCDF4 file standard is interoperable with HDF5 which may be familiar from other contexts.
Earlier in this tutorial InferenceData was loaded from a netCDF file
End of explanation
"""
data.to_netcdf("eight_schools_model.nc")
"""
Explanation: Similarly, the InferenceData objects can be persisted to disk in the netCDF format
End of explanation
"""
|
InsightLab/data-science-cookbook | 2019/09-clustering/Notebook_KMeans_Answer.ipynb | mit | # import libraries
# linear algebra
import numpy as np
# data processing
import pandas as pd
# data visualization
from matplotlib import pyplot as plt
# load the data with pandas
dataset = pd.read_csv('dataset.csv', header=None)
dataset = np.array(dataset)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.show()
"""
Explanation: <p style="text-align: center;">Clustering and the K-means algorithm</p>
Organizing data into groups is one of the most fundamental ways of understanding and learning. For example, organisms in a biological system are classified into domain, kingdom, phylum, class, and so on. Cluster analysis is the formal study of methods and algorithms for grouping objects according to similar measurements or characteristics. Cluster analysis, at its core, does not use category labels that tag objects with prior identifiers, i.e., class labels. The absence of category information distinguishes data clustering (unsupervised learning) from classification or discriminant analysis (supervised learning). The goal of clustering is to find structure in data, and it is therefore exploratory in nature.
The clustering technique has a long and rich history in a variety of scientific fields. One of the most popular and simplest clustering algorithms, K-means, was first published in 1955. Even though K-means was proposed more than 50 years ago and thousands of clustering algorithms have been published since then, K-means is still widely used.
Source: Anil K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognition Letters, Volume 31, Issue 8, 2010
Goals
Implement the functions of the K-means algorithm step by step
Compare the implementation with the Scikit-Learn algorithm
Understand and code the Elbow Method
Apply K-means to a real dataset
Loading the test data
Load the provided data and identify visually how many groups the data appear to be distributed into.
End of explanation
"""
# Select three centroids
cluster_center_1 = np.array([2,3])
cluster_center_2 = np.array([6,6])
cluster_center_3 = np.array([10,1])
# Generate random samples around the chosen centroids
cluster_data_1 = np.random.randn(100, 2) + cluster_center_1
cluster_data_2 = np.random.randn(100,2) + cluster_center_2
cluster_data_3 = np.random.randn(100,2) + cluster_center_3
new_dataset = np.concatenate((cluster_data_1, cluster_data_2,
cluster_data_3), axis = 0)
plt.scatter(new_dataset[:,0], new_dataset[:,1], s=10)
plt.show()
"""
Explanation: Create a new dataset for practice
End of explanation
"""
def calculate_initial_centers(dataset, k):
"""
Initializes the starting centroids arbitrarily
Arguments:
dataset -- Data set - [m,n]
k -- Desired number of centroids
Returns:
centroids -- List of computed centroids - [k,n]
"""
#### CODE HERE ####
minimum = np.min(dataset, axis=0)
maximum = np.max(dataset, axis=0)
shape = [k, dataset.shape[1]]
centroids = np.random.uniform(minimum, maximum, size=shape)
### END OF CODE ###
return centroids
"""
Explanation: 1. Implementing the K-means algorithm
In this step you will implement the functions that make up the K-means algorithm, one by one. It is important to understand and read the documentation of each function, especially the expected dimensions of the output data.
1.1 Initializing the centroids
The first step of the algorithm is to initialize the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the convergence time.
To initialize the centroids you may use prior knowledge about the data, even without knowing the number of groups or their distribution.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
End of explanation
"""
k = 3
centroids = calculate_initial_centers(dataset, k)
plt.scatter(dataset[:,0], dataset[:,1], s=10)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100)
plt.show()
"""
Explanation: Test the function you created and visualize the computed centroids.
End of explanation
"""
def euclidean_distance(a, b):
"""
Computes the Euclidean distance between points a and b
Arguments:
a -- A point in space - [1,n]
b -- A point in space - [1,n]
Returns:
distance -- Euclidean distance between the points
"""
#### CODE HERE ####
distance = np.sqrt(np.sum(np.square(a-b)))
### END OF CODE ###
return distance
"""
Explanation: 1.2 Assigning the clusters
In the second step of the algorithm, each data point is assigned to a group according to the computed centroids.
1.2.1 Distance function
Implement the Euclidean distance function between two points (a, b).
Defined by the equation:
$$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$
$$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
End of explanation
"""
a = np.array([1, 5, 9])
b = np.array([3, 7, 8])
if (euclidean_distance(a,b) == 3):
print("Distance computed correctly!")
else:
print("Incorrect distance function")
"""
Explanation: Test the function you created.
End of explanation
"""
def nearest_centroid(a, centroids):
"""
Computes the index of the centroid closest to point a
Arguments:
a -- A point in space - [1,n]
centroids -- List of centroids - [k,n]
Returns:
nearest_index -- Index of the nearest centroid
"""
#### CODE HERE ####
distance_zeros = np.zeros(centroids.shape[0])
for index, centroid in enumerate(centroids):
distance = euclidean_distance(a, centroid)
distance_zeros[index] = distance
nearest_index = np.argmin(distance_zeros)
### END OF CODE ###
return nearest_index
"""
Explanation: 1.2.2 Finding the nearest centroid
Using the distance function coded earlier, complete the function below to find the centroid closest to an arbitrary point.
Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
End of explanation
"""
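For reference, the same lookup can be done without an explicit loop by broadcasting the subtraction over all centroids at once. This is a sketch of an alternative, not part of the exercise's required answer:

```python
import numpy as np

def nearest_centroid_vectorized(a, centroids):
    # Distance from point a to every centroid, computed in one shot
    distances = np.sqrt(np.sum((centroids - a) ** 2, axis=1))
    return np.argmin(distances)

centroids = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
print(nearest_centroid_vectorized(np.array([4.0, 4.5]), centroids))  # 1
```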
# Select a random point from the dataset
index = np.random.randint(dataset.shape[0])
a = dataset[index,:]
# Use the function to find the nearest centroid
idx_nearest_centroid = nearest_centroid(a, centroids)
# Plot the data ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], s=10)
# Plot the randomly chosen point in a different color
plt.scatter(a[0], a[1], c='magenta', s=30)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
# Plot the nearest centroid in a different color
plt.scatter(centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],
marker='^', c='springgreen', s=100)
# Draw a line from the chosen point to the selected centroid
plt.plot([a[0], centroids[idx_nearest_centroid,0]],
[a[1], centroids[idx_nearest_centroid,1]],c='orange')
plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0],
centroids[idx_nearest_centroid,1],))
plt.show()
"""
Explanation: Test the function you created
End of explanation
"""
def all_nearest_centroids(dataset, centroids):
"""
Computes the index of the nearest centroid for each
point in the dataset
Arguments:
dataset -- Data set - [m,n]
centroids -- List of centroids - [k,n]
Returns:
nearest_indexes -- Indices of the nearest centroids - [m,1]
"""
#### CODE HERE ####
nearest_indexes = np.zeros(dataset.shape[0])
for index, a in enumerate(dataset):
nearest_indexes[index] = nearest_centroid(a, centroids)
### END OF CODE ###
return nearest_indexes
"""
Explanation: 1.2.3 Finding the nearest centroid for every point in the dataset
Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for every point in the dataset.
End of explanation
"""
nearest_indexes = all_nearest_centroids(dataset, centroids)
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
plt.show()
"""
Explanation: Test the function you created by visualizing the resulting clusters.
End of explanation
"""
def inertia(dataset, centroids, nearest_indexes):
"""
Sum of the squared distances of the samples to the
nearest cluster center.
Arguments:
dataset -- Data set - [m,n]
centroids -- List of centroids - [k,n]
nearest_indexes -- Indices of the nearest centroids - [m,1]
Returns:
inertia -- Total sum of the squared distances between
the points of a cluster and its centroid
"""
#### CODE HERE ####
inertia = 0
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for a in dataframe:
inertia += np.square(euclidean_distance(a,centroid))
### END OF CODE ###
return inertia
"""
Explanation: 1.3 Evaluation metric
After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric.
The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and their centroid. This metric is known as inertia.
$$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$
Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks:
Inertia assumes that clusters are convex and isotropic, which is not always the case. It may therefore represent elongated clusters, or manifolds with irregular shapes, poorly.
Inertia is not a normalized metric: we only know that lower values are better and that zero is optimal. In very high-dimensional spaces, however, Euclidean distances tend to become inflated (an instance of the so-called "curse of dimensionality"). Running a dimensionality-reduction algorithm such as PCA can alleviate this problem and speed up the computations.
Source: https://scikit-learn.org/stable/modules/clustering.html
To evaluate our clusters, implement the inertia metric below; you may use the Euclidean distance function built earlier.
$$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
End of explanation
"""
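Equivalently, the inertia can be computed without nested loops by indexing each point's assigned centroid directly. A sketch (assuming the same array conventions as the exercise):

```python
import numpy as np

def inertia_vectorized(dataset, centroids, nearest_indexes):
    assigned = centroids[nearest_indexes.astype(int)]  # centroid of each point
    return np.sum((dataset - assigned) ** 2)

# Same toy data as the test cell below: one centroid, three 3-D points
data = np.array([[1.0, 2.0, 3.0], [3.0, 6.0, 5.0], [4.0, 5.0, 6.0]])
cents = np.array([[2.0, 3.0, 4.0]])
labels = np.zeros(3)
print(inertia_vectorized(data, cents, labels))  # 26.0
```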
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]])
tmp_centroide = np.array([[2,3,4]])
tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide)
if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26:
print("Inertia computed correctly!")
else:
print("Incorrect inertia function!")
# Use the function to check the inertia of your clusters
inertia(dataset, centroids, nearest_indexes)
"""
Explanation: Test the function by running the code below.
End of explanation
"""
def update_centroids(dataset, centroids, nearest_indexes):
"""
Updates the centroids
Arguments:
dataset -- Data set - [m,n]
centroids -- List of centroids - [k,n]
nearest_indexes -- Indices of the nearest centroids - [m,1]
Returns:
centroids -- List of updated centroids - [k,n]
"""
#### CODE HERE ####
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
if(dataframe.size != 0):
centroids[index] = np.mean(dataframe, axis=0)
### END OF CODE ###
return centroids
"""
Explanation: 1.4 Updating the clusters
In this step, the centroids are recomputed. The new value of each centroid is the mean of all points assigned to that cluster.
End of explanation
"""
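One subtlety worth noting: the exercise's update function mutates the centroids array in place. A sketch of an equivalent version that returns a fresh array instead, which lets callers keep the old centroids around (e.g. for convergence checks):

```python
import numpy as np

def update_centroids_copy(dataset, centroids, nearest_indexes):
    new_centroids = centroids.copy()
    for k in range(len(centroids)):
        members = dataset[nearest_indexes == k]
        if members.size:                       # leave empty clusters unchanged
            new_centroids[k] = members.mean(axis=0)
    return new_centroids

data = np.array([[0.0, 0.0], [2.0, 2.0], [10.0, 10.0]])
labels = np.array([0, 0, 1])
cents = np.array([[5.0, 5.0], [9.0, 9.0]])
print(update_centroids_copy(data, cents, labels))  # [[ 1.  1.] [10. 10.]]
```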
nearest_indexes = all_nearest_centroids(dataset, centroids)
# Plot the clusters ------------------------------------------------
plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes)
# Plot the centroids
plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100)
for index, centroid in enumerate(centroids):
dataframe = dataset[nearest_indexes == index,:]
for data in dataframe:
plt.plot([centroid[0], data[0]], [centroid[1], data[1]],
c='lightgray', alpha=0.3)
plt.show()
"""
Explanation: Visualize the resulting clusters
End of explanation
"""
centroids = update_centroids(dataset, centroids, nearest_indexes)
"""
Explanation: Run the update function and visualize the resulting clusters again
End of explanation
"""
class KMeans():
def __init__(self, n_clusters=8, max_iter=300):
self.n_clusters = n_clusters
self.max_iter = max_iter
def fit(self,X):
# Initialize the centroids
self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters)
# Compute the cluster of each sample
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
# Compute the initial inertia
old_inertia = inertia(X, self.cluster_centers_, self.labels_)
for index in range(self.max_iter):
#### CODE HERE ####
self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_)
self.labels_ = all_nearest_centroids(X, self.cluster_centers_)
self.inertia_ = inertia(X, self.cluster_centers_, self.labels_)
if(old_inertia == self.inertia_):
break
else:
old_inertia = self.inertia_
### END OF CODE ###
return self
def predict(self, X):
return all_nearest_centroids(X, self.cluster_centers_)
"""
Explanation: 2. K-means
2.1 Full algorithm
Using the functions you coded earlier, complete the K-means algorithm class!
End of explanation
"""
kmeans = KMeans(n_clusters=3)
kmeans.fit(dataset)
print("Inertia = ", kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_)
plt.scatter(kmeans.cluster_centers_[:,0],
kmeans.cluster_centers_[:,1], marker='^', c='red', s=100)
plt.show()
"""
Explanation: Check the result of the algorithm below!
End of explanation
"""
from sklearn.cluster import KMeans as scikit_KMeans
scikit_kmeans = scikit_KMeans(n_clusters=3)
scikit_kmeans.fit(dataset)
print("Inertia = ", scikit_kmeans.inertia_)
plt.scatter(dataset[:,0], dataset[:,1], c=scikit_kmeans.labels_)
plt.scatter(scikit_kmeans.cluster_centers_[:,0],
scikit_kmeans.cluster_centers_[:,1], c='red')
plt.show()
"""
Explanation: 2.2 Comparing with the Scikit-Learn implementation
Run scikit-learn's K-means implementation on the same data set. Show the inertia value and the clusters produced by the model. You may reuse the structure of the previous code cell.
Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
End of explanation
"""
n_clusters_test = 8
n_sequence = np.arange(1, n_clusters_test+1)
inertia_vec = np.zeros(n_clusters_test)
for index, n_cluster in enumerate(n_sequence):
inertia_vec[index] = KMeans(n_clusters=n_cluster).fit(dataset).inertia_
plt.plot(n_sequence, inertia_vec, 'ro-')
plt.show()
"""
Explanation: 3. Elbow Method
Implement the elbow method and report the best K for the data set.
End of explanation
"""
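There is no single rule for reading the elbow off the curve. One common heuristic (an assumption, not part of the exercise's required answer) picks the K where the inertia sequence bends most sharply, i.e. the largest second difference:

```python
import numpy as np

def elbow_k(inertias):
    # Second difference measures how sharply the curve bends at each interior K
    second_diff = np.diff(inertias, 2)
    return int(np.argmax(second_diff)) + 2  # +2: diff(..., 2) drops two points, K starts at 1

inertias = [1000.0, 600.0, 150.0, 140.0, 135.0, 132.0]  # toy values for K = 1..6
print(elbow_k(inertias))  # 3
```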
#### CODE HERE ####
"""
Explanation: 4. A real dataset
Exercises
1 - Apply the K-means algorithm you developed to the iris dataset [1]. Report the results using at least two cluster-evaluation metrics [2].
[1] http://archive.ics.uci.edu/ml/datasets/iris
[2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation
Hint: you can use the completeness and homogeneity metrics.
2 - Try to improve the result obtained in the previous question using a data-mining technique. Explain the difference obtained.
Hint: you can try normalizing the data [3].
- [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html
3 - What number of clusters (K) did you choose in the previous question? Implement the Elbow Method without using a library and find the most suitable value of K. Then use that value in the K-means algorithm.
4 - Using the results of the previous question, recompute the metrics and comment on the results. Was there an improvement? Explain.
End of explanation
"""
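As a starting point for exercise 2, the normalization hint can be reproduced without scikit-learn. A sketch of L2 row normalization (the default behavior of sklearn.preprocessing.normalize) in plain NumPy:

```python
import numpy as np

def l2_normalize_rows(X):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero for all-zero rows
    return X / norms

X = np.array([[3.0, 4.0], [1.0, 0.0]])
print(l2_normalize_rows(X))  # [[0.6 0.8] [1.  0. ]]
```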
|
fonnesbeck/baseball | notebooks/Pitch Classification.ipynb | mit | from pybaseball import statcast
pitch_data = statcast(start_dt='2017-04-01', end_dt='2017-04-30')
pitch_data.shape
pitch_data.pitch_type.value_counts()
pitch_type = pitch_data.pop('pitch_type')
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
pitch_data, pitch_type, test_size=0.33, random_state=42)
"""
Explanation: Question 3
The accompanying CSV files ‘pitchclassificationtrain.csv’ and ‘pitchclassificationtest.csv’ contain information from nearly 20,000 pitches from five different pitchers over three years. There are 13 columns:
pitchid: a unique identifier for each pitch.
pitcherid: identity of the pitcher (1-5). The identities are the same in both datasets. Pitcher 3 in the training set is the same pitcher as Pitcher 3 in the test set.
yearid: year in which the pitch occurred (1-3).
throws: handedness of the pitcher, where 1 = right-handed and 2 = left-handed
height: height in inches of the pitcher.
initspeed: initial speed of the pitch as it leaves the pitcher's hand, reported in MPH
breakx: horizontal distance in inches between where a pitch crossed the plate and where a hypothetical spinless pitch would have, where negative is inside to a right-handed hitter.
breakz: vertical distance in inches between where a pitch crossed the plate and where a hypothetical spinless pitch would have, where negative is closer to the ground.
initposx: horizontal position of the release point of the pitch. The position is measured in feet from the center of the rubber when the pitch is released, where negative is towards the third-base side of the rubber.
initposz: vertical position of the release point of the pitch. The position is measured in feet above the ground.
extension: distance in feet in front of the pitching rubber from which the pitcher releases the ball.
spinrate: how fast the ball is spinning as it leaves the pitcher's hand, reported in RPM
type: type of pitch that was thrown (will only appear in the training dataset).
Your goal is to give the most likely pitch type for all of the pitches in the test dataset using information from the training dataset. Note that the pitchers in the datasets do not correspond with any specific real pitchers but are meant to be representative of real data. Please include the following with your final submission:
CSV with two columns: the pitchid and the corresponding predicted pitch type
write-up of your method and results, including any tables or figures that help communicate your findings
all code used to solve the problem
This is a multi-class classification problem. One approach is to use a parametric statistical model, and express the pitch class as a categorical outcome. However, this requires the explicit specification of the combinations of variables thought to be predictive of pitch type a priori. A more flexible approach, and an effective one when plenty of data are available (as we have here), is to use an ensemble machine learning algorithm. A key advantage of such methods is that higher-order interactions among variables that may be powerful predictors can be discovered automatically, without pre-specification.
An effective ensemble method for classification is gradient boosting. Boosting combines a set of "weak" learners (ones that, individually, perform only slightly better than chance) to yield a system with very high predictive performance. The idea is that by sequentially applying very fast, simple models, we can get a total model error which is better than any of the individual pieces. Each successive fit of a new weak learner yields an estimate of residual error, which is used to weight the remaining observations such that the next learner emphasizes the classification of observations that have not yet been correctly classified. The process is stagewise, meaning that existing trees are left unchanged as the model is enlarged. Only the fitted value for each observation is re-estimated at each step to reflect the contribution of the newly added tree. The final model is a linear combination of many trees (usually hundreds to thousands), and can be thought of as a classification/regression model where each term is itself a tree.
On average, boosting outperforms competing algorithms like random forests or support vector machines, but requires less data than deep neural networks, which are often applied to large classification tasks.
XGBoost is an open-source, high-performance implementation of gradient boosting methods for decision trees. It features natively-parallel tree construction, out-of-core computation, and cache optimization, and is readily deployable on clusters. It typically offers better speed and performance relative to scikit-learn and other comparable libraries.
Import data from csv files.
End of explanation
"""
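The stagewise residual-fitting idea described above can be sketched in a few lines: fit a weak learner (here a hypothetical one-split "stump") to the current residuals, add a damped copy of it to the ensemble, and repeat. This is an illustrative toy on synthetic data, not XGBoost's actual algorithm:

```python
import numpy as np

def fit_stump(x, r):
    """Find the one-split constant fit minimizing squared error on residuals r."""
    best = None
    for t in np.unique(x)[:-1]:  # skip the max so the right side is never empty
        left, right = r[x <= t].mean(), r[x > t].mean()
        pred = np.where(x <= t, left, right)
        err = np.sum((r - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, t, left, right)
    return best[1:]

def predict_stump(stump, x):
    t, left, right = stump
    return np.where(x <= t, left, right)

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.sin(x)

ensemble, eta, F = [], 0.3, np.zeros_like(y)
for _ in range(50):
    stump = fit_stump(x, y - F)       # each new learner targets what is still unexplained
    F += eta * predict_stump(stump, x)
    ensemble.append(stump)

print(np.mean((y - F) ** 2) < np.mean(y ** 2))  # True: the ensemble beats predicting zero
```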
prediction_cols = ['p_throws', 'release_spin_rate', 'effective_speed', 'release_extension',
'vx0', 'vy0', 'vz0', 'ax', 'ay', 'az']
"""
Explanation: Extract columns to be used for prediction. Pitcher and year are probably not predictive, so I am leaving them out.
End of explanation
"""
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder().fit(y_train)
y_train_encoded = encoder.transform(y_train)
"""
Explanation: Relabel the pitch types using the scikit-learn label encoder (XGBoost requires sequential labels).
End of explanation
"""
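What LabelEncoder does can be sketched in a few lines of plain Python (an illustration of the idea, not scikit-learn's implementation): sorted unique labels get consecutive integer codes, and the inverse transform is just indexing back into that array.

```python
import numpy as np

pitches = np.array(["FF", "SL", "CH", "FF", "CU", "SL"])
classes = np.unique(pitches)                    # sorted unique labels: CH, CU, FF, SL
lookup = {label: code for code, label in enumerate(classes)}
encoded = np.array([lookup[p] for p in pitches])
print(encoded)                                  # [2 3 0 2 1 3]
decoded = classes[encoded]                      # the inverse transform
print(decoded)
```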
import xgboost as xgb
dtrain = xgb.DMatrix(X_train[prediction_cols], label=y_train_encoded)
dtest = xgb.DMatrix(X_test[prediction_cols])
"""
Explanation: XGBoost runs faster using its own binary data structures:
End of explanation
"""
xgb_params = {
'max_depth': 12,
'eta': 0.2,
'nthread': 4, # Use 4 cores for multiprocessing
'num_class': 6, # pitch types
'objective': 'multi:softmax' # use softmax multi-class classification
}
boosted_classifier = xgb.train(xgb_params, dtrain, num_boost_round=30)
"""
Explanation: We can specify the hyperparameters of the model, to use as a starting point for the analysis.
End of explanation
"""
from sklearn.metrics import accuracy_score
predictions = boosted_classifier.predict(dtrain)
accuracy_score(y_train_encoded, predictions)
"""
Explanation: Using arbitrarily-chosen hyperparameters, the model achieves nearly perfect accuracy on the training set.
End of explanation
"""
param_grid = [(max_depth, min_child_weight, eta) for max_depth in (6, 8, 10, 12)
for min_child_weight in (7, 9, 11, 13)
for eta in (0.15, 0.2, 0.25)]
min_merror = np.inf
best_params = None
for max_depth, min_child_weight, eta in param_grid:
print("CV with max_depth={}, min_child_weight={}, eta={}".format(
max_depth,
min_child_weight,
eta))
# Update our parameters
xgb_params['max_depth'] = max_depth
xgb_params['min_child_weight'] = min_child_weight
xgb_params['eta'] = eta
# Run CV
cv_results = xgb.cv(
xgb_params,
dtrain,
num_boost_round=50,
nfold=5,
metrics={'merror'},
early_stopping_rounds=3
)
# Update best score
mean_merror = cv_results['test-merror-mean'].min()
boost_rounds = cv_results['test-merror-mean'].idxmin()
print("\tmerror {} for {} rounds".format(mean_merror, boost_rounds))
if mean_merror < min_merror:
min_merror = mean_merror
best_params = (max_depth, min_child_weight, eta)
print("Best params: {}, {}, {}, merror: {}".format(best_params[0], best_params[1], best_params[2], min_merror))
"""
Explanation: However, this model may be overfit to the training dataset, so I am going to use 5-fold cross-validation to select hyperparameters that result in the best fit.
End of explanation
"""
xgb_params = {
'max_depth': 10,
'eta': 0.2,
'min_child_weight': 9, # minimum sum of instance weight needed in a child
'nthread': 4, # use 4 cores for multiprocessing
'num_class': 6, # pitch types
'objective': 'multi:softmax' # use softmax multi-class classification
}
boosted_classifier = xgb.train(xgb_params, dtrain, num_boost_round=30)
"""
Explanation: I can select the best parameters from the cross-validation procedure to use to predict on the test data (I could do a more refined search of the hyperparameter space, but the multiclass errors appear to be broadly similar across the range of values that I used, so I will stick with these).
End of explanation
"""
from sklearn.metrics import accuracy_score
predictions = boosted_classifier.predict(dtrain)
accuracy_score(y_train_encoded, predictions)
"""
Explanation: The accuracy score for the training data is nominally lower than the original model, but not much; moreover this model should perform better in out-of-sample prediction.
End of explanation
"""
xgb.plot_importance(boosted_classifier)
"""
Explanation: Below is a plot of feature importances, using the F-score. This quantifies how many times a particular variable is used as a splitting variable across all the trees. This ranking makes intuitive sense, with movement and velocity being the most relevant factors.
End of explanation
"""
test_predictions = boosted_classifier.predict(dtest)
test_predictions
"""
Explanation: Generate predictions on the test set using fitted classifier.
End of explanation
"""
predicted_pitches = pd.Series(encoder.inverse_transform(test_predictions.astype(int)), index=X_test.index)
predicted_pitches.name = 'pitch_type'
predicted_pitches.index.name = 'pitchid'
predicted_pitches.to_csv('predicted_pitches_fonnesbeck.csv', header=True)
"""
Explanation: Back-transform label encoding, and export to predicted_pitches_fonnesbeck.csv.
End of explanation
"""
|
CalPolyPat/phys202-2015-work | assignments/assignment07/AlgorithmsEx01.ipynb | mit | %matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import re
"""
Explanation: Algorithms Exercise 1
Imports
End of explanation
"""
def tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\:;"<,>.?/}\t'):
"""Split a string into a list of words, removing punctuation and stop words."""
if type(stop_words)==str:
stopwords=list(stop_words.split(" "))
else:
stopwords=stop_words
lines = s.splitlines()
words = [re.split(" |--|-", line) for line in lines]
filtwords = []
for w in words:
for ch in w:
result = list(filter(lambda x:x not in punctuation, ch))
filtwords.append("".join(result))
if stopwords != None:
filtwords=list(filter(lambda x:x not in stopwords and x != '', filtwords))
filtwords=[f.lower() for f in filtwords]
return filtwords
assert tokenize("This, is the way; that things will end", stop_words=['the', 'is']) == \
['this', 'way', 'that', 'things', 'will', 'end']
wasteland = """APRIL is the cruellest month, breeding
Lilacs out of the dead land, mixing
Memory and desire, stirring
Dull roots with spring rain.
"""
assert tokenize(wasteland, stop_words='is the of and') == \
['april','cruellest','month','breeding','lilacs','out','dead','land',
'mixing','memory','desire','stirring','dull','roots','with','spring',
'rain']
assert tokenize("hello--world")==['hello', 'world']
"""
Explanation: Word counting
Write a function tokenize that takes a string of English text returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:
Split the string into lines using splitlines.
Split each line into a list of words and merge the lists for each line.
Use Python's builtin filter function to remove all punctuation.
If stop_words is a list, remove all occurences of the words in the list.
If stop_words is a space delimeted string of words, split them and remove them.
Remove any remaining empty words.
Make all words lowercase.
End of explanation
"""
def count_words(data):
"""Return a word count dictionary from the list of words in data."""
wordcount={}
for d in data:
if d in wordcount:
wordcount[d] += 1
else:
wordcount[d] = 1
return wordcount
assert count_words(tokenize('this and the this from and a a a')) == \
{'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
"""
Explanation: Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.
End of explanation
"""
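For reference, the standard library already provides this pattern. A sketch using collections.Counter, equivalent in behavior to the exercise's function (though not its required answer):

```python
from collections import Counter

def count_words_counter(data):
    # Counter tallies occurrences; convert back to a plain dict
    return dict(Counter(data))

print(count_words_counter(['this', 'and', 'the', 'this']))  # {'this': 2, 'and': 1, 'the': 1}
```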
def sort_word_counts(wc):
"""Return a list of 2-tuples of (word, count), sorted by count descending."""
def getkey(item):
return item[1]
sortedwords = [(i,wc[i]) for i in wc]
return sorted(sortedwords, key=getkey, reverse=True)
assert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \
[('a', 4), ('this', 3), ('and', 2), ('the', 1)]
"""
Explanation: Write a function sort_word_counts that return a list of sorted word counts:
Each element of the list should be a (word, count) tuple.
The list should be sorted by the word counts, with the higest counts coming first.
To perform this sort, look at using the sorted function with a custom key and reverse
argument.
End of explanation
"""
f = open('mobydick_chapter1.txt', 'r')
swc = sort_word_counts(count_words(tokenize(f.read(), stop_words='the of and a to in is it that as')))
print(len(swc))
assert swc[0]==('i',43)
assert len(swc)==849
#I changed the assert to length 849 instead of 848. I wasn't about to search through the first chapter of Moby Dick to find the odd punctuation that caused one extra word to pop up.
"""
Explanation: Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:
Read the file into a string.
Tokenize with stop words of 'the of and a to in is it that as'.
Perform a word count, the sort and save the result in a variable named swc.
End of explanation
"""
words50 = np.array(swc)
f=plt.figure(figsize=(25,5))
plt.plot(np.linspace(0,50,50), words50[:50,1], 'ko')
plt.xlim(0,50)
plt.xticks(np.linspace(0,50,50),words50[:50,0]);
assert True # use this for grading the dotplot
"""
Explanation: Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
End of explanation
"""
|
PMEAL/OpenPNM | examples/reference/uncategorized/overview_of_domain_syntax.ipynb | mit | import numpy as np
import openpnm as op
pn = op.network.Cubic([2, 4, 1])
print(pn)
"""
Explanation: OpenPNM Version 3: The new @domain syntax
The latest version of OpenPNM includes a new syntax feature with several uses. This notebook outlines the benefits of this new feature, starting with the superficial or immediately visible aspects, then dives into the underlying impacts.
Using the @ syntax to read and write data
At its core, the @ syntax uses the already existing labels feature of OpenPNM. Labels have been integral to the use of OpenPNM since its inception, but the new @ syntax moves the use of labels to the forefront.
Start by generating a simple 2D cubic network:
End of explanation
"""
print(pn['pore.left'])
"""
Explanation: This network includes a few pre-defined labels (e.g. 'pore.left' and 'throat.surface'), which are boolean masks of True|False values which indicate whether that label applies to a given pore/throat or not. As shown below, the label 'pore.left' applies to pores [0, 1, 2, 3]:
End of explanation
"""
pn['pore.coords'][pn['pore.left']]
"""
Explanation: We can use labels as masks to view only values for given locations, as follows:
End of explanation
"""
pn['pore.coords'][pn.pores('left')]
"""
Explanation: or using the pores method:
End of explanation
"""
pn['pore.coords@left']
"""
Explanation: In OpenPNM V3 there is a very handy new syntax, the @ symbol, used as follows:
End of explanation
"""
pn['pore.values'] = 1.0
"""
Explanation: This certainly saves some typing! It's also pretty intuitive since @ is usually read as "at", implying a location.
The @ syntax can also be used to write data. Let's create an array of 1.0s, then use the @ syntax to change them:
End of explanation
"""
pn['pore.values@left'] = 2.0
print(pn['pore.values'])
"""
Explanation: If we supply a scalar, it is written to all locations belonging to 'left':
End of explanation
"""
pn['pore.values@right'] = [4, 5, 6, 7]
print(pn['pore.values'])
"""
Explanation: We can of course pass in an array, which must have the correct number of elements:
End of explanation
"""
pn['pore.new_array@left'] = 2.0
print(pn['pore.new_array'])
"""
Explanation: One useful bonus is that you can create an array and assign values to certain locations at the same time:
End of explanation
"""
pn['pore.new_array@front'] = 3.0
print(pn['pore.new_array'])
"""
Explanation: The above line created an empty array of nans, then added 2.0 to the pores labelled 'left'. Previously this required creating an empty array first and only then assigning 2.0 to specific locations.
You can use any label that is defined, and it will overwrite any values already present if that label happens to overlap the label used previously:
End of explanation
"""
print(pn['pore.new_array@left'])
"""
Explanation: which overwrote some locations that held 2.0 (since some pores are both 'front' and 'left'), as well as some of the nan values.
End of explanation
"""
pn.add_model(propname='pore.seed',
model=op.models.geometry.pore_seed.random,
domain='left',
seed=0,
num_range=[0.1, 0.5])
"""
Explanation: Using the @ Syntax to Define Subdomains
Using the @ symbol for data read/write as shown above is actually a side effect of a major conceptual shift made in V3. The Geometry and Physics objects are now gone. There was essentially only one use case for these, which was to model heterogeneous domains, like bimodal pore size distributions or layered structures.
In V2 this was accomplished by using 2 (or more) Geometry objects to represent each class of pores, with unique pore-scale models attached to each. Without getting lost in the details, it is sufficient to say that having separate objects for managing each class of pores (and/or throats) created a lot of complications, both for the user and for the maintenance of the backend.
In V3 we have developed what we think is a much tidier approach to managing heterogeneous domains. Instead of creating multiple Geometry objects (and consequently multiple Physics objects), you now add all the pore-scale models to the Network and Phase objects directly. The trick is that when adding models you specify one additional argument: the domain (i.e. pores or throats) to which the model applies, as follows:
End of explanation
"""
print(pn)
"""
Explanation: where domain is given the label 'left' which has already been defined on the network.
This means that to create a heterogeneous model you only need to create labels marking the pores and/or throats of each domain, then pass those labels when adding models. You can also leave domain unspecified (None) which means the model is applied everywhere. For the above case, we can see that the 'pore.seed' model was computed for 4 locations (corresponding the 4 pores labelled 'left'):
End of explanation
"""
pn.add_model(propname='pore.seed',
model=op.models.geometry.pore_seed.random,
domain='right',
seed=0,
num_range=[0.5, 0.9])
"""
Explanation: The power of this new approach is really visible when we consider applying a model with different parameters to a different set of pores:
End of explanation
"""
print(pn)
"""
Explanation: Now the 'pore.seed' values exist on 8 locations.
End of explanation
"""
print(pn.models)
"""
Explanation: The new approach was made possible by changing how pore-scale models are stored on objects. Each object has a models attribute, which is a dict where the key corresponds to the property being calculated. So the model to compute 'pore.seed' is stored as pn.models['pore.seed']. The new @ notation makes it possible to store multiple models for 'pore.seed' that apply to different location on the same object. This can be seen below by printing the models attribute:
End of explanation
"""
pn.pores('left')
pn['pore.left'][[4, 5]] = True
del pn['pore.seed']
pn.run_model('pore.seed@left')
print(pn)
"""
Explanation: Appending @ to the model name creates a unique dictionary key. OpenPNM recognizes that the models in 'pore.seed@left' and 'pore.seed@right' both compute values of 'pore.seed', and directs the outputs of each function to the correct locations, which it can infer from the @right/left portion of the key.
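A rough sketch of the idea (illustrative only, not OpenPNM's actual internals): such a key can be split at the @ to recover the property name and the domain label:

```python
# Hypothetical illustration of how an '@' key decomposes -- not OpenPNM internals.
key = 'pore.seed@left'
propname, domain = key.split('@')
print(propname, domain)  # pore.seed left
```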
Other Advantages of the @ Syntax
There are many upsides to this approach, as will be demonstrated in the following sections.
Defining and Changing Subdomain Locations
It becomes trivial to define and redefine the locations of a domain. This simply requires changing where pn['pore.left'] is True. This is demonstrated as follows:
End of explanation
"""
del pn.models['pore.seed@left']
del pn.models['pore.seed@right']
pn.add_model(propname='pore.seed',
model=op.models.geometry.pore_seed.random)
pn.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.normal,
domain='left',
scale=0.1,
loc=1,
seeds='pore.seed')
pn.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.normal,
domain='right',
scale=2,
loc=10,
seeds='pore.seed')
"""
Explanation: It can now be observed that 'pore.seed' values are found in 6 locations because the domain labelled 'left' was expanded by 2 pores.
Mixing Full Domain and Subdomain Models
When defining two separate subdomains, the pore and throat sizes are often the only thing that is different. In V2, however, it was recommended practice to include ALL the additional models on each subdomain object as well, such as volume calculations. With the @ syntax, only models that actually differ between the domains need to be specifically dealt with.
This is demonstrated below by first deleting the individual 'pore.seed' models applied above, and replacing them with a single model that applies uniform values on all locations, then applying two different normal distributions to the 'left' and 'right' domains.
End of explanation
"""
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=[12, 6])
ax[0].plot(pn.Ps, pn['pore.seed'], 'o')
ax[0].set_ylabel('pore seed')
ax[0].set_xlabel('pore index')
ax[1].plot(pn.pores('left'), pn['pore.diameter@left'], 'o', label='pore.left')
ax[1].plot(pn.pores('right'), pn['pore.diameter@right'], 'o', label='pore.right')
ax[1].set_ylabel('pore diameter')
ax[1].set_xlabel('pore index')
ax[1].legend();
"""
Explanation: As can be seen in the figures below, the 'pore.seed' values are uniformly distributed on all locations, but 'pore.diameter' differs due to the different parameters used in each model.
End of explanation
"""
pn.add_model(propname='pore.volume',
model=op.models.geometry.pore_volume.sphere)
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=[6, 6])
ax.plot(pn.Ps, pn['pore.volume'], 'o')
ax.set_ylabel('pore volume')
ax.set_xlabel('pore index');
"""
Explanation: And now we can apply a model to the full domain that computes the pore volume, using values of pore diameter that were computed uniquely for each domain:
End of explanation
"""
Ps = pn.pores(['front', 'back'])
Ts = pn.find_neighbor_throats(Ps, asmask=True)
pn['throat.front'] = Ts
pn['throat.back'] = ~Ts
pn.add_model(propname='throat.diameter',
model=op.models.geometry.throat_size.from_neighbor_pores,
domain='front',
mode='min')
pn.add_model(propname='throat.diameter',
model=op.models.geometry.throat_size.from_neighbor_pores,
domain='back',
mode='max')
"""
Explanation: If an algorithm updates the labels then it effectively changes the domains! Re-running the models would automatically apply to the new locations! Not quite sure what this is useful for...maybe catalyst deactivation? Oooh, multiphase flow and percolation! The percolation algorithm could put True/False for occupancy, then a pore-scale model for hydraulic conductance that is applied to the 'pore.invaded' domain, and then it could update!
Mixing Many Subdomains of Different Shape
Because subdomains are now very abstract (actually just labels), it is possible to define multiple subdomains with different shapes and apply models to each. So far we have added 'pore.seed' and 'pore.diameter' models to the 'left' and 'right' pores. We can now freely add another set of models to the 'front' and 'back', even though they partially overlap:
End of explanation
"""
print(pn)
"""
Explanation: Now we can see that the throat diameters have been added to the network:
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/2567f25ca4c6b483c12d38184d7fe9d7/plot_decoding_xdawn_eeg.ipynb | bsd-3-clause | # Authors: Alexandre Barachant <alexandre.barachant@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs, EvokedArray
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
print(__doc__)
data_path = sample.data_path()
"""
Explanation: XDAWN Decoding From EEG data
ERP decoding with Xdawn ([1], [2]). For each event type, a set of
spatial Xdawn filters are trained and applied on the signal. Channels are
concatenated and rescaled to create feature vectors that will be fed into
a logistic regression.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
n_filter = 3
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=n_filter),
Vectorizer(),
MinMaxScaler(),
LogisticRegression(penalty='l1', solver='liblinear',
multi_class='auto'))
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv.split(epochs, labels):
clf.fit(epochs[train], labels[train])
preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
# Plot confusion matrix
fig, ax = plt.subplots(1)
im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
ax.set(title='Normalized Confusion matrix')
fig.colorbar(im)
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
fig.tight_layout()
ax.set(ylabel='True label', xlabel='Predicted label')
"""
Explanation: Set parameters and read data
End of explanation
"""
fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter,
figsize=(n_filter, len(event_id) * 2))
fitted_xdawn = clf.steps[0][1]
tmp_info = epochs.info.copy()
tmp_info['sfreq'] = 1.
for ii, cur_class in enumerate(sorted(event_id)):
cur_patterns = fitted_xdawn.patterns_[cur_class]
pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, tmp_info, tmin=0)
pattern_evoked.plot_topomap(
times=np.arange(n_filter),
time_format='Component %d' if ii == 0 else '', colorbar=False,
show_names=False, axes=axes[ii], show=False)
axes[ii, 0].set(ylabel=cur_class)
fig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1)
"""
Explanation: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization.
End of explanation
"""
|
ledeprogram/algorithms | class6/donow/m0nica_Class6_DoNow.ipynb | gpl-3.0 | import pandas as pd
import matplotlib.pyplot as plt
#DISPLAY MATPLOTLIB INLINE WITH THE NOTEBOOK AS OPPOSED TO POP UP WINDOW
%matplotlib inline
import statsmodels.formula.api as smf # package we'll be using for linear regression
"""
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
"""
df = pd.read_csv('../data/hanford.csv')
df.head()
"""
Explanation: 2. Read in the hanford.csv file
End of explanation
"""
df.corr()
"""
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
End of explanation
"""
lm = smf.ols(formula="Mortality~Exposure",data=df).fit() #notice the formula regresses Y on X (Y~X)
lm.params
lm.summary() # R squared is 0.858, which should be investigated!
intercept, slope = lm.params
ax = df.plot(kind='scatter', x='Exposure', y='Mortality', alpha=0.5)
ax.set_title('Cancer Mortality Rates Based on Exposure')
ax.set_xlabel('Index of Exposure')
ax.set_ylabel('Cancer Mortality per 100,000 man-years')
"""
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
"""
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red") #we create the best fit line from the values in the fit model
"""
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
"""
R_squared = 0.858
"""
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
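As an aside, the coefficient of determination doesn't have to be hard-coded: the fitted statsmodels result should expose it as lm.rsquared. The underlying calculation can also be sketched by hand on made-up numbers (the data below are illustrative, not the Hanford values):

```python
# Sketch of computing R^2 by hand on toy data (numbers invented for illustration).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)   # least-squares line
pred = slope * x + intercept
ss_res = np.sum((y - pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 3))
```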
End of explanation
"""
index_ex = 10
plt.plot(index_ex,slope*index_ex+intercept,"-",color="red") #we create the best fit line from the values in the fit model
# y = mx + b
intercept = 114.7156
slope * 10 + 114.7156
def predicting_mortality_rate(exposure):
return slope * exposure + intercept
predicting_mortality_rate(10)
"""
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation
"""
|
AC209ConsumerConfidence/AC209ConsumerConfidence.github.io | NYTimes_API_Final.ipynb | gpl-3.0 | from nytimesarticle import articleAPI
api = articleAPI('ca372b5c9318406780fe9ebef28e96a1')
"""
Explanation: <hr width=80%>
<center>Obtaining the Data</center>
<hr width=80%>
Consumer Confidence Index
New York Times Articles
Article Search API
Peculiarities of the API
Downloading the Data
Working with the Files
Consumer Confidence Index
The consumer confidence index (CCI) is based on survey results of real consumers. They are asked their opinions of current and future economic conditions as well as about their personal economic situation. These survey questions are encoded and normalized to a baseline of 100 coming from the 1985 results. These results are obtained on a monthly basis by the Organisation for Economic Co-operation and Development and can be downloaded directly as a CSV from https://data.oecd.org/leadind/consumer-confidence-index-cci.htm.
New York Times Articles
Article Search API
The New York Times Article Search API allows for searching and obtaining headlines and lead paragraphs of articles dating back to 1851. Along with each article, there is metadata like the date it was published and the section in which it appeared. There is definitely some possibility that not all articles make it into the database, but an inspection of modern articles finds on the order of tens of articles per day, which seems reasonable. The API call returns the data as JSON, which can be used as such or transformed into CSV.
To access the API, one needs to obtain an API key from https://developer.nytimes.com/signup and install the package using:
python
! pip install nytimesarticle
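A single call looks roughly like the comment below (the exact parameters are assumptions based on the calls used later in this notebook); the mock response simply illustrates the JSON layout the downloader relies on:

```python
# A real call would be something like (requires a valid key and network access):
#   articles = api.search(q='economy', begin_date=19900101, end_date=19900107)
# Mock response illustrating the JSON structure navigated in the downloader:
mock_resp = {
    'response': {
        'meta': {'hits': 1},
        'docs': [{
            '_id': 'abc123',                       # hypothetical article id
            'pub_date': '1990-01-02T00:00:00Z',
            'headline': {'main': 'Markets Rally'},
            'lead_paragraph': 'Stocks rose sharply today.',
        }],
    }
}
numhits = mock_resp['response']['meta']['hits']
first = mock_resp['response']['docs'][0]
print(numhits, first['headline']['main'])  # 1 Markets Rally
```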
End of explanation
"""
def downloadToFile(startdate, enddate, filename):
"""
Makes API calls to extract id, publication date, headline, and lead paragraph from NY Times articles in the date range.
Then, saves the data to a local file in csv format.
startdate: start of date range to extract (yyyymmdd)
enddate: end of date range to extract (yyyymmdd)
filename: csv file to create and append to
"""
startdate = datetime.datetime.strptime(str(startdate), '%Y%m%d')
enddate = datetime.datetime.strptime(str(enddate), '%Y%m%d')
sliceStart = startdate
while (sliceStart<enddate):
leads = []
ids = []
dates = []
headlines = []
sliceEnd = min(sliceStart + datetime.timedelta(weeks=1), enddate)
sliceStartInt = int(sliceStart.strftime('%Y%m%d'))
sliceEndInt = int(sliceEnd.strftime('%Y%m%d'))
print 'Downloading from {} to {}'.format(sliceStartInt, sliceEndInt)
while True:
try:
numhits = api.search(fl = ['_id'],begin_date = sliceStartInt, end_date=sliceEndInt,fq = {'section_name':'Business'}, page=1)['response']['meta']['hits']
time.sleep(1)
break
except:
print 'JSON error avoided'
pages = int(math.ceil(float(numhits)/10))
time.sleep(1)
pbar2 = ProgressBar(pages)
print '{} pages to download'.format(pages) # Note that you can't download past page number 100
for page in range(1,min(pages+1,100)):
while True:
try:
articles = api.search(fl= ['_id','headline','lead_paragraph','pub_date'], begin_date = sliceStartInt, end_date=sliceEndInt,fq = {'section_name':'Business'}, page=page)
time.sleep(1)
break
except:
print 'JSON error avoided'
pbar2.increment()
for i in articles['response']['docs']:
if (i['lead_paragraph'] is not None) and (i['headline'] != []):
headlines.append(i['headline']['main'])
leads.append(i['lead_paragraph'])
ids.append(i['_id'])
dates.append(i['pub_date'])
pbar2.finish()
sliceStart = sliceEnd
zipped = zip(ids, dates, headlines, leads)
if zipped:
with open(filename, "a") as f:
writer = csv.writer(f)
for line in zipped:
writer.writerow([unicode(s).encode("utf-8") for s in line])
downloadToFile(19900101, 19900115, 'Sample_Output.csv')
"""
Explanation: Peculiarities of the API
The first thing to note is the usage limits for the API. Calls are limited to 1000 per day and 5 per second. Therefore, we need to make sure that our function sleeps between each call. The trickier issue with the API is that it will only return 100 pages of results from any given search. This means that searching for a year-long window will have too many results and you will just get the first few weeks, which fill the 100 pages. For this reason, we iterate through search windows of one week and monitor the number of pages found to make sure that it never exceeds 100 for any given search.
Downloading the Data
We will save each year of data as a separate CSV. The steps for downloading the data to CSV are as follows.
1. Denote the first week long interval to search
* Make an API call to search for articles in that week from the business section
* Check how many pages are returned from the search
* Iterate through the pages and articles in the page
* Extract data from JSON and put into CSV format
* After getting one week of data as a CSV, append to the file
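The week-long windowing in step 1 can be sketched with just the standard library (the dates below are arbitrary examples):

```python
import datetime

# Split an arbitrary date range into week-long (or shorter, at the end) windows.
start = datetime.datetime(1990, 1, 1)
end = datetime.datetime(1990, 1, 20)

windows = []
cursor = start
while cursor < end:
    nxt = min(cursor + datetime.timedelta(weeks=1), end)
    windows.append((int(cursor.strftime('%Y%m%d')), int(nxt.strftime('%Y%m%d'))))
    cursor = nxt

print(windows)  # [(19900101, 19900108), (19900108, 19900115), (19900115, 19900120)]
```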
End of explanation
"""
all_data_list = []
for year in range(1990,1992):
data = pd.read_csv('{}_Output.csv'.format(year), header=None)
all_data_list.append(data) # list of dataframes
data = pd.concat(all_data_list, axis=0)
data.columns = ['id','date','headline', 'lead']
data.head()
"""
Explanation: Working with the Files
Let's just check what we have in the files now. We can iterate over the yearly CSV files to make a dataframe with all of the data.
End of explanation
"""
|
autism-research-centre/Autism-Gradients | .ipynb_checkpoints/Gradients-checkpoint.ipynb | gpl-3.0 | ## lets start with some actual script
# import useful things
import numpy as np
import os
import nibabel as nib
from sklearn.metrics import pairwise_distances
# get a list of inputs
from os import listdir
from os.path import isfile, join
import os.path
# little helper function to return the proper filelist with the full path
def listdir_nohidden(path):
for f in os.listdir(path):
if not f.startswith('.'):
yield f
def listdir_fullpath(d):
return [os.path.join(d, f) for f in listdir_nohidden(d)]
# and create a filelist
onlyfiles = listdir_fullpath("data/Outputs/cpac/filt_noglobal/rois_cc400")
"""
Explanation: Created on Mon Dec 01 15:05:56 2016
@author: Richard
Required packages:
pySTATIS
numpy
mapalign
nibabel
sklearn
cluster_roi
suggested file structure:
main/
main/cpac/filt_noglobal/rois_cc400/ > for data files
main/Affn/ > for adjacency matrices
main/Embs/ > for diffusion embedding files
download ABIDE data:
http://preprocessed-connectomes-project.org/abide/download.html
python download_abide_preproc.py -d rois_cc400 -p cpac -s filt_noglobal -o data/ -x 'M' -gt 18 -lt 55
End of explanation
"""
# check to see which files contains nodes with missing information
missingarray = []
for i in onlyfiles:
# load timeseries
filename = i
ts_raw = np.loadtxt(filename)
# check zero columns
missingn = np.where(~ts_raw.any(axis=0))[0]
missingarray.append(missingn)
# select the ones that don't have missing data
ids = np.where([len(i) == 0 for i in missingarray])[0]
selected = [onlyfiles[i] for i in ids]
# could be useful to have one without pathnames later on
selected2 = [os.path.basename(onlyfiles[i]) for i in ids]
print(len(selected))
"""
Explanation: Check all files to see if any have missing nodal information and create a selection list based on the ones that are 100% complete.
End of explanation
"""
# run the diffusion embedding
from mapalign import embed
for i in selected:
# load timeseries
#print i
filename = i
ts = np.loadtxt(filename)
# create correlation matrix
dcon = np.corrcoef(ts.T)
dcon[np.isnan(dcon)] = 0
# Get number of nodes
N = dcon.shape[0]
# threshold
perc = np.array([np.percentile(x, 90) for x in dcon])
for ii in range(dcon.shape[0]):
#print "Row %d" % ii
dcon[ii, dcon[ii,:] < perc[ii]] = 0
    # If there are any negative values left then set them to zero
dcon[dcon < 0] = 0
    # compute the pairwise cosine affinity (1 - cosine distance)
aff = 1 - pairwise_distances(dcon, metric = 'cosine')
# start saving
savename = os.path.basename(filename)
np.save("./data/Outputs/Affn/"+savename+"_cosine_affinity.npy", aff)
# get the diffusion maps
emb, res = embed.compute_diffusion_map(aff, alpha = 0.5)
# Save results
np.save("./data/Outputs/Embs/"+savename+"_embedding_dense_emb.npy", emb)
np.save("./data/Outputs/Embs/"+savename+"_embedding_dense_res.npy", res)
X = res['vectors']
X = (X.T/X[:,0]).T[:,1:]
np.save("./data/Outputs/Embs/"+savename+"_embedding_dense_res_veconly.npy", X) #store vectors only
"""
Explanation: run the diffusion embedding
End of explanation
"""
%%capture
from pySTATIS import statis
#load vectors
names = list(xrange(392))
X = [np.load("./data/Outputs/Embs/"+ os.path.basename(filename)+"_embedding_dense_res_veconly.npy") for filename in selected2]
out = statis.statis(X, names, fname='statis_results.npy')
statis.project_back(X, out['Q'], path = "./data/Outputs/Regs/",fnames = selected2)
np.save("Mean_Vec.npy",out['F'])
# saving everything in one dump
import pickle
with open('output.pickle' ,'w') as f:
pickle.dump([selected, out],f)
"""
Explanation: Run Statis to back-project the grouped embeddings
End of explanation
"""
%matplotlib inline
import matplotlib.pylab as plt
import nilearn
import nilearn.plotting
import numpy as np
import nibabel as nib
def rebuild_nii(num):
data = np.load('Mean_Vec.npy')
a = data[:,num].copy()
nim = nib.load('cc400_roi_atlas.nii')
imdat=nim.get_data()
imdat_new = imdat.copy()
for n, i in enumerate(np.unique(imdat)):
if i != 0:
imdat_new[imdat == i] = a[n-1] * 10000 # scaling factor. Could also try to get float values in nifti...
nim_out = nib.Nifti1Image(imdat_new, nim.get_affine(), nim.get_header())
nim_out.set_data_dtype('float32')
# to save:
nim_out.to_filename('Gradient_'+ str(num) +'_res.nii')
nilearn.plotting.plot_epi(nim_out)
return(nim_out)
for i in range(10):
nims = rebuild_nii(i)
"""
Explanation: plotting
plot to surface for inspection
this cell is only necessary for plotting below
End of explanation
"""
import pandas as pd
# read in csv
df_phen = pd.read_csv('Phenotypic_V1_0b_preprocessed1.csv')
# add a column that matches the filename
for i in df_phen:
df_phen['filename'] = join(df_phen['FILE_ID']+"_rois_cc400.1D")
df_phen['filenamelpy'] = join(df_phen['FILE_ID']+"_rois_cc400.1D.npy")
df_phen['selec'] = np.where(df_phen['filename'].isin((selected2)), 1, 0)
"""
Explanation: Output everything to a combined CSV file
End of explanation
"""
from scipy import stats
grdnt_slope = []
for i in selected2:
# load gradients
# print i
filename = i
grdnt = np.load("./data/Outputs/Regs/" + filename + ".npy")
# do we need a specific ordering of the nodes??
y = list(xrange(392))
temp = []
for ii in range(10):
x = sorted(grdnt[:,ii]) # just sort in ascending order?
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
temp.append(slope)
grdnt_slope.append(temp)
grdnt_slope = np.array(grdnt_slope)
# make it into a dataframe
data_grdnt = pd.DataFrame(grdnt_slope)
data_grdnt['file'] = selected2
"""
Explanation: Compare the slopes across subjects
End of explanation
"""
data = df_phen.loc[df_phen["selec"] == 1]
data['filenamelow'] = data['filename'].str.lower()
data = data.sort(['filenamelow'])
output = data.merge(data_grdnt, left_on='filename',right_on='file',how='outer')
output.to_csv('Combined.csv', sep='\t')
"""
Explanation: And write them to a combined CSV file
End of explanation
"""
## boxplots comparing the gradient slope distributions between groups
%matplotlib inline
import numpy as np
import matplotlib as mpl
## agg backend is used to create plot as a .png file
mpl.use('agg')
import matplotlib.pyplot as plt
df = pd.DataFrame(output, columns = ['DX_GROUP', 0,1,2,3,4,5,6,7,8,9])
ASC = df['DX_GROUP'] == 2
NT = df['DX_GROUP'] == 1
G1 = df[ASC]
G2 = df[NT]
# some plotting options
fs = 10 # fontsize
flierprops = dict(marker='o', markerfacecolor='green', markersize=12,
linestyle='none')
## combine the groups collections into a list
Grd0 = [G1[0], G2[0]]
Grd1 = [G1[1], G2[1]]
Grd2 = [G1[2], G2[2]]
Grd3 = [G1[3], G2[3]]
Grd4 = [G1[4], G2[4]]
Grd5 = [G1[5], G2[5]]
Grd6 = [G1[6], G2[6]]
Grd7 = [G1[7], G2[7]]
Grd8 = [G1[8], G2[8]]
Grd9 = [G1[9], G2[9]]
fig, axes = plt.subplots(nrows=2, ncols=5, figsize=(6, 6), sharey=True)
axes[0, 0].boxplot(Grd0, patch_artist=True)
axes[0, 0].set_title('G0', fontsize=fs)
axes[0, 1].boxplot(Grd1, patch_artist=True)
axes[0, 1].set_title('G1', fontsize=fs)
axes[0, 2].boxplot(Grd2, patch_artist=True)
axes[0, 2].set_title('G2', fontsize=fs)
axes[0, 3].boxplot(Grd3, patch_artist=True)
axes[0, 3].set_title('G3', fontsize=fs)
axes[0, 4].boxplot(Grd4, patch_artist=True)
axes[0, 4].set_title('G4', fontsize=fs)
axes[1, 0].boxplot(Grd5, patch_artist=True)
axes[1, 0].set_title('G5', fontsize=fs)
axes[1, 1].boxplot(Grd6, patch_artist=True)
axes[1, 1].set_title('G6', fontsize=fs)
axes[1, 2].boxplot(Grd7, patch_artist=True)
axes[1, 2].set_title('G7', fontsize=fs)
axes[1, 3].boxplot(Grd8, patch_artist=True)
axes[1, 3].set_title('G8', fontsize=fs)
axes[1, 4].boxplot(Grd9, patch_artist=True)
axes[1, 4].set_title('G9', fontsize=fs)
fig.suptitle("Gradient Slopes")
fig.subplots_adjust(hspace=0.4)
"""
Explanation: Plot some stuff
End of explanation
"""
def exact_mc_perm_test(xs, ys, nmc):
n, k = len(xs), 0
diff = np.abs(np.mean(xs) - np.mean(ys))
zs = np.concatenate([xs, ys])
for j in range(nmc):
np.random.shuffle(zs)
k += diff < np.abs(np.mean(zs[:n]) - np.mean(zs[n:]))
    return float(k) / nmc  # float() avoids silent integer division under Python 2
print(exact_mc_perm_test(G1[0],G2[0],1000))
print(exact_mc_perm_test(G1[1],G2[1],1000))
"""
Explanation: Permutations
End of explanation
"""
%matplotlib inline
# this cell is only necessary for plotting below
import matplotlib.pylab as plt
import nilearn
import nilearn.plotting
import numpy as np
import nibabel as nib
from os import listdir
from os.path import isfile, join
def rebuild_nii(num):
data = np.load('Mean_Vec.npy')
a = data[:,num].copy()
nim = nib.load('cc400_roi_atlas.nii')
imdat=nim.get_data()
imdat_new = imdat.copy()
for n, i in enumerate(np.unique(imdat)):
if i != 0:
imdat_new[imdat == i] = a[n-1] * 100000 # scaling factor. Could also try to get float values in nifti...
nim_out = nib.Nifti1Image(imdat_new, nim.get_affine(), nim.get_header())
nim_out.set_data_dtype('float32')
# to save:
# nim_out.to_filename('res.nii')
nilearn.plotting.plot_epi(nim_out)
def rebuild_nii_individ(num):
onlyfiles = [f for f in listdir_nohidden('./data/Outputs/Regs/') if isfile(join('./data/Outputs/Regs/', f))]
for index in range(178):
sub = onlyfiles[index]
print(sub)
data = np.load('./data/Outputs/Regs/%s' % sub)
a = data[:,num].astype('float32')
nim = nib.load('cc400_roi_atlas.nii')
imdat = nim.get_data().astype('float32')
#print(np.unique(a))
for i in np.unique(imdat):
#a[a>0.1] = 0
#a[a<-0.1] = 0
if i != 0 and i < 392:
imdat[imdat == i] = a[int(i)-1] # scaling factor. Could also try to get float values in nifti...
elif i >= 392:
imdat[imdat == i] = np.nan
nim_out = nib.Nifti1Image(imdat, nim.get_affine(), nim.get_header())
nim_out.set_data_dtype('float32')
# to save:
nim_out.to_filename(os.getcwd() + '/data/Outputs/individual/' + 'res' + sub + str(num) + '.nii')
print(os.getcwd())
# nilearn.plotting.plot_epi(nim_out)
"""
Explanation: Some quality control
End of explanation
"""
nims = rebuild_nii_individ(0)
!fslview resCaltech_0051474_rois_cc400.1D.npy.nii
"""
Explanation: Check all individual images
End of explanation
"""
|
phoebe-project/phoebe2-docs | development/examples/minimal_synthetic.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
"""
Explanation: Adding Datasets
Now we'll create an empty lc dataset:
End of explanation
"""
b.run_compute(irrad_method='none')
"""
Explanation: Running Compute
Now we'll compute synthetics at the times provided using the default options
End of explanation
"""
afig, mplfig = b['mylc@model'].plot(show=True)
afig, mplfig = b['mylc@model'].plot(x='phases', show=True)
"""
Explanation: Plotting
Now we can simply plot the resulting synthetic light curve.
End of explanation
"""
|
tpin3694/tpin3694.github.io | statistics/pearsons_correlation_coefficient.ipynb | mit | import statistics as stats
"""
Explanation: Title: Pearson's Correlation Coefficient
Slug: pearsons_correlation_coefficient
Summary: Pearson's Correlation Coefficient in Python.
Date: 2016-02-08 12:00
Category: Statistics
Tags: Basics
Authors: Chris Albon
Based on this StackOverflow answer by cbare.
Preliminaries
End of explanation
"""
x = [1,2,3,4,5,6,7,8,9]
y = [2,1,2,4.5,7,6.5,6,9,9.5]
"""
Explanation: Create Data
End of explanation
"""
# Create a function
def pearson(x,y):
# Create n, the number of observations in the data
n = len(x)
# Create lists to store the standard scores
standard_score_x = []
standard_score_y = []
# Calculate the mean of x
mean_x = stats.mean(x)
# Calculate the standard deviation of x
standard_deviation_x = stats.stdev(x)
# Calculate the mean of y
mean_y = stats.mean(y)
# Calculate the standard deviation of y
standard_deviation_y = stats.stdev(y)
# For each observation in x
for observation in x:
# Calculate the standard score of x
standard_score_x.append((observation - mean_x)/standard_deviation_x)
# For each observation in y
for observation in y:
# Calculate the standard score of y
standard_score_y.append((observation - mean_y)/standard_deviation_y)
# Multiply the standard scores together, sum them, then divide by n-1, and return that value
return (sum([i*j for i,j in zip(standard_score_x, standard_score_y)]))/(n-1)
# Show Pearson's Correlation Coefficient
pearson(x,y)
"""
Explanation: Calculate Pearson's Correlation Coefficient
There are a number of equivalent ways to express Pearson's correlation coefficient (also called Pearson's r). Here is one.
$$r={\frac {1}{n-1}}\sum _{i=1}^{n}\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)\left({\frac {y_{i}-{\bar {y}}}{s_{y}}}\right)$$
where $s_{x}$ and $s_{y}$ are the sample standard deviations of $x$ and $y$, and $\left({\frac {x_{i}-{\bar {x}}}{s_{x}}}\right)$ is the standard score of observation $i$ of $x$ (and likewise for $y$).
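As a quick cross-check of the hand-rolled `pearson` function above, the same value can be computed with NumPy's built-in `corrcoef` (a sketch; `np.corrcoef` returns the full 2x2 correlation matrix, so we take an off-diagonal entry):

```python
import numpy as np

x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [2, 1, 2, 4.5, 7, 6.5, 6, 9, 9.5]

# np.corrcoef returns the 2x2 correlation matrix; r is an off-diagonal entry
r = np.corrcoef(x, y)[0, 1]
print(round(r, 4))
```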
End of explanation
"""
|
jepegit/cellpy | dev_utils/tab_completion.ipynb | mit | df1 = pd.DataFrame(data=np.random.rand(5, 3), columns=["a b c".split()])
df2 = pd.DataFrame(
data=np.random.rand(5, 3), columns=["current voltage capacity".split()]
)
df3 = pd.DataFrame(data=np.random.rand(5, 3), columns=["d e f".split()])
df_dict = {"first": df1, "second": df2, "third": df3}
"""
Explanation: Creating some dataframes and collecting them into a dictionary
End of explanation
"""
from collections import namedtuple
ExperimentsTuple = namedtuple("MyTuple", sorted(df_dict))
current_experiments = ExperimentsTuple(**df_dict)
current_experiments.second.current
"""
Explanation: Trying some solutions that might help in getting tab-completion
End of explanation
"""
c_experiments = namedtuple("experiments", sorted(df_dict))(**df_dict)
c_experiments.first.a
"""
Explanation: One-liner...
End of explanation
"""
import box
df_box = box.Box(df_dict)
df_box.first
df_box["first"]
df_box["new_df"] = pd.DataFrame(
    data=np.random.rand(5, 3), columns="one two three".split()
)
df_box.new_df
df_box["20190101_FC_data_01"] = pd.DataFrame(
    data=np.random.rand(5, 3), columns="foo bar baz".split()
)
"""
Explanation: Using box
End of explanation
"""
df_dict["first"]
df_dict["20190101_FC_data_01"] = pd.DataFrame(
    data=np.random.rand(5, 3), columns="foo bar baz".split()
)
"""
Explanation: However, starting keys with integers does not seem to work when using box for tab completion.
What I have learned
Tab completion can probably be easiest implemented by creating namedtuple if I dont want to depend on other packages.
However, using python box gives the added benefit of also allowing dictionary-type look-up in addtion to . based.
By the way, Jupyter can tab-complete dictionary keys also.
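The namedtuple approach can be wrapped in a small reusable helper (a sketch; the name `to_namespace` is made up for illustration):

```python
from collections import namedtuple

def to_namespace(d, typename="Experiments"):
    """Wrap a dict in a namedtuple so its keys tab-complete as attributes."""
    return namedtuple(typename, sorted(d))(**d)

exps = to_namespace({"first": 1, "second": 2})
print(exps.first, exps.second)
```

Note that namedtuple enforces the same restriction seen above with box: field names must be valid Python identifiers, so keys like `20190101_FC_data_01` are rejected here too.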
End of explanation
"""
cell_id = "20190101_FC_data_01"
pre = ""
new_id = "".join((pre, cell_id))
new_id
new_id.split("cell_")
# note: str.lstrip strips any of the characters "c", "e", "l", "_" from the
# left, not the literal prefix "cell_"
new_id.lstrip("cell_")
"""
Explanation: Conclusion
Let's go with python box.
End of explanation
"""
|
impactlab/eemeter | docs/datastore_basic_usage.ipynb | mit | # library imports
import pandas as pd
import requests
import pytz
"""
Explanation: Datastore basic usage
The datastore is a tool for using the eemeter which automates
and helps to scales some of the most frequent tasks accomplished by the
eemeter. These tasks include data loading and storage, meter
running, and result storage and inspection. It puts a REST API
in front of the eemeter.
Note:
For small and large datasets, the ETL toolkit exists to ease and
speed up this process. That toolkit relies upon the API described
in this tutorial and in the datastore API documentation. For the
purpose of this tutorial, we will not be using the ETL toolkit.
For more information on the ETL toolkit, see its API documentation.
Loading data
For this tutorial, we will use the python requests package to make
requests to the datastore. We will use the same dataset used in the
eemeter tutorial, available for download here:
project data CSV
energy data CSV
This tutorial is also available as a jupyter notebook.
End of explanation
"""
base_url = "http://0.0.0.0:8000"
headers = {"Authorization": "Bearer tokstr"}
"""
Explanation: If you followed the datastore development setup instructions, you will
already have run the command to create a superuser and access credentials.
python manage.py dev_seed
If you haven't already done so, do so now. The dev_seed command
creates a demo admin user and a sample project.
username: demo,
password: demo-password,
API access token: tokstr.
project owner: 1
Ensure that your development server is running locally on port 8000 before continuing.
python manage.py runserver
Each request will include an Authorization header
Authorization: Bearer tokstr
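One optional convenience (not required by the tutorial, which passes `headers` explicitly on every call) is to set the header once on a `requests.Session` so each request inherits it:

```python
import requests

# a session carries default headers across all requests made through it
session = requests.Session()
session.headers.update({"Authorization": "Bearer tokstr"})
# subsequent calls such as session.get(url) now send the header automatically
```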
End of explanation
"""
url = base_url + "/api/v1/projects/"
projects = requests.get(url, headers=headers).json()
projects
"""
Explanation: Let's test the API by requesting a list of projects in the datastore. Since the dev_seed command creates a sample project, this will return a response showing that project.
End of explanation
"""
url = base_url + "/api/v1/consumption_metadatas/?summary=True&projects={}".format(projects[0]['id'])
consumption_metadatas = requests.get(url, headers=headers).json()
consumption_metadatas[0]
"""
Explanation: Although we'll delete this one in a moment, we can first explore a
bit to get a feel for the API. Then we'll create a project of our own.
Energy trace data will be associated with this project by foreign key.
It is organized into time series by trace_id, and the following request
will show all traces associated with a particular project. Note the
difference between the 'id' field and the 'project_id' field.
The 'project_id' field is the unique label that was associated with
it by an external source; the 'id' field is the database table primary
key.
There are two tables used to store energy data:
Consumption metadata:
project_id: foreign key of the project this belongs to
trace_id: unique id of the trace
interpretation: the fuel (electricity/natural gas) and the direction and type of flow (net/total consumption, supply, generation)
unit: physical units of the values provided in records
Consumption records:
metadata_id: foreign key to consumption metadata
start: the start date and time of record
value: the energy reading as reported
estimated: boolean indicating whether or not the reading was estimated
For consumption records, the end is implicit in the start of the next temporal record. The last record should be null (if it's not, it will be treated as such).
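The implicit-end convention can be made concrete with a small sketch (the record values here are made up): zipping the record list against itself shifted by one pairs each start with the next record's start.

```python
# hypothetical records: each end is implied by the next record's start,
# and the final record carries a null value
records = [
    {"start": "2014-01-01T00:00:00Z", "value": 25.0},
    {"start": "2014-01-02T00:00:00Z", "value": 26.5},
    {"start": "2014-01-03T00:00:00Z", "value": None},
]

intervals = [
    (a["start"], b["start"], a["value"])
    for a, b in zip(records, records[1:])
]
print(intervals[0])
```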
Let's inspect the traces associated with this project. We can do so
using the project primary key 'id' as a filter (we use the summary flag so that we
don't pull every record):
End of explanation
"""
url = base_url + "/api/v1/consumption_records/?metadata={}".format(consumption_metadatas[0]['id'])
consumption_records = requests.get(url, headers=headers).json()
consumption_records[:3]
"""
Explanation: We can also query for consumption records by metadata primary key.
End of explanation
"""
url = base_url + "/api/v1/projects/{}/".format(projects[0]['id'])
requests.delete(url, headers=headers)
project_data = pd.read_csv('sample-project-data.csv',
parse_dates=['retrofit_start_date', 'retrofit_end_date']).iloc[0]
project_data
data = {
"project_id": project_data.project_id,
"zipcode": str(project_data.zipcode),
"baseline_period_end": pytz.UTC.localize(project_data.retrofit_start_date).isoformat(),
"reporting_period_start": pytz.UTC.localize(project_data.retrofit_end_date).isoformat(),
"project_owner": 1,
}
print(data)
url = base_url + "/api/v1/projects/"
new_project = requests.post(url, json=data, headers=headers).json()
new_project
"""
Explanation: Now we'll delete the project that was created by the dev_seed command and make one of our own.
End of explanation
"""
url = base_url + "/api/v1/projects/"
requests.post(url, json=data, headers=headers).json()
"""
Explanation: If you try to post another project with the same project_id, you'll get an error message.
End of explanation
"""
data = [
{
"project_id": project_data.project_id,
"zipcode": str(project_data.zipcode),
"baseline_period_end": pytz.UTC.localize(project_data.retrofit_start_date).isoformat(),
"reporting_period_start": pytz.UTC.localize(project_data.retrofit_end_date).isoformat(),
"project_owner_id": 1,
}
]
print(data)
url = base_url + "/api/v1/projects/sync/"
requests.post(url, json=data, headers=headers).json()
"""
Explanation: However, there is another endpoint you can hit to sync the project - update it if it exists, create it if it doesn't. This endpoint works almost the same way, but expects a list of data in a slightly different format:
End of explanation
"""
energy_data = pd.read_csv('sample-energy-data_project-ABC_zipcode-50321.csv',
parse_dates=['date'], dtype={'zipcode': str})
energy_data.head()
"""
Explanation: Now we can give this project some consumption data. Energy trace data is loaded from the sample CSV:
End of explanation
"""
interpretation_mapping = {"electricity": "E_C_S"}
data = [
{
"project_project_id": energy_data.iloc[0]["project_id"],
"interpretation": interpretation_mapping[energy_data.iloc[0]["fuel"]],
"unit": energy_data.iloc[0]["unit"].upper(),
"label": energy_data.iloc[0]["trace_id"].upper()
}
]
data
url = base_url + "/api/v1/consumption_metadatas/sync/"
consumption_metadatas = requests.post(url, json=data, headers=headers).json()
consumption_metadatas
"""
Explanation: Then we'll use the sync endpoint for consumption metadata, which will create a new record or update an existing record. We have one trace here:
End of explanation
"""
data = [{
"metadata_id": consumption_metadatas[0]['id'],
"start": pytz.UTC.localize(row.date.to_datetime()).isoformat(),
"value": row.value,
"estimated": row.estimated,
} for _, row in energy_data.iterrows()]
data[:3]
url = base_url + "/api/v1/consumption_records/sync2/"
consumption_records = requests.post(url, json=data, headers=headers)
consumption_records.text
"""
Explanation: Let's turn that CSV into records.
End of explanation
"""
url = base_url + "/api/v1/consumption_records/?metadata={}".format(consumption_metadatas[0]['id'])
consumption_records = requests.get(url, json=data, headers=headers).json()
consumption_records[:3]
"""
Explanation: We can verify that these records were created by querying by consumption metadata id.
End of explanation
"""
data = {
"project": new_project['id'],
"meter_class": "EnergyEfficiencyMeter",
"meter_settings": {}
}
data
url = base_url + "/api/v1/project_runs/"
project_run = requests.post(url, json=data, headers=headers).json()
project_run
"""
Explanation: We now have a simple project with a single trace of data. Now we will move to running a meter on that project:
Running meters
To run a meter, make a request to create a "project run". This request will start a job that runs a meter and saves its results.
There are a few components to this request.
"project": the primary key of the project.
"meter_class": the name of the class of the eemeter meter to run.
"meter_settings": any special settings to send to the meter class.
End of explanation
"""
url = base_url + "/api/v1/project_runs/{}/".format(project_run['id'])
project_runs = requests.get(url, headers=headers).json()
project_runs
"""
Explanation: This creates a task to run the meter on the indicated project.
These results can be viewed by requesting the project run by primary key - as it completes, its status will change to SUCCESS or FAILED. If FAILED, it will indicate a traceback of the error that occured. While it runs, its status will be RUNNING; before it has started running, its status will be PENDING.
End of explanation
"""
url = base_url + "/api/v1/project_results/"
project_results = requests.get(url, headers=headers).json()
project_results
"""
Explanation: If this project run succeeded, we can inspect its results.
Inspecting results
Results all fall under the ProjectResult API
End of explanation
"""
|
CRPropa/CRPropa3 | doc/pages/example_notebooks/advanced/CustomObserver.v4.ipynb | gpl-3.0 | import crpropa
class ObserverPlane(crpropa.ObserverFeature):
"""
Detects all particles after crossing the plane. Defined by position (any
point in the plane) and vectors v1 and v2.
"""
def __init__(self, position, v1, v2):
crpropa.ObserverFeature.__init__(self)
# calculate three points of a plane
self.__v1 = v1
self.__v2 = v2
self.__x0 = position
def distanceToPlane(self, X):
"""
Always positive for one side of plane and negative for the other side.
"""
dX = np.asarray([X.x - self.__x0[0], X.y - self.__x0[1], X.z - self.__x0[2]])
V = np.linalg.det([self.__v1, self.__v2, dX])
return V
def checkDetection(self, candidate):
currentDistance = self.distanceToPlane(candidate.current.getPosition())
previousDistance = self.distanceToPlane(candidate.previous.getPosition())
candidate.limitNextStep(abs(currentDistance))
if np.sign(currentDistance) == np.sign(previousDistance):
return crpropa.NOTHING
else:
return crpropa.DETECTED
"""
Explanation: Custom Observer
This example defines a plane observer in Python.
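The crossing test used by `distanceToPlane` is plain linear algebra: the determinant `det([v1, v2, X - x0])` is proportional to the signed distance from the plane, so its sign flips when a particle moves from one side to the other. A standalone sketch without CRPropa:

```python
import numpy as np

def side_of_plane(x0, v1, v2, p):
    # signed volume spanned by v1, v2 and (p - x0); its sign identifies
    # which half-space contains p
    return np.linalg.det(np.array([v1, v2, np.asarray(p) - np.asarray(x0)]))

x0 = [0.0, 0.0, 0.0]
v1 = [0.0, 1.0, 0.0]
v2 = [0.0, 0.0, 1.0]  # the plane is the y-z plane, x = 0

before = side_of_plane(x0, v1, v2, [1.0, 0.5, 0.5])
after = side_of_plane(x0, v1, v2, [-1.0, 0.5, 0.5])
print(np.sign(before) != np.sign(after))  # a sign flip marks a crossing
```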
End of explanation
"""
from crpropa import Mpc, nG, EeV
import numpy as np
turbSpectrum = crpropa.SimpleTurbulenceSpectrum(Brms=1*nG, lMin = 2*Mpc, lMax=5*Mpc, sIndex=5./3.)
gridprops = crpropa.GridProperties(crpropa.Vector3d(0), 128, 1 * Mpc)
BField = crpropa.SimpleGridTurbulence(turbSpectrum, gridprops)
m = crpropa.ModuleList()
m.add(crpropa.PropagationCK(BField, 1e-4, 0.1 * Mpc, 5 * Mpc))
m.add(crpropa.MaximumTrajectoryLength(25 * Mpc))
# Observer
out = crpropa.TextOutput("sheet.txt")
o = crpropa.Observer()
# The Observer feature has to be created outside of the class attribute
# o.add(ObserverPlane(...)) will not work for custom python modules
plo = ObserverPlane(np.asarray([0., 0, 0]) * Mpc, np.asarray([0., 1., 0.]) * Mpc, np.asarray([0., 0., 1.]) * Mpc)
o.add(plo)
o.setDeactivateOnDetection(False)
o.onDetection(out)
m.add(o)
# source setup
source = crpropa.Source()
source.add(crpropa.SourcePosition(crpropa.Vector3d(0, 0, 0) * Mpc))
source.add(crpropa.SourceIsotropicEmission())
source.add(crpropa.SourceParticleType(crpropa.nucleusId(1, 1)))
source.add(crpropa.SourceEnergy(1 * EeV))
m.run(source, 1000)
out.close()
"""
Explanation: As a test, we propagate some particles in a random field with a sheet observer:
End of explanation
"""
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D
import pylab as plt
ax = plt.subplot(111, projection='3d')
data = plt.loadtxt('sheet.txt')
ax.scatter(data[:,5], data[:,6], data[:,7] )
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
ax.set_xlim(20,-20)
ax.set_ylim(20,-20)
ax.set_zlim(20,-20)
ax.view_init(25, 95)
"""
Explanation: and plot the final position of the particles in 3D
End of explanation
"""
bins = np.linspace(-20,20, 50)
plt.hist(data[:,5], bins=bins, label='X', histtype='step')
plt.hist(data[:,6], bins=bins, label='Y', histtype='step')
plt.hist(data[:,7], bins=bins, label='Z', histtype='step')
plt.legend()
plt.show()
"""
Explanation: or as a histogram. Note the width of the X distribution, which is due to the particles being detected after crossing.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/2369809188e1e28fb4d0ad564cdfa36d/plot_source_space_time_frequency.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_band_induced_power
print(__doc__)
"""
Explanation: Compute induced power in the source space with dSPM
Returns STC files ie source estimates of induced power
for different bands in the source space. The inverse method
is linear based on dSPM inverse operator.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
tmin, tmax, event_id = -0.2, 0.5, 1
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
events = events[:10] # take 10 events to keep the computation time low
# Use linear detrend to reduce any edge artifacts
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True, detrend=1)
# Compute a source estimate per frequency band
bands = dict(alpha=[9, 11], beta=[18, 22])
stcs = source_band_induced_power(epochs, inverse_operator, bands, n_cycles=2,
use_fft=False, n_jobs=1)
for b, stc in stcs.items():
stc.save('induced_power_%s' % b)
"""
Explanation: Set parameters
End of explanation
"""
plt.plot(stcs['alpha'].times, stcs['alpha'].data.mean(axis=0), label='Alpha')
plt.plot(stcs['beta'].times, stcs['beta'].data.mean(axis=0), label='Beta')
plt.xlabel('Time (ms)')
plt.ylabel('Power')
plt.legend()
plt.title('Mean source induced power')
plt.show()
"""
Explanation: plot mean power
End of explanation
"""
|
ruxi/tools | docs/notebooks/1_Notebook_DevNotes_XyDB.ipynb | mit | %ls dist
"""
Explanation: update changes to pypi
```bash
update pypi
rm -r dist # remove old source files
python setup.py sdist # make source distribution
python setup.py bdist_wheel # make build distribution with .whl file
twine upload dist/ # pip install twine
```
End of explanation
"""
import os.path
# create folder if doesn't exist
folders = ['ruxitools', 'tests']
for x in folders:
os.makedirs(x, exist_ok=True)
!tree | grep -v __pycache__ | grep -v .cpython #hides grep'd keywords
"""
Explanation: DevNotes: XyDB.py
created: Fri Oct 21 13:16:57 CDT 2016
author: github.com/ruxi
This notebook was used to construct this repo
Purpose
XyDB is a database-like containers for derivative data
The intended usecase of XyDB is to store dervative data in a database-like
container and bind it as an attribute to the source data. It solves the
problem of namespace pollution by confining intermediate data forms to
the original dataset in a logical and structured manner. The limitation
of this object is that it exists in memory only. For more persistent storage
solutions, its recommended to use an actual database library such as
blaze, mongoDB, or SQLite. Conversely, the advantage is residual information
is not left over after a session.
Specifications
keys (list): list keywords for all records (names for intermediate data configurations)
push (func): Adds record to database
pull (func): Pulls record from database (ducktyped)
Records are accessible via attributes by keyname
Returns dictionary records
pull.<config keyword>
show (func): Show record from database. (ducktyped)
Records are accessible via attributes by keyname
Returns namedtuple objects based on db records.
show.<config keyword>.<attribute name>
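The show/pull duality specified above boils down to `namedtuple` attribute access versus its `_asdict()` method — a minimal sketch:

```python
from collections import namedtuple

Record = namedtuple("Record", ["key", "X", "y"])
rec = Record(key="config1", X=[0, 1], y=["a", "b"])

# show-style access: attributes on the namedtuple
print(rec.X)
# pull-style access: a plain dictionary view of the same record
print(rec._asdict()["y"])
```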
Project architecture
Structure of the repository according to The Hitchhiker's Guide to Python
Target directory structure
|-- LISCENSE
|-- README.md
|-- setup.py
|-- requirements.txt
|-- Makefile
|-- .gitignore
|-- docs/
|-- notebooks/
|-- ruxitools/
|-- __init__.py
|-- xydb/
|-- __init__.py
|-- XyDB.py
|-- test/
|-- __init__.py
|-- test_XyDB.py
Writing guides
Python docstrings styles
Some resources on documentation conventions
Programs:
bash
pip install sphinxcontrib-napoleon
Guides:
* Google Python Style Guide
Unit Tests
Guides on how to write unit tests:
http://nedbatchelder.com/text/test0.html
https://cgoldberg.github.io/python-unittest-tutorial/
Packaging and distribution
packaging.python.org
python-packaging readthedocs
setup.py
See minimal example
Module implementation
Create directory tree
End of explanation
"""
# %load ruxitools/__init__.py
"""
Explanation: The Code
End of explanation
"""
# %load ruxitools/xydb.py
#!/usr/bin/env python
__author__ = "github.com/ruxi"
__copyright__ = "Copyright 2016, ruxitools"
__email__ = "ruxi.github@gmail.com"
__license__ = "MIT"
__status__ = "Development"
__version__ = "0.1"
from collections import namedtuple
class XyDB(object):
"""XyDB is a database-like container for intermediate data
The intended use case of XyDB is to store intermediate data in a database-like
container and bind it as an attribute to the source data. It solves the
problem of namespace pollution by confining intermediate data forms to
the original dataset in a logical and structured manner. The limitation
of this object is that it exists in memory only. For more persistent storage
solutions, it's recommended to use an actual database library such as
blaze, mongoDB, or SQLite. Conversely, the advantage is that residual
information is not left over after a session.
Example:
Defined a namedtuple for input validation, then assign this function
as an attribute of your source data object, usually a pandas dataframe.
import XyDB
from collections import namedtuple
# define input validation schema
input_val = namedtuple("data", ['key','desc', 'X', 'y'])
# define data
myData = pd.DataFrame()
# assign class function
myData.Xy = XyDB(input_val, verbose = True)
# add data to DB
myRecord = dict(key='config1'
, desc='dummydata'
, X=[0,1,0]
, y=['a','b','a])
myData.Xy.push(**myRecord)
# show data
myData.Xy.config1.desc
"""
def __init__(self, schema = None, verbose=True, welcome=True):
"""
Arguments:
schema (default: None | NamedTuple):
Accepts a NamedTuple subclass with a "key" field
which is used for input validation when records
are "push"ed
verbose (default: True | boolean)
If false, suppresses print commands. Including this message
welcome (default: True | boolean)
Suppresses printing of the docstring upon initialization
"""
self._db = {}
self._show = lambda: None
self._pull = lambda: None
self._verbose = verbose
# print docstring
if welcome:
print (self.__doc__)
# Input Validation (optional) can be spec'd out by NameTuple.
# Input NamedTuple requires 'key' field
self._schema = False if schema is None else schema
if self._schema:
if "key" not in dir(self._schema):
raise Exception("namedtuple must have 'key' as a field")
#@db.setter
def push(self, key, *args, **kwargs):
"""Adds records (dict) to database"""
if not(type(key)==str):
raise Exception('key must be string')
# Create database record entry (a dict)
if self._schema: # is user-defined
self._input_validator = self._schema
record = self._input_validator(key, *args,**kwargs)
else: # the schema is inferred from every push
entry_dict = dict(key=key, *args,**kwargs)
self._input_validator = namedtuple('Data', list(entry_dict.keys()))
record = self._input_validator(**entry_dict)
# The record is added to the database.
self._db[record.key] = record
if self._verbose:
print('Record added {}'.format(record.key))
self._update()
def _update(self):
"""updates dynamic attribute access for self.show & self.pull"""
for key in self.keys:
# self.show.<key> = namedtuple
setattr(self._show
, key
, self._db[key]
)
# self.pull.<key> = dict
setattr(self._pull,
key,
self.db[key]._asdict()
)
@property
def db(self):
"""Intermediate data accessible by keyword. Returns a dict"""
return self._db
@property
def keys(self):
"""
list configuration keywords
Returns:
list
"""
return self.db.keys()
@property
def show(self):
"""
Show record from database. Accessible by attribute via keyname
Returns:
namedtuple objects
Usage:
show.<config keyword>.<attribute name>
"""
return self._show
@property
def pull(self):
"""
Pull record from database. Accessible by attribute via keyname
Returns:
dictionary
Usage:
pull.<config keyword>
"""
return self._pull
"""
Explanation: Main
End of explanation
"""
# %load tests/test_xydb.py
__author__ = "github.com/ruxi"
__copyright__ = "Copyright 2016, ruxitools"
__email__ = "ruxi.github@gmail.com"
__license__ = "MIT"
__status__ = "Development"
__version__ = "0.1"
import unittest
import collections
from ruxitools.xydb import XyDB
class TestXydb(unittest.TestCase):
"""Unit tests for XyDB"""
############
# set-up #
############
def dummycase(self):
# dummy record
key = 'dummy0'
desc = 'test case'
X = [1,2,3,4]
y = ['a','b','c','d']
return dict(key=key, desc=desc, X=X, y=y)
def badcase_nokey(self):
desc = 'test case'
X = [1,2,3,4]
return dict(desc=desc, X=X)
def badcase_KeyNotStr(self):
key = [1,2,3,4]
X = "x is a str"
return dict(jey=key, X=X)
def mockschema(self):
input_validation = collections.namedtuple("Xy", ['key','desc', 'X', 'y'])
return input_validation
def push_record_noschema(self, record):
xy = XyDB(verbose=False)
xy.push(**record)
return xy
def push_record_w_schema(self, record, schema):
xy = XyDB(schema=schema, verbose=False)
xy.push(**record)
return xy
###########
# TESTS #
###########
def test_positive_control(self):
self.assertTrue(True)
def test_init_args(self):
xy = XyDB()
xy = XyDB(verbose=False)
xy = XyDB(verbose=True)
def test_PushRecord_NoSchema(self):
record = self.dummycase()
self.push_record_noschema(record)
def test_PushRecord_WithSchema(self):
record = self.dummycase()
schema = self.mockschema()
self.push_record_w_schema(record=record, schema=schema)
def test_PushRecord_NoKey(self):
"""negative test"""
record = self.badcase_nokey()
with self.assertRaises(TypeError):
self.push_record_noschema(record)
def test_PushRecord_KeyNotStr(self):
"""negative test"""
record = self.badcase_KeyNotStr()
with self.assertRaises(TypeError):
self.push_record_noschema(record)
def test_ShowRecord(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
getattr(xy.show, record['key'])
def test_ShowRecord_NonExistKey(self):
"""negative test"""
record = self.dummycase()
key = record['key'] + "spike"
xy = self.push_record_noschema(record)
with self.assertRaises(KeyError):
getattr(xy.show, record[key])
def test_PullRecord(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
getattr(xy.pull, record['key'])
def test_PullRecord_NonExistKey(self):
"""negative test"""
record = self.dummycase()
key = record['key'] + "spike"
xy = self.push_record_noschema(record)
with self.assertRaises(KeyError):
getattr(xy.pull, record[key])
def test_keys_NoRecords(self):
"""is dict_keys returned"""
xy = XyDB()
xy.keys
self.assertTrue(type(xy.keys)==type({}.keys())
, "Expecting dict_keys, instead got {}".format(type(xy.keys))
)
def test_keys_WithRecords(self):
record = self.dummycase()
xy = XyDB()
xy.push(**record)
xy.keys
def test_db_IsDict(self):
record = self.dummycase()
xy = self.push_record_noschema(record)
self.assertTrue(type(xy.db)==dict)
def test_otherattributes(self):
record = self.dummycase()
schema = self.mockschema()
xy = self.push_record_w_schema(record, schema)
xy._update
if __name__ == '__main__':
unittest.main()
"""
Explanation: Unit Tests
End of explanation
"""
!nosetests --tests=tests --with-coverage #conda install nose, coverage
!coverage report -mi #conda install nose, coverage
"""
Explanation: Testing
End of explanation
"""
# %load setup.py
from setuptools import setup, find_packages
import sys
if sys.version_info[:2]<(3,5):
sys.exit("ruxitools requires python 3.5 or higher")
# defining variables
install_requires = []
tests_require = [
'mock'
, 'nose'
]
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
classifier = [
"Programming Language :: Python",
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Operating System :: Unix',
'Programming Language :: Python :: 3 :: Only'
]
keywords='ruxi tools ruxitools xydb intermediate data containers',
# setup
setup(
name='ruxitools'
, version="0.2.6"
, description="Misc general use functions. XyDB: container for intermediate data."
, url="http://github.com/ruxi/tools"
, author="ruxi"
, author_email="ruxi.github@gmail.com"
, license="MIT"
, packages=find_packages()#['ruxitools']
, tests_require=tests_require
, test_suite= 'nose.collector'
, classifiers = classifier
, keywords=keywords
)
"""
Explanation: Repository set-up
setup.py
Format based on minimal example
ReadTheDocs setuptools
End of explanation
"""
# %load README.md
# ruxitools
Miscellaneous tools.
# Installation
method1:
pip install -e git+https://github.com/ruxi/tools.git
method2:
git clone https://github.com/ruxi/tools.git
cd tools
python setup.py install
python setup.py tests
# Modules
## XyDB: a container for intermediate data
XyDB is used to organize intermediate data by attaching it to the source dataset.
It solves the problem of namespace pollution, especially if many intermediate
datasets are derived from the source.
Usage:
```python
from ruxitools.xydb import XyDB
# attach container to source data
mydata.Xy = XyDB()
# store intermediate info & documentation into the containers
mydata.Xy.push(dict(
key="config1" # keyword
, X=[mydata*2] # intermediate data
, desc = "multiply by 2" # description of operation
))
# To retrieve intermediate data as a dict:
mydata.Xy.pull.config1
# To retrieve intermediate data as attributes:
mydata.Xy.show.config1.desc
# To show keys
mydata.Xy.keys
```
# TODO:
requirements.txt - not sure if it works
"""
Explanation: register & upload to PyPi
Docs on python wheels (needed for pip)
recommended way to register and upload
bash
python setup.py register # Not recommended, but did it this way. See guide
Create source distribution
python setup.py sdist
Create build distribution (python wheels for pip)
bash
python setup.py bdist_wheel
Upload distribution
bash
twine upload dist/* # pip install twine
All together
bash
python setup.py sdist
python setup.py bdist_wheel
twine upload dist/*
README.md
End of explanation
"""
# %load MANIFEST.in
include README.md
include LICENSE
# %load LICENSE
MIT License
Copyright (c) 2016 github.com/ruxi
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
Explanation: MANIFEST.in
packaging.python manifest.ini docs
End of explanation
"""
#!python setup.py test
"""
Explanation: Repository testing
bash
python setup.py test
End of explanation
"""
# %load .travis.yml
os: linux
language: python
python:
- 3.5
# command to install dependencies
install:
- "pip install -r requirements.txt"
- "pip install ."
# command to run tests
script: nosetests
"""
Explanation: TravisCI
For continuous integration testing
Hitchhiker's guide to Python: Travis-CI
travisCI official docs
End of explanation
"""
|
tensorflow/docs-l10n | site/zh-cn/lite/performance/post_training_integer_quant.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
assert float(tf.__version__[:3]) >= 2.3
"""
Explanation: Post-training integer quantization
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/lite/performance/post_training_integer_quant"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lite/performance/post_training_integer_quant.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lite/performance/post_training_integer_quant.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lite/performance/post_training_integer_quant.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table>
Overview
Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to 8-bit fixed-point numbers. This shrinks the model size and speeds up inference, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
In this tutorial, you will train an MNIST model from scratch, convert it into a TensorFlow Lite file, and quantize it using post-training quantization. Finally, you will check the accuracy of the converted model and compare it to the original float model.
In practice, you have several options for how much to quantize a model. In this tutorial you will perform "full integer quantization", which converts all weights and activation outputs into 8-bit integer data, whereas other strategies may leave some amount of data in floating point.
To learn more about the various quantization strategies, read about TensorFlow Lite model optimization.
Setup
In order to quantize both the input and output tensors, we need to use APIs that were newly added in TensorFlow r2.3:
End of explanation
"""
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=5,
validation_data=(test_images, test_labels)
)
"""
Explanation: Generate a TensorFlow model
We'll build a simple model to classify digits from the MNIST dataset.
This training won't take long because we train the model for only 5 epochs, which reaches about 98% accuracy.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: Convert to a TensorFlow Lite model
Now you can convert the trained model to TensorFlow Lite format using the TFLiteConverter API, and apply varying degrees of quantization.
Beware that some versions of quantization leave part of the data in float format. So the following sections show each option in order of increasing amounts of quantization, until we get a model that's entirely int8 or uint8 data. (Notice that we repeat some code in each section so you can see all the quantization steps for each option.)
First, here's a converted model with no quantization:
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
"""
Explanation: It's now a TensorFlow Lite model, but it still uses 32-bit float values for all parameter data.
Convert using dynamic range quantization
Now let's enable the default optimizations flag to quantize all fixed parameters (such as weights):
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
# Model has only one input so each data point has one element.
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
"""
Explanation: The model is now a bit smaller with quantized weights, but other variable data is still in float format.
Convert using float fallback quantization
To quantize the variable data (such as model input/output and intermediates between layers), you need to provide a RepresentativeDataset. This is a generator function that provides a set of input data large enough to represent typical values. It allows the converter to estimate a dynamic range for all the variable data. (The dataset does not need to be unique compared to the training or evaluation dataset.) To support multiple inputs, each representative data point is a list, and the elements of the list are fed to the model according to their indices.
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: Now all weights and variable data are quantized, and the model is significantly smaller compared to the original TensorFlow Lite model.
However, to maintain compatibility with applications that traditionally use float model input and output tensors, the TensorFlow Lite Converter leaves the model's input and output tensors in float:
End of explanation
"""
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
"""
Explanation: That's usually good for compatibility, but it won't be compatible with devices that perform only integer-based operations, such as the Edge TPU.
Additionally, the above process may leave an operation in float format if TensorFlow Lite doesn't include a quantized implementation for that operation. This strategy still allows conversion to complete, so you get a smaller and more efficient model, but again, it won't be compatible with integer-only hardware. (All the ops in this MNIST model have a quantized implementation.)
So to ensure an end-to-end integer-only model, you need a couple more parameters...
Convert using integer-only quantization
To quantize the input and output tensors, and make the converter throw an error if it encounters an operation it cannot quantize, convert the model again with some additional parameters:
End of explanation
"""
interpreter = tf.lite.Interpreter(model_content=tflite_model_quant)
input_type = interpreter.get_input_details()[0]['dtype']
print('input: ', input_type)
output_type = interpreter.get_output_details()[0]['dtype']
print('output: ', output_type)
"""
Explanation: The internal quantization remains the same as above, but you can see the input and output tensors are now integer format:
End of explanation
"""
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
"""
Explanation: Now you have an integer quantized model that uses integer data for the model's input and output tensors, so it's compatible with integer-only hardware such as the Edge TPU.
Save the models as files
You'll need a .tflite file to deploy your model on other devices. So let's save the converted models to files and then load them when we run inferences below.
End of explanation
"""
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
global test_images
# Initialize the interpreter
interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
predictions = np.zeros((len(test_image_indices),), dtype=int)
for i, test_image_index in enumerate(test_image_indices):
test_image = test_images[test_image_index]
test_label = test_labels[test_image_index]
# Check if the input type is quantized, then rescale input data to uint8
if input_details['dtype'] == np.uint8:
input_scale, input_zero_point = input_details["quantization"]
test_image = test_image / input_scale + input_zero_point
test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
interpreter.set_tensor(input_details["index"], test_image)
interpreter.invoke()
output = interpreter.get_tensor(output_details["index"])[0]
predictions[i] = output.argmax()
return predictions
"""
Explanation: Run the TensorFlow Lite models
Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:
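
The uint8 rescaling step in the helper above follows TensorFlow Lite's affine quantization scheme, where real_value = (quantized_value - zero_point) * scale. A small self-contained sketch of that round trip (the scale and zero point here are made-up illustration values, not read from a real model):

```python
import numpy as np

# Made-up quantization parameters for illustration
scale, zero_point = 0.02, 128

real = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)

# Quantize: float -> uint8, clipping to the representable range [0, 255]
quantized = np.clip(np.round(real / scale) + zero_point, 0, 255).astype(np.uint8)

# Dequantize: uint8 -> float
recovered = (quantized.astype(np.float32) - zero_point) * scale

# The round-trip error is bounded by half a quantization step
print(np.max(np.abs(real - recovered)))
```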
End of explanation
"""
import matplotlib.pylab as plt
# Change this to test a different image
test_image_index = 1
## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
global test_labels
predictions = run_tflite_model(tflite_file, [test_image_index])
plt.imshow(test_images[test_image_index])
template = model_type + " Model \n True:{true}, Predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0])))
plt.grid(False)
"""
Explanation: Test the models on one image
Now let's compare the performance of the float model and the quantized model:
tflite_model_file is the original TensorFlow Lite model with float data.
tflite_model_quant_file is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions:
End of explanation
"""
test_model(tflite_model_file, test_image_index, model_type="Float")
"""
Explanation: Now test the float model:
End of explanation
"""
test_model(tflite_model_quant_file, test_image_index, model_type="Quantized")
"""
Explanation: And test the quantized model:
End of explanation
"""
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
global test_images
global test_labels
test_image_indices = range(test_images.shape[0])
predictions = run_tflite_model(tflite_file, test_image_indices)
accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images)
print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
model_type, accuracy, len(test_images)))
"""
Explanation: Evaluate the models on all images
Now let's run both models using all the test images we loaded at the beginning of this tutorial:
End of explanation
"""
evaluate_model(tflite_model_file, model_type="Float")
"""
Explanation: Evaluate the float model:
End of explanation
"""
evaluate_model(tflite_model_quant_file, model_type="Quantized")
"""
Explanation: Evaluate the quantized model:
End of explanation
"""
|
AGrosserHH/GAN | DCGAN/DCGAN.ipynb | apache-2.0 | import numpy as np
from keras.datasets import mnist
import keras
from keras.layers import Input, UpSampling2D, Conv2DTranspose, Conv2D, LeakyReLU
from keras.layers.core import Reshape,Dense,Dropout,Activation,Flatten
from keras.models import Sequential
from keras.optimizers import RMSprop, Adam
from tensorflow.examples.tutorials.mnist import input_data
from keras.layers.normalization import *
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
"""
Explanation: Generative Adversarial Networks
Generative Adversarial Networks were invented by Ian Goodfellow (https://arxiv.org/abs/1406.2661).
"There are many interesting recent development in deep learning…The most important one, in my opinion, is adversarial training (also called GAN for Generative Adversarial Networks). This, and the variations that are now being proposed is the most interesting idea in the last 10 years in ML, in my opinion." – Yann LeCun
One network generates candidates and one evaluates them, i.e. we have two models, a generative model and a discriminative model. Before looking at GANs, let’s briefly review the difference between generative and discriminative models:
- A discriminative model learns a function that maps the input data (x) to some desired output class label (y). In probabilistic terms, they directly learn the conditional distribution P(y|x).
- A generative model tries to learn the joint probability of the input data and labels simultaneously, i.e. P(x,y). This can be converted to P(y|x) for classification via Bayes rule, but the generative ability could be used for something else as well, such as creating likely new (x, y) samples.
The discriminative model has the task of determining whether a given image looks natural (an image from the dataset) or looks like it has been artificially created. The task of the generator is to create images that look so natural that the discriminator classifies them as real. This can be thought of as a zero-sum or minimax two-player game. Or, as Goodfellow describes it:
"the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles."
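
The competition described above can be made concrete with the binary cross-entropy losses used for training later in this notebook. A tiny NumPy sketch with made-up discriminator probabilities (not real model outputs):

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between predicted probability p and label y."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Made-up discriminator outputs: probability that an image is real
d_real = np.array([0.9, 0.8])   # on real images (label 1)
d_fake = np.array([0.2, 0.1])   # on generated images (label 0)

# Discriminator loss: classify real as 1 and fake as 0
d_loss = np.mean(bce(d_real, 1.0)) + np.mean(bce(d_fake, 0.0))

# Generator loss: fool the discriminator into outputting 1 on fakes
g_loss = np.mean(bce(d_fake, 1.0))

# A generator that fools the discriminator more (higher d_fake) lowers g_loss
print(d_loss, g_loss)
```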
The generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network. Convolutional networks are a bottom-up approach where the input signal is subjected to multiple layers of convolutions, non-linearities and sub-sampling. By contrast, each layer in our Deconvolutional Network is top-down; it seeks to generate the input signal by a sum over convolutions of the feature maps (as opposed to the input) with learned filters. Given an input and a set of filters, inferring the feature map activations requires solving a multi-component deconvolution problem that is computationally challenging.
Here is a short overview of the process:
<img src="images/GAN.png">
What are the pros and cons of Generative Adversarial Networks?
- https://www.quora.com/What-are-the-pros-and-cons-of-using-generative-adversarial-networks-a-type-of-neural-network
Why are they important?
The discriminator now is aware of the “internal representation of the data” because it has been trained to understand the differences between real images from the dataset and artificially created ones. Thus, it can be used as a feature extractor that you can use in a CNN.
End of explanation
"""
(X_train, y_train), (X_test, y_test) = mnist.load_data()
x_train = input_data.read_data_sets("mnist",one_hot=True).train.images
x_train = x_train.reshape(-1, 28,28, 1).astype(np.float32)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
pixels = x_train[0]
pixels = pixels.reshape((28, 28))
# Plot
plt.imshow(pixels, cmap='gray')
plt.show()
"""
Explanation: MNIST database
The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning.
The MNIST database contains 60,000 training images and 10,000 testing images. Half of the training set and half of the test set were taken from NIST's training dataset, while the other half of the training set and the other half of the test set were taken from NIST's testing dataset.
End of explanation
"""
# Build discriminator
Dis = Sequential()
input_shape = (28,28,1)
#output 14 x 14 x 64
Dis.add(Conv2D(64, 5, strides = 2, input_shape = input_shape, padding='same'))
Dis.add(LeakyReLU(0.2))
Dis.add(Dropout(0.2))
#output 7 x 7 x 128
Dis.add(Conv2D(128, 5, strides = 2, input_shape = input_shape, padding='same'))
Dis.add(LeakyReLU(0.2))
Dis.add(Dropout(0.2))
#output 4 x 4 x 256
Dis.add(Conv2D(256, 5, strides = 2, input_shape = input_shape, padding='same'))
Dis.add(LeakyReLU(0.2))
Dis.add(Dropout(0.2))
#output 4 x 4 x 512
Dis.add(Conv2D(512, 5, strides = 1, input_shape = input_shape, padding='same'))
Dis.add(LeakyReLU(0.2))
Dis.add(Dropout(0.2))
# Out: 1-dim probability
Dis.add(Flatten())
Dis.add(Dense(1))
Dis.add(Activation('sigmoid'))
Dis.summary()
"""
Explanation: Description of discriminator
As the discriminator network needs to differentiate between real and fake images, it takes in [28, 28, 1] image tensors. For this purpose several convolutional layers are used.
End of explanation
"""
#Build generator
g_input = Input(shape=[100])
Gen = Sequential()
Gen.add(Dense(7*7*256, input_dim=100,kernel_initializer="glorot_normal"))
Gen.add(BatchNormalization(momentum=0.9))
Gen.add(Activation('relu'))
Gen.add(Reshape((7, 7,256)))
#G.add(Dropout(0.2))
# Input 7 x 7 x 256
# Output 14 x 14 x 128
Gen.add(UpSampling2D())
Gen.add(Conv2DTranspose(int(128), 5, padding='same',kernel_initializer="glorot_normal"))
Gen.add(BatchNormalization(momentum=0.9))
Gen.add(Activation('relu'))
# Input 14 x 14 x 128
# Output 28 x 28 x 64
Gen.add(UpSampling2D())
Gen.add(Conv2DTranspose(int(64), 5, padding='same',kernel_initializer="glorot_normal"))
Gen.add(BatchNormalization(momentum=0.9))
Gen.add(Activation('relu'))
# Input 28 x 28 x 64
# Output 28 x 28 x 32
Gen.add(Conv2DTranspose(int(32), 5, padding='same',kernel_initializer="glorot_normal"))
Gen.add(BatchNormalization(momentum=0.9))
Gen.add(Activation('relu'))
# Out: 28 x 28 x 1
Gen.add( Conv2DTranspose(1, 5, padding='same',kernel_initializer="glorot_normal"))
Gen.add( Activation('sigmoid'))
Gen.summary()
# Discriminator model
optimizer = Adam(lr=0.0002, beta_1=0.5)
DM = Sequential()
DM.add(Dis)
DM.compile(loss='binary_crossentropy', optimizer=optimizer,metrics=['accuracy'])
DM.summary()
"""
Explanation: Description of generator
For the generator we start from 100 random inputs (a 100-dimensional noise vector) and eventually map them up to a [28, 28, 1] image so that they have the same shape as the MNIST data.
In Keras, for Deconvolution there is the command "Conv2DTranspose": Transposed convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
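
One way to picture the transposed convolutions used above is as zero-stuffing the input (to upsample it) and then applying an ordinary convolution. A toy 1-D NumPy sketch of that view (an illustration only, not Keras's actual implementation):

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    """Toy 1-D transposed convolution via zero-insertion + full convolution."""
    # Insert (stride - 1) zeros between input samples to upsample
    upsampled = np.zeros(len(x) * stride - (stride - 1))
    upsampled[::stride] = x
    # A 'full' convolution then spreads each input value over the kernel
    return np.convolve(upsampled, kernel, mode="full")

x = np.array([1.0, 2.0, 3.0])
kernel = np.array([1.0, 0.5])
out = conv_transpose_1d(x, kernel)
print(out)  # each input sample contributes a scaled copy of the kernel
```

Note how the output is longer than the input: transposed convolution goes in the opposite direction of a strided convolution, increasing spatial resolution instead of reducing it.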
End of explanation
"""
# Adversarial model
optimizer = Adam(lr=0.0002, beta_1=0.5)
AM = Sequential()
AM.add(Gen)
AM.add(Dis)
AM.compile(loss='binary_crossentropy', optimizer=optimizer,metrics=['accuracy'])
AM.summary()
# Freeze weights in discriminator D for stacked training
def make_trainable(net, val):
net.trainable = val
for l in net.layers:
l.trainable = val
make_trainable(Dis, False)
"""
Explanation: When training the GAN we are searching for an equilibrium point, which is the optimal point in a minimax game:
- The generator will models the real data,
- and the discriminator will output probability of 0.5 as the output of the generator = real data
End of explanation
"""
train_steps=50000
batch_size=256
noise_input = None
for i in range(train_steps):
images_train = x_train[np.random.randint(0,x_train.shape[0], size=batch_size),:,:,:]
noise = np.random.normal(0.0, 1.0, size=[batch_size, 100])
images_fake = Gen.predict(noise)
make_trainable(Dis, True)
x = np.concatenate((images_train, images_fake))
y = np.ones([2*batch_size, 1])
y[batch_size:, :] = 0
d_loss = DM.train_on_batch(x, y)
make_trainable(Dis, False)
y = np.ones([batch_size, 1])
noise = np.random.normal(0.0, 1.0, size=[batch_size, 100])
a_loss = AM.train_on_batch(noise, y)
Gen.save('Generator_model.h5')
"""
Explanation: The algorithm for training a GAN is the following:
1. Generate images using G and random noise (G predicts random images)
2. Perform a batch update of the discriminator's weights given generated images, real images, and their labels.
3. Perform a batch update of the generator's weights, given noise and forced "real" labels, through the full stacked GAN.
4. Repeat
End of explanation
"""
noise = np.random.normal(0.0, 1.0,size=[256,100])
generated_images = Gen.predict(noise)
for i in range(10):
pixels =generated_images[i]
pixels = pixels.reshape((28, 28))
# Plot
plt.imshow(pixels, cmap='gray')
plt.show()
"""
Explanation: Based on the trained model, we want to check whether the generator has learned to produce convincing digit images.
End of explanation
"""
|
jegibbs/phys202-2015-work | assignments/assignment02/ProjectEuler6.ipynb | mit | sum_of_squares = sum([i ** 2 for i in range(1,101)])
"""
Explanation: Project Euler: Problem 6
https://projecteuler.net/problem=6
The sum of the squares of the first ten natural numbers is,
$$1^2 + 2^2 + ... + 10^2 = 385$$
The square of the sum of the first ten natural numbers is,
$$(1 + 2 + ... + 10)^2 = 55^2 = 3025$$
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
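The brute-force sums used in this notebook work fine, but both quantities also have closed forms,
$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}$$
which give the answer directly:

```python
n = 100
# Closed-form sum of squares: n(n+1)(2n+1)/6
sum_of_squares = n * (n + 1) * (2 * n + 1) // 6
# Closed-form square of the sum: (n(n+1)/2)^2
square_of_sum = (n * (n + 1) // 2) ** 2
print(square_of_sum - sum_of_squares)  # 25164150
```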
Find the sum of the squares of the first 100 natural numbers
End of explanation
"""
square_of_sum = (sum([i for i in range(1,101)])) ** 2
"""
Explanation: Find the square of the sum of the first 100 natural numbers
End of explanation
"""
difference = square_of_sum - sum_of_squares
print(difference)
"""
Explanation: Find and print the difference
End of explanation
"""
# This cell will be used for grading, leave it at the end of the notebook.
"""
Explanation: Success!
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.16/_downloads/plot_background_filtering.ipynb | bsd-3-clause | import numpy as np
from scipy import signal, fftpack
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
"""
Explanation: Background information on filtering
Here we give some background information on filtering in general,
and how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus [1] and
Ifeachor and Jervis [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. 2015 [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the sphx_glr_auto_tutorials_plot_artifacts_correction_filtering.py
tutorial.
Problem statement
The practical issues with filtering electrophysiological data are covered
well by Widmann et al. in [7]_, in a follow-up to an article where they
conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase SNR, but if it is not used carefully,
it can distort data. Here we hope to cover some filtering basics so
users can better understand filtering tradeoffs, and why MNE-Python has
chosen particular defaults.
Filtering basics
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
\begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_M z^{-M}}
{1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_N z^{-N}} \
&= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}\end{align}
In the time domain, the numerator coefficients $b_k$ and denominator
coefficients $a_k$ can be used to obtain our output data
$y(n)$ in terms of our input data $x(n)$ as:
\begin{align}:label: summations
y(n) &= b_0 x(n) + b_1 x(n-1) + ... + b_M x(n-M)
- a_1 y(n-1) - a_2 y(n - 2) - ... - a_N y(n - N)\\
&= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)\end{align}
In other words, the output at time $n$ is determined by a sum over:
1. The numerator coefficients $b_k$, which get multiplied by
the previous input $x(n-k)$ values, and
2. The denominator coefficients $a_k$, which get multiplied by
the previous output $y(n-k)$ values.
Note that these summations in :eq:summations correspond nicely to
(1) a weighted moving average and (2) an autoregression.
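The difference equation above can be implemented directly. A short NumPy sketch with toy coefficients (chosen only for illustration, not a filter you would use in practice):

```python
import numpy as np

def apply_difference_equation(b, a, x):
    """Direct implementation of y(n) = sum(b_k x(n-k)) - sum(a_k y(n-k)).

    `b` holds the numerator coefficients b_0..b_M and `a` the denominator
    coefficients a_1..a_N (a_0 is implicitly 1).
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        # Weighted moving average over past inputs
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
        # Autoregression over past outputs
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                y[n] -= ak * y[n - k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y = apply_difference_equation(b=[0.5, 0.5], a=[0.3], x=x)
print(y)
```

This matches what :func:`scipy.signal.lfilter` computes with `lfilter([0.5, 0.5], [1, 0.3], x)`, since SciPy's `a` argument includes the implicit leading 1.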
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients $b_k$ ($\forall k, a_k=0$), and thus each output
value of $y(n)$ depends only on the $M$ previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in [1]_, FIR and IIR have different tradeoffs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
accumulating error (due to its recursive calculations).
In MNE-Python we default to using FIR filtering. As noted in Widmann et al.
2015 [7]_:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
cutoffs are required (Ifeachor and Jervis, 2002 [2]_, p. 321),
...FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always tradeoffs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency tradeoff, and it will
show up below.
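This tradeoff is easy to check numerically: the first spectral null of a length-L rectangular window falls at bin N / L of an N-point FFT, so a window that is broader in time has a main lobe that is narrower in frequency:

```python
import numpy as np

N = 1024
for L in (16, 128):
    w = np.zeros(N)
    w[:L] = 1.0                  # rectangular window of length L samples
    mag = np.abs(np.fft.rfft(w))
    # The main lobe of a length-L boxcar ends at frequency bin N / L,
    # so longer windows (broader in time) are narrower in frequency.
    print(L, mag[N // L])        # ~0: the first spectral null
```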
FIR Filters
First we will focus first on FIR filters, which are the default filters used by
MNE-Python.
Designing FIR filters
Here we'll try designing a low-pass filter, and look at trade-offs in terms
of time- and frequency-domain filter characteristics. Later, in
tut_effect_on_signals, we'll look at how such filters can affect
signals when they are used.
First let's import some useful tools for filtering, and set some default
values for our data that are reasonable for M/EEG data.
End of explanation
"""
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
"""
Explanation: Take for example an ideal low-pass filter, which would give a value of 1 in
the pass-band (up to frequency $f_p$) and a value of 0 in the stop-band
(down to frequency $f_s$) such that $f_p=f_s=40$ Hz here
(shown to a lower limit of -60 dB for simplicity):
End of explanation
"""
n = int(round(0.1 * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 sec)', flim=flim)
"""
Explanation: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in frequency is actually a sinc_ function
in time, which requires an infinite number of samples, and thus infinite
time, to represent. So although this filter has ideal frequency suppression,
it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 sec, and look
at the filter itself in the time domain and the frequency domain:
End of explanation
"""
n = int(round(1. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 sec)', flim=flim)
"""
Explanation: This is not so good! Making the filter 10 times longer (1 sec) gets us a
bit better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds):
End of explanation
"""
n = int(round(10. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 sec)', flim=flim)
"""
Explanation: Let's make the stop-band tighter still with a longer filter (10 sec),
with a resulting larger x-axis:
End of explanation
"""
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
"""
Explanation: Now we have very sharp frequency suppression, but our filter rings for the
entire second. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include:
1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
2. Windowed FIR design (:func:`scipy.signal.firwin2`, `MATLAB fir2`_
and :func:`scipy.signal.firwin`)
3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
4. Frequency-domain design (construct filter in Fourier
domain and use an :func:`IFFT <scipy.fftpack.ifft>` to invert it)
<div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are
"do not care" regions in our frequency response. However, we want
well controlled responses in all frequency regions.
Frequency-domain construction is good when an arbitrary response
is desired, but generally less clean (due to sampling issues) than
a windowed approach for more straightforward filter applications.
Since our filters (low-pass, high-pass, band-pass, band-stop)
are fairly simple and we require precise control of all frequency
regions, here we will use and explore primarily windowed FIR
design.</p></div>
If we relax our frequency-domain filter requirements a little bit, we can
use these functions to construct a lowpass filter that instead has a
transition band, or a region between the pass frequency $f_p$
and stop frequency $f_s$, e.g.:
End of explanation
"""
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10-Hz transition (1.0 sec)',
flim=flim)
"""
Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a
smoother slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 sec filter:
End of explanation
"""
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10-Hz transition (0.5 sec)',
flim=flim)
"""
Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 sec) and still get okay
stop-band attenuation:
End of explanation
"""
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10-Hz transition (0.2 sec)',
flim=flim)
"""
Explanation: But then if we shorten the filter too much (2 cycles of 10 Hz = 0.2 sec),
our effective stop frequency gets pushed out past 60 Hz:
End of explanation
"""
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 50-Hz transition (0.2 sec)',
flim=flim)
"""
Explanation: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 sec = 5 cycles @ 25 Hz):
End of explanation
"""
h_min = mne.fixes.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
"""
Explanation: So far we have only discussed acausal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered only using time points that came
before it.
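The causal property can be sketched with a toy FIR (arbitrary length and
cutoff, not one of the filters designed above): filtering with
:func:`scipy.signal.lfilter` uses only present and past samples, so an impulse
produces no output before it occurs:

```python
import numpy as np
from scipy import signal

h = signal.firwin(31, 0.2)     # arbitrary short low-pass FIR
x = np.zeros(100)
x[50] = 1.0                    # impulse at sample 50

y = signal.lfilter(h, 1.0, x)  # causal (forward-only) filtering

print(np.abs(y[:50]).max())    # no response before the impulse: 0.0
print(np.argmax(np.abs(y)))    # peak delayed by (ntaps - 1) / 2 = 15 samples
```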
Note that the delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming acausal), minimum-phase filters do not require any
compensation to achieve small delays in the passband. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the minimum_phase function (that will be in SciPy 0.19's
:mod:scipy.signal), and note that the falloff is not as steep:
End of explanation
"""
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
"""
Explanation: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random + line). Note that the original, clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
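A Gaussian-windowed sinusoid of this general shape can be sketched with plain
NumPy; the carrier frequency and envelope width here are illustrative, not the
exact parameters used in the cell below:

```python
import numpy as np

sfreq = 1000.0    # sampling rate (Hz)
f_c = 40.0        # carrier frequency (Hz)
n_cycles = 7.0    # envelope width, in cycles of the carrier
t = np.arange(-0.25, 0.25, 1.0 / sfreq)

sigma_t = n_cycles / (2.0 * np.pi * f_c)          # Gaussian SD in seconds
envelope = np.exp(-t ** 2 / (2.0 * sigma_t ** 2))
blip = envelope * np.sin(2.0 * np.pi * f_c * t)   # windowed sinusoid

print(len(blip))  # 500 samples (0.5 sec)
```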
End of explanation
"""
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band / 2. # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin')
x_v16 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim)
"""
Explanation: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay):
End of explanation
"""
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2')
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim)
"""
Explanation: Filter it with a different design mode fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long:
End of explanation
"""
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2')
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim)
"""
Explanation: This is actually set to become the default type of filter used in MNE-Python
in 0.14 (see tut_filtering_in_python).
Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice:
End of explanation
"""
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim)
"""
Explanation: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques:
End of explanation
"""
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin')
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
"""
Explanation: And now an example of a minimum-phase filter:
End of explanation
"""
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
t = np.arange(len(x)) / sfreq
axes[0].plot(t, x + offset)
axes[0].set(xlabel='Time (sec)', xlim=t[[0, -1]])
X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
axes[1].set(xlim=flim)
yticks = np.arange(7) / -30.
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.200, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
"""
Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially on signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter:
End of explanation
"""
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim)
# Eventually this will just be from scipy signal.sosfiltfilt, but 0.18 is
# not widely adopted yet (as of June 2016), so we use our wrapper...
sosfiltfilt = mne.fixes.get_sosfiltfilt()
x_shallow = sosfiltfilt(sos, x)
"""
Explanation: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from :mod:scipy.signal. Specifically, we use the general-purpose
functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,
which provide unified interfaces to IIR filter design.
Designing IIR filters
Let's continue with our design of a 40 Hz low-pass filter, and look at
some trade-offs of different IIR filters.
Often the default IIR filter is a Butterworth filter_, which is designed
to have a maximally flat pass-band. Let's look at a few orders of filter,
i.e., a few different number of coefficients used and therefore steepness
of the filter:
<div class="alert alert-info"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of
the IIR filters below are not constant. In the FIR case, we can
design so-called linear-phase filters that have a constant group
delay, and thus compensate for the delay (making the filter
acausal) if necessary. This cannot be done with IIR filters, as
they have a non-linear phase (non-constant group delay). As the
filter order increases, the phase distortion near and in the
transition band worsens. However, if acausal (forward-backward)
filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,
these phase issues can theoretically be mitigated.</p></div>
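A minimal sketch of that mitigation (with an arbitrary order and cutoff):
forward-backward filtering with :func:`scipy.signal.filtfilt` cancels the IIR
filter's phase, so a pass-band sinusoid comes out essentially unshifted, while
causal filtering leaves a visible lag:

```python
import numpy as np
from scipy import signal

fs = 1000.0
b, a = signal.butter(4, 40.0 / (fs / 2.0), btype='low')  # 40 Hz low-pass

t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2.0 * np.pi * 10.0 * t)   # 10 Hz tone, well inside the pass-band

y_causal = signal.lfilter(b, a, x)   # phase-distorted (lagged) output
y_zero = signal.filtfilt(b, a, x)    # forward-backward: zero phase

print(np.abs(y_zero - x).max() < np.abs(y_causal - x).max())  # True
```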
End of explanation
"""
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=8', flim=flim)
x_steep = sosfiltfilt(sos, x)
"""
Explanation: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using :func:`scipy.signal.sosfilt` and, under the
hood, :func:`scipy.signal.zpk2sos` when passing the
``output='sos'`` keyword argument to
:func:`scipy.signal.iirfilter`. The filter definitions
given in tut_filtering_basics_ use the polynomial
numerator/denominator (sometimes called "tf") form ``(b, a)``,
which are theoretically equivalent to the SOS form used here.
In practice, however, the SOS form can give much better results
due to issues with numerical precision (see
:func:`scipy.signal.sosfilt` for an example), so SOS should be
used when possible to do IIR filtering.</p></div>
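The difference between the two representations can be sketched directly (an
arbitrary order and cutoff, not tied to the filters above): ``output='sos'``
returns an array of biquad sections, each row holding the six coefficients
[b0, b1, b2, a0, a1, a2], yet it describes the same filter as the ``(b, a)``
form:

```python
import numpy as np
from scipy import signal

# The same 8th-order Butterworth in both representations
b, a = signal.butter(8, 0.2, btype='low', output='ba')
sos = signal.butter(8, 0.2, btype='low', output='sos')

print(len(b), len(a))  # 9 numerator and 9 denominator coefficients
print(sos.shape)       # (4, 6): four second-order sections

# Identical impulse responses (for this well-conditioned example)
imp = np.zeros(64)
imp[0] = 1.0
h_ba = signal.lfilter(b, a, imp)
h_sos = signal.sosfilt(sos, imp)
print(np.allclose(h_ba, h_sos))
```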
Let's increase the order, and note that now we have better attenuation,
with a longer impulse response:
End of explanation
"""
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=1) # dB of acceptable pass-band ripple
plot_filter(dict(sos=sos), sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim)
"""
Explanation: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for :func:scipy.signal.iirdesign. Let's
try a Chebychev (type I) filter, which trades off ripple in the pass-band
to get better attenuation in the stop-band:
End of explanation
"""
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=6)
plot_filter(dict(sos=sos), sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim)
"""
Explanation: And if we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale):
End of explanation
"""
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
"""
Explanation: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before:
End of explanation
"""
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = 'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
"""
Explanation: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are acausal (zero-phase), can make
activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen 2011 [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet 2012 [5].
Perhaps more revealing, it was noted in Widmann & Schröger 2012 [6] that
the problematic low-pass filters from VanRullen 2011 [3]:
Used a least-squares design (like :func:scipy.signal.firls) that
included "do-not-care" transition regions, which can lead to
uncontrolled behavior.
Had a filter length that was independent of the transition bandwidth,
which can cause excessive ringing and signal distortion.
High-pass problems
When it comes to high-pass filtering, using corner frequencies above 0.1 Hz
were found in Acunzo et al. 2012 [4]_ to:
"...generate a systematic bias easily leading to misinterpretations of
neural activity.”
In a related paper, Widmann et al. 2015 [7] also came to suggest a 0.1 Hz
highpass. And more evidence followed in Tanner et al. 2015 [8] of such
distortions. Using data from language ERP studies of semantic and syntactic
processing (i.e., N400 and P600), using a high-pass above 0.3 Hz caused
significant effects to be introduced implausibly early when compared to the
unfiltered data. From this, the authors suggested the optimal high-pass
value for language processing to be 0.1 Hz.
We can recreate a problematic simulation from Tanner et al. 2015 [8]_:
"The simulated component is a single-cycle cosine wave with an amplitude
of 5µV, onset of 500 ms poststimulus, and duration of 800 ms. The
simulated component was embedded in 20 s of zero values to avoid
filtering edge effects... Distortions [were] caused by 2 Hz low-pass and
high-pass filters... No visible distortion to the original waveform
[occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
(12 dB/octave roll-off)."
<div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the
pass-band, but also within the transition and stop-bands -- perhaps
most easily understood because the signal has a non-zero DC value,
but also because it is a shifted cosine that has been
*windowed* (here multiplied by a rectangular window), which
makes the cosine and DC frequencies spread to other frequencies
(multiplication in time is convolution in frequency, so multiplying
by a rectangular window in the time domain means convolving a sinc
function with the impulses at DC and the cosine frequency in the
frequency domain).</p></div>
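This spreading is easy to verify numerically. Below is a toy version of such a
pulse (parameters chosen for illustration, not the exact simulation used next):
even though the pulse is built from a single cosine cycle, the windowing leaves
energy at DC and at frequencies far from the cosine's fundamental:

```python
import numpy as np

sfreq = 100.0
x = np.zeros(int(20 * sfreq))             # 20 sec of zeros
n = int(0.8 * sfreq)                      # 0.8 sec single-cycle pulse
x[500:500 + n] = 2.5 - 2.5 * np.cos(2 * np.pi * np.arange(n) / n)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / sfreq)

print(x.mean() > 0)                       # the signal has a non-zero DC value
print(np.abs(X[0]) > 0)                   # ...so the DC bin is non-zero
print(np.abs(X[freqs > 10]).max() > 0)    # and energy leaks well above 1.25 Hz
```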
End of explanation
"""
def baseline_plot(x):
all_axes = plt.subplots(3, 2)[1]
for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axes):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
"""
Explanation: Similarly, in a P300 paradigm reported by Kappenman & Luck 2010 [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving ch_sample_data,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. 2015 [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. 2016 [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. [10]_ rebutted that baseline correction can correct for
problems with filtering.
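Baseline correction itself is just subtraction of the mean over the
pre-stimulus period. A minimal sketch with a hypothetical epoch (a constant
offset plus a boxcar "response"):

```python
import numpy as np

sfreq = 100.0
t = np.arange(-0.2, 1.0, 1.0 / sfreq)    # epoch from -200 ms to 1 sec
x = 0.5 + np.where((t >= 0.1) & (t < 0.3), 2.0, 0.0)

baseline = x[t < 0].mean()               # mean of the pre-stimulus samples
x_corrected = x - baseline

print(x_corrected[t < 0].mean())         # ~0 after correction
```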
To see what they mean, consider again our old simulated signal x from
before:
End of explanation
"""
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
"""
Explanation: In response, Maess et al. 2016 [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multielectrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period:
End of explanation
"""
trsherborne/learn-python | giag.ipynb | mit
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import pandas as pd
from pandas_datareader import data as web
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.finance as mpf
# Choose a stock
ticker = 'GOOG'
# Choose a start date in US format MM/DD/YYYY
stock_start = '10/2/2014'
# Choose an end date in US format MM/DD/YYYY
stock_end = '10/2/2016'
# Retrieve the Data from Google's Finance Database
stock = web.DataReader(ticker,data_source='google',
start = stock_start,end=stock_end)
# Print a table of the Data to see what we have just fetched
stock.tail()
# Generate the logarithm of the ratio between each days closing price
stock['Log_Ret'] = np.log(stock['Close']/stock['Close'].shift(1))
# Generate the rolling standard deviation across the time series data
stock['Volatility'] = stock['Log_Ret'].rolling(window=100).std()*np.sqrt(100)
# Create a plot of changing Closing Price and Volatility
stock[['Close','Volatility']].plot(subplots=True,color='b',figsize=(8,6))
"""
Explanation: LSESU Applicable Maths Give It A Go Python Demo
This is some slightly more advanced Python to what we will do in the first 2 weeks.
By the end of the course you will be able to do all of this and more!
Demo 1: Plotting the closing price and volatility of a stock in 5 lines
First we have to import some "libraries", which is a fancy way of saying using something someone else already wrote
End of explanation
"""
# Choose a start and end date in a slightly different format to before (YYYY/MM/DD)
start = (2015, 10, 2)
end = (2016, 4,2)
company = "S&P 500"
ticker = "^GSPC"
quotes = mpf.quotes_historical_yahoo_ohlc(ticker, start, end)
print(quotes[:2])
# We use Matplotlib to generate plots
fig, ax = plt.subplots(figsize=(8, 5))
fig.subplots_adjust(bottom=0.2)
mpf.candlestick_ohlc(ax, quotes, width=0.6, colorup='b', colordown='r')
# Running this block produces an ugly output
# We can try again with some fancier formatting tricks
fig, ax = plt.subplots(figsize=(8, 5))
fig.subplots_adjust(bottom=0.2)
mpf.candlestick_ohlc(ax, quotes, width=0.6, colorup='b', colordown='r')
# Adding some formatting sugar
plt.title("Candlestick Chart for "+company+" ("+ticker+")"+" "+str(start)+" to "+str(end))
plt.grid(True) # Set a title
ax.xaxis_date() # dates on the x-axis
ax.autoscale_view() #Scale the image
plt.setp(plt.gca().get_xticklabels(), rotation=30) # Format labels
"""
Explanation: Demo 2: Plotting a candlestick chart for any stock in 11 lines of code
End of explanation
"""
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import pandas as pd
from pandas_datareader import data as web
# Choose a stock
ticker = 'GOOG'
# Choose a start date in US format MM/DD/YYYY
stock_start = '10/2/2015'
# Choose an end date in US format MM/DD/YYYY
stock_end = '10/2/2016'
# Retrieve the Data from Google's Finance Database
stock = web.DataReader(ticker,data_source='google',
start = stock_start,end=stock_end)
# Generate the logarithm of the ratio between each days closing price
stock['Log_Ret'] = np.log(stock['Close']/stock['Close'].shift(1))
# Generate the rolling standard deviation across the time series data
stock['Volatility'] = stock['Log_Ret'].rolling(window=252).std()*np.sqrt(252)
stock[['Close','Volatility']].plot(subplots=True,color='b',figsize=(8,6))
"""
Explanation: Complete Code for Demo 1
End of explanation
"""
# -*- coding: utf-8 -*-
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.finance as mpf
start = (2015, 10, 2)
end = (2016, 4,2)
company = "S&P 500"
ticker = "^GSPC"
quotes = mpf.quotes_historical_yahoo_ohlc(ticker, start, end)
print(quotes[:2])
fig, ax = plt.subplots(figsize=(8, 5))
fig.subplots_adjust(bottom=0.2)
mpf.candlestick_ohlc(ax, quotes, width=0.6, colorup='b', colordown='r')
plt.title("Candlestick Chart for "+company+" ("+ticker+")"+" "+str(start)+" to "+str(end))
plt.grid(True) # Set a title
ax.xaxis_date() # dates on the x-axis
ax.autoscale_view() #Scale the image
plt.setp(plt.gca().get_xticklabels(), rotation=30) # Format labels
"""
Explanation: Complete Code for Demo 2
End of explanation
"""
rashikaranpuria/Machine-Learning-Specialization | Clustering_&_Retrieval/Week4/Assignment2/4_em-with-text-data_blank.ipynb | mit
import graphlab
"""
Explanation: Fitting a diagonal covariance Gaussian mixture model to text data
In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component.
* Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix.
* A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences.
Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:
\begin{align}
\hat{\Sigma}_k &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T
\end{align}
Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by
\begin{align}
\hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw})
\end{align}
When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation.
\begin{align}
\hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\
&= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2
\end{align}
In this section, we will use an EM implementation to fit a Gaussian mixture model with diagonal covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term.
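The diagonal M-step above can be sketched with small dense NumPy arrays (toy
data and responsibilities, not the sparse implementation used in this
assignment):

```python
import numpy as np

np.random.seed(0)
N, M, K = 6, 4, 2                 # data points, dimensions, clusters
X = np.random.rand(N, M)
resp = np.random.rand(N, K)
resp /= resp.sum(axis=1, keepdims=True)   # soft assignments r_ik

Nk = resp.sum(axis=0)                     # soft counts N_k
mu = (resp.T @ X) / Nk[:, None]           # weighted cluster means

# Diagonal variances: weighted mean squared deviation, per dimension
var = np.empty((K, M))
for k in range(K):
    diff = X - mu[k]
    var[k] = (resp[:, k][:, None] * diff ** 2).sum(axis=0) / Nk[k]

print(var.shape)  # (2, 4): only M parameters per component
```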
We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
"""
from em_utilities import *
"""
Explanation: We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
End of explanation
"""
wiki = graphlab.SFrame('people_wiki.gl/').head(5000)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
"""
Explanation: Load Wikipedia data and extract TF-IDF features
Load Wikipedia data and transform each of the first 5000 document into a TF-IDF representation.
End of explanation
"""
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
"""
Explanation: Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
End of explanation
"""
tf_idf = normalize(tf_idf)
"""
Explanation: As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
End of explanation
"""
for i in range(5):
doc = tf_idf[i]
print(np.linalg.norm(doc.todense()))
"""
Explanation: We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
End of explanation
"""
from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
"""
Explanation: EM in high dimensions
EM for high-dimensional data requires some special treatment:
* E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python.
* All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data.
* Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow.
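The univariate decomposition mentioned earlier is easy to check on toy values
(this is an independent sanity check, not code from em_utilities.py): with a
diagonal covariance, the multivariate log-density is just the sum of M
univariate log-densities:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

x = np.array([0.5, -1.0, 2.0])
mu = np.array([0.0, 0.3, 1.5])
var = np.array([1.0, 0.5, 2.0])   # diagonal covariance entries

# Full multivariate log-density with a diagonal covariance matrix...
lp_full = multivariate_normal(mean=mu, cov=np.diag(var)).logpdf(x)

# ...equals the sum of the per-dimension univariate log-densities
lp_sum = norm(loc=mu, scale=np.sqrt(var)).logpdf(x).sum()

print(np.isclose(lp_full, lp_sum))  # True
```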
We provide the complete implementation for you in the file em_utilities.py. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment.
You are expected to answer some quiz questions using the results of clustering.
Initializing mean parameters using k-means
Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.
End of explanation
"""
num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = len(cluster_assignment[cluster_assignment==i]) # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
"""
Explanation: Initializing cluster weights
We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
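The proportion computation can be sketched in a vectorized way with
np.bincount (hypothetical cluster assignments):

```python
import numpy as np

cluster_assignment = np.array([0, 2, 1, 0, 2, 2, 1, 0])
num_clusters = 3

counts = np.bincount(cluster_assignment, minlength=num_clusters)
weights = counts / float(len(cluster_assignment))

print(weights)        # [0.375 0.25  0.375]
print(weights.sum())  # 1.0
```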
End of explanation
"""
covs = []
for i in xrange(num_clusters):
member_rows = tf_idf[cluster_assignment==i]
cov = (member_rows.power(2) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
"""
Explanation: Initializing covariances
To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
End of explanation
"""
out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
"""
Explanation: Running EM
Now that we have initialized all of our parameters, run EM.
End of explanation
"""
# Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
sorted_word_ids = np.argsort(means[c])[::-1][:len(means[c])] # YOUR CODE HERE
for i in sorted_word_ids[:5]:
print('{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i],
means[c][i],
covs[c][i]))
print('\n==========================================================')
'''By EM'''
visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)
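A tiny illustration (toy values, not from the assignment data) of the `np.argsort` pattern used above to rank words by their mean value:

```python
import numpy as np

# np.argsort returns ascending order, so reverse with [::-1] to get the
# indices of the largest values first, then slice off the top k.
v = np.array([0.2, 0.9, 0.1, 0.5])
order = np.argsort(v)[::-1]   # indices of largest values first
top2 = order[:2]
print(top2)                   # indices of 0.9 and 0.5
```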
"""
Explanation: Interpret clustering results
In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.
Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix).
A sample output may be:
```
==========================================================
Cluster 0: Largest mean parameters in cluster
Word Mean Variance
football 1.08e-01 8.64e-03
season 5.80e-02 2.93e-03
club 4.48e-02 1.99e-03
league 3.94e-02 1.08e-03
played 3.83e-02 8.45e-04
...
```
End of explanation
"""
np.random.seed(5)
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = 1. * np.random.randn(num_words)
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = np.random.uniform(1.,5.,num_words)
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = 1.
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
"""
Explanation: Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
End of explanation
"""
out_random_init = EM_for_high_dimension(tf_idf, random_means, random_covs, random_weights, cov_smoothing=1e-5)
out_random_init
"""
Explanation: Quiz Question: Try fitting EM with the random initial parameters you created above. (Use cov_smoothing=1e-5.) Store the result to out_random_init. What is the final loglikelihood that the algorithm converges to?
End of explanation
"""
out_random_init['loglik'] > out['loglik']
"""
Explanation: Quiz Question: Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means?
End of explanation
"""
# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
visualize_EM_clusters(tf_idf, out_random_init['means'], out_random_init['covs'], map_index_to_word)
"""
Explanation: Quiz Question: For the above model, out_random_init, use the visualize_EM_clusters method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?
End of explanation
"""
|
nvergos/DAT-ATX-1_Project | Notebooks/3. Dimensionality Reduction.ipynb | mit | import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy import stats
import seaborn as sns
sns.set(rc={"axes.labelsize": 15});
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5;
plt.rcParams['axes.grid'] = True;
plt.gray();
"""
Explanation: DAT-ATX-1 Capstone Project
3. Dimensionality Reduction
For the final part of this project, we will extend our study of text classification. Using Principal Component Analysis and Truncated Singular Value Decomposition (methods for dimensionality reduction) we will attempt to replicate the same quality of modeling with a fraction of the features.
The outline of the procedure we are going to follow is:
Turn a corpus of text documents (restaurant names, street addresses) into feature vectors using a Bag of Words representation,
We will apply Principal Component Analysis to decompose the feature vectors into "simpler," meaningful pieces.
Dimensionality reduction is frequently performed as a pre-processing step before another learning algorithm is applied.
Motivations
The number of features in our dataset can be difficult to manage, or even misleading (e.g. if the relationships are actually simpler than they appear).
reduce computational expense
reduce susceptibility to overfitting
reduce noise in the dataset
enhance our intuition
0. Import libraries & packages
End of explanation
"""
#Reading the dataset in a dataframe using Pandas
df = pd.read_csv("data.csv")
#Print first observations
df.head()
df.columns
"""
Explanation: 1. Import dataset
End of explanation
"""
from sklearn.feature_extraction.text import CountVectorizer
# Turn the text documents into vectors
vectorizer = CountVectorizer(min_df=1, stop_words="english")
X = vectorizer.fit_transform(df['Restaurant_Name']).toarray()
y = df['Letter_Grade']
target_names = y.unique()
# Train/Test split and cross validation:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8)
X_train.shape
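For intuition, the bag-of-words representation built by CountVectorizer can be sketched in plain Python. The restaurant names below are hypothetical, and `collections.Counter` stands in for the vectorizer:

```python
from collections import Counter

# Toy bag-of-words: each document becomes a vector of word counts over a
# shared, sorted vocabulary, which is what CountVectorizer produces.
docs = ["taco house", "burger house deluxe", "taco taco stand"]
vocab = sorted(set(word for doc in docs for word in doc.split()))
counts = [Counter(doc.split()) for doc in docs]
X_toy = [[c[word] for word in vocab] for c in counts]
print(vocab)
print(X_toy)
```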
"""
Explanation: Our first collection of feature vectors will come from the Restaurant_Name column. We are still trying to predict whether a restaurant falls under the "pristine" category (Grade A, score greater than 90) or not. We could also try to see whether we could predict a restaurant's grade (A, B, C or F)
2. Dimensionality Reduction Techniques
Restaurant Names as a Bag-of-words model
End of explanation
"""
from sklearn.decomposition import TruncatedSVD
svd_two = TruncatedSVD(n_components=2, random_state=42)
X_train_svd = svd_two.fit_transform(X_train)
pc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame
sns.pairplot(pc_df, diag_kind='kde');
# Percentage of variance explained for each component
def pca_summary(pca):
return pd.DataFrame([np.sqrt(pca.explained_variance_),
pca.explained_variance_ratio_,
pca.explained_variance_ratio_.cumsum()],
index = ["Standard deviation", "Proportion of Variance", "Cumulative Proportion"],
columns = list(map("PC{}".format, range(1, len(pca.components_)+1))))
pca_summary(svd_two)
# Only 3.5% of the variance is explained in the data
svd_two.explained_variance_ratio_.sum()
from itertools import cycle
def plot_PCA_2D(data, target, target_names):
colors = cycle('rgbcmykw')
target_ids = range(len(target_names))
plt.figure()
for i, c, label in zip(target_ids, colors, target_names):
plt.scatter(data[target == i, 0], data[target == i, 1],
c=c, label=label)
plt.legend()
plot_PCA_2D(X_train_svd, y_train, target_names)
"""
Explanation: Even though we do not have more features (3430) than rows of data (14888), we can still attempt to reduce the feature space by using Truncated SVD:
Truncated Singular Value Decomposition for Dimensionality Reduction
Once we have extracted a vector representation of the data, it's a good idea to project the data onto the first two dimensions of a Singular Value Decomposition (i.e. Principal Component Analysis) to get a feel for the data. Note that the TruncatedSVD class can accept scipy.sparse matrices as input (as an alternative to numpy arrays). We will use it to visualize the first two principal components of the vectorized dataset.
End of explanation
"""
# Now, let's try with 100 components to see how much it explains
svd_hundred = TruncatedSVD(n_components=100, random_state=42)
X_train_svd_hundred = svd_hundred.fit_transform(X_train)
# 43.7% of the variance is explained in the data for 100 dimensions
# This is mostly due to the high dimensionality and sparsity of the data
svd_hundred.explained_variance_ratio_.sum()
plt.figure(figsize=(10, 7))
plt.bar(range(100), svd_hundred.explained_variance_)
"""
Explanation: This must be the most uninformative plot in the history of plots. Obviously 2 principal components aren't enough. Let's try with 100:
End of explanation
"""
svd_sparta = TruncatedSVD(n_components=300, random_state=42)
X_train_svd_sparta = svd_sparta.fit_transform(X_train)
X_test_svd_sparta = svd_sparta.transform(X_test)  # transform only: reuse the components fitted on the training set
svd_sparta.explained_variance_ratio_.sum()
"""
Explanation: Is it worth it to keep adding dimensions? Recall that we started with a 3430-dimensional feature space which we have already reduced to 100 dimensions, and according to the graph above each dimension over the 100th one will be adding less than 0.5% in our explanation of the variance. Let us try once more with 300 dimensions, to see if we can get something respectably over 50% (so we can be sure we are doing better than a coin toss)
End of explanation
"""
plt.figure(figsize=(10, 7))
plt.bar(range(300), svd_sparta.explained_variance_)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
# MultinomialNB requires non-negative features, so take absolute values
# of the SVD-projected data for both fitting and scoring
X_train_abs = np.absolute(X_train_svd_sparta)
X_test_abs = np.absolute(X_test_svd_sparta)
# Fit a classifier on the training set
classifier = MultinomialNB().fit(X_train_abs, y_train)
print("Training score: {0:.1f}%".format(
classifier.score(X_train_abs, y_train) * 100))
# Evaluate the classifier on the testing set
print("Testing score: {0:.1f}%".format(
classifier.score(X_test_abs, y_test) * 100))
"""
Explanation: 66.2% of the variance is explained through our model. This is quite respectable.
End of explanation
"""
streets = df['Geocode'].tolist()
split_streets = [i.split(' ', 1)[1] for i in streets]
split_streets = [i.split(' ', 1)[1] for i in split_streets]
split_streets = [i.split(' ', 1)[0] for i in split_streets]
split_streets[0]
import re
shortword = re.compile(r'\W*\b\w{1,3}\b')
for i in range(len(split_streets)):
split_streets[i] = shortword.sub('', split_streets[i])
# Create a new column with the street:
df['Street_Words'] = split_streets
from sklearn.feature_extraction.text import CountVectorizer
# Turn the text documents into vectors
vectorizer = CountVectorizer(min_df=1, stop_words="english")
X = vectorizer.fit_transform(df['Street_Words']).toarray()
y = df['Letter_Grade']
target_names = y.unique()
# Train/Test split and cross validation:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8)
X_train.shape
from sklearn.decomposition import TruncatedSVD
svd_two = TruncatedSVD(n_components=2, random_state=42)
X_train_svd = svd_two.fit_transform(X_train)
pc_df = pd.DataFrame(X_train_svd) # cast resulting matrix as a data frame
sns.pairplot(pc_df, diag_kind='kde');
pca_summary(svd_two)
# 25% of the variance is explained in the data when we use only TWO principal components!
svd_two.explained_variance_ratio_.sum()
# Now, let's try with 10 components to see how much it explains
svd_ten = TruncatedSVD(n_components=10, random_state=42)
X_train_svd_ten = svd_ten.fit_transform(X_train)
# 53.9% of the variance is explained in the data for 10 dimensions
# This is mostly due to the high dimensionality and sparsity of the data
svd_ten.explained_variance_ratio_.sum()
"""
Explanation: Restaurant Streets as a Bag-of-words model
End of explanation
"""
|
re-mint/eotarchive-quantitative | Check Database Tables.ipynb | gpl-3.0 | import requests
import io
import pandas
from itertools import chain
def makeurl(tablename,start,end):
return "https://iaspub.epa.gov/enviro/efservice/{tablename}/JSON/rows/{start}:{end}".format_map(locals())
def table_count(tablename):
url= "https://iaspub.epa.gov/enviro/efservice/{tablename}/COUNT/JSON".format_map(locals())
out=requests.get(url)
try:
return out.json()[0]['TOTALQUERYRESULTS']
except Exception as e:
print(e)
print(out.text)
return -1
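A quick check of the URL template assembled by `makeurl` (the table name here is a placeholder, not a real EPA table):

```python
def makeurl(tablename, start, end):
    # Same template as above: Envirofacts REST endpoint, JSON output,
    # inclusive row range start:end.
    return "https://iaspub.epa.gov/enviro/efservice/{tablename}/JSON/rows/{start}:{end}".format_map(locals())

url = makeurl("SOME_TABLE", 0, 99)
print(url)  # https://iaspub.epa.gov/enviro/efservice/SOME_TABLE/JSON/rows/0:99
```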
table_names=[
"BREPORT_CYCLE",
"RCR_HHANDLER",
"RCR_BGM_BASIC",
"PUB_DIM_FACILITY",
"PUB_FACTS_SUBP_GHG_EMISSION",
"PUB_FACTS_SECTOR_GHG_EMISSION",
"PUB_DIM_SUBPART",
"PUB_DIM_GHG",
"PUB_DIM_SECTOR",
"PUB_DIM_SUBSECTOR",
"PUB_DIM_FACILITY",
"AA_MAKEUP_CHEMICAL_INFO",
"AA_SUBPART_LEVEL_INFORMATION",
"AA_SPENT_LIQUOR_INFORMATION",
"AA_FOSSIL_FUEL_INFORMATION",
"AA_FOSSIL_FUEL_TIER_2_INFO",
"AA_CEMS_DETAILS",
"AA_TIER_4_CEMS_QUARTERLY_CO2",
"PUB_DIM_FACILITY",
"EE_CEMS_DETAILS",
"EE_CEMS_INFO",
"EE_FACILITY_INFO",
"EE_NOCEMS_MONTHLYDETAILS",
"EE_NOCEMSTIO2DETAILS",
"EE_SUBPART_LEVEL_INFORMATION",
"EE_TIER4CEMS_QTRDTLS",
"PUB_DIM_FACILITY",
"GG_FACILITY_INFO",
"GG_NOCEMS_ZINC_DETAILS",
"GG_SUBPART_LEVEL_INFORMATION",
"PUB_DIM_FACILITY",
"II_BIOGAS_REC_PROC",
"II_CH4_GEN_PROCESS",
"II_EQU_II1_OR_II2",
"II_EQU_II4_INPUT",
"II_EQUATION_II3",
"II_EQUATION_II6",
"II_EQUATION_II7",
"II_SUBPART_LEVEL_INFORMATION",
"II_PROCESS_DETAILS",
"PUB_DIM_FACILITY",
"NN_SUBPART_LEVEL_INFORMATION",
"NN_NGL_FRACTIONATOR_METHODS",
"NN_LDC_NAT_GAS_DELIVERIES",
"NN_LDC_DETAILS",
"PUB_DIM_FACILITY",
"R_SUBPART_LEVEL_INFORMATION",
"R_FACILITY_INFO",
"R_SMELTING_FURNACE_INFO",
"R_FEEDSTOCK_INFO",
"PUB_DIM_FACILITY",
"TT_SUBPART_GHG_INFO",
"TT_LANDFILL_DETAILS",
"TT_LF_GAS_COLL_DETAILS",
"TT_WASTE_DEPTH_DETAILS",
"TT_WASTESTREAM_DETLS",
"TT_HIST_WASTE_METHOD",
"PUB_DIM_FACILITY",
"W_SUBPART_LEVEL_INFORMATION",
"W_LIQUIDS_UNLOADING",
"W_TRANSMISSION_TANKS",
"W_PNEUMATIC_DEVICES",
"W_WELL_COMPLETION_HYDRAULIC",
"W_WELL_TESTING",
]
"""
Explanation: Team members have produced a list of know database tables.
I'm going to try to represent those in machine-readable format, and run tests against the API for existence and row-count
Table Names Document
https://docs.google.com/spreadsheets/d/1LDDH-qxJunBqqkS1EfG2mhwgwFi7PylXtz3GYsGjDzA/edit#gid=933879858
End of explanation
"""
table_count(table_names[0])
%%time
table_counts={
table_name:table_count(table_name)
for table_name in table_names
}
pandas.Series(table_counts)
len(table_counts)
"""
Explanation: For each table, I want to
* assert that it actually exists
* get a rowcount
End of explanation
"""
|
tschinz/iPython_Workspace | 02_WP/General/ETH_DDR_Calculations.ipynb | gpl-2.0 | import numpy as np
resolutions = [360, 600, 1200, 2400, 4800] # dpi
inch2mm = 25.4 # mm/inch
framelength_bytes = 8192
pixel_bitnb = 4
physical_frame_length = np.empty(shape=[len(resolutions)], dtype=np.float64) # mm
for i in range(len(resolutions)):
physical_frame_length[i] = (inch2mm / resolutions[i]) * (framelength_bytes * 8 / pixel_bitnb)
for i in range(len(resolutions)):
print("Resolution: {:4} dpi Physical Frame Length: {} mm".format(resolutions[i], physical_frame_length[i]))
"""
Explanation: Ethernet Calculations
Physical Frame Length
$pixel_{Pitch} = \frac{25400\frac{\mu m}{inch}}{{Resolution}}$
$physical_frame_{length} = pixel_{Pitch} * pixel_per_frame$
End of explanation
"""
import numpy as np
memory_sizes = [1073741824, 2147483648, 4294967296, 8589934592] # [Bytes] = 1GB, 2GB, 4GB, 8GB
memory_width = 1024 # Pixels
section_nbr = 16
memory_depth = np.empty(shape=[len(memory_sizes)], dtype=np.int64)
section_depth = np.empty(shape=[len(memory_sizes)], dtype=np.int64)
for i in range(len(memory_sizes)):
memory_depth[i] = memory_sizes[i] / memory_width
section_depth[i] = memory_depth[i] / section_nbr
print("| Memory Size | Memory Width | Memory Depth | Section Nbr | Section Depth |")
for i in range(len(memory_sizes)):
print("| {:4} GBytes | {:4} Pixels | {:8} Bytes | {:11} | {:7} Bytes |".format(memory_sizes[i]/1024/1024/1024, memory_width, memory_depth[i], section_nbr, section_depth[i]))
"""
Explanation: DDR Calculations
Memory size
End of explanation
"""
import numpy as np
inch2mm = 25.4 # mm/inch
ph_name = ["KM 1024i", "KY KJ4B", "KY KJ4B_1200_64k"]
resolutions = [360, 600, 1200] # dpi
nozzle_nbr = [5312, 5312, 5312] # noozles
f_jetting = [30, 40, 64] # kHz
bits_per_pixel = 4
read_factor = 2
pixel_pitch = np.empty(shape=[len(resolutions)], dtype=np.float64)
substrate_speed = np.empty(shape=[len(resolutions)], dtype=np.float64)
printhead_bitrate = np.empty(shape=[len(resolutions)], dtype=np.float64)
ddr2_bitrate_read = np.empty(shape=[len(resolutions)], dtype=np.float64)
for i in range(len(resolutions)):
pixel_pitch[i] = inch2mm / resolutions[i]
substrate_speed[i] = f_jetting[i] * pixel_pitch[i]
printhead_bitrate[i] = nozzle_nbr[i] * bits_per_pixel * f_jetting[i]
ddr2_bitrate_read[i] = printhead_bitrate[i] * read_factor
print("| PH Name | Resolution | Nozzles | f Jetting | Substrate Speed | PH Bitrate | DDR2 Read Bitrate |")
for i in range(len(resolutions)):
print("| {:16} | {:6} dpi | {:7} | {:5} kHz | {:11.4} m/s | {:6} Mbits/s | {:9} Mbits/s | ".format(ph_name[i], resolutions[i], nozzle_nbr[i], f_jetting[i], substrate_speed[i], printhead_bitrate[i]/1024, ddr2_bitrate_read[i]/1024))
"""
Explanation: Data Rate required
End of explanation
"""
|
pastas/pastas | concepts/hantush_response.ipynb | mit | import numpy as np
import pandas as pd
import pastas as ps
ps.show_versions()
"""
Explanation: Hantush response functions
This notebook compares the two Hantush response function implementations in Pastas.
Developed by D.A. Brakenhoff (Artesia, 2021)
Contents
Hantush versus HantushWellModel
Which Hantush should I use?
Synthetic example
End of explanation
"""
# A defined so that 100 m3/day results in 5 m drawdown
A = -5 / 100.0
a = 200
b = 0.5
d = 0.0 # reference level
# auto-correlated residuals AR(1)
sigma_n = 0.05
alpha = 50
sigma_r = sigma_n / np.sqrt(1 - np.exp(-2 * 14 / alpha))
print(f'sigma_r = {sigma_r:.2f} m')
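The `sigma_r` expression above is the stationary standard deviation of an AR(1) process. A quick empirical check of that identity with a generic autoregressive coefficient and a fixed seed (the values here are illustrative, not the model's):

```python
import numpy as np

# For x[t] = phi * x[t-1] + eps[t] with eps ~ N(0, sigma_n^2), the
# stationary standard deviation is sigma_n / sqrt(1 - phi**2), the same
# identity used for sigma_r (there, phi = exp(-dt / alpha)).
rng = np.random.RandomState(0)
phi, sigma_n_demo = 0.8, 0.05
n = 200000
eps = sigma_n_demo * rng.randn(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]
theory = sigma_n_demo / np.sqrt(1 - phi**2)
print(x.std(), theory)  # empirical std close to the theoretical value
```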
"""
Explanation: Hantush versus HantushWellModel
The reason there are two implementations in Pastas is that each implementation currently has advantages and disadvantages. We will discuss those soon, but first let's introduce the two implementations. The two Hantush response functions are very similar, but differ in the definition of the parameters. The table below shows the formulas for both implementations.
| Name | Fitting parameters | Formula | Description |
|------------------|-------------|:------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| Hantush | 3 - A, a, b | $$ \theta(t) = At^{-1} e^{-t/a - ab/t} $$ | Response function commonly used for groundwater abstraction wells. |
| HantushWellModel | 3 - A', a, b' | $$ \theta(r,t) = A^\prime \text{K}_0 \left( 2r \sqrt{b^\prime} \right) t^{-1} e^{-t/a - abr^2/t} $$ | Implementation of the Hantush well function that allows scaling with distance. |
In the first implementation the parameters $A$, $a$, and $b$ can be written as:
$$
\begin{align}
A &= \frac{\text{K}_0 \left( 2\sqrt{b} \right)}{2 \pi T} \\
a &= cS \\
b &= \frac{r^2}{4 \lambda^2}
\end{align}
$$
In this case parameter $A$ is also known as the "gain", which is equal to the steady-state contribution of a stress with unit 1. For example, the drawdown caused by a well with a continuous extraction rate of 1.0 (the units don't really matter here and are determined by what units the user puts in).
In the second implementation, the definition of the parameter $A^\prime$ is different, which allows the distance $r$ between an extraction well and an observation well to be passed as a variable. This allows multiple wells to have the same response function, which can be useful to e.g. reduce the number of parameters in a model with multiple extraction wells. Note that $r$ is never optimized, but has to be provided by the user.
$$
\begin{align}
A^\prime &= \frac{1}{2 \pi T} \\
a &= cS \\
b^\prime &= \frac{1}{4 \lambda^2}
\end{align}
$$
Which Hantush should I use?
So why two implementations? Well, there are advantages and disadvantages to both implementations, which are listed below.
Hantush
Pro:
- Parameter A is the gain, which makes it easier to interpret the results.
- Estimates the uncertainty of the gain directly.
Con:
- Cannot be used to simulate multiple wells.
- More challenging to relate to aquifer characteristics.
HantushWellModel
Pro:
- Can be used with WellModel to simulate multiple wells with one response function.
- Easier to relate parameters to aquifer characteristics.
Con:
- Does not directly estimate the uncertainty of the gain but this can be calculated using special methods.
- More sensitive to the initial value of parameters, in rare cases the initial parameter values have to be tweaked to get a good fit result.
So which one should you use? It depends on your use-case:
Use Hantush if you are considering a single extraction well and you're interested in calculating the gain and the uncertainty of the gain.
Use HantushWellModel if you are simulating multiple extraction wells or want to pass the distance between extraction and observation well as a known parameter.
Of course these aren't strict rules and it is encouraged to explore different model structures when building your timeseries models. But as a first general guiding principle this should help in selecting which approach is appropriate to your specific problem.
Synthetic example
A synthetic example is used to show both Hantush implementations. First, we create a synthetic timeseries generated with the Hantush response function to which we add autocorrelated residuals. We set the parameter values for the Hantush response function:
End of explanation
"""
# head observations between 2000 and 2010
idx = pd.date_range("2000", "2010", freq="D")
ho = pd.Series(index=idx, data=0)
# extraction of 100 m3/day between 2002 and 2006
well = pd.Series(index=idx, data=0.0)
well.loc["2002":"2006"] = 100.0
"""
Explanation: Create a head observations timeseries and a timeseries with the well extraction rate.
End of explanation
"""
ml0 = ps.Model(ho)  # only the time steps with observations are used
rm = ps.StressModel(well, ps.Hantush, name='well', up=False)
ml0.add_stressmodel(rm)
ml0.set_parameter('well_A', initial=A)
ml0.set_parameter('well_a', initial=a)
ml0.set_parameter('well_b', initial=b)
ml0.set_parameter('constant_d', initial=d)
hsynthetic_no_error = ml0.simulate()[ho.index]
"""
Explanation: Create the synthetic head timeseries based on the extraction rate and the parameters we defined above.
End of explanation
"""
delt = (ho.index[1:] - ho.index[:-1]).values / pd.Timedelta("1d")
np.random.seed(1)
noise = sigma_n * np.random.randn(len(ho))
residuals = np.zeros_like(noise)
residuals[0] = noise[0]
for i in range(1, len(ho)):
residuals[i] = np.exp(-delt[i - 1] / alpha) * residuals[i - 1] + noise[i]
hsynthetic = hsynthetic_no_error + residuals
"""
Explanation: Add the auto-correlated residuals.
End of explanation
"""
ax = hsynthetic_no_error.plot(label='synthetic heads (no error)', figsize=(10, 5))
hsynthetic.plot(ax=ax, color="C1", label="synthetic heads (with error)")
ax.legend(loc='best')
ax.set_ylabel("head (m+ref)")
ax.grid(True)
"""
Explanation: Plot the timeseries.
End of explanation
"""
# Hantush
ml_h1 = ps.Model(hsynthetic, name="gain")
wm_h1 = ps.StressModel(well, ps.Hantush, name='well', up=False)
ml_h1.add_stressmodel(wm_h1)
ml_h1.solve(report=False, noise=True)
"""
Explanation: Create three models:
Model with Hantush response function.
Model with HantushWellModel response function, but $r$ is not passed as a known parameter.
Model with WellModel, which uses HantushWellModel and where $r$ is set to 1.0 m.
All three models should yield the similar results and be able to estimate the true values of the parameters reasonably well.
End of explanation
"""
# HantushWellModel
ml_h2 = ps.Model(hsynthetic, name="scaled")
wm_h2 = ps.StressModel(well, ps.HantushWellModel, name='well', up=False)
ml_h2.add_stressmodel(wm_h2)
ml_h2.solve(report=False, noise=True)
# WellModel
r = np.array([1.0]) # parameter r
well.name = "well"
ml_h3 = ps.Model(hsynthetic, name="wellmodel")
wm_h3 = ps.WellModel([well], ps.HantushWellModel, "well", r, up=False)
ml_h3.add_stressmodel(wm_h3)
ml_h3.solve(report=False, noise=True, solver=ps.LmfitSolve)
"""
Explanation: Solve with noise model and Hantush_scaled
End of explanation
"""
axes = ps.plots.compare([ml_h1, ml_h2, ml_h3], adjust_height=True,
figsize=(10, 8));
"""
Explanation: Plot a comparison of all three models. The three models all yield similar results (all the lines overlap).
End of explanation
"""
df = pd.DataFrame(index=["well_gain", "well_a", "well_b"],
columns=["True value", "Hantush",
"HantushWellModel", "WellModel"])
df["True value"] = A, a, b
df["Hantush"] = (
# gain (same as A in this case)
wm_h1.rfunc.gain(ml_h1.get_parameters("well")),
# a
ml_h1.parameters.loc["well_a", "optimal"],
# b
ml_h1.parameters.loc["well_b", "optimal"]
)
df["HantushWellModel"] = (
# gain (not same as A)
wm_h2.rfunc.gain(ml_h2.get_parameters("well")),
# a
ml_h2.parameters.loc["well_a", "optimal"],
# b
ml_h2.parameters.loc["well_b", "optimal"]
)
df["WellModel"] = (
# gain, use WellModel.get_parameters() to get params: A, a, b and r
wm_h3.rfunc.gain(wm_h3.get_parameters(model=ml_h3, istress=0)),
# a
ml_h3.parameters.loc["well_a", "optimal"],
# b (multiply parameter value by r^2 for comparison)
ml_h3.parameters.loc["well_b", "optimal"] * r[0]**2
)
df
"""
Explanation: Compare the optimized parameters for each model with the true values we defined at the beginning of this example. Note that we're comparing the value of the gain (not parameter $A$) and that each model has its own method for calculating the gain. As expected, the parameter estimates are reasonably close to the true values defined above.
End of explanation
"""
def variance_gain(ml, wm_name, istress=None):
"""Calculate variance of the gain for WellModel.
Variance of the gain is calculated based on propagation of
uncertainty using optimal values and the variances of A and b
and the covariance between A and b.
Parameters
----------
ml : pastas.Model
optimized model
wm_name : str
name of the WellModel
istress : int or list of int, optional
index of stress to calculate variance of gain for
Returns
-------
var_gain : float
variance of the gain calculated from model results
for parameters A and b
See Also
--------
pastas.HantushWellModel.variance_gain
"""
wm = ml.stressmodels[wm_name]
if ml.fit is None:
raise AttributeError("Model not optimized! Run solve() first!")
if wm.rfunc._name != "HantushWellModel":
raise ValueError("Response function must be HantushWellModel!")
# get parameters and (co)variances
A = ml.parameters.loc[wm_name + "_A", "optimal"]
b = ml.parameters.loc[wm_name + "_b", "optimal"]
var_A = ml.fit.pcov.loc[wm_name + "_A", wm_name + "_A"]
var_b = ml.fit.pcov.loc[wm_name + "_b", wm_name + "_b"]
cov_Ab = ml.fit.pcov.loc[wm_name + "_A", wm_name + "_b"]
if istress is None:
r = np.asarray(wm.distances)
elif isinstance(istress, int) or isinstance(istress, list):
r = wm.distances[istress]
else:
raise ValueError("Parameter 'istress' must be None, list or int!")
return wm.rfunc.variance_gain(A, b, var_A, var_b, cov_Ab, r=r)
# create dataframe
var_gain = pd.DataFrame(index=df.columns[1:])
# add calculated gain
var_gain["gain"] = df.iloc[0, 1:].values
# Hantush: variance gain is computed directly
var_gain.loc["Hantush", "var gain"] = ml_h1.fit.pcov.loc["well_A", "well_A"]
# HantushWellModel: calculate variance gain
var_gain.loc["HantushWellModel", "var gain"] = wm_h2.rfunc.variance_gain(
ml_h2.parameters.loc["well_A", "optimal"], # A
ml_h2.parameters.loc["well_b", "optimal"], # b
ml_h2.fit.pcov.loc["well_A", "well_A"], # var_A
ml_h2.fit.pcov.loc["well_b", "well_b"], # var_b
ml_h2.fit.pcov.loc["well_A", "well_b"] # cov_Ab
)
# WellModel: calculate variance gain using helper function
var_gain.loc["WellModel", "var gain"] = variance_gain(ml_h3, "well", istress=0)
# calculate std dev gain
var_gain["std gain"] = np.sqrt(var_gain["var gain"])
# show table
var_gain.style.format("{:.5e}")
"""
Explanation: Recall from earlier that when using ps.Hantush the gain and uncertainty of the gain are calculated directly. This is not the case for ps.HantushWellModel, so to obtain the uncertainty of the gain when using that response function there is a method called ps.HantushWellModel.variance_gain() that computes the variance based on the optimal values and (co)variance of parameters $A$ and $b$.
The code below shows the calculated gain for each model, and how to calculate the variance and standard deviation of the gain for each model. The results show that the calculated values are all very close, as was expected.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/miroc/cmip6/models/nicam16-9s/land.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9s', 'land')
"""
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9S
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the time step dependent on the frequency of atmosphere coupling?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? Horizontal, vertical, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
"""
|
phoebe-project/phoebe2-docs | development/examples/eccentric_ellipsoidal.ipynb | gpl-3.0 | #!pip install -I "phoebe>=2.4,<2.5"
"""
Explanation: Eccentric Ellipsoidal (Heartbeat)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
import numpy as np
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
"""
b.set_value('q', value=0.7)
b.set_value('period', component='binary', value=10)
b.set_value('sma', component='binary', value=25)
b.set_value('incl', component='binary', value=0)
b.set_value('ecc', component='binary', value=0.9)
print(b.filter(qualifier='requiv*', context='component'))
b.set_value('requiv', component='primary', value=1.1)
b.set_value('requiv', component='secondary', value=0.9)
"""
Explanation: Now we need a highly eccentric system that nearly overflows at periastron and is slightly eclipsing.
End of explanation
"""
b.add_dataset('lc',
compute_times=phoebe.linspace(-2, 2, 201),
dataset='lc01')
b.add_dataset('orb', compute_times=phoebe.linspace(-2, 2, 201))
anim_times = phoebe.linspace(-2, 2, 101)
b.add_dataset('mesh',
compute_times=anim_times,
coordinates='uvw',
dataset='mesh01')
"""
Explanation: Adding Datasets
We'll add light curve, orbit, and mesh datasets.
End of explanation
"""
b.run_compute(irrad_method='none')
"""
Explanation: Running Compute
End of explanation
"""
afig, mplfig = b.plot(kind='lc', x='phases', t0='t0_perpass', show=True)
"""
Explanation: Plotting
End of explanation
"""
afig, mplfig = b.plot(time=0.0,
z={'orb': 'ws'},
c={'primary': 'blue', 'secondary': 'red'},
fc={'primary': 'blue', 'secondary': 'red'},
ec='face',
uncover={'orb': True},
trail={'orb': 0.1},
highlight={'orb': False},
tight_layout=True,
show=True)
"""
Explanation: Now let's make a nice figure.
Let's go through these options:
* time: make the plot at this single time
* z: by default, orbits plot in 2d, but since we're overplotting with a mesh, we want the z-ordering to be correct, so we'll have them plot with w-coordinates in the z-direction.
* c: (will be ignored by the mesh): set the color to blue for the primary and red for the secondary (will only affect the orbits as the light curve is not tagged with any component).
* fc: (will be ignored by everything but the mesh): set the facecolor to be blue for the primary and red for the secondary.
* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to "see-through" the triangle edges.
* uncover: for the orbit, uncover based on the current time.
* trail: for the orbit, let's show a "trail" behind the current position.
* highlight: disable highlighting for the orbit, since the mesh will be in the same position.
* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.
End of explanation
"""
afig, mplfig = b.plot(times=anim_times,
z={'orb': 'ws'},
c={'primary': 'blue', 'secondary': 'red'},
fc={'primary': 'blue', 'secondary': 'red'},
ec='face',
uncover={'orb': True},
trail={'orb': 0.1},
highlight={'orb': False},
tight_layout=True, pad_aspect=False,
animate=True,
save='eccentric_ellipsoidal.gif',
save_kwargs={'writer': 'imagemagick'})
"""
Explanation: Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:
times: pass our array of times that we want the animation to loop over.
pad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.
animate: self-explanatory.
save: we could use show=True, but that doesn't always play nice with jupyter notebooks.
save_kwargs: these may need changing for your setup; to create a gif, passing {'writer': 'imagemagick'} is often useful.
End of explanation
"""
|
CondensedOtters/PHYSIX_Utils | Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb | gpl-3.0 | import sys
sys.path.append("/Users/mathieumoog/Documents/LibAtomicSim/Python/")
"""
Explanation: Matching Atomic Forces using Neural Nets and Gaussian Overlaps
Loading Tech Stuff
First we load the python path to LibAtomicSim, which will give us some useful functions.
End of explanation
"""
# NN
import keras
# Descriptor (unused)
import dscribe
# Custom Libs
import cpmd
import filexyz
# Maths
import numpy as np
from scipy.spatial.distance import cdist
# Plots
import matplotlib
matplotlib.use('nbAgg')
import matplotlib.pyplot as plt
# Scalers
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA, KernelPCA
from keras.regularizers import l2
#['GTK3Agg', 'GTK3Cairo', 'MacOSX', 'nbAgg', 'Qt4Agg', 'Qt4Cairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']
# Some of these backends may need to be installed at some point
"""
Explanation: Then we load the modules, some issues remain with matplotlib on Jupyter, but we'll fix them later.
End of explanation
"""
def getDistance1Dsq( position1, position2, length):
dist = position1-position2
half_length = length*0.5
if dist > half_length :
dist -= length
elif dist < -half_length:
dist += length
return dist*dist
def getDistanceOrtho( positions, index1, index2, cell_lengths ):
dist=0
for i in range(3):
dist += getDistance1Dsq( positions[index1,i], positions[index2,i], cell_lengths[i] )
return np.sqrt(dist)
def getDistance( position1, position2, cell_lengths ):
dist=0
for i in range(3):
dist += getDistance1Dsq( position1[i], position2[i], cell_lengths[i] )
return np.sqrt(dist)
def computeDistanceMatrix( positions, cell_lengths):
nb_atoms = len(positions[:,0])
matrix = np.zeros(( nb_atoms, nb_atoms ))
for atom in range(nb_atoms):
for atom2 in range(atom+1,nb_atoms):
dist = getDistanceOrtho( positions, atom, atom2, cell_lengths )
matrix[atom,atom2] = dist
matrix[atom2,atom] = dist
return matrix
"""
Explanation: Then we write some functions that are not yet on LibAtomicSim, but should be soon(ish)
End of explanation
"""
volume=8.82
temperature=3000
nb_type=2
nbC=32
nbO=64
run_nb=1
nb_atoms=nbC+nbO
path_sim = str( "/Users/mathieumoog/Documents/CO2/" + str(volume) + "/" + str(temperature) + "K/" + str(run_nb) + "-run/")
"""
Explanation: Data Parameters
End of explanation
"""
cell_lengths = np.ones(3)*volume
ftraj_path = str( path_sim + "FTRAJECTORY" )
positions, velocities, forces = cpmd.readFtraj( ftraj_path, True )
nb_step = positions.shape[0]
bohr2ang = 0.529177
positions = positions*bohr2ang
for i in range(3):
positions[:,:,i] = positions[:,:,i] % cell_lengths[i]
"""
Explanation: Loading Trajectory
Here we load the trajectory, including forces and velocities, and convert the positions back into angstroms, while the forces are still in a.u (although we could do everything in a.u.).
End of explanation
"""
sigma_C = 0.9
sigma_O = 0.9
size_data = nb_step*nbC
dx = 0.1
positions_offset = np.zeros( (6,3), dtype=float )
size_off = 6
n_features=int(2*(size_off+1))
for i,ival in enumerate(np.arange(0,size_off,2)):
positions_offset[ ival , i ] += dx
positions_offset[ ival+1 , i ] -= dx
"""
Explanation: Data parametrization
Setting up the parameters for the data construction.
End of explanation
"""
max_step = 1000
start_step = 1000
stride = 10
size_data = max_step*nbC
data = np.zeros( (max_step*nbC, size_off+1, nb_type ), dtype=float )
for step in np.arange(start_step,stride*max_step+start_step,stride):
# Distance from all atoms (saves time?)
matrix = computeDistanceMatrix( positions[step,:,:], cell_lengths)
for carbon in range(nbC):
            # Data address
add_data = int((step-start_step)/stride)*nbC + carbon
# C-C
for carbon2 in range(nbC):
# Gaussians at atomic site
data[ add_data, 0, 0 ] += np.exp( -(matrix[carbon,carbon2]*matrix[carbon,carbon2])/(2*sigma_C*sigma_C) )
# Gaussians with small displacement from site
if carbon != carbon2:
for i in range(size_off):
dist = getDistance( positions[step, carbon2, :], (positions[step,carbon,:]+positions_offset[i,:])%cell_lengths[0], cell_lengths )
data[ add_data, i+1, 0 ] += np.exp( -(dist*dist)/(2*sigma_C*sigma_C) )
# C-O
for oxygen in range(nbC,nb_atoms):
# Gaussians at atomic site
data[ add_data, 0, 1 ] += np.exp( -(matrix[carbon,oxygen]*matrix[carbon,oxygen])/(2*sigma_O*sigma_O) )
# Gaussians with small displacement from site
for i in range(size_off):
dist = getDistance( positions[step, oxygen,:], (positions[step,carbon,:]+positions_offset[i,:])%cell_lengths[0], cell_lengths )
data[ add_data, i+1, 1 ] += np.exp( -(dist*dist)/(2*sigma_O*sigma_O) )
"""
Explanation: Building the complete data set, with the small caveat that, for time reasons, we do not use all of the time steps (for now at least).
End of explanation
"""
nb_data_train = 30000
nb_data_test = 1000
size_data = max_step*nbC
if nb_data_train + nb_data_test > data.shape[0]:
    raise ValueError("Datasets larger than the amount of available data")
data = data.reshape( size_data, int(2*(size_off+1)) )
choice = np.random.choice( size_data, nb_data_train+nb_data_test, replace=False)
choice_train = choice[0:nb_data_train]
choice_test = choice[nb_data_train:nb_data_train+nb_data_test]
"""
Explanation: Creating test and train set
Here we focus on the carbon atoms and create the input and output of the data set. The input is created by reshaping the descriptor array, while the output is simply the forces reshaped. Once this is done, we choose the train and test sets, making sure that there is no overlap between them.
End of explanation
"""
input_train = data[ choice_train ]
input_test = data[ choice_test ]
output_total = forces[start_step:start_step+max_step*stride:stride,0:nbC,0].reshape(size_data,1)
output_train = output_total[ choice_train ]
output_test = output_total[ choice_test ]
"""
Explanation: Here we reshape the data and choose the point for the train and test set making sure that they do not overlap
End of explanation
"""
# Creating Scalers
scaler = []
scaler.append( StandardScaler() )
scaler.append( StandardScaler() )
# Fitting Scalers
scaler[0].fit( input_train )
scaler[1].fit( output_train )
# Scaling input and output
input_train_scale = scaler[0].transform( input_train )
input_test_scale = scaler[0].transform( input_test)
output_train_scale = scaler[1].transform( output_train )
output_test_scale = scaler[1].transform( output_test )
"""
Explanation: Scaling input and output for the Neural Net
End of explanation
"""
# Iteration parameters
loss_fct = 'mean_squared_error' # Loss function in the NN
optimizer = 'Adam' # Choice of optimizers for training of the NN weights
learning_rate = 0.001
n_epochs = 5000                       # Number of epochs for training
patience = 100 # Patience for convergence
restore_weights = True
batch_size = 16
early_stop_metric=['mse']
# Subnetwork structure
activation_fct = 'tanh' # Activation function in the dense hidden layers
nodes = [15,15,15]
# Dropout rates
dropout_rate_init = 0.2
dropout_rate_within = 0.5
"""
Explanation: Neural Net Structure
Here we set the NN parameters
End of explanation
"""
# Individual net structure
force_net = keras.Sequential(name='force_net')
#force_net.add( keras.layers.Dropout( dropout_rate_init ) )
for node in nodes:
force_net.add( keras.layers.Dense( node, activation=activation_fct, kernel_constraint=keras.constraints.maxnorm(3)))
#force_net.add( keras.layers.Dropout( dropout_rate_within ) )
force_net.add( keras.layers.Dense( 1, activation='linear') )
input_layer = keras.layers.Input(shape=(n_features,), name="gauss_input")
output_layer = force_net( input_layer )
model = keras.models.Model(inputs=input_layer ,outputs=output_layer )
model.compile(loss=loss_fct, optimizer=optimizer, metrics=['mse'])
keras.utils.plot_model(model,to_file="/Users/mathieumoog/network.png", show_shapes=True, show_layer_names=True )
early_stop = keras.callbacks.EarlyStopping( monitor='val_loss', mode='min', verbose=2, patience=patience, restore_best_weights=True)
history = model.fit( input_train_scale, output_train_scale, validation_data=( input_test_scale, output_test_scale ), epochs=n_epochs, verbose=2, callbacks=[early_stop])
predictions = model.predict( input_test_scale )
plt.plot( output_test_scale, predictions,"r." )
plt.plot( output_train_scale, output_train_scale,"g." )
plt.plot( output_test_scale, output_test_scale,"b." )
plt.show()
"""
Explanation: Here we create the neural net structure and compile it
End of explanation
"""
letsgoexploring/economicData | business-cycle-data/python/.ipynb_checkpoints/business_cycle_data-checkpoint.ipynb | mit

import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
# Export path: Set to empty string '' if you want to export data to current directory
export_path = '../Csv/'
# Load FRED API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
"""
Explanation: U.S. Business Cycle Data
This notebook downloads, manages, and exports several data series for studying business cycles in the US. Four files are created in the csv directory:
File name | Description |
---------------------------------------------|------------------------------------------------------|
rbc_data_actual_trend.csv | RBC data with actual and trend values |
rbc_data_actual_trend_cycle.csv | RBC data with actual, trend, and cycle values |
business_cycle_data_actual_trend.csv | Larger data set with actual and trend values |
business_cycle_data_actual_trend_cycle.csv | Larger data set with actual, trend, and cycle values |
The first two files are useful for studying basic RBC models. The second two contain all of the RBC data plus money, inflation, and inflation data.
End of explanation
"""
# Download data
gdp = fp.series('GDP')
consumption = fp.series('PCEC')
investment = fp.series('GPDI')
government = fp.series('GCE')
exports = fp.series('EXPGS')
imports = fp.series('IMPGS')
net_exports = fp.series('NETEXP')
hours = fp.series('HOANBS')
deflator = fp.series('GDPDEF')
pce_deflator = fp.series('PCECTPI')
cpi = fp.series('CPIAUCSL')
m2 = fp.series('M2SL')
tbill_3mo = fp.series('TB3MS')
unemployment = fp.series('UNRATE')
# Base year for CPI
cpi_base_year = cpi.units.split(' ')[1].split('=')[0]
# Base year for NIPA deflators
nipa_base_year = deflator.units.split(' ')[1].split('=')[0]
# Alias used in the plot labels below
base_year = nipa_base_year
# Convert monthly M2, 3-mo T-Bill, and unemployment to quarterly
m2 = m2.as_frequency('Q')
tbill_3mo = tbill_3mo.as_frequency('Q')
unemployment = unemployment.as_frequency('Q')
cpi = cpi.as_frequency('Q')
# Deflate GDP, consumption, investment, government expenditures, net exports, and m2 with the GDP deflator
def deflate(series,deflator):
deflator, series = fp.window_equalize([deflator, series])
series = series.divide(deflator).times(100)
return series
gdp = deflate(gdp,deflator)
consumption = deflate(consumption,deflator)
investment = deflate(investment,deflator)
government = deflate(government,deflator)
net_exports = deflate(net_exports,deflator)
exports = deflate(exports,deflator)
imports = deflate(imports,deflator)
m2 = deflate(m2,deflator)
# pce inflation as percent change over past year
pce_deflator = pce_deflator.apc()
# cpi inflation as percent change over past year
cpi = cpi.apc()
# GDP deflator inflation as percent change over past year
deflator = deflator.apc()
# Convert unemployment, 3-mo T-Bill, pce inflation, cpi inflation, GDP deflator inflation data to rates
unemployment = unemployment.divide(100)
tbill_3mo = tbill_3mo.divide(100)
pce_deflator = pce_deflator.divide(100)
cpi = cpi.divide(100)
deflator = deflator.divide(100)
# Make sure that the RBC data has the same data range
gdp,consumption,investment,government,exports,imports,net_exports,hours = fp.window_equalize([gdp,consumption,investment,government,exports,imports,net_exports,hours])
# T-Bill data doesn't need to go all the way back to the 1930s
tbill_3mo = tbill_3mo.window([gdp.data.index[0],'2222'])
metadata = pd.Series(dtype=str,name='Values')
metadata['nipa_base_year'] = nipa_base_year
metadata['cpi_base_year'] = cpi_base_year
metadata.to_csv(export_path+'/business_cycle_metadata.csv')
"""
Explanation: Download and manage data
Download the following series from FRED:
FRED series ID | Name | Frequency |
---------------|------|-----------|
GDP | Gross Domestic Product | Q |
PCEC | Personal Consumption Expenditures | Q |
GPDI | Gross Private Domestic Investment | Q |
GCE | Government Consumption Expenditures and Gross Investment | Q |
EXPGS | Exports of Goods and Services | Q |
IMPGS | Imports of Goods and Services | Q |
NETEXP | Net Exports of Goods and Services | Q |
HOANBS | Nonfarm Business Sector: Hours Worked for All Employed Persons | Q |
GDPDEF | Gross Domestic Product: Implicit Price Deflator | Q |
PCECTPI | Personal Consumption Expenditures: Chain-type Price Index | Q |
CPIAUCSL | Consumer Price Index for All Urban Consumers: All Items in U.S. City Average | M |
M2SL | M2 | M |
TB3MS | 3-Month Treasury Bill Secondary Market Rate | M |
UNRATE | Unemployment Rate | M |
Monthly series (M2, T-Bill, unemployment rate) are converted to quarterly frequencies. CPI and PCE inflation rates are computed as the percent change in the indices over the previous year. GDP, consumption, investment, government expenditures, net exports and M2 are deflated by the GDP deflator. The data ranges for national accounts series (GDP, consumption, investment, government expenditures, net exports) and hours are equalized to the largest common date range.
End of explanation
"""
# Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(investment.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
# Construct the capital series. Note that the GPD and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = investment.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')
"""
Explanation: Compute capital stock for US using the perpetual inventory method
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\
C_t & = (1-s)Y_t \tag{2}\
Y_t & = C_t + I_t \tag{3}\
K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\
A_{t+1} & = (1+g)A_t \tag{5}\
L_{t+1} & = (1+n)L_t \tag{6}.
\end{align}
Here the model is assumed to be quarterly so $n$ is the quarterly growth rate of labor hours, $g$ is the quarterly growth rate of TFP, and $\delta$ is the quarterly rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\delta$ so we'll have to calibrate these values using statistics computed from the data that we've already obtained.
Let lowercase letters denote a variable that's been divided by $A_t^{1/(1-\alpha)}L_t$. E.g.,
\begin{align}
y_t = \frac{Y_t}{A_t^{1/(1-\alpha)}L_t}\tag{7}
\end{align}
Then (after substituting consumption from the model), the scaled version of the model can be written as:
\begin{align}
y_t & = k_t^{\alpha} \tag{8}\
i_t & = sy_t \tag{9}\
k_{t+1} & = i_t + (1-\delta-n-g')k_t,\tag{10}
\end{align}
where $g' = g/(1-\alpha)$ is the growth rate of $A_t^{1/(1-\alpha)}$. In the steady state:
\begin{align}
k & = \left(\frac{s}{\delta+n+g'}\right)^{\frac{1}{1-\alpha}} \tag{11}
\end{align}
which means that the ratio of capital to output is constant:
\begin{align}
\frac{k}{y} & = \frac{s}{\delta+n+g'} \tag{12}
\end{align}
and therefore the steady state ratio of depreciation to output is:
\begin{align}
\overline{\delta K/ Y} & = \frac{\delta s}{\delta + n + g'} \tag{13}
\end{align}
where $\overline{\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\delta$ given $\overline{\delta K/ Y}$, $s$, $n$, and $g'$.
Furthermore, in the steady state, the growth rate of output is constant:
\begin{align}
\frac{\Delta Y}{Y} & = n + g' \tag{14}
\end{align}
Assume $\alpha = 0.35$.
Calibrate $s$ as the average of ratio of investment to GDP.
Calibrate $n$ as the average quarterly growth rate of labor hours.
Calibrate $g'$ as the average quarterly growth rate of real GDP minus n.
Calculate the average ratio of depreciation to GDP $\overline{\delta K/ Y}$ and use the result to calibrate $\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\delta$ from the following steady state relationship:
\begin{align}
\delta & = \frac{\left( \overline{\delta K/ Y} \right)\left(n + g' \right)}{s - \left( \overline{\delta K/ Y} \right)} \tag{15}
\end{align}
Calibrate $K_0$ by assuming that the capital stock is initially equal to its steady state value:
\begin{align}
K_0 & = \left(\frac{s}{\delta + n + g'}\right) Y_0 \tag{16}
\end{align}
Then, armed with calibrated values for $K_0$ and $\delta$, compute $K_1, K_2, \ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method:
http://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf
End of explanation
"""
# Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
"""
Explanation: Compute total factor productivity
Use the Cobb-Douglas production function:
\begin{align}
Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{17}
\end{align}
and data on GDP, capital, and hours with $\alpha=0.35$ to compute an implied series for $A_t$.
End of explanation
"""
# Convert real GDP, consumption, investment, government expenditures, net exports and M2
# into thousands of dollars per civilian 16 and over
gdp = gdp.per_capita(civ_pop=True).times(1000)
consumption = consumption.per_capita(civ_pop=True).times(1000)
investment = investment.per_capita(civ_pop=True).times(1000)
government = government.per_capita(civ_pop=True).times(1000)
exports = exports.per_capita(civ_pop=True).times(1000)
imports = imports.per_capita(civ_pop=True).times(1000)
net_exports = net_exports.per_capita(civ_pop=True).times(1000)
hours = hours.per_capita(civ_pop=True).times(1000)
capital = capital.per_capita(civ_pop=True).times(1000)
m2 = m2.per_capita(civ_pop=True).times(1000)
# Scale hours per person to equal 100 in October (the fourth quarter) of the GDP deflator base year.
hours.data = hours.data/hours.data.loc[nipa_base_year+'-10-01']*100
"""
Explanation: Additional data management
Now that we have used the aggregate production data to compute an implied capital stock and TFP, we can scale the production data and M2 by the population.
End of explanation
"""
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ('+base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
"""
Explanation: Plot aggregate data
End of explanation
"""
# HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend= gdp.log().hp_filter()
consumption_log_cycle,consumption_log_trend= consumption.log().hp_filter()
investment_log_cycle,investment_log_trend= investment.log().hp_filter()
government_log_cycle,government_log_trend= government.log().hp_filter()
exports_log_cycle,exports_log_trend= exports.log().hp_filter()
imports_log_cycle,imports_log_trend= imports.log().hp_filter()
# net_exports_log_cycle,net_exports_log_trend= net_exports.log().hp_filter()
capital_log_cycle,capital_log_trend= capital.log().hp_filter()
hours_log_cycle,hours_log_trend= hours.log().hp_filter()
tfp_log_cycle,tfp_log_trend= tfp.log().hp_filter()
deflator_cycle,deflator_trend= deflator.hp_filter()
pce_deflator_cycle,pce_deflator_trend= pce_deflator.hp_filter()
cpi_cycle,cpi_trend= cpi.hp_filter()
m2_log_cycle,m2_log_trend= m2.log().hp_filter()
tbill_3mo_cycle,tbill_3mo_trend= tbill_3mo.hp_filter()
unemployment_cycle,unemployment_trend= unemployment.hp_filter()
"""
Explanation: Compute HP filter of data
End of explanation
"""
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp.data)
axes[0][0].plot(np.exp(gdp_log_trend.data),c='r')
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption.data)
axes[0][1].plot(np.exp(consumption_log_trend.data),c='r')
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment.data)
axes[0][2].plot(np.exp(investment_log_trend.data),c='r')
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government.data)
axes[0][3].plot(np.exp(government_log_trend.data),c='r')
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital.data)
axes[1][0].plot(np.exp(capital_log_trend.data),c='r')
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours.data)
axes[1][1].plot(np.exp(hours_log_trend.data),c='r')
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ()'+base_year+'=100)')
axes[1][2].plot(tfp.data)
axes[1][2].plot(np.exp(tfp_log_trend.data),c='r')
axes[1][2].set_title('TFP')
axes[1][3].plot(m2.data)
axes[1][3].plot(np.exp(m2_log_trend.data),c='r')
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo.data*100)
axes[2][0].plot(tbill_3mo_trend.data*100,c='r')
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator.data*100)
axes[2][1].plot(pce_deflator_trend.data*100,c='r')
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi.data*100)
axes[2][2].plot(cpi_trend.data*100,c='r')
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment.data*100)
axes[2][3].plot(unemployment_trend.data*100,c='r')
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent')
ax = fig.add_subplot(1,1,1)
ax.axis('off')
ax.plot(0,0,label='Actual')
ax.plot(0,0,c='r',label='Trend')
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),ncol=2);
"""
Explanation: Plot aggregate data with trends
End of explanation
"""
fig, axes = plt.subplots(3,4,figsize=(6*4,4*3))
axes[0][0].plot(gdp_log_cycle.data)
axes[0][0].set_title('GDP')
axes[0][0].set_ylabel('Thousands of '+base_year+' $')
axes[0][1].plot(consumption_log_cycle.data)
axes[0][1].set_title('Consumption')
axes[0][1].set_ylabel('Thousands of '+base_year+' $')
axes[0][2].plot(investment_log_cycle.data)
axes[0][2].set_title('Investment')
axes[0][2].set_ylabel('Thousands of '+base_year+' $')
axes[0][3].plot(government_log_cycle.data)
axes[0][3].set_title('Gov expenditure')
axes[0][3].set_ylabel('Thousands of '+base_year+' $')
axes[1][0].plot(capital_log_cycle.data)
axes[1][0].set_title('Capital')
axes[1][0].set_ylabel('Thousands of '+base_year+' $')
axes[1][1].plot(hours_log_cycle.data)
axes[1][1].set_title('Hours')
axes[1][1].set_ylabel('Index ()'+base_year+'=100)')
axes[1][2].plot(tfp_log_cycle.data)
axes[1][2].set_title('TFP')
axes[1][3].plot(m2_log_cycle.data)
axes[1][3].set_title('M2')
axes[1][3].set_ylabel('Thousands of '+base_year+' $')
axes[2][0].plot(tbill_3mo_cycle.data)
axes[2][0].set_title('3mo T-Bill')
axes[2][0].set_ylabel('Percent')
axes[2][1].plot(pce_deflator_cycle.data)
axes[2][1].set_title('PCE Inflation')
axes[2][1].set_ylabel('Percent')
axes[2][2].plot(cpi_cycle.data)
axes[2][2].set_title('CPI Inflation')
axes[2][2].set_ylabel('Percent')
axes[2][3].plot(unemployment_cycle.data)
axes[2][3].set_title('Unemployment rate')
axes[2][3].set_ylabel('Percent');
"""
Explanation: Plot cyclical components of the data
End of explanation
"""
# Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':consumption.data,
'consumption_trend':np.exp(consumption_log_trend.data),
'consumption_cycle':consumption_log_cycle.data,
'investment':investment.data,
'investment_trend':np.exp(investment_log_trend.data),
'investment_cycle':investment_log_cycle.data,
'government':government.data,
'government_trend':np.exp(government_log_trend.data),
'government_cycle':government_log_cycle.data,
'exports':exports.data,
'exports_trend':np.exp(exports_log_trend.data),
'exports_cycle':exports_log_cycle.data,
'imports':imports.data,
'imports_trend':np.exp(imports_log_trend.data),
'imports_cycle':imports_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
'real_m2':m2.data,
'real_m2_trend':np.exp(m2_log_trend.data),
'real_m2_cycle':m2_log_cycle.data,
't_bill_3mo':tbill_3mo.data,
't_bill_3mo_trend':tbill_3mo_trend.data,
't_bill_3mo_cycle':tbill_3mo_cycle.data,
'cpi_inflation':cpi.data,
'cpi_inflation_trend':cpi_trend.data,
'cpi_inflation_cycle':cpi_cycle.data,
'pce_inflation':pce_deflator.data,
'pce_inflation_trend':pce_deflator_trend.data,
'pce_inflation_cycle':pce_deflator_cycle.data,
'unemployment':unemployment.data,
'unemployment_trend':unemployment_trend.data,
'unemployment_cycle':unemployment_cycle.data,
})
# RBC data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'rbc_data_actual_trend_cycle.csv',index=True)
# More comprehensive Business Cycle Data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend.csv',index=True)
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp','real_m2','t_bill_3mo','pce_inflation','unemployment']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].dropna().to_csv(export_path+'business_cycle_data_actual_trend_cycle.csv')
"""
Explanation: Create data files
End of explanation
"""
susantabiswas/Natural-Language-Processing | Notebooks/Word_Prediction_Quadgram_In_Constant_Time.ipynb | mit

from nltk.util import ngrams
from collections import defaultdict
from collections import OrderedDict
import string
import time
import gc
start_time = time.time()
"""
Explanation: Word prediction using Quadgram
This program reads the corpus one line at a time and loads it into memory incrementally.
Time Complexity for word prediction : O(1)
Time Complexity for word prediction with rank 'r': O(r)
<u>Import corpus</u>
End of explanation
"""
#returns: string
#arg: string
#remove punctuations and make the string lowercase
def removePunctuations(sen):
#split the string into word tokens
temp_l = sen.split()
i = 0
#changes the word to lowercase and removes punctuations from it
for word in temp_l :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
temp_l[i] = word.lower()
i=i+1
    #joining and re-splitting is done because a sentence like "here---so"
    #should become "here so" after punctuation removal
content = " ".join(temp_l)
return content
"""
Explanation: <u>Do preprocessing</u>:
Remove the punctuations and lowercase the tokens
End of explanation
"""
#returns : void
#arg: string,dict,dict,dict,dict
#loads the corpus for the dataset and makes the frequency count of quadgram and trigram strings
def loadCorpus(file_path,bi_dict,tri_dict,quad_dict,vocab_dict):
w1 = '' #for storing the 3rd last word to be used for next token set
w2 = '' #for storing the 2nd last word to be used for next token set
w3 = '' #for storing the last word to be used for next token set
token = []
word_len = 0
#open the corpus file and read it line by line
with open(file_path,'r') as file:
for line in file:
#split the line into tokens
token = line.split()
i = 0
#for each word in the token list ,remove pucntuations and change to lowercase
for word in token :
for l in word :
if l in string.punctuation:
word = word.replace(l," ")
token[i] = word.lower()
i=i+1
#make the token list into a string
content = " ".join(token)
token = content.split()
word_len = word_len + len(token)
if not token:
continue
#add the last word from previous line
if w3!= '':
token.insert(0,w3)
temp0 = list(ngrams(token,2))
#since we are reading line by line some combinations of word might get missed for pairing
#for trigram
#first add the previous words
if w2!= '':
token.insert(0,w2)
#tokens for trigrams
temp1 = list(ngrams(token,3))
#insert the 3rd last word from previous line for quadgram pairing
if w1!= '':
token.insert(0,w1)
#add new unique words to the vocaulary set if available
for word in token:
if word not in vocab_dict:
vocab_dict[word] = 1
else:
vocab_dict[word]+= 1
#tokens for quadgrams
temp2 = list(ngrams(token,4))
#count the frequency of the bigram sentences
for t in temp0:
sen = ' '.join(t)
bi_dict[sen] += 1
#count the frequency of the trigram sentences
for t in temp1:
sen = ' '.join(t)
tri_dict[sen] += 1
#count the frequency of the quadgram sentences
for t in temp2:
sen = ' '.join(t)
quad_dict[sen] += 1
#then take out the last 3 words
n = len(token)
#store the last few words for the next sentence pairing
w1 = token[n -3]
w2 = token[n -2]
w3 = token[n -1]
return word_len
"""
Explanation: Tokenize and load the corpus data
End of explanation
"""
#returns: void
#arg: dict,dict,dict,dict,dict,int
#creates dict for storing probable words with their probabilities for a trigram sentence
def createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len):
for quad_sen in quad_dict:
prob = 0.0
quad_token = quad_sen.split()
tri_sen = ' '.join(quad_token[:3])
tri_count = tri_dict[tri_sen]
if tri_count != 0:
prob = interpolatedProbability(quad_token,token_len, vocab_dict, bi_dict, tri_dict, quad_dict,
l1 = 0.25, l2 = 0.25, l3 = 0.25 , l4 = 0.25)
if tri_sen not in prob_dict:
prob_dict[tri_sen] = []
prob_dict[tri_sen].append([prob,quad_token[-1]])
else:
prob_dict[tri_sen].append([prob,quad_token[-1]])
prob = None
tri_count = None
quad_token = None
tri_sen = None
"""
Explanation: Create a Hash Table for Probable words for Trigram sentences
End of explanation
"""
#returns: void
#arg: dict
#for sorting the probable words according to their probabilities
def sortProbWordDict(prob_dict):
    for key in prob_dict:
        if len(prob_dict[key])>1:
            #sort in place, highest probability first
            #(a bare sorted() call would discard the sorted result)
            prob_dict[key].sort(reverse = True)
"""
Explanation: Sort the probable words
End of explanation
"""
#returns: string
#arg: string,dict,int
#does prediction for the the sentence
def doPrediction(sen,prob_dict,rank = 1):
if sen in prob_dict:
if rank <= len(prob_dict[sen]):
return prob_dict[sen][rank-1][1]
else:
return prob_dict[sen][0][1]
else:
return "Can't predict"
"""
Explanation: <u>Driver function for doing the prediction</u>
End of explanation
"""
#returns: float
#arg: float,float,float,float,list,list,dict,dict,dict,dict
#for calculating the interpolated probablity
def interpolatedProbability(quad_token,token_len, vocab_dict, bi_dict, tri_dict, quad_dict,
l1 = 0.25, l2 = 0.25, l3 = 0.25 , l4 = 0.25):
sen = ' '.join(quad_token)
prob =(
l1*(quad_dict[sen] / tri_dict[' '.join(quad_token[0:3])])
+ l2*(tri_dict[' '.join(quad_token[1:4])] / bi_dict[' '.join(quad_token[1:3])])
+ l3*(bi_dict[' '.join(quad_token[2:4])] / vocab_dict[quad_token[2]])
+ l4*(vocab_dict[quad_token[3]] / token_len)
)
return prob
"""
Explanation: <u> For Computing Interpolated Probability</u>
End of explanation
"""
#returns: string
#arg: void
#for taking input from user
def takeInput():
cond = False
#take input
while(cond == False):
sen = input('Enter the string\n')
sen = removePunctuations(sen)
temp = sen.split()
if len(temp) < 3:
print("Please enter atleast 3 words !")
else:
cond = True
temp = temp[-3:]
sen = " ".join(temp)
return sen
"""
Explanation: <u>For Taking input from the User</u>
End of explanation
"""
"""
def main():
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
    quad_dict = defaultdict(int)         #for keeping count of sentences of four words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
prob_dict = OrderedDict() #for storing the probabilities of probable words for a sentence
bi_dict = defaultdict(int)
#load the corpus for the dataset
token_len = loadCorpus('corpusfile.txt',bi_dict,tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time for Corpus loading: %s seconds ---" % (time.time() - start_time))
start_time1 = time.time()
#creates a dictionary of probable words
createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len)
#sort the dictionary of probable words
sortProbWordDict(prob_dict)
# writeProbWords(prob_dict)
gc.collect()
print("---Preprocessing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1))
""""
"""
if __name__ == '__main__':
main()
"""
"""
Explanation: <u>main function</u>
End of explanation
"""
#variable declaration
tri_dict = defaultdict(int) #for keeping count of sentences of three words
quad_dict = defaultdict(int)         #for keeping count of sentences of four words
vocab_dict = defaultdict(int) #for storing the different words with their frequencies
prob_dict = OrderedDict() #for storing the probabilities of probable words for a sentence
bi_dict = defaultdict(int)
#load the corpus for the dataset
token_len = loadCorpus('corpusfile.txt',bi_dict,tri_dict,quad_dict,vocab_dict)
print("---Preprocessing Time for Corpus loading: %s seconds ---" % (time.time() - start_time))
start_time1 = time.time()
#creates a dictionary of probable words
createProbableWordDict(bi_dict,tri_dict,quad_dict,prob_dict,vocab_dict,token_len)
#sort the dictionary of probable words
sortProbWordDict(prob_dict)
# writeProbWords(prob_dict)
gc.collect()
print("---Preprocessing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1))
sen = takeInput()
start_time2 = time.time()
prediction = doPrediction(sen,prob_dict)
print("Word Prediction:",prediction)
print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time2))
"""
Explanation: <i><u>For Debugging Purpose Only</u></i>
<i>Uncomment the above two cells and ignore running the cells below if not debugging</i>
End of explanation
"""
|
LargePanda/GEAR_Network | notebooks/.ipynb_checkpoints/pipeline-checkpoint.ipynb | gpl-3.0 | import json
from data_collection_util import *
# load original profile
with open("../profile/profile.json") as f:
orig_profile = json.load(f)
import codecs
with codecs.open("../profile/profile2.json", "w", "utf-8") as f:
    json.dump(orig_profile, f)
# load original profile
with open("../profile/profile.json") as f:
orig_profile = json.load(f)
"""
Explanation: Load original profile
End of explanation
"""
sample_profile = {u'cluster_id': 0,
u'gear_collaborators': [],
u'mathsci_id': u'MR304864',
u'member_id': 12,
u'member_type': u'member',
u'name': u'Steven',
u'organization': u'University of Illinois at Urbana-Champaign',
u'other_collaborators': u'Indranil Biswas, Jim Glazebrook, Tomas Gomez, Adam Jacob, Franz Kamber, Vincent Mercat, Vicente Munoz, Peter Newstead, Mathias Stemmler',
u'photo': u'BradlowSteven.jpg',
u'pos_x': 0,
u'pos_y': 0,
u'research_interests': u'Higgs Bundles',
u'short_bio': u"I'm interested in moduli spaces associated with holomorphic vector bundles. In particular, I'm a big fan of applications of Higgs bundle technology to the study of surface group representation varieties. Before I die, I'd like to be able to compute the surface group representation corresponding to any given Higgs bundle, and vice versa.",
u'surname': u'Bradlow',
u'title': u'GEAR Member',
u'website': u''}
"""
Explanation: In original profile, each gear member has the following arrtibutes:
website
gear_collaborators
pos_x
pos_y
surname
name
title
photo
other_collaborators
member_type
short_bio
mathsci_id
cluster_id
member_id
organization
research_interests
A sample member profile looks like this:
End of explanation
"""
mappers = make_mappers(orig_profile)
gear_mathsci_mapper = mappers[0]
mathsci_gear_mapper = mappers[1]
"""
Explanation: Build mappers for id and mathscinet id
In this step, we build mapping between gear_id and mathscinet_id. For example, given a gear member id, gear_mathsci_mapper will return a mathscinet id
End of explanation
"""
paper_set = download_full_paper_set(orig_profile)
"""
Explanation: Download paper list for each member with mathsci_id
In this step, the program iterates through all members. If a member has valid mathscinet id, then we retrieve the paper list of that member.
End of explanation
"""
sp={'authors': ['MR367870', 'MR1001390'],
'date': 2012,
'description': u'Conner, Gregory R. ; Kent, Curtis Inverse limits of finite rank free groups. J. Group Theory 15 (2012), no. 6, 823\u2013829. (Reviewer: David Meier) 20E05 (20E18)',
'id': u'MR2997025',
'url': u'http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=MR2997025'}
"""
Explanation: paper_set has the following structure:
- 'member 0': paper a, paper b
- 'member 1': paper b, paper c, paper d
- 'member 2': paper c
for each paper, the structure is as follows:
date
url
id
description
authors
A sample paper looks like this:
End of explanation
"""
paper_set['MR304864']
"""
Explanation: In our paper_set, Professor Bradlow has the following papers:
End of explanation
"""
paper_2011_meta = filter_2011(paper_set)
paper_set_2011 = paper_2011_meta[0]
count_2011 = paper_2011_meta[1]
"""
Explanation: Keep in mind that we only look at papers published after 2011. Hence, we filter the papers and get paper_set_2011
End of explanation
"""
# download papers citing gear member papers
download_gear_papers(paper_set_2011, count_2011)
"""
Explanation: For co-citation papers, we need to know what papers are citing papers in paper_set_2011.
This process may take 10 minutes, depending on network.
End of explanation
"""
# update coauthorship/cocitation data
full_paper_list = []
useful_paper = set()
for ending_year in range(2011, 2017):
update_collaborators(orig_profile, paper_set_2011, 2011, ending_year, mathsci_gear_mapper, useful_paper)
update_citations(orig_profile, paper_set_2011, 2011, ending_year, mathsci_gear_mapper, full_paper_list, useful_paper)
len(useful_paper)
"""
Explanation: Matrix generation
We have one function update_collaborators that updates the co-authorship relation and the other function update_citations that updates the co-citation relation.
End of explanation
"""
orig_profile['items'][12]
"""
Explanation: These two functions will add additional data fields to authors.
Let's look at Professor Bradlow's profile again:
End of explanation
"""
# print matrix
for ending_year in range(2011, 2017):
matrix_maker(orig_profile, 2011, ending_year)
import os
import codecs
def export_paper(the_paper_list):
print "Exporting papers ..."
output_path = os.path.join( '..', 'website_input', 'papers.json')
export = {}
for p in the_paper_list:
export[p['id']] = p
with codecs.open(output_path, "w", 'utf-8') as f:
json.dump(export, f, indent=4, separators=(',', ': '), ensure_ascii = False)
len(full_paper_list)
export_paper(full_paper_list)
export_profile(orig_profile)
def export_profile(profile):
output_path = os.path.join( '..', 'website_input', 'profile.json')
    with codecs.open(output_path, "w", "utf-8") as f:
        json.dump(profile, f, ensure_ascii=False)
output_path = os.path.join( '..', 'website_input', 'profile.json')
with codecs.open(output_path, "w", "utf-8") as f:
    json.dump(orig_profile, f, ensure_ascii=False)
output_path = os.path.join( '..', 'website_input', 'profile.json')
with open(output_path, "r") as f:
p = json.load(f)
p.keys()
"""
Explanation: Co-author
For co-author, we look at two types of fields:
'2011-2015 collaborators details': 43: [u'MR3323627', u'MR2999985'], 49: [u'MR3323627', u'MR2999985']
'2011-2015 collaborators sizes': 43: 2, 49: 2
It means that, Professor Bradlow (member id 12), has co-authored 2 papers (with paper id 'MR3323627' and 'MR2999985') with Member 43, and 2 papers (with paper id 'MR3323627' and 'MR2999985') with Member 49.
For co-citation, the idea is the similar
Matrix export
We then output the matrix to files
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex02-Read SST data, create and save nino3 time series.ipynb | mit | %matplotlib inline
import numpy as np
from numpy import nonzero
import matplotlib.pyplot as plt # to generate plots
from mpl_toolkits.basemap import Basemap # plot on map projections
import matplotlib.dates as mdates
import datetime
from netCDF4 import Dataset # http://unidata.github.io/netcdf4-python/
from netCDF4 import netcdftime
from netcdftime import utime
"""
Explanation: Read SST data, Create and Save nino3 time series
In this notebook, we will finish these tasks
* subsample SST data over the nino3 area
* calcualte climatology
* calculate anomalies
* calculate regional mean
* plot regional mean SST time series
* save data
1. Load basic libraries
End of explanation
"""
ncfile = 'data/skt.mon.mean.nc'
fh = Dataset(ncfile, mode='r') # file handle, open in read only mode
lon = fh.variables['lon'][:]
lat = fh.variables['lat'][:]
nctime = fh.variables['time'][:]
t_unit = fh.variables['time'].units
skt = fh.variables['skt'][:]
try :
t_cal = fh.variables['time'].calendar
except AttributeError : # Attribute doesn't exist
t_cal = u"gregorian" # or standard
fh.close() # close the file
"""
Explanation: 2. Set and read input NetCDF file info
2.1 Read data
End of explanation
"""
utime = netcdftime.utime(t_unit, calendar = t_cal)
datevar = utime.num2date(nctime)
print(datevar.shape)
datevar[0:5]
"""
Explanation: 2.2 Parse time
End of explanation
"""
idx_lat_n3 = (lat>=-5.0) * (lat<=5.0)
idx_lon_n3 = (lon>=210.0) * (lon<=270.0)
"""
Explanation: 3. Subregion for nino3 area
Lat: -5 ~ 5
Lon: 210 ~ 270
3.1 Get indices of time, lat and lon over the nino3 area
End of explanation
"""
years = np.array([idx.year for idx in datevar])
idx_tim_n3 = (years>=1970) * (years<=1999)
"""
Explanation: time: 1970-1999
End of explanation
"""
idxtim = nonzero(idx_tim_n3)[0]
#idxlat = nonzero(idx_lat_n3)[0]
idxlon = nonzero(idx_lon_n3)[0]
idxlon
"""
Explanation: Get Index using np.nonzero
End of explanation
"""
lat_n3 = lat[idx_lat_n3]
lon_n3 = lon[idx_lon_n3]
dates_n3 = datevar[idx_tim_n3]
skt_n3 = skt[idx_tim_n3, :, :][:,idx_lat_n3,:][:,:,idx_lon_n3]
print(skt_n3.shape)
print(dates_n3.shape)
"""
Explanation: 3.2 Extract data over nino3 area
Use logical indexing is fine for 1D array; however, a little funny for multiple dimension array.
End of explanation
"""
skt_n3 = np.reshape(skt_n3, (12,30,6,33), order='F')
skt_n3 = np.transpose(skt_n3, (1, 0, 2, 3))
skt_n3.shape
"""
Explanation: 4. Calculate region means
4.1 Transform skt_n3 from months|lat|lon => 12|year|lat|lon => year|12|lat|lon
End of explanation
"""
clima_skt_n3 = np.mean(skt_n3, axis=0)
clima_skt_n3.shape
"""
Explanation: 4.2 Calculate monthly climatology
End of explanation
"""
num_repeats = 30 # 30 years
clima_skt_n3 = np.vstack([clima_skt_n3]*num_repeats)
clima_skt_n3.shape
clima_skt_n3 = np.reshape(clima_skt_n3, (12,30,6,33),order='F')
clima_skt_n3 = np.transpose(clima_skt_n3, (1, 0, 2, 3))
clima_skt_n3.shape
ssta = skt_n3-clima_skt_n3
ssta2 = np.reshape(ssta,(30,12,6*33), order='F') # 30x12x198
ssta3 = np.mean(ssta2, axis=2); # 30x12
ssta3.shape
ssta_series = np.reshape(ssta3.T,(12*30,1), order='F'); # 360x1
ssta_series.shape
"""
Explanation: 4.3 Calculate anomaly of SST over nino3 area
End of explanation
"""
import matplotlib.dates as mdates
from matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter
import matplotlib.ticker as ticker
fig, ax = plt.subplots(1, 1 , figsize=(15,5))
ax.plot(dates_n3, ssta_series)
ax.set_ylim((-4,4))
#horiz_line_data = np.array([0 for i in np.arange(len(dates_n3))])
#ax.plot(dates_n3, horiz_line_data, 'r--')
ax.axhline(0, color='r')
ax.set_title('NINO3 SSTA 1970-1999')
ax.set_ylabel('$^oC$')
ax.set_xlabel('Date')
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
# use a more precise date string for the x axis locations in the toolbar
ax.fmt_xdata = mdates.DateFormatter('%Y')
"""
Explanation: 5. Have a beautiful look
End of explanation
"""
np.savez('data/ssta.nino3.30y.npz', ssta_series=ssta_series)
"""
Explanation: 6. Save data
End of explanation
"""
|
JasonSanchez/w261 | week5/HW5-Phase2_update_10121400.ipynb | mit | !ls | grep "mo"
"""
Explanation: MIDS - w261 Machine Learning At Scale
Course Lead: Dr James G. Shanahan (email Jimi via James.Shanahan AT gmail.com)
Assignment - HW5 - Phase 2
Group Members:
Jim Chen, Memphis, TN, jim.chen@ischool.berkeley.edu
Manuel Moreno, Salt Lake City, UT, momoreno@ischool.berkeley.edu
Rahul Ragunathan
Jason Sanchez, San Francisco, CA, jason.sanchez@ischool.berkeley.edu
Class: MIDS w261 Fall 2016 Group 2
Week: 5
Due Time: 2 Phases.
HW5 Phase 1
This can be done on a local machine (with a unit test on the cloud such as AltaScale's PaaS or on AWS) and is due Tuesday, Week 6 by 8AM (West coast time). It will primarily focus on building a unit/systems and for pairwise similarity calculations pipeline (for stripe documents)
HW5 Phase 2
This will require the AltaScale cluster and will be due Tuesday, Week 7 by 8AM (West coast time).
The focus of HW5 Phase 2 will be to scale up the unit/systems tests to the Google 5 gram corpus. This will be a group exercise
<a name="1">
Instructions </a>
MIDS UC Berkeley, Machine Learning at Scale
DATSCIW261 ASSIGNMENT #5
Version 2016-09-25
=== INSTRUCTIONS for SUBMISSIONS ===
Follow the instructions for submissions carefully.
https://docs.google.com/forms/d/1ZOr9RnIe_A06AcZDB6K1mJN4vrLeSmS2PD6Xm3eOiis/viewform?usp=send_form
Documents:
IPython Notebook, published and viewable online.
PDF export of IPython Notebook.
Table of Contents <a name="TOC"></a>
HW5.0: Short answers
HW5.1: Short answers
HW5.2: Joins
HW5.3
HW5.4
HW5.5
HW5.6
HW5.7
HW5.8
HW5.9
<a name="2">
HW Problems
Back to Table of Contents
HW5.0 <a name="1.0"></a>
Back to Table of Contents
What is a data warehouse? What is a Star schema? When is it used?
Data warehouse: Stores a large amount of relational, semi-structured, and unstructured data. Is used for business intelligence and data science.
A star schema has fact tables and many dimension tables that connect to the fact tables. Fact tables record events such as sales or website visits and encodes details of the events as keys (user_id, product_id, store_id, ad_id). The dimension tables store the detailed information about each of these keys.
Star schemas provide a simple approach to structuring a data warehouse relationally.
HW5.1 <a name="1.1"></a>
Back to Table of Contents
In the database world What is 3NF? Does machine learning use data in 3NF? If so why?
In what form does ML consume data?
Why would one use log files that are denormalized?
3NF means third normal form. It is used to transform large flat files that have repeated data into a linked collection of smaller tables that can be joined on a set of common keys.
Machine learning does not use data in 3NF. Instead, it consumes large denormalized flat files so that the details hidden behind the keys are directly available to the algorithms.
Log files can track specific events of interest. A denormalized log file allows a company to track these events in real time conditioned on specific customer features. Alternatively, a model can be running that triggers appropriate responses based on the next predicted action of a user given the user's latest action.
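For instance, denormalizing a fact table against a dimension table turns keyed events into the flat rows ML consumes (the names and values below are toy examples):

```python
users = {1: "alice", 2: "bob"}                      # dimension table: user_id -> name
events = [(1, "click"), (2, "view"), (1, "view")]   # fact table rows: (user_id, action)
# Denormalize: replace each key with the detail it points to
flat = [(users[uid], action) for uid, action in events]
```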
HW5.2 <a name="1.2"></a>
Back to Table of Contents
Using MRJob, implement a hashside join (memory-backed map-side) for left, right and inner joins. Run your code on the data used in HW 4.4: (Recall HW 4.4: Find the most frequent visitor of each page using mrjob and the output of 4.2 (i.e., transfromed log file). In this output please include the webpage URL, webpageID and Visitor ID.)
Justify which table you chose as the Left table in this hashside join.
Please report the number of rows resulting from:
(1) Left joining Table Left with Table Right
(2) Right joining Table Left with Table Right
(3) Inner joining Table Left with Table Right
List data files to use for joins
End of explanation
"""
!wc -l anonymous-msweb-preprocessed.data && echo
!head anonymous-msweb-preprocessed.data
!cp anonymous-msweb-preprocessed.data log.txt
"""
Explanation: Count lines in log dataset. View the first 10 lines. Rename data to log.txt
End of explanation
"""
!cat mostFrequentVisitors.txt | cut -f 1,2 -d',' > urls.txt
!wc -l urls.txt && echo
!head urls.txt
"""
Explanation: Convert the output of 4.4 to be just url and url_id. Save as urls.txt.
End of explanation
"""
%%writefile join.py
from mrjob.job import MRJob
from mrjob.step import MRStep
# Avoid broken pipe error
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
class Join(MRJob):
def configure_options(self):
super(Join, self).configure_options()
self.add_passthrough_option(
'--join',
default="left",
help="Options: left, inner, right")
def mapper_init(self):
self.join = self.options.join
self.urls_used = set()
self.urls = {}
try:
open("urls.txt")
filename = "urls.txt"
except FileNotFoundError:
filename = "limited_urls.txt"
with open(filename) as urls:
for line in urls:
url, key = line.strip().replace('"',"").split(",")
self.urls[key] = url
def mapper(self, _, lines):
try:
url = lines[2:6]
if self.join in ["inner", "left"]:
yield (lines, self.urls[url])
elif self.join in ["right"]:
yield (self.urls[url], lines)
self.urls_used.add(url)
except KeyError:
if self.join in ["inner", "right"]:
pass
else:
yield (lines, "None")
def mapper_final(self):
for key, value in self.urls.items():
if key not in self.urls_used:
yield (self.urls[key], "*")
def reducer(self, url, values):
quick_stash = 0
for val in values:
if val != "*":
quick_stash += 1
yield (val, url)
if quick_stash == 0:
yield ("None", url)
def steps(self):
join = self.options.join
if join in ["inner", "left"]:
mrsteps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper)]
if join == "right":
mrsteps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper,
mapper_final=self.mapper_final,
reducer=self.reducer)]
return mrsteps
if __name__ == "__main__":
Join.run()
"""
Explanation: The urls.txt file is much smaller than the log.txt data and should be what is loaded into memory. This means it would be the right-side table in a left-side join.
End of explanation
"""
!head -n 5 urls.txt > limited_urls.txt
"""
Explanation: Make a file with only the first five urls to test left and inner join.
End of explanation
"""
!head log.txt | python join.py --file limited_urls.txt --join left -q
"""
Explanation: Using the first ten lines of the log file and left joining it to the first five lines of the urls file, we see that some of the urls are returned as "None." This is correct behavior.
End of explanation
"""
!head log.txt | python join.py --file limited_urls.txt --join inner -q
"""
Explanation: Performing the same operation, but with an inner join, we see the lines that were "None" are dropped.
End of explanation
"""
!head -n 100 log.txt | python join.py --file urls.txt --join right -r local -q | head -n 15
"""
Explanation: To prove the right-side join works, we can only use the first 100 log entries. We see that urls without corresponding log entries are listed as "None" and that all urls are returned in alphabetical order.
End of explanation
"""
!head -n 50 log.txt | python join.py --file limited_urls.txt --join right -r local -q
"""
Explanation: By using the limited urls file, we see that only five urls are returned and every logged page visit to those pages are returned (at least within the first 50 log entries).
End of explanation
"""
%%writefile mini_5gram.txt
A BILL FOR ESTABLISHING RELIGIOUS 59 59 54
A Biography of General George 92 90 74
A Case Study in Government 102 102 78
A Case Study of Female 447 447 327
A Case Study of Limited 55 55 43
A Child's Christmas in Wales 1099 1061 866
A Circumstantial Narrative of the 62 62 50
A City by the Sea 62 60 49
A Collection of Fairy Tales 123 117 80
A Collection of Forms of 116 103 82
"""
Explanation: HW5.3 <a name="1.3"></a> Systems tests on n-grams dataset (Phase1) and full experiment (Phase 2)
Back to Table of Contents
3. HW5.3.0 Run Systems tests locally (PHASE1)
Back to Table of Contents
A large subset of the Google n-grams dataset
https://aws.amazon.com/datasets/google-books-ngrams/
which we have placed in a bucket/folder on Dropbox and on s3:
https://www.dropbox.com/sh/tmqpc4o0xswhkvz/AACUifrl6wrMrlK6a3X3lZ9Ea?dl=0
s3://filtered-5grams/
In particular, this bucket contains (~200) files (10Meg each) in the format:
(ngram) \t (count) \t (pages_count) \t (books_count)
The next cell shows the first 10 lines of the googlebooks-eng-all-5gram-20090715-0-filtered.txt file.
DISCLAIMER: Each record is already a 5-gram. We should calculate the stripes cooccurrence data from the raw text and not from the 5-gram preprocessed data. Calculating pairs on this 5-gram is a little corrupt as we will be double counting cooccurences. Having said that this exercise can still pull out some similar terms.
Data for systems test
mini_5gram.txt
End of explanation
"""
%%writefile atlas.txt
atlas boon 50 50 50
boon cava dipped 10 10 10
atlas dipped 15 15 15
"""
Explanation: atlas.txt
End of explanation
"""
with open("mini_stripes.txt", "w") as f:
f.writelines([
'"DocA"\t{"X":20, "Y":30, "Z":5}\n',
'"DocB"\t{"X":100, "Y":20}\n',
'"DocC"\t{"M":5, "N":20, "Z":5, "Y":1}\n'
])
!cat mini_stripes.txt
"""
Explanation: mini_stripes.txt
End of explanation
"""
%%writefile MakeStripes.py
from mrjob.job import MRJob
from collections import Counter
class MakeStripes(MRJob):
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
terms = terms.split()
term_count = int(term_count)
for item in terms:
yield (item, {term:term_count for term in terms if term != item})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
MakeStripes.run()
"""
Explanation: TASK: Phase 1
Complete 5.4 and 5.5 and systems test them using the above test datasets. Phase 2 will focus on the entire Ngram dataset.
To help you through these tasks please verify that your code gives the following results (for stripes, inverted index, and pairwise similarities).
Make stripes
End of explanation
"""
%%writefile atlas_desired_results.txt
"atlas" {"dipped": 15, "boon": 50}
"boon" {"atlas": 50, "dipped": 10, "cava": 10}
"cava" {"dipped": 10, "boon": 10}
"dipped" {"atlas": 15, "boon": 10, "cava": 10}
"""
Explanation: Desired result
End of explanation
"""
!cat atlas.txt | python MakeStripes.py -q > atlas_stripes.txt
!cat atlas_stripes.txt
"""
Explanation: Actual result
End of explanation
"""
%%writefile InvertIndex.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from collections import Counter
class InvertIndex(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key, words):
n_words = len(words)
for word in words:
yield (word, {key:n_words})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
InvertIndex.run()
"""
Explanation: Actual result matches desired result
Inverted index
End of explanation
"""
!cat mini_stripes.txt | python InvertIndex.py -q > mini_stripes_inverted.txt
!cat mini_stripes_inverted.txt
!cat atlas_stripes.txt | python InvertIndex.py -q > atlas_inverted.txt
!cat atlas_inverted.txt
"""
Explanation: Desired result
Systems test mini_stripes - Inverted Index
——————————————————————————————————————————
"M" | DocC 4 |
"N" | DocC 4 |
"X" | DocA 3 | DocB 2 |
"Y" | DocA 3 | DocB 2 | DocC 4 |
"Z" | DocA 3 | DocC 4 |
systems test atlas-boon - Inverted Index
——————————————————————————————————————————
"atlas" | boon 3 | dipped 3 |
"dipped" | atlas 2 | boon 3 | cava 2 |
"boon" | atlas 2 | cava 2 | dipped 3 |
"cava" | boon 3 | dipped 3 |
Actual result
End of explanation
"""
%%writefile Similarity.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from itertools import combinations
from statistics import mean
class Similarity(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key_term, docs):
doc_names = docs.keys()
for doc_pairs in combinations(sorted(list(doc_names)), 2):
yield (doc_pairs, 1)
for name in doc_names:
yield (name, 1)
def combiner(self, key, value):
yield (key, sum(value))
def reducer_init(self):
self.words = {}
self.results = []
def reducer(self, doc_or_docs, count):
if isinstance(doc_or_docs, str):
self.words[doc_or_docs] = sum(count)
else:
d1, d2 = doc_or_docs
d1_n_words, d2_n_words = self.words[d1], self.words[d2]
intersection = sum(count)
jaccard = round(intersection/(d1_n_words + d2_n_words - intersection), 3)
cosine = round(intersection/(d1_n_words**.5 * d2_n_words**.5), 3)
dice = round(2*intersection/(d1_n_words + d2_n_words), 3)
overlap = round(intersection/min(d1_n_words, d2_n_words), 3)
average = round(mean([jaccard, cosine, dice, overlap]), 3)
self.results.append([doc_or_docs, {"jacc":jaccard, "cos":cosine,
"dice":dice, "ol":overlap, "ave":average}])
def reducer_final(self):
for doc, result in sorted(self.results, key=lambda x: x[1]["ave"], reverse=True):
yield (doc, result)
if __name__ == "__main__":
Similarity.run()
"""
Explanation: The tests pass
Similarity
End of explanation
"""
!cat mini_stripes_inverted.txt | python Similarity.py -q --jobconf mapred.reduce.tasks=1
!cat atlas_inverted.txt | python Similarity.py -q --jobconf mapred.reduce.tasks=1
"""
Explanation: Desired results
Systems test mini_stripes - Similarity measures
| average | pair | cosine | jaccard | overlap | dice |
|-|-|-|-|-|-|
| 0.741582 | DocA - DocB | 0.816497 | 0.666667 | 1.000000 | 0.800000 |
| 0.488675 | DocA - DocC | 0.577350 | 0.400000 | 0.666667 | 0.571429 |
| 0.276777 | DocB - DocC | 0.353553 | 0.200000 | 0.500000 | 0.333333 |
Systems test atlas-boon 2 - Similarity measures
| average | pair | cosine | jaccard | overlap | dice |
|-|-|-|-|-|-|
|1.000000 | atlas - cava | 1.000000 | 1.000000 | 1.000000 | 1.000000|
| 0.625000 | boon - dipped | 0.666667 | 0.500000 | 0.666667 | 0.666667|
| 0.389562 | cava - dipped | 0.408248 | 0.250000 | 0.500000 | 0.400000|
| 0.389562 | boon - cava | 0.408248 | 0.250000 | 0.500000 | 0.400000|
| 0.389562 | atlas - dipped | 0.408248 | 0.250000 | 0.500000 | 0.400000|
| 0.389562 | atlas - boon | 0.408248 | 0.250000 | 0.500000 | 0.400000|
Actual results
End of explanation
"""
!cat atlas-boon-systems-test.txt | python MakeStripes.py -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1
"""
Explanation: The numbers calculated exactly match the systems test except for the average calculations of the mini_stripes set. In this instance, the systems test calculations are not correct.
From beginning to end
End of explanation
"""
%%writefile GetIndexandOtherWords.py
import heapq
from re import findall
from mrjob.job import MRJob
from mrjob.step import MRStep
class TopList(list):
def __init__(self, max_size, num_position=0):
"""
Just like a list, except the append method adds the new value to the
list only if it is larger than the smallest value (or if the size of
the list is less than max_size).
If each element of the list is an int or float, uses that value for
comparison. If the elements in the list are lists or tuples, uses the
list_position element of the list or tuple for the comparison.
"""
self.max_size = max_size
self.pos = num_position
def _get_key(self, x):
return x[self.pos] if isinstance(x, (list, tuple)) else x
def append(self, val):
if len(self) < self.max_size:
heapq.heappush(self, val)
elif self._get_key(self[0]) < self._get_key(val):
heapq.heapreplace(self, val)
def final_sort(self):
return sorted(self, key=self._get_key, reverse=True)
class GetIndexandOtherWords(MRJob):
"""
Usage: python GetIndexandOtherWords.py --index-range 9000-10000 --top-n-words 10000 --use-term-counts True
Given n-gram formatted data, outputs a file of the form:
index term
index term
...
word term
word term
...
Where there would be 1001 index words and 10000 total words. Each word would be ranked based
on either the term count listed in the Google n-gram data (i.e. the counts found in the
underlying books) or the ranks would be based on the word count of the n-grams in the actual
dataset (i.e. ignore the numbers/counts associated with each n-gram and count each n-gram
exactly once).
"""
def configure_options(self):
super(GetIndexandOtherWords, self).configure_options()
self.add_passthrough_option(
'--index-range',
default="9-10",
help="Specify the range of the index words. ex. 9-10 means the ninth and " +
"tenth most popular words will serve as the index")
self.add_passthrough_option(
'--top-n-words',
default="10",
help="Specify the number of words to output in all")
self.add_passthrough_option(
'--use-term-counts',
default="True",
choices=["True","False"],
help="When calculating the most frequent words, choose whether to count " +
"each word based on the term counts reported by Google or just based on " +
"the number of times the word appears in an n-gram")
self.add_passthrough_option(
'--return-counts',
default="False",
choices=["True","False"],
help="The final output includes the counts of each word")
def mapper_init(self):
# Ensure command line options are sane
top_n_words = int(self.options.top_n_words)
last_index_word = int(self.options.index_range.split("-")[1])
if top_n_words < last_index_word:
raise ValueError("""--top-n-words value (currently %d) must be equal to or greater than
--index-range value (currently %d).""" % (top_n_words, last_index_word))
self.stop_words = set(['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves',
'you', 'your', 'yours', 'yourself', 'yourselves', 'he', 'him',
'his', 'himself', 'she', 'her', 'hers', 'herself', 'it',
'its', 'itself', 'they', 'them', 'their', 'theirs',
'themselves', 'what', 'which', 'who', 'whom', 'this', 'that',
'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be',
'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does',
'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or',
'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for',
'with', 'about', 'against', 'between', 'into', 'through',
'during', 'before', 'after', 'above', 'below', 'to', 'from',
'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under',
'again', 'further', 'then', 'once', 'here', 'there', 'when',
'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few',
'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not',
'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't',
'can', 'will', 'just', 'don', 'should', 'now'])
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
# Either use the ngram term count for the count or count each word just once
if self.options.use_term_counts == "True":
term_count = int(term_count)
else:
term_count = 1
# Iterate through each term. Skip stop words
for term in findall(r'[a-z]+', terms.lower()):
if term in self.stop_words:
pass
else:
yield (term, term_count)
def combiner(self, term, counts):
yield (term, sum(counts))
def reducer_init(self):
"""
Accumulates the top X words and yields them. Note: should only use if
you want to emit a reasonable amount of top words (i.e. an amount that
could fit on a single computer.)
"""
self.top_n_words = int(self.options.top_n_words)
self.TopTerms = TopList(self.top_n_words, num_position=1)
def reducer(self, term, counts):
self.TopTerms.append((term, sum(counts)))
def reducer_final(self):
for pair in self.TopTerms:
yield pair
def mapper_single_key(self, term, count):
"""
Send all the data to a single reducer
"""
yield (1, (term, count))
def reducer_init_top_vals(self):
# Collect top words
self.top_n_words = int(self.options.top_n_words)
self.TopTerms = TopList(self.top_n_words, num_position=1)
# Collect index words
self.index_range = [int(num) for num in self.options.index_range.split("-")]
self.index_low, self.index_high = self.index_range
# Control if output shows counts or just words
self.return_counts = self.options.return_counts == "True"
def reducer_top_vals(self, _, terms):
for term in terms:
self.TopTerms.append(term)
def reducer_final_top_vals(self):
TopTerms = self.TopTerms.final_sort()
if self.return_counts:
# Yield index words
for term in TopTerms[self.index_low-1:self.index_high]:
yield ("index", term)
# Yield all words
for term in TopTerms:
yield ("words", term)
else:
# Yield index words
for term in TopTerms[self.index_low-1:self.index_high]:
yield ("index", term[0])
# Yield all words
for term in TopTerms:
yield ("words", term[0])
def steps(self):
"""
Step one: Yield top n-words from each reducer. Means dataset size is
n-words * num_reducers. Guarantees overall top n-words are
sent to the next step.
"""
mr_steps = [MRStep(mapper_init=self.mapper_init,
mapper=self.mapper,
combiner=self.combiner,
reducer_init=self.reducer_init,
reducer_final=self.reducer_final,
reducer=self.reducer),
MRStep(mapper=self.mapper_single_key,
reducer_init=self.reducer_init_top_vals,
reducer=self.reducer_top_vals,
reducer_final=self.reducer_final_top_vals)]
return mr_steps
if __name__ == "__main__":
GetIndexandOtherWords.run()
"""
Explanation: PHASE 2: Full-scale experiment on Google N-gram data
<a name="2.1">
2.1 Vocab Identification
Back to Table of Contents
This section scans through the corpus file(s) and identifies the top-n most frequent words to use as the vocabulary.
We use heapq to bound the amount of data each reducer emits, which reduces the data transferred through Hadoop.
This approach can run into memory constraints if our goal is to return the top k results where k is so large the resulting ordered list cannot fit into memory on a single machine (i.e. billions of results). In practice, we only care about a small number of the top results (for example, in this problem we only need to return the top 1000 results. 1000 results are trivially stored in memory).
The code uses multiple reducers. In the last MapReduce step, all data is sent to a single reducer by use of a single key; however, the data that is sent is never stored in memory (only the top k results are) and at most k*n_reducers observations would be sent to this reducer, which means the total data sent is very small and could easily fit on a single hard drive. If the data is so large it cannot fit on a single hard drive, we could add more MR steps to reduce the size of the data by 90% for each added step.
That said, we estimate that the code could work without any changes on a dataset with 100 trillion words if we were asked to return the top 100,000 words and had a cluster of 1,000 machines available.
End of explanation
"""
!cat mini_5gram.txt | python GetIndexandOtherWords.py --index-range 16-20 \
--top-n-words 20 \
--return-counts False \
--use-term-counts True \
-q > vocabs
!cat vocabs
"""
Explanation: Test getting the index and other valid words, excluding stop words, on the mini_5gram.txt dataset. Return the top 20 most common words (based on the term counts) and mark the 16th through 20th most common words as the index words, matching the --top-n-words 20 and --index-range 16-20 options passed above.
End of explanation
"""
!cat mini_5gram.txt | python GetIndexandOtherWords.py --index-range 16-20 \
--top-n-words 20 \
--return-counts True \
--use-term-counts True \
-q
"""
Explanation: To spot check the results, view the term counts of each word.
End of explanation
"""
%%writefile MakeStripes.py
from mrjob.job import MRJob
from collections import Counter
from sys import stderr
from re import findall
class MakeStripes(MRJob):
def mapper_init(self):
"""
Read in index words and word list.
"""
self.stripes = {}
self.indexlist, self.wordslist = [],[]
with open('vocabs', 'r') as vocabFile:
for line in vocabFile:
word_type, word = line.replace('"', '').split()
if word_type == 'index':
self.indexlist.append(word)
else:
self.wordslist.append(word)
# Convert to sets to make lookups faster
self.indexlist = set(self.indexlist)
self.wordslist = set(self.wordslist)
def mapper(self, _, lines):
"""
Make stripes using index and words list
"""
terms, term_count, page_count, book_count = lines.split("\t")
term_count = int(term_count)
terms = findall(r'[a-z]+', terms.lower())
for item in terms:
if item in self.indexlist:
for val in terms:
if val != item and val in self.wordslist:
yield item, {val:term_count}
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
MakeStripes.run()
!python MakeStripes.py --file vocabs mini_5gram.txt -q
"""
Explanation: 3. HW5.3.2 Full-scale experiment: EDA of Google n-grams dataset (PHASE 2)
Back to Table of Contents
Do some EDA on this dataset using mrjob, e.g.,
Longest 5-gram (number of characters)
Top 10 most frequent words (please use the count information), i.e., unigrams
20 Most/Least densely appearing words (count/pages_count) sorted in decreasing order of relative frequency
Distribution of 5-gram sizes (character length). E.g., count (using the count field) up how many times a 5-gram of 50 characters shows up. Plot the data graphically using a histogram.
We included all of this analysis at the end.
See here.
<a name="2.2">
2.2 Stripe Creation
Back to Table of Contents
This section takes the output from 2.1 and creates stripes based on the vocabulary identified there.
End of explanation
"""
%%writefile InvertIndex.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from collections import Counter
class InvertIndex(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key, words):
"""
Convert each stripe to inverted index
"""
n_words = len(words)
for word in words:
yield (word, {key:n_words})
def combiner(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
def reducer(self, keys, values):
values_sum = Counter()
for val in values:
values_sum += Counter(val)
yield keys, dict(values_sum)
if __name__ == "__main__":
InvertIndex.run()
!python MakeStripes.py --file vocabs mini_5gram.txt -q | python InvertIndex.py -q
"""
Explanation: <a name="2.3">
2.3 Invert Index Creation
Back to Table of Contents
End of explanation
"""
%%writefile Similarity.py
from mrjob.job import MRJob
from mrjob.protocol import JSONProtocol
from itertools import combinations
class Similarity(MRJob):
MRJob.input_protocol = JSONProtocol
def mapper(self, key_term, docs):
"""
Make co-occurrence keys for each pair of documents in the inverted
index and make keys representing each document.
"""
doc_names = docs.keys()
for doc_pairs in combinations(sorted(list(doc_names)), 2):
yield (doc_pairs, 1)
for name in doc_names:
yield (name, 1)
def combiner(self, key, value):
yield (key, sum(value))
### Custom partitioner code goes here
def reducer_init(self):
self.words = {}
self.results = []
def reducer(self, doc_or_docs, count):
if isinstance(doc_or_docs, str):
self.words[doc_or_docs] = sum(count)
else:
d1, d2 = doc_or_docs
d1_n_words, d2_n_words = self.words[d1], self.words[d2]
intersection = float(sum(count))
jaccard = round(intersection/(d1_n_words + d2_n_words - intersection), 3)
cosine = round(intersection/(d1_n_words**.5 * d2_n_words**.5), 3)
dice = round(2*intersection/(d1_n_words + d2_n_words), 3)
overlap = round(intersection/min(d1_n_words, d2_n_words), 3)
average = round(sum([jaccard, cosine, dice, overlap])/4.0, 3)
self.results.append([doc_or_docs, {"jacc":jaccard, "cos":cosine,
"dice":dice, "ol":overlap, "ave":average}])
def reducer_final(self):
for doc, result in sorted(self.results, key=lambda x: x[1]["ave"], reverse=True):
yield (doc, result)
if __name__ == "__main__":
Similarity.run()
"""
Explanation: <a name="2.4">
2.4 Similarity Calculation
Back to Table of Contents
End of explanation
"""
!python MakeStripes.py --file vocabs mini_5gram.txt -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1
"""
Explanation: Doesn't return anything because there are no co-occurring words in the inverted index.
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-* | wc
"""
Explanation: Time the full calculations on a slightly larger dataset
First, let's see how large the full dataset is. This number won't be exactly correct because my download was interrupted halfway through and I only have 185 items total.
End of explanation
"""
int(57432975*(200/185))
"""
Explanation: Because there are 200 files that make up the 5-gram dataset (at least that is what I thought I heard), the true line count of the dataset is about:
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | wc
"""
Explanation: We are going to operate on a subset of this data.
End of explanation
"""
3435179/62089702
"""
Explanation: This sample of the data is only a few percent (about 5.5%) of the full size of the dataset.
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts True \
--use-term-counts False \
-q > vocabs
!head vocabs
"""
Explanation: Create index and words to use
We find that many of the index words only occur one time.
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts True \
--use-term-counts True \
-q > vocabs
!head vocabs
"""
Explanation: Here we will return the term count of each word (not the 5gram-based count).
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python GetIndexandOtherWords.py --index-range 9001-10000 \
--top-n-words 10000 \
--return-counts False \
--use-term-counts True \
-q > vocabs
!head vocabs
"""
Explanation: This code is very similar to what we would run on the full dataset
End of explanation
"""
%%time
!cat Temp_data/googlebooks-eng-all-5gram-20090715-9* | python MakeStripes.py --file vocabs -q | python InvertIndex.py -q | python Similarity.py -q --jobconf mapred.reduce.tasks=1 > similarities.txt
!head similarities.txt
"""
Explanation: Make stripes, invert index, and calculate similarities. Print top similarities.
End of explanation
"""
minutes_for_small_job = 3
n_small_jobs_in_big_job = 200/11
total_minutes_one_computer = minutes_for_small_job*n_small_jobs_in_big_job
computers_in_cluster = 50
total_minutes_for_cluster = total_minutes_one_computer/computers_in_cluster
total_minutes_for_cluster
"""
Explanation: It takes about 3 minutes to run this code. The code processes 11 out of 200 files. It currently uses one machine. If the cluster has 50 machines available, we would expect these core operations to take only a few minutes to run.
End of explanation
"""
%%writefile CustomPartitioner.py
from mrjob.job import MRJob
from sys import stderr
import numpy as np
from operator import itemgetter
from random import random
class CustomPartitioner(MRJob):
def __init__(self, *args, **kwargs):
super(CustomPartitioner, self).__init__(*args, **kwargs)
self.N = 30
self.NUM_REDUCERS = 4
def mapper_init(self):
def makeKeyHash(key, num_reducers):
byteof = lambda char: int(format(ord(char), 'b'), 2)
current_hash = 0
for c in key:
current_hash = (current_hash * 31 + byteof(c))
return current_hash % num_reducers
# printable ascii characters, starting with 'A'
keys = [str(chr(i)) for i in range(65,65+self.NUM_REDUCERS)]
partitions = []
for key in keys:
partitions.append([key, makeKeyHash(key, self.NUM_REDUCERS)])
parts = sorted(partitions,key=itemgetter(1))
self.partition_keys = list(np.array(parts)[:,0])
self.partition_file = np.arange(0,self.N,self.N/(self.NUM_REDUCERS))[::-1]
print((keys, partitions, parts, self.partition_keys, self.partition_file), file=stderr)
def mapper(self, _, lines):
terms, term_count, page_count, book_count = lines.split("\t")
terms = terms.split()
term_count = int(term_count)
for item in terms:
yield (item, term_count)
for item in ["A", "B", "H", "I"]:
yield (item, 0)
def reducer_init(self):
self.reducer_unique_key = int(random()*900000+100000)
def reducer(self, keys, values):
yield (self.reducer_unique_key, (keys, sum(values)))
if __name__ == "__main__":
CustomPartitioner.run()
!cat atlas.txt | python CustomPartitioner.py -r local -q
"""
Explanation: Experimental code on custom partitioner
End of explanation
"""
%%writefile ngram.py
from mrjob.job import MRJob
from collections import Counter
import operator
class NGram(MRJob):
def mapper_init(self):
self.length = 0
self.longest = 0
self.distribution = Counter()
def mapper(self, _, lines):
# extract word/count sets
ngram, count, pages, _ = lines.split("\t")
count, pages = int(count), int(pages)
# loop to count word length
words = ngram.lower().split()
for w in words:
yield (w, {'count':count, 'pages':pages})
# Count of ngram length
n_gram_character_count = len(ngram)
yield n_gram_character_count, count
# determine if longest word on mapper
if n_gram_character_count > self.length:
self.length = n_gram_character_count
self.longest = [words, n_gram_character_count]
yield (self.longest)
def combiner(self, word, counts):
if isinstance(word,str):
count = 0
pages = 0
for x in counts:
count += x['count']
pages += x['pages']
yield word, {'count':count,'pages':pages}
#aggregate counts
elif isinstance(word,int):
yield word, sum(counts)
#yield long ngrams
else:
for x in counts:
yield word, x
def reducer_init(self):
self.longest = []
self.length = 0
self.counts = Counter()
self.pages = Counter()
self.distribution = Counter()
def reducer(self, key, values):
# use Counter word totals
for val in values:
if isinstance(key,str):
self.counts += Counter({key:val['count']})
self.pages += Counter({key:val['pages']})
# aggregate distribution numbers
elif isinstance(key,int):
self.distribution += Counter({key:val})
else:
# Determine if longest ngram on reducer
if val > self.length:
self.longest = [key, val]
self.length = val
def reducer_final(self):
# yield density calculation
for x in sorted(self.counts):
yield ('mrj_dens',{x:(1.*self.counts[x]/self.pages[x])})
# Use most_common counter function
for x in self.counts.most_common(10):
yield x
# return longest item
if self.longest:
yield self.longest
# yield distribution values
for x in self.distribution:
yield ('mrj_dist', {x:self.distribution[x]})
if __name__ == "__main__":
NGram.run()
!python ngram.py --jobconf mapred.reduce.tasks=1 < googlebooks-eng-all-5gram-20090715-0-filtered-first-10-lines.txt -q > dataout.txt
!cat dataout.txt
%matplotlib inline
import json
import operator
import numpy as np
import matplotlib.pyplot as plt
# sorted density list
def density(data):
x = data
sorted_x = sorted(x.items(), key=operator.itemgetter(1), reverse=True)
    print(sorted_x[:20])
# distribution plot
def distribution(data):
plt.scatter(data.keys(), data.values(), alpha=0.5)
plt.show()
# loader
def driver():
datain = open('dataout.txt','r')
densdata = {}
distdata = {}
# clean the mess I made
for line in datain:
parts = line.split('\t')
temp = parts[1][1:-2].replace('"', '').split(':')
mrj_val = parts[0].replace('"', '')
if mrj_val == "mrj_dens":
densdata[temp[0]]=float(temp[1])
elif mrj_val == "mrj_dist":
distdata[int(temp[0])]=int(temp[1])
#Execute density sort
density(densdata)
#Execute distribution plot
distribution(distdata)
driver()
"""
Explanation: Results of experiment so far: Cannot force specific keys into specific partitions when running locally. Will try again on VM.
<a name="2.5">
2.5 Ngram Ranking and Plotting
Back to Table of Contents
End of explanation
"""
%%writefile NLTKBenchMark.py
import nltk
import json
import numpy as np
from nltk.corpus import wordnet as wn
from mrjob.job import MRJob
from mrjob.step import MRStep
class NLTKBenchMark(MRJob):
def mapper(self, _, lines):
#parse the output file and identify the pair of words
pair, avg = lines.split("\t")
pair = json.loads(pair)
word1, word2 = pair[0], pair[1]
hit = 0
#for each word, extract the list of synonyms from nltk corpus, convert to set to remove duplicates
syn1 = set([l.name() for s in wn.synsets(word1) for l in s.lemmas()])
syn2 = set([l.name() for s in wn.synsets(word2) for l in s.lemmas()])
#keep track of words that have no synonym using '~nosync'
if len(syn1) == 0:
yield '~nosyn', [word1]
if len(syn2) == 0:
yield '~nosyn', [word2]
'''
for each occurence of word, increment the count
for word A, synset is the number of synonyms of the other word B
this value is used for calculating recall
this method becomes confusing/problematic if a word appears multiple times in the final output
if there is a hit for word A, set the hit to 1, and set the hit for the other word B to 0 (to avoid double count)
if there is not a hit for A and B, set the hit to 0 for both
'''
if word2 in syn1:
yield word2, {'hit':1, 'count':1, 'synset':len(syn1)}
yield word1, {'hit':0, 'count':1, 'synset':len(syn2)}
elif word1 in syn2:
yield word1, {'hit':1, 'count':1, 'synset':len(syn2)}
yield word2, {'hit':0, 'count':1, 'synset':len(syn1)}
else:
yield word1, {'hit':0, 'count':1, 'synset':len(syn2)}
yield word2, {'hit':0, 'count':1, 'synset':len(syn1)}
def combiner(self, term, values):
#combine '~nosyn' into a bigger list and yield the list
if term == '~nosyn':
nosynList = []
for value in values:
nosynList = nosynList+value
yield term, nosynList
else:
counters = {'hit':0, 'count':0, 'synset':0}
for value in values:
counters['hit'] += value['hit']
counters['count'] += value['count']
counters['synset'] = value['synset']
yield term, counters
def reducer_init(self):
self.plist = []
self.rlist = []
self.flist = []
def reducer(self, term, values):
#yield the final list of words that have no synonym
if term == '~nosyn':
nosynList = []
for value in values:
nosynList = nosynList+value
yield term, nosynList
else:
counters = {'hit':0.0, 'count':0.0, 'synset':0.0}
precision, recall, F1 = 0,0,0
for value in values:
counters['hit'] += value['hit']
counters['count'] += value['count']
counters['synset'] = value['synset']
if counters['hit'] > 0 and counters['synset'] > 0:
precision = float(counters['hit'])/float(counters['count'])
recall = float(counters['hit'])/float(counters['synset'])
F1 = 2*precision*recall/(precision+recall)
self.plist.append(precision)
self.rlist.append(recall)
self.flist.append(F1)
yield term, counters
elif counters['synset'] > 0:
self.plist.append(precision)
self.rlist.append(recall)
self.flist.append(F1)
yield term, counters
def reducer_final(self):
#compute the mean of all collected measurements
yield 'precision', np.mean(self.plist)
yield 'recall', np.mean(self.rlist)
yield 'F1', np.mean(self.flist)
if __name__ == "__main__":
NLTKBenchMark.run()
!python NLTKBenchMark.py nltk_bench_sample.txt
''' Performance measures '''
from __future__ import division
import numpy as np
import json
import nltk
from nltk.corpus import wordnet as wn
import sys
#print all the synset element of an element
def synonyms(string):
syndict = {}
for i,j in enumerate(wn.synsets(string)):
syns = j.lemma_names()
for syn in syns:
syndict.setdefault(syn,1)
return syndict.keys()
hits = []
TP = 0
FP = 0
TOTAL = 0
flag = False # so we don't double count, but at the same time don't miss hits
## For this part we can use one of three outputs. They are all the same, but were generated differently
# 1. the top 1000 from the full sorted dataset -> sortedSims[:1000]
# 2. the top 1000 from the partial sort aggragate file -> sims2/top1000sims
# 3. the top 1000 from the total order sort file -> head -1000 sims_parts/part-00004
top1000sims = []
with open("nltk_bench_sample.txt","r") as f:
for line in f.readlines():
line = line.strip()
lisst, avg = line.split("\t")
lisst = eval(lisst)
lisst.append(avg)
top1000sims.append(lisst)
measures = {}
not_in_wordnet = []
for line in top1000sims:
TOTAL += 1
words=line[0:2]
for word in words:
if word not in measures:
measures[word] = {"syns":0,"opps": 0,"hits":0}
measures[word]["opps"] += 1
syns0 = synonyms(words[0])
measures[words[1]]["syns"] = len(syns0)
if len(syns0) == 0:
not_in_wordnet.append(words[0])
if words[1] in syns0:
TP += 1
hits.append(line)
flag = True
measures[words[1]]["hits"] += 1
syns1 = synonyms(words[1])
measures[words[0]]["syns"] = len(syns1)
if len(syns1) == 0:
not_in_wordnet.append(words[1])
if words[0] in syns1:
if flag == False:
TP += 1
hits.append(line)
measures[words[0]]["hits"] += 1
flag = False
precision = []
recall = []
f1 = []
for key in measures:
p,r,f = 0,0,0
if measures[key]["hits"] > 0 and measures[key]["syns"] > 0:
p = measures[key]["hits"]/measures[key]["opps"]
r = measures[key]["hits"]/measures[key]["syns"]
f = 2 * (p*r)/(p+r)
# For calculating measures, only take into account words that have synonyms in wordnet
if measures[key]["syns"] > 0:
precision.append(p)
recall.append(r)
f1.append(f)
# Take the mean of each measure
print "—"*110
print "Number of Hits:",TP, "out of top",TOTAL
print "Number of words without synonyms:",len(not_in_wordnet)
print "—"*110
print "Precision\t", np.mean(precision)
print "Recall\t\t", np.mean(recall)
print "F1\t\t", np.mean(f1)
print "—"*110
print "Words without synonyms:"
print "-"*100
for word in not_in_wordnet:
print synonyms(word),word
"""
Explanation: <a name="2.6">
2.6 NLTK Benchmarking
Back to Table of Contents
This section examines the output pairs using the nltk library.
For each pair of words, we check whether one is identified as a synonym of the other by nltk's WordNet corpus.
Based on the "hit" data, we compute the precision, recall, and F1 score of the output.
With only a limited number of pairs in the output, it is possible to run everything within a single Python script.
We also prepare a MapReduce job in case the number of pairs increases.
End of explanation
"""
|
gkc1000/pyscf | pyscf/nao/notebook/AWS/example-ase-siesta-pyscf-ch4-dens-change-gpu.ipynb | apache-2.0 | # import libraries and set up the molecule geometry
from ase.units import Ry, eV, Ha
from ase.calculators.siesta import Siesta
from ase import Atoms
import numpy as np
import matplotlib.pyplot as plt
from timeit import default_timer as timer
from ase.build import molecule
CH4 = molecule("CH4")
# visualization of the particle
from ase.visualize import view
view(CH4, viewer='x3d')
"""
Explanation: Easy Ab initio calculation with ASE-Siesta-Pyscf
No installation necessary: just download a ready-to-go container for any system, or run it in the cloud
We first import the necessary libraries and define the system using ASE
End of explanation
"""
# enter siesta input and run siesta
siesta = Siesta(
mesh_cutoff=150 * Ry,
basis_set='DZP',
pseudo_qualifier='lda',
energy_shift=(10 * 10**-3) * eV,
fdf_arguments={
'SCFMustConverge': False,
'COOP.Write': True,
'WriteDenchar': True,
'PAO.BasisType': 'split',
'DM.Tolerance': 1e-4,
'DM.MixingWeight': 0.1,
'MaxSCFIterations': 300,
'DM.NumberPulay': 4,
'XML.Write': True})
CH4.set_calculator(siesta)
e = CH4.get_potential_energy()
"""
Explanation: We can then run the DFT calculation using Siesta
End of explanation
"""
# compute polarizability using pyscf-nao
freq = np.arange(0.0, 15.0, 0.05)
t1 = timer()
siesta.pyscf_tddft(label="siesta", jcutoff=7, iter_broadening=0.15/Ha,
xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = freq)
t2 = timer()
print("CPU timing: ", t2-t1)
cpu_pol = siesta.results["polarizability inter"]
t1 = timer()
siesta.pyscf_tddft(label="siesta", jcutoff=7, iter_broadening=0.15/Ha,
xc_code='LDA,PZ', tol_loc=1e-6, tol_biloc=1e-7, freq = freq, GPU=True)
t2 = timer()
print("GPU timing: ", t2-t1)
gpu_pol = siesta.results["polarizability inter"]
# plot polarizability with matplotlib
%matplotlib inline
fig = plt.figure(1, figsize=(16, 9))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.plot(siesta.results["freq range"], siesta.results["polarizability nonin"][:, 0, 0].imag)
ax2.plot(siesta.results["freq range"], cpu_pol[:, 0, 0].imag)
ax2.plot(siesta.results["freq range"], gpu_pol[:, 0, 0].imag, "--")
ax1.set_xlabel(r"$\omega$ (eV)")
ax2.set_xlabel(r"$\omega$ (eV)")
ax1.set_ylabel(r"Im($P_{xx}$) (au)")
ax2.set_ylabel(r"Im($P_{xx}$) (au)")
ax1.set_title(r"Non interacting")
ax2.set_title(r"Interacting")
fig.tight_layout()
"""
Explanation: The TDDFT calculations with PySCF-NAO
End of explanation
"""
res = 10.5/Ha
lim = 20.0 # Bohr
box = np.array([[-lim, lim],
[-lim, lim],
[-lim, lim]])
from pyscf.nao.m_comp_spatial_distributions import spatial_distribution
spd = spatial_distribution(siesta.results["density change inter"], freq/Ha, box, label="siesta")
spd.get_spatial_density(10.5/Ha)
center = np.array([spd.dn_spatial.shape[0]/2, spd.dn_spatial.shape[1]/2, spd.dn_spatial.shape[2]/2], dtype=int)
fig2 = plt.figure(2, figsize=(15, 12))
cmap="seismic"
ax1 = fig2.add_subplot(1, 3, 1)
vmax = np.max(abs(spd.dn_spatial[center[0], :, :].imag))
vmin = -vmax
ax1.imshow(spd.dn_spatial[center[0], :, :].imag, interpolation="bicubic", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[1][0], spd.mesh[1][spd.mesh[1].shape[0]-1], spd.mesh[2][0], spd.mesh[2][spd.mesh[2].shape[0]-1]])
ax2 = fig2.add_subplot(1, 3, 2)
vmax = np.max(abs(spd.dn_spatial[:, center[1], :].imag))
vmin = -vmax
ax2.imshow(spd.dn_spatial[:, center[1], :].imag, interpolation="bicubic", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[0][0], spd.mesh[0][spd.mesh[0].shape[0]-1], spd.mesh[2][0], spd.mesh[2][spd.mesh[2].shape[0]-1]])
ax3 = fig2.add_subplot(1, 3, 3)
vmax = np.max(abs(spd.dn_spatial[:, :, center[2]].imag))
vmin = -vmax
ax3.imshow(spd.dn_spatial[:, :, center[2]].imag, interpolation="bicubic", vmin=vmin, vmax=vmax, cmap=cmap, extent=[spd.mesh[0][0], spd.mesh[0][spd.mesh[0].shape[0]-1], spd.mesh[1][0], spd.mesh[1][spd.mesh[1].shape[0]-1]])
ax1.set_xlabel(r"y (Bohr)")
ax2.set_xlabel(r"x (Bohr)")
ax3.set_xlabel(r"x (Bohr)")
ax1.set_ylabel(r"z (Bohr)")
ax2.set_ylabel(r"z (Bohr)")
ax3.set_ylabel(r"y (Bohr)")
ax1.set_title(r"Im($\delta n$) in the $x$ plane")
ax2.set_title(r"Im($\delta n$) in the $y$ plane")
ax3.set_title(r"Im($\delta n$) in the $z$ plane")
"""
Explanation: Compute the spatial distribution of the density change at the resonance frequency
End of explanation
"""
|
regata/dbda2e_py | chapters/2.ipynb | mit | import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
faces = np.arange(1,5)
faces
"""
Explanation: Introduction: Credibility, Models, and Parameters
Exercise 2.1
Exercise 2.2
Additional Exercise 1
Exercise 2.1
Purpose: To get you actively manipulating mathematical models of probabilities.
End of explanation
"""
p_A = lambda x: 1/4
p_B = lambda x: x/10
p_C = lambda x: 12/(25*x)
"""
Explanation: define each model
End of explanation
"""
f, axs = plt.subplots(1,3,figsize=(12,4), sharey=True)
models = (p_A, p_B, p_C)
for i, m in enumerate(models):
probs = list(map(m, faces))
axs[i].bar(faces, probs, align='center')
axs[i].set_xticks(faces)
axs[0].set_title('model A')
axs[1].set_title('model B')
axs[2].set_title('model C')
axs[0].set_ylabel('p(x)')
plt.show()
"""
Explanation: plot probabilities
End of explanation
"""
die1 = [25, 25, 25, 25]
die2 = [48, 24, 16, 12]
f, axs = plt.subplots(1,2,figsize=(8,4), sharey=True)
axs[0].bar(faces, die1, align='center', color='tomato')
axs[0].set_xticks(faces)
axs[0].set_title('die 1')
axs[1].bar(faces, die2, align='center', color='tomato')
axs[1].set_xticks(faces)
axs[1].set_title('die 2')
axs[0].set_ylabel('# of rolls')
plt.show()
"""
Explanation: Exercise 2.2
Purpose: To get you actively thinking about how data cause credibilities to shift.
End of explanation
"""
p_healthy_coin = lambda side: .95 if side == 'neg' else .05
p_disease_coin = lambda side: .95 if side == 'pos' else .05
"""
Explanation: Additional Exercise 1
This exercise comes from https://sites.google.com/site/doingbayesiandataanalysis/exercises
Purpose: Thinking about prior probabilities in reallocation of credibility in disease diagnosis.
End of explanation
"""
n_coins = 10000
healthy_factory_prior = .99
n_healthy = n_coins * healthy_factory_prior
n_disease = n_coins * (1 - healthy_factory_prior)
n_healthy, n_disease
n_pos_healthy = n_healthy * p_healthy_coin('pos')
n_pos_healthy
n_pos_disease = n_disease * p_disease_coin('pos')
n_pos_disease
disease_factory_posterior = n_pos_disease / (n_pos_healthy + n_pos_disease)
disease_factory_posterior
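The count-based calculation above is an instance of Bayes' rule; the same posterior can be computed directly. A sketch, using the notebook's numbers as a check:

```python
def posterior(prior, sensitivity, false_pos_rate):
    # P(disease | positive) = P(pos|disease)P(disease) /
    #   [P(pos|disease)P(disease) + P(pos|healthy)P(healthy)]
    num = prior * sensitivity
    den = num + (1 - prior) * false_pos_rate
    return num / den

posterior(0.01, 0.95, 0.05)  # 95/590, matching disease_factory_posterior
```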
"""
Explanation: part B
End of explanation
"""
|
authman/DAT210x | Module4/Module4 - Lab4.ipynb | mit | import math, random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import scipy.io
from mpl_toolkits.mplot3d import Axes3D
# Look pretty...
# matplotlib.style.use('ggplot')
# plt.style.use('ggplot')
"""
Explanation: DAT210x - Programming with Python for DS
Module4- Lab4
End of explanation
"""
def Plot2D(T, title, x, y, num_to_plot=40):
# This method picks a bunch of random samples (images in your case)
# to plot onto the chart:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel('Component: {0}'.format(x))
ax.set_ylabel('Component: {0}'.format(y))
x_size = (max(T[:,x]) - min(T[:,x])) * 0.08
y_size = (max(T[:,y]) - min(T[:,y])) * 0.08
for i in range(num_to_plot):
img_num = int(random.random() * num_images)
x0, y0 = T[img_num,x]-x_size/2., T[img_num,y]-y_size/2.
x1, y1 = T[img_num,x]+x_size/2., T[img_num,y]+y_size/2.
        img = df.iloc[img_num,:].values.reshape(num_pixels, num_pixels)
ax.imshow(img, aspect='auto', cmap=plt.cm.gray, interpolation='nearest', zorder=100000, extent=(x0, x1, y0, y1))
# It also plots the full scatter:
ax.scatter(T[:,x],T[:,y], marker='.',alpha=0.7)
"""
Explanation: Some Boilerplate Code
For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations and loading of the .mat file. We've added some notes to the code in case you're interested in knowing what it's doing:
End of explanation
"""
mat = scipy.io.loadmat('Datasets/face_data.mat')
df = pd.DataFrame(mat['images']).T
num_images, num_pixels = df.shape
num_pixels = int(math.sqrt(num_pixels))
# Rotate the pictures, so we don't have to crane our necks:
for i in range(num_images):
    df.loc[i,:] = df.loc[i,:].values.reshape(num_pixels, num_pixels).T.reshape(-1)
"""
Explanation: A .MAT file is a MATLAB file type. The faces dataset could have come in as .png images, but we'll show you how to handle that in another lab. For now, you'll see how to import .mats:
End of explanation
"""
# .. your code here ..
"""
Explanation: And Now, The Assignment
Implement PCA here. Reduce the dataframe df down to three components. Once you've done that, call Plot2D.
The format is: Plot2D(T, title, x, y, num_to_plot=40):
T Your transformed data, stored in an NDArray.
title Your chart's title
x Index of the principal component you want displayed on the x-axis; set it to 0 or 1
y Index of the principal component you want displayed on the y-axis; set it to 1 or 2
End of explanation
"""
# .. your code here ..
"""
Explanation: Implement Isomap here. Reduce the dataframe df down to three components. Once you've done that, call Plot2D using the first two components:
End of explanation
"""
# .. your code here ..
plt.show()
"""
Explanation: If you're up for a challenge, draw your dataframes in 3D. Even if you're not up for a challenge, just do it anyway. You might have to increase the dimensionality of your transformed dataset:
End of explanation
"""
|
root-mirror/training | SummerStudentCourse/2019/Exercises/WorkingWithFiles/WritingOnFilesExercise.ipynb | gpl-2.0 | import ROOT
"""
Explanation: Writing on files
This is a Python notebook in which you will practice the concepts learned during the lectures.
Start up ROOT
Import the ROOT module: this will activate the integration layer with the notebook automatically
End of explanation
"""
rndm = ROOT.TRandom3(1)
filename = "histos.root"
# Here open a file and create three histograms
for i in range(1024):
# Use the following lines to feed the Fill method of the histograms in order to fill
rndm.Gaus()
rndm.Exp(1)
rndm.Uniform(-4,4)
# Here write the three histograms on the file and close the file
"""
Explanation: Writing histograms
Create a TFile containing three histograms filled with random numbers distributed according to a Gaus, an exponential and a uniform distribution.
Close the file: you will reopen it later.
End of explanation
"""
! ls .
! echo Now listing the content of the file
! rootls -l #filename here
"""
Explanation: Now, you can invoke the ls command from within the notebook to list the files in this directory. Check that the file is there. You can invoke the rootls command to see what's inside the file.
End of explanation
"""
%jsroot on
f = ROOT.TFile(filename)
c = ROOT.TCanvas()
c.Divide(2,2)
c.cd(1)
f.gaus.Draw()
# finish the drawing in each pad
# Draw the Canvas
"""
Explanation: Access the histograms and draw them in Python. Remember that you need to create a TCanvas before and draw it too in order to inline the plots in the notebooks.
You can switch to the interactive JavaScript visualisation using the %jsroot on "magic" command.
End of explanation
"""
%%cpp
TFile f("histos.root");
TH1F *hg, *he, *hu;
f.GetObject("gaus", hg);
// ... read the histograms and draw them in each pad
"""
Explanation: You can now repeat the exercise above using C++. Transform the cell into a C++ cell using the %%cpp "magic".
End of explanation
"""
f = ROOT.TXMLFile("histos.xml","RECREATE")
hg = ROOT.TH1F("gaus","Gaussian numbers", 64, -4, 4)
he = ROOT.TH1F("expo","Exponential numbers", 64, -4, 4)
hu = ROOT.TH1F("unif","Uniform numbers", 64, -4, 4)
for i in range(1024):
hg.Fill(rndm.Gaus())
# ... Same as above!
! ls -l histos.xml histos.root
! cat histos.xml
"""
Explanation: Inspect the content of the file: TXMLFile
ROOT provides a different kind of TFile, TXMLFile. It has the same interface and it's very useful to better understand how objects are written in files by ROOT.
Repeat the exercise above, either in Python or C++ - your choice - using a TXMLFile rather than a TFile, and then display its content with the cat command. Can you see how the content of the individual bins of the histograms is stored? And the colour of their markers?
Do you understand why the xml file is bigger than the root one even though they have the same content?
End of explanation
"""
|
dereneaton/ipyrad | newdocs/API-analysis/cookbook-digest_genomes.ipynb | gpl-3.0 | # conda install ipyrad -c bioconda
import ipyrad.analysis as ipa
"""
Explanation: <span style="color:gray">ipyrad-analysis toolkit: </span> digest genomes
The purpose of this tool is to digest a genome file in silico using the same restriction enzymes that were used for an empirical data set to attempt to extract homologous data from the genome file. This can be a useful procedure for adding additional outgroup samples to a data set.
Required software
End of explanation
"""
genome = "/home/deren/Downloads/Ahypochondriacus_459_v2.0.fa"
"""
Explanation: A genome file
You will need a genome file in fasta format (optionally it can be gzip compressed).
End of explanation
"""
digest = ipa.digest_genome(
fasta=genome,
name="amaranthus-digest",
workdir="digested_genomes",
re1="CTGCAG",
re2="AATTC",
ncopies=10,
readlen=150,
min_size=300,
max_size=500,
)
fio = open(genome)
scaffolds = fio.read().split(">")[1:]
ordered = sorted(scaffolds, key=lambda x: len(x), reverse=True)
len(ordered[0])
digest.run()
"""
Explanation: Initialize the tool
You can generate single or paired-end data, and you will likely want to restrict the size of selected fragments to be within an expected size selection window, as is typically done in empirical data sets. Here I select all fragments occurring between two restriction enzymes where the intervening fragment is 300-500 bp in length. I then ask the analysis to return the digested fragments as 150 bp fastq reads and to provide 10 copies of each one.
End of explanation
"""
ll digested_genomes/
"""
Explanation: Check results
End of explanation
"""
|
keras-team/keras-io | examples/vision/ipynb/reptile.ipynb | apache-2.0 | import matplotlib.pyplot as plt
import numpy as np
import random
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
"""
Explanation: Few-Shot learning with Reptile
Author: ADMoreau<br>
Date created: 2020/05/21<br>
Last modified: 2020/05/30<br>
Description: Few-shot classification of the Omniglot dataset using Reptile.
Introduction
The Reptile algorithm was developed by OpenAI to
perform model agnostic meta-learning. Specifically, this algorithm was designed to
quickly learn to perform new tasks with minimal training (few-shot learning).
The algorithm works by performing Stochastic Gradient Descent using the
difference between weights trained on a mini-batch of never-before-seen data and the
model weights prior to training, over a fixed number of meta-iterations.
End of explanation
"""
learning_rate = 0.003
meta_step_size = 0.25
inner_batch_size = 25
eval_batch_size = 25
meta_iters = 2000
eval_iters = 5
inner_iters = 4
eval_interval = 1
train_shots = 20
shots = 5
classes = 5
"""
Explanation: Define the Hyperparameters
End of explanation
"""
class Dataset:
# This class will facilitate the creation of a few-shot dataset
# from the Omniglot dataset that can be sampled from quickly while also
# allowing to create new labels at the same time.
def __init__(self, training):
# Download the tfrecord files containing the omniglot data and convert to a
# dataset.
split = "train" if training else "test"
ds = tfds.load("omniglot", split=split, as_supervised=True, shuffle_files=False)
# Iterate over the dataset to get each individual image and its class,
# and put that data into a dictionary.
self.data = {}
def extraction(image, label):
# This function will shrink the Omniglot images to the desired size,
# scale pixel values and convert the RGB image to grayscale
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.rgb_to_grayscale(image)
image = tf.image.resize(image, [28, 28])
return image, label
for image, label in ds.map(extraction):
image = image.numpy()
label = str(label.numpy())
if label not in self.data:
self.data[label] = []
self.data[label].append(image)
self.labels = list(self.data.keys())
def get_mini_dataset(
self, batch_size, repetitions, shots, num_classes, split=False
):
temp_labels = np.zeros(shape=(num_classes * shots))
temp_images = np.zeros(shape=(num_classes * shots, 28, 28, 1))
if split:
test_labels = np.zeros(shape=(num_classes))
test_images = np.zeros(shape=(num_classes, 28, 28, 1))
# Get a random subset of labels from the entire label set.
label_subset = random.choices(self.labels, k=num_classes)
for class_idx, class_obj in enumerate(label_subset):
# Use enumerated index value as a temporary label for mini-batch in
# few shot learning.
temp_labels[class_idx * shots : (class_idx + 1) * shots] = class_idx
# If creating a split dataset for testing, select an extra sample from each
# label to create the test dataset.
if split:
test_labels[class_idx] = class_idx
images_to_split = random.choices(
self.data[label_subset[class_idx]], k=shots + 1
)
test_images[class_idx] = images_to_split[-1]
temp_images[
class_idx * shots : (class_idx + 1) * shots
] = images_to_split[:-1]
else:
# For each index in the randomly selected label_subset, sample the
# necessary number of images.
temp_images[
class_idx * shots : (class_idx + 1) * shots
] = random.choices(self.data[label_subset[class_idx]], k=shots)
dataset = tf.data.Dataset.from_tensor_slices(
(temp_images.astype(np.float32), temp_labels.astype(np.int32))
)
dataset = dataset.shuffle(100).batch(batch_size).repeat(repetitions)
if split:
return dataset, test_images, test_labels
return dataset
import urllib3
urllib3.disable_warnings() # Disable SSL warnings that may happen during download.
train_dataset = Dataset(training=True)
test_dataset = Dataset(training=False)
"""
Explanation: Prepare the data
The Omniglot dataset is a dataset of 1,623
characters taken from 50 different alphabets, with 20 examples for each character.
The 20 samples for each character were drawn online via Amazon's Mechanical Turk. For the
few-shot learning task, k samples (or "shots") are drawn randomly from n randomly-chosen
classes. These n numerical values are used to create a new set of temporary labels to use
to test the model's ability to learn a new task given few examples. In other words, if you
are training on 5 classes, your new class labels will be either 0, 1, 2, 3, or 4.
Omniglot is a great dataset for this task since there are many different classes to draw
from, with a reasonable number of samples for each class.
End of explanation
"""
_, axarr = plt.subplots(nrows=5, ncols=5, figsize=(20, 20))
sample_keys = list(train_dataset.data.keys())
for a in range(5):
for b in range(5):
temp_image = train_dataset.data[sample_keys[a]][b]
temp_image = np.stack((temp_image[:, :, 0],) * 3, axis=2)
temp_image *= 255
temp_image = np.clip(temp_image, 0, 255).astype("uint8")
if b == 2:
axarr[a, b].set_title("Class : " + sample_keys[a])
axarr[a, b].imshow(temp_image, cmap="gray")
axarr[a, b].xaxis.set_visible(False)
axarr[a, b].yaxis.set_visible(False)
plt.show()
"""
Explanation: Visualize some examples from the dataset
End of explanation
"""
def conv_bn(x):
x = layers.Conv2D(filters=64, kernel_size=3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
return layers.ReLU()(x)
inputs = layers.Input(shape=(28, 28, 1))
x = conv_bn(inputs)
x = conv_bn(x)
x = conv_bn(x)
x = conv_bn(x)
x = layers.Flatten()(x)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile()
optimizer = keras.optimizers.SGD(learning_rate=learning_rate)
"""
Explanation: Build the model
End of explanation
"""
training = []
testing = []
for meta_iter in range(meta_iters):
frac_done = meta_iter / meta_iters
cur_meta_step_size = (1 - frac_done) * meta_step_size
# Temporarily save the weights from the model.
old_vars = model.get_weights()
# Get a sample from the full dataset.
mini_dataset = train_dataset.get_mini_dataset(
inner_batch_size, inner_iters, train_shots, classes
)
for images, labels in mini_dataset:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
new_vars = model.get_weights()
# Perform SGD for the meta step.
for var in range(len(new_vars)):
new_vars[var] = old_vars[var] + (
(new_vars[var] - old_vars[var]) * cur_meta_step_size
)
# After the meta-learning step, reload the newly-trained weights into the model.
model.set_weights(new_vars)
# Evaluation loop
if meta_iter % eval_interval == 0:
accuracies = []
for dataset in (train_dataset, test_dataset):
# Sample a mini dataset from the full dataset.
train_set, test_images, test_labels = dataset.get_mini_dataset(
eval_batch_size, eval_iters, shots, classes, split=True
)
old_vars = model.get_weights()
# Train on the samples and get the resulting accuracies.
for images, labels in train_set:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
test_preds = model.predict(test_images)
            test_preds = tf.argmax(test_preds, axis=1).numpy()  # predicted class per test image
num_correct = (test_preds == test_labels).sum()
# Reset the weights after getting the evaluation accuracies.
model.set_weights(old_vars)
accuracies.append(num_correct / classes)
training.append(accuracies[0])
testing.append(accuracies[1])
if meta_iter % 100 == 0:
print(
"batch %d: train=%f test=%f" % (meta_iter, accuracies[0], accuracies[1])
)
"""
Explanation: Train the model
End of explanation
"""
# First, some preprocessing to smooth the training and testing arrays for display.
window_length = 100
train_s = np.r_[
training[window_length - 1 : 0 : -1], training, training[-1:-window_length:-1]
]
test_s = np.r_[
testing[window_length - 1 : 0 : -1], testing, testing[-1:-window_length:-1]
]
w = np.hamming(window_length)
train_y = np.convolve(w / w.sum(), train_s, mode="valid")
test_y = np.convolve(w / w.sum(), test_s, mode="valid")
# Display the training accuracies.
x = np.arange(0, len(test_y), 1)
plt.plot(x, test_y, x, train_y)
plt.legend(["test", "train"])
plt.grid()
train_set, test_images, test_labels = dataset.get_mini_dataset(
eval_batch_size, eval_iters, shots, classes, split=True
)
for images, labels in train_set:
with tf.GradientTape() as tape:
preds = model(images)
loss = keras.losses.sparse_categorical_crossentropy(labels, preds)
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
test_preds = model.predict(test_images)
test_preds = tf.argmax(test_preds, axis=1).numpy()  # predicted class per test image
_, axarr = plt.subplots(nrows=1, ncols=5, figsize=(20, 20))
sample_keys = list(train_dataset.data.keys())
for i, ax in zip(range(5), axarr):
temp_image = np.stack((test_images[i, :, :, 0],) * 3, axis=2)
temp_image *= 255
temp_image = np.clip(temp_image, 0, 255).astype("uint8")
ax.set_title(
"Label : {}, Prediction : {}".format(int(test_labels[i]), test_preds[i])
)
ax.imshow(temp_image, cmap="gray")
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.show()
"""
Explanation: Visualize Results
End of explanation
"""
|
ioos/system-test | content/downloads/notebooks/2015-10-12-fetching_data.ipynb | unlicense | from datetime import datetime, timedelta
event_date = datetime(2015, 8, 15)
start = event_date - timedelta(days=4)
stop = event_date + timedelta(days=4)
"""
Explanation: This notebook shows a typical workflow to query a Catalog Service for the Web (CSW) and create a request for data endpoints that are suitable for download.
The catalog of choice is the NGCD geoportal (http://www.ngdc.noaa.gov/geoportal/csw) and we want to query it using a geographical bounding box, a time range, and a variable of interested.
The example below will fetch Sea Surface Temperature (SST) data from all available observations and models in the Boston Harbor region.
The goal is to assess the water temperature for the Boston Light Swim event.
We will search for data $\pm$ 4 days centered at the event date.
End of explanation
"""
spacing = 0.25
bbox = [-71.05-spacing, 42.28-spacing,
-70.82+spacing, 42.38+spacing]
"""
Explanation: The bounding box is slightly larger than the Boston Harbor to ensure we get some data.
End of explanation
"""
import iris
from utilities import CF_names
sos_name = 'sea_water_temperature'
name_list = CF_names[sos_name]
units = iris.unit.Unit('celsius')
"""
Explanation: The CF_names object is just a Python dictionary whose keys are SOS names and whose values contain all possible combinations of temperature variable names in the CF conventions. Note that we also define a units object.
We use the units object to coerce all data to Celsius.
End of explanation
"""
from owslib import fes
from utilities import fes_date_filter
kw = dict(wildCard='*',
escapeChar='\\',
singleChar='?',
propertyname='apiso:AnyText')
or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw)
for val in name_list])
# Exclude ROMS Averages and History files.
not_filt = fes.Not([fes.PropertyIsLike(literal='*Averages*', **kw)])
begin, end = fes_date_filter(start, stop)
filter_list = [fes.And([fes.BBox(bbox), begin, end, or_filt, not_filt])]
"""
Explanation: Now it is time to stitch all that together.
For that we will use OWSLib*.
Constructing the filter is probably the most complex part.
We start with a list comprehension using the fes.Or to create the variables filter.
The next step is to exclude some unwanted results (ROMS Average files) using fes.Not.
To select the desired dates we wrote a wrapper function that takes the start and end dates of the event.
Finally, we apply the fes.And to join all the conditions above in one filter list.
* OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models.
End of explanation
"""
from owslib.csw import CatalogueServiceWeb
csw = CatalogueServiceWeb('http://www.ngdc.noaa.gov/geoportal/csw',
timeout=60)
csw.getrecords2(constraints=filter_list, maxrecords=1000, esn='full')
fmt = '{:*^64}'.format
print(fmt(' Catalog information '))
print("CSW version: {}".format(csw.version))
print("Number of datasets available: {}".format(len(csw.records.keys())))
"""
Explanation: Now we are ready to load a csw object and feed it with the filter we created.
End of explanation
"""
from utilities import service_urls
dap_urls = service_urls(csw.records, service='odp:url')
sos_urls = service_urls(csw.records, service='sos:url')
print(fmt(' SOS '))
for url in sos_urls:
print('{}'.format(url))
print(fmt(' DAP '))
for url in dap_urls:
print('{}.html'.format(url))
"""
Explanation: We found 13 datasets!
Not bad for such a narrow search area and time-span.
What do we have there?
Let's use the custom service_urls function to split the datasets into OPeNDAP and SOS endpoints.
End of explanation
"""
from utilities import is_station
non_stations = []
for url in dap_urls:
try:
if not is_station(url):
non_stations.append(url)
except RuntimeError as e:
print("Could not access URL {}. {!r}".format(url, e))
dap_urls = non_stations
print(fmt(' Filtered DAP '))
for url in dap_urls:
print('{}.html'.format(url))
"""
Explanation: We will ignore the SOS endpoints for now and use only the DAP endpoints.
But note that some of those SOS and DAP endpoints look suspicious.
The Scripps Institution of Oceanography (SIO/UCSD) data should not appear in a search for the Boston Harbor.
That is a known issue and we are working to sort it out. Meanwhile we have to filter out all station observations from the DAP list with the is_station function.
However, that filter still leaves behind URLs like http://tds.maracoos.org/thredds/dodsC/SST-Three-Agg.nc.html. That is probably satellite data and not model output.
In an ideal world all datasets would have the metadata coverage_content_type defined. With the coverage_content_type we could tell models apart automatically.
Until then we will have to make due with the heuristic function is_model from the utilities module.
The is_model function works by comparing the metadata (and sometimes the data itself) against a series of criteria,
like grid conventions,
to figure out if a dataset is model data or not.
Because the function operates on the data we will call it later on when we start downloading the data.
End of explanation
"""
from pyoos.collectors.ndbc.ndbc_sos import NdbcSos
collector_ndbc = NdbcSos()
collector_ndbc.set_bbox(bbox)
collector_ndbc.end_time = stop
collector_ndbc.start_time = start
collector_ndbc.variables = [sos_name]
ofrs = collector_ndbc.server.offerings
title = collector_ndbc.server.identification.title
print(fmt(' NDBC Collector offerings '))
print('{}: {} offerings'.format(title, len(ofrs)))
"""
Explanation: We still need to find endpoints for the observations.
For that we'll use pyoos' NdbcSos and CoopsSos collectors.
The pyoos API is different from OWSLib's, but note that we are re-using the same query variables we created for the catalog search (bbox, start, stop, and sos_name).
End of explanation
"""
from utilities import collector2table, get_ndbc_longname
ndbc = collector2table(collector=collector_ndbc)
names = []
for s in ndbc['station']:
try:
name = get_ndbc_longname(s)
except ValueError:
name = s
names.append(name)
ndbc['name'] = names
ndbc.set_index('name', inplace=True)
ndbc.head()
"""
Explanation: That number is misleading!
Do we have 955 buoys available there?
What exactly are the offerings?
There is only one way to find out.
Let's get the data!
End of explanation
"""
from pyoos.collectors.coops.coops_sos import CoopsSos
collector_coops = CoopsSos()
collector_coops.set_bbox(bbox)
collector_coops.end_time = stop
collector_coops.start_time = start
collector_coops.variables = [sos_name]
ofrs = collector_coops.server.offerings
title = collector_coops.server.identification.title
print(fmt(' Collector offerings '))
print('{}: {} offerings'.format(title, len(ofrs)))
from utilities import get_coops_metadata
coops = collector2table(collector=collector_coops)
names = []
for s in coops['station']:
try:
name = get_coops_metadata(s)[0]
except ValueError:
name = s
names.append(name)
coops['name'] = names
coops.set_index('name', inplace=True)
coops.head()
"""
Explanation: That makes more sense.
Two buoys were found in the bounding box,
and the name of at least one of them makes sense.
Now the same thing for CoopsSos.
End of explanation
"""
from pandas import concat
all_obs = concat([coops, ndbc])
all_obs.head()
from pandas import DataFrame
from owslib.ows import ExceptionReport
from utilities import pyoos2df, save_timeseries
iris.FUTURE.netcdf_promote = True
data = dict()
col = 'sea_water_temperature (C)'
for station in all_obs.index:
try:
idx = all_obs['station'][station]
df = pyoos2df(collector_ndbc, idx, df_name=station)
if df.empty:
df = pyoos2df(collector_coops, idx, df_name=station)
data.update({idx: df[col]})
except ExceptionReport as e:
print("[{}] {}:\n{}".format(idx, station, e))
"""
Explanation: We found one more.
Now we can merge both into one table and start downloading the data.
End of explanation
"""
from pandas import date_range
index = date_range(start=start, end=stop, freq='1H')
for k, v in data.items():
data[k] = v.reindex(index=index, limit=1, method='nearest')
obs_data = DataFrame.from_dict(data)
obs_data.head()
"""
Explanation: The cell below reduces or interpolates each series,
depending on its original frequency,
to a 1-hour time series.
End of explanation
"""
import warnings
from iris.exceptions import (CoordinateNotFoundError, ConstraintMismatchError,
MergeError)
from utilities import (quick_load_cubes, proc_cube, is_model,
get_model_name, get_surface)
cubes = dict()
for k, url in enumerate(dap_urls):
print('\n[Reading url {}/{}]: {}'.format(k+1, len(dap_urls), url))
try:
cube = quick_load_cubes(url, name_list,
callback=None, strict=True)
if is_model(cube):
cube = proc_cube(cube, bbox=bbox,
time=(start, stop), units=units)
else:
print("[Not model data]: {}".format(url))
continue
cube = get_surface(cube)
mod_name, model_full_name = get_model_name(cube, url)
cubes.update({mod_name: cube})
except (RuntimeError, ValueError,
ConstraintMismatchError, CoordinateNotFoundError,
IndexError) as e:
print('Cannot get cube for: {}\n{}'.format(url, e))
"""
Explanation: And now the same for the models. Note that now we use is_model to filter out non-model endpoints.
End of explanation
"""
from iris.pandas import as_series
from utilities import (make_tree, get_nearest_water,
add_station, ensure_timeseries, remove_ssh)
model_data = dict()
for mod_name, cube in cubes.items():
print(fmt(mod_name))
try:
tree, lon, lat = make_tree(cube)
except CoordinateNotFoundError as e:
print('Cannot make KDTree for: {}'.format(mod_name))
continue
# Get model series at observed locations.
raw_series = dict()
for station, obs in all_obs.iterrows():
try:
kw = dict(k=10, max_dist=0.08, min_var=0.01)
args = cube, tree, obs.lon, obs.lat
series, dist, idx = get_nearest_water(*args, **kw)
except ValueError as e:
status = "No Data"
print('[{}] {}'.format(status, obs.name))
continue
if not series:
status = "Land "
else:
series = as_series(series)
raw_series.update({obs['station']: series})
status = "Water "
print('[{}] {}'.format(status, obs.name))
if raw_series: # Save that model series.
model_data.update({mod_name: raw_series})
del cube
"""
Explanation: And now we can use the iris cube objects we collected to download model data near the buoys we found above.
We will use get_nearest_water to search the 10 nearest model
points no more than 0.08 degrees away from each buoy.
(This step is still a little bit clunky and needs some improvement!)
End of explanation
"""
import matplotlib.pyplot as plt
buoy = '44013'
fig , ax = plt.subplots(figsize=(11, 2.75))
obs_data[buoy].plot(ax=ax, label='Buoy')
for model in model_data.keys():
try:
model_data[model][buoy].plot(ax=ax, label=model)
except KeyError:
pass # Could not find a model at this location.
leg = ax.legend()
buoy = '44029'
fig , ax = plt.subplots(figsize=(11, 2.75))
obs_data[buoy].plot(ax=ax, label='Buoy')
for model in model_data.keys():
try:
model_data[model][buoy].plot(ax=ax, label=model)
except KeyError:
pass # Could not find a model at this location.
leg = ax.legend()
buoy = '8443970'
fig , ax = plt.subplots(figsize=(11, 2.75))
obs_data[buoy].plot(ax=ax, label='Buoy')
for model in model_data.keys():
try:
model_data[model][buoy].plot(ax=ax, label=model)
except KeyError:
pass # Could not find a model at this location.
leg = ax.legend()
"""
Explanation: To end this post let's plot the 3 buoys we found together with the nearest model grid point.
End of explanation
"""
HTML(html)
"""
Explanation: That is it!
We fetched data based only on a bounding box, time-range, and variable name.
The workflow is not as smooth as we would like.
We had to mix OWSLib catalog searches with two different pyoos collectors to download the observed and modeled data.
Another hiccup is all the workarounds used to go from iris cubes to pandas series/dataframes.
There is a clear need to a better way to represent CF feature types in a single Python object.
To end this post check out the full version of the Boston Light Swim notebook. (Specially the interactive map at the end.)
End of explanation
"""
|
kimegitee/deep-learning | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
"""
from sklearn import preprocessing
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
return preprocessing.normalize(x.reshape((-1, 3072))).reshape((-1, 32, 32, 3))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return preprocessing.label_binarize(x, classes = range(10))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None,) + image_shape, name = 'x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape = (None, n_classes), name = 'y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name = 'keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
input_depth = x_tensor.get_shape().as_list()[3]
conv_filter = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs]))
conv_strides = (1,) + conv_strides + (1,)
conv_out = tf.nn.conv2d(x_tensor, conv_filter, conv_strides, padding = 'SAME')
conv_bias = tf.Variable(tf.zeros([conv_num_outputs]))
conv_out_with_bias = tf.nn.bias_add(conv_out, conv_bias)
relu_out = tf.nn.relu(conv_out_with_bias)  # apply the activation after adding the bias
pool_strides = (1,) + pool_strides + (1,)
pool_ksize = (1,) + pool_ksize + (1,)
pool_out = tf.nn.max_pool(relu_out, pool_ksize, pool_strides, padding = 'SAME')
return pool_out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()[1:]
size = 1
for i in shape:
size *= i
return tf.reshape(x_tensor, (-1, size))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
weights_shape = x_tensor.get_shape().as_list()[1:] + [num_outputs]
weights = tf.Variable(tf.truncated_normal(weights_shape))
bias = tf.Variable(tf.zeros([num_outputs]))
fully_conn = tf.add(tf.matmul(x_tensor, weights), bias)
fully_conn = tf.nn.relu(fully_conn)
return fully_conn
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
weights_shape = x_tensor.get_shape().as_list()[1:] + [num_outputs]
weights = tf.Variable(tf.truncated_normal(weights_shape))
bias = tf.Variable(tf.zeros([num_outputs]))
fully_conn = tf.add(tf.matmul(x_tensor, weights), bias)
return fully_conn
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_1 = conv2d_maxpool(x, 40, (5, 5), (2, 2), (2, 2), (1, 1))
conv_2 = conv2d_maxpool(conv_1, 300, (3, 3), (2, 2), (2, 2), (1, 1))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flat = flatten(conv_2)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_1 = fully_conn(flat, 2000)
fc_drop_1 = tf.nn.dropout(fc_1, keep_prob)
fc_2 = fully_conn(fc_drop_1, 1000)
fc_drop_2 = tf.nn.dropout(fc_2, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = output(fc_drop_2, 10)
# TODO: return output
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict = {x: feature_batch, y: label_batch, keep_prob: 1.0})
acc = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Loss: {:>9.2f} Acc:{:<.3f}'.format(loss, acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 50
batch_size = 1024
keep_probability = 0.85
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
|
mcamack/Jupyter-Notebooks | tensorflow/tensorflow-Regression-Regularization.ipynb | apache-2.0 | %matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01 # Hyperparameters
training_epochs = 100
x_train = np.linspace(-1, 1, 101) # Dataset
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
X = tf.placeholder(tf.float32) # tf placeholder nodes for input/output
Y = tf.placeholder(tf.float32)
w = tf.Variable(0.0, name="weights") # Weights variable
def model(X, w): # defines model as Y = wX
return tf.multiply(X, w)
y_model = model(X, w) # Cost Function
cost = tf.square(Y-y_model)
# Defines the operation to be called on each iteration of the learning algorithm
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session() # Setup the tf Session and init variables
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs): # Loop thru dataset multiple times
for (x, y) in zip(x_train, y_train): # Loop thru each point in the dataset
sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w) # Get final parameter value
sess.close() # Close the session
plt.scatter(x_train, y_train) # Plot the original data
y_learned = x_train*w_val
plt.plot(x_train, y_learned, 'r') # Plot the best-fit line
plt.show()
"""
Explanation: Tensorflow Regression
Regression algorithm tries to find the function that best maps an input to an output in the simplest way, without overcomplicating things.
* Input can be discrete or continuous, but the output is always continuous
* Classification is for discrete outputs
How well is the algorithm working, and how do we find the best function? We want a function that is not biased towards the training data it learned from, and we don't want the results to vary wildly just because the real data is slightly different from the training set. We want it to generalize to unseen data as well.
Variance - indicates how sensitive a prediction is to the training set
* low variance is desired because it shouldn't matter how we choose the training set
* measures how badly the responses vary
Bias - indicates the strength of the assumptions made on the training set
* low bias is desired to avoid underfitting, since overly strong assumptions keep the model from matching the truth
* measures how far off the model is from the truth
Cost Function is used to evaluate each candidate solution
* Higher cost means a worse solution, want the lowest cost
* Tensorflow loops through all the data (an epoch) looking for the best possible value
* Any cost function can be used, typically sum of squared errors:
* the error difference between each data point and the chosen solution is squared (to penalize larger errors) and then added together to get a single "score" for that solution
* the lowest score ends up being the best possible solution
Linear Regression
End of explanation
"""
%matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01 # Hyperparameters
training_epochs = 40
trX = np.linspace(-1, 1, 101) # Dataset based on 5th deg polynomial
num_coeffs = 6
trY_coeffs = [1, 2, 3, 4, 5, 6]
trY = 0
for i in range(num_coeffs):
trY += trY_coeffs[i] * np.power(trX, i)
trY += np.random.randn(*trX.shape) * 1.5 # Add noise
plt.scatter(trX, trY)
plt.show()
X = tf.placeholder(tf.float32) # tf placeholder nodes for input/output
Y = tf.placeholder(tf.float32)
def model(X, w): # defines model as 5th deg poly
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X, i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters") # Sets param vector to zeros
y_model = model(X, w)
cost = (tf.pow(Y-y_model, 2)) # Cost Function
# Defines the operation to be called on each iteration of the learning algorithm
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session() # Setup the tf Session and init variables
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs): # Loop thru dataset multiple times
for (x, y) in zip(trX, trY): # Loop thru each point in the dataset
sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w) # Get final parameter value
print("5th deg polynomial coeffs:\n", w_val)
sess.close() # Close the session
plt.scatter(trX, trY) # Plot the original data
trY2 = 0
for i in range(num_coeffs): # Plot the result
trY2 += w_val[i] * np.power(trX, i)
plt.plot(trX, trY2, 'r')
plt.show()
"""
Explanation: Polynomial Regression
When a simple linear function won't fit the data, a polynomial function offers more flexibility. An Nth degree polynomial: $f(x) = w_nx^n + ... + w_1x + w_0$ can also describe a linear function when $n=1$
End of explanation
"""
|
FireCARES/data | analysis/validated-boundaries-vs-government-unit-density.ipynb | mit | import psycopg2
from psycopg2.extras import RealDictCursor
import pandas as pd
# import geopandas as gpd
# from shapely import wkb
# from shapely.geometry import mapping as to_geojson
# import folium
pd.options.display.max_columns = None
pd.options.display.max_rows = None
#pd.set_option('display.float_format', lambda x: '%.3f' % x)
%matplotlib inline
conn = psycopg2.connect('service=firecares')
nfirs = psycopg2.connect('service=nfirs')
"""
Explanation: Validated boundaries to government unit incident density comparison
The backing theory for this notebook is proving that we will be able to use the highest-density (fire count vs government unit area) government unit to determine a department's boundary for departments that do not have boundaries.
End of explanation
"""
# Create materialized view of all usgs govt units in FireCARES
q = """
create materialized view if not exists usgs_governmentunits as
(
select id, population, county_name as name, 'countyorequivalent' as source, geom from usgs_countyorequivalent where geom is not null
union
select id, population, place_name as name, 'incorporatedplace' as source, geom from usgs_incorporatedplace where geom is not null
union
select id, population, minorcivildivision_name as name, 'minorcivildivision' as source, geom from usgs_minorcivildivision where geom is not null
union
select id, population, name, 'nativeamericanarea' as source, geom from usgs_nativeamericanarea where geom is not null
union
select id, 0 as population, name, 'reserve' as source, geom from usgs_reserve where geom is not null
union
select id, population, state_name as name, 'stateorterritoryhigh' as source, geom from usgs_stateorterritoryhigh where geom is not null
union
select id, population, place_name as name, 'unincorporatedplace' as source, geom from usgs_unincorporatedplace where geom is not null
);
create unique index on usgs_governmentunits (id, source);
create index on usgs_governmentunits using gist (geom);
"""
with conn.cursor() as c:
c.execute(q)
conn.commit()
# Link remote firecares usgs_governmentunits view to nfirs-local usgs_government units
q = """
create foreign table usgs_governmentunits (id integer, population integer, name character varying(120), source text, geom geometry)
server firecares
options (table_name 'usgs_governmentunits');
"""
with nfirs.cursor() as c:
c.execute(q)
nfirs.commit()
# Old nfirs.firestation_firedepartment foreign table columns needed to be synced
q = """
alter foreign TABLE firestation_firedepartment add column archived boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column domain_name character varying(255);
alter foreign TABLE firestation_firedepartment add column owned_tracts_geom public.geometry(MultiPolygon,4326);
alter foreign TABLE firestation_firedepartment add column display_metrics boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column boundary_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column cfai_accredited boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column ems_transport boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column staffing_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column stations_verified boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column census_override boolean NOT NULL;
alter foreign TABLE firestation_firedepartment add column additional_fdids character varying(255);
"""
with nfirs.cursor() as c:
c.execute(q)
nfirs.commit()
q = """
create foreign table if not exists firecares_core_address (id integer NOT NULL,
address_line1 character varying(100) NOT NULL,
address_line2 character varying(100),
city character varying(50) NOT NULL,
state_province character varying(40) NOT NULL,
postal_code character varying(10) NOT NULL,
geom public.geometry(Point,4326),
geocode_results text,
country_id character varying(2) NOT NULL)
server firecares
options (table_name 'firecares_core_address');
"""
with nfirs.cursor() as c:
c.execute(q)
nfirs.commit()
"""
Explanation: DB migration/setup
End of explanation
"""
q = """
select id, fdid, state, name
from firestation_firedepartment
where boundary_verified = true;
"""
with nfirs.cursor(cursor_factory=RealDictCursor) as c:
c.execute(q)
fds = c.fetchall()
q = """
with fires as (select * from joint_buildingfires
inner join joint_incidentaddress
using (fdid, inc_no, inc_date, state, exp_no)
where state = %(state)s and fdid = %(fdid)s
),
govt_units as (
select gu.name, gu.source, gu.id, gu.geom, fd.id as fc_id, fd.geom as fd_geom, ST_Distance(addr.geom, ST_Centroid(gu.geom)) as distance_to_headquarters
from firestation_firedepartment fd
inner join firecares_core_address addr
on addr.id = fd.headquarters_address_id
join usgs_governmentunits gu
on ST_Intersects(ST_Buffer(addr.geom, 0.05), gu.geom)
where
fd.fdid = %(fdid)s and fd.state = %(state)s and source != 'stateorterritoryhigh'
)
select gu.fc_id, count(fires) / ST_Area(gu.geom) as density, count(fires), ST_Area(ST_SymDifference(gu.fd_geom, gu.geom)) / ST_Area(gu.fd_geom) as percent_difference_to_verified_boundary, ST_Area(gu.geom), gu.distance_to_headquarters, gu.name, gu.id, gu.source from fires
inner join govt_units gu
on ST_Intersects(fires.geom, gu.geom)
group by gu.name, gu.id, gu.geom, gu.source, gu.distance_to_headquarters, gu.fd_geom, gu.fc_id
order by ST_Area(gu.geom) / count(fires) asc;
"""
for fd in fds:
with nfirs.cursor(cursor_factory=RealDictCursor) as c:
print 'Analyzing: {} (id: {} fdid: {} {})'.format(fd['name'], fd['id'], fd['fdid'], fd['state'])
c.execute(q, dict(fdid=fd['fdid'], state=fd['state']))
items = c.fetchall()
df = pd.DataFrame(items)
df.to_csv('./boundary-analysis-{}.csv'.format(fd['id']))
"""
Explanation: Processing
End of explanation
"""
from glob import glob
df = None
for f in glob("boundary-analysis*.csv"):
if df is not None:
df = df.append(pd.read_csv(f))
else:
df = pd.read_csv(f)
df.rename(columns={'Unnamed: 0': 'rank'}, inplace=True)
selected_government_units = df[df['rank'] == 0].set_index('fc_id')
total_validated_department_count = len(selected_government_units)
perfect_fits = len(selected_government_units[selected_government_units['percent_difference_to_verified_boundary'] == 0])
print 'Perfect fits: {}/{} ({:.2%})'.format(perfect_fits, total_validated_department_count, float(perfect_fits) / total_validated_department_count)
print 'Machine-selected government unit area difference mean: {:.2%}'.format(df[df['rank'] == 0].percent_difference_to_verified_boundary.mean())
selected_government_units['percent_difference_to_verified_boundary'].hist(bins=50)
selected_government_units
df.set_index('fc_id')
df.to_csv('./validated-boundary-vs-government-unit-density.csv')
pd.read_csv('./validated-boundary-vs-government-unit-density.csv')
"""
Explanation: Results
End of explanation
"""
|
domino14/macondo | notebooks/win_pct/calculate_win_percentages.ipynb | gpl-3.0 | max_spread = 300
counter_dict_by_spread_and_tiles_remaining = {x:{
spread:0 for spread in range(max_spread,-max_spread-1,-1)} for x in range(0,94)}
win_counter_dict_by_spread_and_tiles_remaining = deepcopy(counter_dict_by_spread_and_tiles_remaining)
t0=time.time()
print('There are {} games'.format(len(win_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
# truncate spread to the range -max_spread to max_spread
end_of_turn_tiles_left = int(row[10])-int(row[7])
end_of_turn_spread = min(max(int(row[6])-int(row[11]),-max_spread),max_spread)
if end_of_turn_tiles_left > 0:
counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread] += 1
if row[0]=='p1':
win_counter_dict_by_spread_and_tiles_remaining[
end_of_turn_tiles_left][end_of_turn_spread] += win_dict[row[1]]
else:
win_counter_dict_by_spread_and_tiles_remaining[
end_of_turn_tiles_left][end_of_turn_spread] += (1-win_dict[row[1]])
# debug rows
# if i<10:
# print(row)
# print(end_of_turn_spread)
# print(end_of_turn_tiles_left)
# print(counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread])
# print(win_counter_dict_by_spread_and_tiles_remaining[end_of_turn_tiles_left][end_of_turn_spread])
count_df = pd.DataFrame(counter_dict_by_spread_and_tiles_remaining)
win_df = pd.DataFrame(win_counter_dict_by_spread_and_tiles_remaining)
win_pct_df = win_df/count_df
fig,ax = plt.subplots(figsize=(12,8))
sns.heatmap(win_pct_df, ax=ax)
ax.set_xlabel('Tiles remaining')
ax.set_ylabel('Game spread')
ax.set_title('Win % by tiles remaining and spread')
plt.savefig('win_pct.jpg')
count_df.iloc[300:350,79:]
"""
Explanation: Can define what spread beyond which you assume player has a 0 or 100% chance of winning - using 300 as first guess.
Also, spreads now range only from 0 to positive numbers, because trailing by 50 and winning is the same outcome and leading by 50 and losing (just swapping the players' perspectives)
End of explanation
"""
win_pct_df.iloc[250:350,79:]
"""
Explanation: The 50% win line is likely a little bit above 0 spread, because when you end a turn with 0 spread, your opponent on average gets an extra half-turn more than you for the rest of the game. Let's find that line.
End of explanation
"""
pd.options.display.max_rows = 999
"""
Explanation: Opening turn scores
End of explanation
"""
counter_dict_by_opening_turn_score = {x:0 for x in range(0,131)}
win_counter_dict_by_opening_turn_score = {x:0 for x in range(0,131)}
rows = []
t0=time.time()
print('There are {} games'.format(len(win_dict)))
with open(log_file,'r') as f:
moveReader = csv.reader(f)
next(moveReader)
for i,row in enumerate(moveReader):
if (i+1)%1000000==0:
print('Processed {} rows in {} seconds'.format(i+1, time.time()-t0))
if row[2]=='1':
counter_dict_by_opening_turn_score[int(row[5])] += 1
# check which player went first
if row[0]=='p1':
win_counter_dict_by_opening_turn_score[int(row[5])] += win_dict[row[1]]
rows.append([int(row[5]), win_dict[row[1]]])
else:
win_counter_dict_by_opening_turn_score[int(row[5])] += 1-win_dict[row[1]]
rows.append([int(row[5]), 1-win_dict[row[1]]])
# # debug rows
# if i<10:
# print(row)
tst_df=pd.DataFrame(rows).rename(columns={0:'opening turn score',1:'win'})
opening_turn_count = pd.Series(counter_dict_by_opening_turn_score)
opening_turn_win_count = pd.Series(win_counter_dict_by_opening_turn_score)
opening_turn_win_pct = opening_turn_win_count/opening_turn_count
tst = opening_turn_win_pct.dropna()
opening_turn_win_pct
fig,ax=plt.subplots()
plt.plot(tst)
plt.savefig('plot1.png')
fig,ax=plt.subplots()
sns.regplot(x='opening turn score',y='win',data=tst_df,x_estimator=np.mean,ax=ax)
plt.savefig('regression_plot.png')
fig,ax=plt.subplots()
sns.regplot(x='opening turn score',y='win',data=tst_df,x_estimator=np.mean,ax=ax,fit_reg=False)
plt.savefig('regression_plot_no_fitline.png')
"""
Explanation: Apply smoothing
We want the win percentage to increase monotonically with spread, but with a limited sample size this may not hold in the raw counts. Therefore, we want to be able to average win percentages over neighboring scenarios (similar spread difference and similar number of tiles remaining).
End of explanation
"""
|
CELMA-project/CELMA | MES/singleOperators/properZFailConvergence.ipynb | lgpl-3.0 | %matplotlib notebook
from IPython.display import display
from sympy import init_printing
from sympy import S, Eq, Limit
from sympy import sin, cos, tanh, pi
from sympy import symbols
from boutdata.mms import x, z
init_printing()
"""
Explanation: Why the proper Z function fails to show convergence
Here we will investigate why the function called "properZ" fails to show convergence.
Initialize
End of explanation
"""
Lx=symbols('Lx')
# We multiply with cos(6*pi*x/(2*Lx)) in order to give it a modulation, and to get a non-zero value at the boundary
s = 0.15
c = 50
w = 30
f = ((1/2) - (1/2)*(tanh(s*(x-(c - (w/2))))))*cos(6*pi*x/(2*Lx))*sin(2*z)
display(Eq(symbols('f'),f))
theLimit = Limit(f,x,0,dir='+')
display(Eq(theLimit, theLimit.doit()))
theLimit = Limit(f,x,0,dir='-')
display(Eq(theLimit, theLimit.doit()))
"""
Explanation: The function called "proper Z" (as it )
End of explanation
"""
|
dxl0632/deeplearning_nd_udacity | intro-to-rnns/Anna_KaRNNa_Exercises.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
encoded[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
len(vocab)
"""
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
"""
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
"""
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
"""
batches = get_batches(encoded, 10, 10)
x, y = next(batches)
encoded.shape
x.shape
encoded
print('x\n', x[:10, :])
print('\ny\n', y[:10, :])
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 10 sequence steps.
End of explanation
"""
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, (batch_size, num_steps), name='inputs')
targets = tf.placeholder(tf.int32, (batch_size, num_steps), name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
"""
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
"""
def lstm_cell(lstm_size, keep_prob):
cell = tf.contrib.rnn.BasicLSTMCell(lstm_size, reuse=tf.get_variable_scope().reuse)
return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# # Use a basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(batch_size, reuse=tf.get_variable_scope().reuse)
# # Add dropout to the cell outputs
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell(lstm_size, keep_prob) for _ in range(num_layers)], state_is_tuple=True)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
# https://stackoverflow.com/questions/42669578/tensorflow-1-0-valueerror-attempt-to-reuse-rnncell-with-a-different-variable-s
# def lstm_cell():
# cell = tf.contrib.rnn.NASCell(state_size, reuse=tf.get_variable_scope().reuse)
# return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=0.8)
# rnn_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True)
# outputs, current_state = tf.nn.dynamic_rnn(rnn_cells, x, initial_state=rnn_tuple_state)
# MultiRNNCell([BasicLSTMCell(...) for _ in range(num_layers)])
"""
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well, because it creates a list containing the same cell object multiple times. In older TensorFlow versions this worked (different weight matrices were still created for each layer), but in TensorFlow 1.1+ reusing one cell object raises an error, which is why the code here builds a fresh cell per layer with the lstm_cell helper. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
"""
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.add(tf.matmul(x, softmax_w), softmax_b)
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='prediction')
return out, logits
"""
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
"""
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
"""
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
"""
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
"""
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients: if the global norm of the gradients exceeds some threshold, they are all rescaled so that the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
"""
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state, scope='layer')
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
"""
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
"""
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
"""
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
"""
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then use that prediction to predict the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
tpin3694/tpin3694.github.io | machine-learning/nested_cross_validation.ipynb | mit | # Load required packages
from sklearn import datasets
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.svm import SVC
"""
Explanation: Title: Nested Cross Validation
Slug: nested_cross_validation
Summary: Nested Cross Validation using scikit-learn.
Date: 2016-12-02 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
Often we want to tune the parameters of a model (for example, C in a support vector machine). That is, we want to find the value of a parameter that minimizes our loss function. The best way to do this is cross validation:
1. Set the parameter you want to tune to some value.
2. Split your data into K 'folds' (sections).
3. Train your model using K-1 folds using the parameter value.
4. Test your model on the remaining fold.
5. Repeat steps 3 and 4 so that every fold is the test data once.
6. Repeat steps 1 to 5 for every possible value of the parameter.
7. Report the parameter that produced the best result.
However, as Cawley and Talbot point out in their 2010 paper, since we used the test set to both select the values of the parameter and evaluate the model, we risk optimistically biasing our model evaluations. For this reason, if a test set is used to select model parameters, then we need a different test set to get an unbiased evaluation of that selected model.
One way to overcome this problem is to have nested cross validations. First, an inner cross validation is used to tune the parameters and select the best model. Second, an outer cross validation is used to evaluate the model selected by the inner cross validation.
Preliminaries
End of explanation
"""
# Load the data
dataset = datasets.load_breast_cancer()
# Create X from the features
X = dataset.data
# Create y from the target
y = dataset.target
"""
Explanation: Get Data
The data for this tutorial is breast cancer data with 30 features and a binary target variable.
End of explanation
"""
# Create a scaler object
sc = StandardScaler()
# Fit the scaler to the feature data and transform
X_std = sc.fit_transform(X)
"""
Explanation: Standardize Data
End of explanation
"""
# Create a list of 10 candidate values for the C parameter
C_candidates = dict(C=np.logspace(-4, 4, 10))
# Create a gridsearch object with the support vector classifier and the C value candidates
clf = GridSearchCV(estimator=SVC(), param_grid=C_candidates)
"""
Explanation: Create Inner Cross Validation (For Parameter Tuning)
This is our inner cross validation. We will use this to hunt for the best parameters for C, the penalty for misclassifying a data point. GridSearchCV will conduct steps 1-6 listed at the top of this tutorial.
End of explanation
"""
# Fit the cross validated grid search on the data
clf.fit(X_std, y)
# Show the best value for C
clf.best_estimator_.C
"""
Explanation: The code below isn't necessary for parameter tuning using nested cross validation; however, to demonstrate that our inner cross validation grid search can find the best value for the parameter C, we will run it once here:
End of explanation
"""
cross_val_score(clf, X_std, y)
"""
Explanation: Create Outer Cross Validation (For Model Evaluation)
With our inner cross validation constructed, we can use cross_val_score to evaluate the model with a second (outer) cross validation.
The code below splits the data into three folds, running the inner cross validation on two of the folds (merged together) and then evaluating the model on the third fold. This is repeated three times so that every fold is used for testing once.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.15/_downloads/plot_artifacts_detection.ipynb | bsd-3-clause | import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
"""
Explanation: Introduction to artifacts and artifact detection
Since MNE supports the data of many different acquisition systems, the
particular artifacts in your data might behave very differently from the
artifacts you can observe in our tutorials and examples.
Therefore you should be aware of the different approaches and of
the variability of artifact rejection (automatic/manual) procedures described
onwards. In the end, always consider visually inspecting your data
after artifact rejection or correction.
Background: what is an artifact?
Artifacts are sources of signal interference that can be
endogenous (biological) or exogenous (environmental).
Typical biological artifacts are head movements, eye blinks
or eye movements, heart beats. The most common environmental
artifact is due to the power line, the so-called line noise.
How to handle artifacts?
MNE deals with artifacts by first identifying them, and subsequently removing
them. Detection of artifacts can be done visually, or using automatic routines
(or a combination of both). After you know what the artifacts are, you need
to remove them. This can be done by:
- *ignoring* the piece of corrupted data
- *fixing* the corrupted data
For the artifact detection the functions MNE provides depend on whether
your data is continuous (Raw) or epoch-based (Epochs) and depending on
whether your data is stored on disk or already in memory.
Detecting the artifacts without reading the complete data into memory allows
you to work with datasets that are too large to fit in memory all at once.
Detecting the artifacts in continuous data allows you to apply filters
(e.g. a band-pass filter to zoom in on the muscle artifacts on the temporal
channels) without having to worry about edge effects due to the filter
(i.e. filter ringing). Having the data in memory after segmenting/epoching is
however a very efficient way of browsing through the data which helps
in visualizing. So to conclude, there is not a single most optimal manner
to detect the artifacts: it just depends on the data properties and your
own preferences.
In this tutorial we show how to detect artifacts visually and automatically.
For how to correct artifacts by rejection see tut_artifacts_reject.
To discover how to correct certain artifacts by filtering see
tut_artifacts_filter and to learn how to correct artifacts
with subspace methods like SSP and ICA see tut_artifacts_correct_ssp
and tut_artifacts_correct_ica.
Artifacts Detection
This tutorial discusses a couple of major artifacts that most analyses
have to deal with and demonstrates how to detect them.
End of explanation
"""
(raw.copy().pick_types(meg='mag')
.del_proj(0)
.plot(duration=60, n_channels=100, remove_dc=False))
"""
Explanation: Low frequency drifts and line noise
End of explanation
"""
raw.plot_psd(tmax=np.inf, fmax=250)
"""
Explanation: we see high amplitude undulations in low frequencies, spanning across tens of
seconds
End of explanation
"""
average_ecg = create_ecg_epochs(raw).average()
print('We found %i ECG events' % average_ecg.nave)
average_ecg.plot_joint()
"""
Explanation: On MEG sensors we see narrow frequency peaks at 60, 120, 180, 240 Hz,
related to line noise.
But also some high amplitude signals between 25 and 32 Hz, hinting at other
biological artifacts such as ECG. These can be most easily detected in the
time domain using MNE helper functions
See tut_artifacts_filter.
ECG
finds ECG events, creates epochs, averages and plots
End of explanation
"""
average_eog = create_eog_epochs(raw).average()
print('We found %i EOG events' % average_eog.nave)
average_eog.plot_joint()
"""
Explanation: we can see typical time courses and non-dipolar topographies.
Note the order of magnitude of the average artifact-related signal and
compare this to what you observe for brain signals.
EOG
End of explanation
"""
|
gcgruen/homework | foundations-homework/05/.ipynb_checkpoints/homework-05-gruen-spotify-checkpoint.ipynb | mit | import requests
lil_response = requests.get('https://api.spotify.com/v1/search?query=Lil&type=artist&country=US&limit=50')
lil_data = lil_response.json()
print(type(lil_data))
lil_data.keys()
lil_data['artists'].keys()
lil_artists = lil_data['artists']['items']
#check on what elements are in that list:
#print (lil_artists[0])
"""
Explanation: Homework 05
Spotify
Gianna-Carina Gruen
2016-06-07
End of explanation
"""
for artist in lil_artists:
print(artist['name'], "has a popularity score of", artist['popularity'])
"""
Explanation: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score.
End of explanation
"""
#http://stackoverflow.com/questions/2600191/how-can-i-count-the-occurrences-of-a-list-item-in-python
from collections import Counter
genre_list = []
for genre in lil_artists:
if genre['genres'] != []:
genre_list = genre['genres'] + genre_list
c = Counter(genre_list)
print("These are the counts for each genre:", c)
#https://docs.python.org/2/library/collections.html
most_common = Counter(genre_list).most_common(1)
print("The most common genre is:",most_common)
for artist in lil_artists:
if artist['genres'] == []:
print(artist['name'], "has a popularity score of", artist['popularity'],
"But there are no genres listed for this artist.")
else:
artist_genres = artist['genres']
print(artist['name'], "has a popularity score of", artist['popularity'],
"This artist is associated with", ', '.join(artist_genres))
# http://stackoverflow.com/questions/5850986/joining-elements-of-a-list-python
"""
Explanation: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed".
Tip: "how to join a list Python" might be a helpful search
End of explanation
"""
most_popular_score = 0
most_popular_name = []
for artist in lil_artists:
if artist['popularity'] > most_popular_score:
most_popular_name = artist['name']
most_popular_score = artist['popularity']
print(most_popular_name, "is the most popular, with a rating of", most_popular_score)
second_max_popular = 0
for artist in lil_artists:
if artist['popularity'] >= second_max_popular and artist['popularity'] < most_popular_score:
second_max_popular = artist['popularity']
print(artist['name'], "is the second most popular with a popularity rating of",artist['popularity'], "compared to", most_popular_name, "who has a rating of", most_popular_score)
"""
Explanation: 3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating.
End of explanation
"""
most_followers = 0
for artist in lil_artists:
if artist['followers']['total'] > most_followers:
most_followers = artist['followers']['total']
print(artist['name'], "has the largest number followers:", artist['followers']['total'])
print("The second most popular Lils have the following amount of followers:")
second_most_followers = 0
for artist in lil_artists:
if artist['popularity'] >= second_max_popular and artist['popularity'] < 86:
second_max_popular = artist['popularity']
if artist['followers']['total'] > second_most_followers:
second_most_followers = artist['followers']['total']
print(artist['name'], artist['followers']['total'])
"""
Explanation: Is it the same artist who has the largest number of followers?
End of explanation
"""
kim_popularity = 0
for artist in lil_artists:
if artist['name'] == "Lil' Kim":
kim_popularity = (artist['popularity'])
for artist in lil_artists:
if artist['popularity'] > kim_popularity:
print(artist['name'], "has a popularity of", artist['popularity'], "which is higher than that of Lil' Kim.")
"""
Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim.
End of explanation
"""
#for artist in lil_artists:
#print(artist['name'], artist['id'])
#Lil Dicky 1tqhsYv8yBBdwANFNzHtcr
toptracks_Dicky_response = requests.get('https://api.spotify.com/v1/artists/1tqhsYv8yBBdwANFNzHtcr/top-tracks?country=US')
toptracks_Dicky_data = toptracks_Dicky_response.json()
tracks_Dicky = toptracks_Dicky_data['tracks']
print("THESE ARE THE TOP TRACKS OF LIL DICKY:")
for track in tracks_Dicky:
print(track['name'])
#Lil Jon 7sfl4Xt5KmfyDs2T3SVSMK
toptracks_Jon_response = requests.get('https://api.spotify.com/v1/artists/7sfl4Xt5KmfyDs2T3SVSMK/top-tracks?country=US')
toptracks_Jon_data = toptracks_Jon_response.json()
tracks_Jon = toptracks_Jon_data['tracks']
print("THESE ARE THE TOP TRACKS OF LIL JON:")
for track in tracks_Jon:
print(track['name'])
"""
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks.
Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable.
End of explanation
"""
print(tracks_Dicky[0].keys())
"""
Explanation: 6) Will the world explode if a musician swears?
Get an average popularity for their explicit songs vs. their non-explicit songs.
End of explanation
"""
explicit_Dicky_count = 0
non_explicit_Dicky_count = 0
explicit_popularity_Dicky_sum = 0
non_explicit_popularity_Dicky_sum = 0
for track in tracks_Dicky:
if track['explicit'] == True:
explicit_Dicky_count = explicit_Dicky_count + 1
explicit_popularity_Dicky_sum = explicit_popularity_Dicky_sum + track['popularity']
else:
non_explicit_Dicky_count = non_explicit_Dicky_count + 1
non_explicit_popularity_Dicky_sum = non_explicit_popularity_Dicky_sum + track['popularity']
print("The average popularity of explicit Lil Dicky songs is", explicit_popularity_Dicky_sum / explicit_Dicky_count)
if non_explicit_Dicky_count == 0:
print("There are no non-explicit Lil Dicky songs.")
else:
print("The average popularity of non-explicit Lil Dicky songs is:", non_explicit_popularity_Dicky_sum / non_explicit_Dicky_count)
explicit_Jon_count = 0
non_explicit_Jon_count = 0
explicit_popularity_Jon_sum = 0
non_explicit_popularity_Jon_sum = 0
for track in tracks_Jon:
if track['explicit'] == True:
explicit_Jon_count = explicit_Jon_count + 1
explicit_popularity_Jon_sum = explicit_popularity_Jon_sum + track['popularity']
else:
non_explicit_Jon_count = non_explicit_Jon_count + 1
non_explicit_popularity_Jon_sum = non_explicit_popularity_Jon_sum + track['popularity']
print("The average popularity of explicit Lil Jon songs is", explicit_popularity_Jon_sum / explicit_Jon_count)
if non_explicit_Jon_count == 0:
print("There are no non-explicit Lil Jon songs.")
else:
print("The average popularity of non-explicit Lil Jon songs is:", non_explicit_popularity_Jon_sum / non_explicit_Jon_count)
"""
Explanation: First solution -- this felt like a lot of repeating and as if there was a more efficient way to do it. Turns out, there is! With some explanation from Soma first -- see below.
End of explanation
"""
#function writing
def add(a, b):
value = a + b
print("the sum of", a, "and", b, "is", value)
add(5, 7)
add(1, 2)
add(4, 55)
"""
Explanation: Soma explaining how to write functions in 30 seconds of Lab:
End of explanation
"""
def average_popularity(a, b):
explicit_count = 0
non_explicit_count = 0
explicit_popularity_sum = 0
non_explicit_popularity_sum = 0
for track in a:
if track['explicit'] == True:
explicit_count = explicit_count + 1
explicit_popularity_sum = explicit_popularity_sum + track['popularity']
else:
non_explicit_count = non_explicit_count + 1
non_explicit_popularity_sum = non_explicit_popularity_sum + track['popularity']
if explicit_count == 0:
print("There are no explicit songs by", b)
else:
print("The average popularity of explicit songs by", b, "is", explicit_popularity_sum / explicit_count)
if non_explicit_count == 0:
print("There are no non-explicit songs by", b)
else:
print("The average popularity of non-explicit songs by", b, "is", non_explicit_popularity_sum / non_explicit_count)
average_popularity(tracks_Dicky, "Lil Dicky")
average_popularity(tracks_Jon, "Lil Jon")
"""
Explanation: Based on that, I re-wrote my above code using a function
End of explanation
"""
def explicit_minutes(a, b):
explicit_milliseconds = 0
non_explicit_milliseconds = 0
for track in a:
if track['explicit'] == True:
explicit_milliseconds = explicit_milliseconds + track['duration_ms']
else:
non_explicit_milliseconds = non_explicit_milliseconds + track['duration_ms']
if explicit_milliseconds !=0:
print(b, "has", explicit_milliseconds / 6000 , "minutes of explicit music.")
if non_explicit_milliseconds !=0:
print(b, "has", non_explicit_milliseconds / 6000, "minutes of non-explicit music.")
else:
print(b, "has", "has no non-explicit music.")
explicit_minutes(tracks_Dicky, "Lil Dicky")
explicit_minutes(tracks_Jon, "Lil Jon")
"""
Explanation: How many minutes of explicit songs do they have? Non-explicit?
End of explanation
"""
import requests
biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
biggieT_data = biggieT_response.json()
biggieT_artists = biggieT_data['artists']['items']
artist_count = 0
for artist in biggieT_artists:
artist_count = artist_count + 1
print("There are in total", artist_count, "Biggies.")
import requests
import math
offset_valueB = 0
biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&offset=' + str(offset_valueB) + '')
biggieT_data = biggieT_response.json()
biggieT_artists = biggieT_data['artists']['items']
offset_limitB = biggieT_data['artists']['total']
offset_valueL = 0
lilT_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50&offset=' + str(offset_valueL) + '')
lilT_data = lilT_response.json()
lilT_artists = lilT_data['artists']['items']
offset_limitL = lilT_data['artists']['total']
page_countB = math.ceil(offset_limitB/ 50)
print("The page count for all the Biggies is:", page_countB)
page_countL = math.ceil(offset_limitL/ 50)
print("The page count for all the Lils is:", page_countL)
print("If you made 1 request every 5 seconds, it will take", page_countL * 5, "seconds for all the Lils requests to process. Whereas for the Biggies it's", page_countB* 5, ", so the total amount of time is", page_countB*5 + page_countL*5, "seconds.")
artist_count = 0
offset_value = 0
for page in range(0, 1):
    biggieT_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50&offset=' + str(offset_value) + '')
biggieT_data = biggieT_response.json()
biggieT_artists = biggieT_data['artists']['items']
    for artist in biggieT_artists:
artist_count = artist_count + 1
offset_value = offset_value + 50
print("There are in total", artist_count, "Biggies.")
artist_count = 0
offset_value = 0
for page in range(0, 91):
lilT_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50&offset=' + str(offset_value) + '')
lilT_data = lilT_response.json()
lilT_artists = lilT_data['artists']['items']
for artist in lilT_artists:
artist_count = artist_count + 1
offset_value = offset_value + 50
print("There are in total", artist_count, "Lils.")
"""
Explanation: 7) Since we're talking about Lils, what about Biggies?
How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies?
End of explanation
"""
# tried to solve it with a function as well, but didn't work out, gave an error message. So back to the old way.
biggie50_response = requests.get('https://api.spotify.com/v1/search?query=biggie&type=artist&limit=50')
biggie50_data = biggie50_response.json()
biggie50_artists = biggie50_data['artists']['items']
popularity_biggie50 = 0
for artist in biggie50_artists:
popularity_biggie50 = popularity_biggie50 + artist['popularity']
print("The average popularity of the top50 Biggies is", popularity_biggie50 / 50)
lil50_response = requests.get('https://api.spotify.com/v1/search?query=lil&type=artist&limit=50')
lil50_data = lil50_response.json()
lil50_artists = lil50_data['artists']['items']
popularity_lil50 = 0
for artist in lil50_artists:
popularity_lil50 = popularity_lil50 + artist['popularity']
print("The average popularity of the top50 Lils is", popularity_lil50 / 50)
if popularity_biggie50 > popularity_lil50:
    print("The top50 Biggies are on average more popular than the top50 Lils.")
elif popularity_biggie50 == popularity_lil50:
    print("The top50 Biggies are on average as popular as the top50 Lils.")
else:
    print("The top50 Lils are on average more popular than the top50 Biggies.")
"""
Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average?
End of explanation
"""
|
kmsmoo/Webnovel | Recommand System.ipynb | mit | episode_comment = pd.read_csv("data/webnovel/episode_comments.csv", index_col=0, encoding="cp949")
episode_comment["ID"] = episode_comment["object_id"].apply(lambda x: x.split("-")[0])
episode_comment["volume"] = episode_comment["object_id"].apply(lambda x: x.split("-")[1]).astype("int")
episode_comment["writer_nickname"].fillna("", inplace=True)
def make_user_id(i):
if episode_comment["writer_nickname"].loc[i] == "":
return episode_comment["writer_ip"].loc[i] + episode_comment["writer_id"].loc[i]
else:
return episode_comment["writer_nickname"].loc[i] + episode_comment["writer_id"].loc[i]
user_id = [
make_user_id(i)
for i in range(len(episode_comment))
]
episode_comment["user_id"] = user_id
episode_comment.drop(
[
"contents",
"down_count",
"modified_ymdt",
"registered_ymdt",
"ticket",
"up_count",
"writer_ip",
"writer_id",
"writer_nickname",
"writer_profile_type",
"object_id",
],
axis=1,
inplace=True
)
episode_comment.head()
main_comment = pd.read_csv("data/webnovel/main_comments.csv", index_col=0, encoding="cp949")
main_comment["ID"] = main_comment["object_id"].apply(lambda x: x.split("-")[1])
main_comment["volume"] = 0
main_comment["writer_nickname"].fillna("", inplace=True)
def make_user_id(i):
if main_comment["writer_nickname"].loc[i] == "":
return main_comment["writer_ip"].loc[i] + main_comment["writer_id"].loc[i]
else:
return main_comment["writer_nickname"].loc[i] + main_comment["writer_id"].loc[i]
user_id = [
make_user_id(i)
for i in range(len(main_comment))
]
main_comment["user_id"] = user_id
main_comment.drop(
[
"contents",
"down_count",
"modified_ymdt",
"registered_ymdt",
"ticket",
"up_count",
"writer_ip",
"writer_id",
"writer_nickname",
"writer_profile_type",
"object_id",
],
axis=1,
inplace=True
)
main_comment.head()
"""
Explanation: Fetching and preprocessing the comment data
End of explanation
"""
user_df = pd.concat([episode_comment, main_comment]).groupby(["user_id", "ID"], as_index=False).agg({"volume":np.size})
len(user_df)
df = pd.read_csv("data/webnovel/main_df.csv", encoding="cp949", index_col=0)
df["ID"] = df["ID"].astype("str")
df = user_df.merge(df, on="ID")[["user_id", "genre", "volume"]].drop_duplicates()
len(df["user_id"].unique())
romance = df[df["genre"] == 101]
no_romance = df[df["genre"] != 101]
len(romance.merge(no_romance, on="user_id"))
"""
Explanation: Building the user dataframe
End of explanation
"""
user_size = len(user_df["user_id"].unique())
users = user_df["user_id"].unique()
users_index = {
user:index
for index, user in enumerate(users)
}
book_df = pd.read_csv("data/webnovel/main_df.csv", encoding="cp949", index_col=0)
book_size = len(book_df.ID.unique())
books = book_df.ID.unique()
len(books)
books_index = {
str(book):index
for index, book in enumerate(books)
}
user_df["book_index"] = user_df["ID"].apply(lambda x: books_index[x])
user_df["user_index"] = user_df["user_id"].apply(lambda x: users_index[x])
"""
Explanation: Building user and book indexes
End of explanation
"""
empty_matrix = np.zeros((user_size, book_size))
for index, i in user_df.iterrows():
empty_matrix[i["user_index"], i["book_index"]] = i["volume"]
user_book_matrix = pd.DataFrame(empty_matrix, columns=books)
user_book_matrix.index = users
user_book_matrix
"""
Explanation: Building the user * book matrix
End of explanation
"""
for i in range(15):
print(i+1, "권 이상 읽은 사람은",len(user_book_matrix[user_book_matrix.sum(axis=1)>i]), "명 입니다.")
from scipy.spatial import distance
def cosine_distance(a, b):
return 1 - distance.cosine(a, b)
def make_score(books):
"""
    Compute the MAE score.
"""
user_books_matrix_two = user_book_matrix[user_book_matrix.sum(axis=1)>books]
empty_matrix = np.zeros((50, len(user_books_matrix_two))) # 샘플 10명
users_two_index = user_books_matrix_two.index
user_books_matrix_two.index = range(len(user_books_matrix_two))
for index_1, i in user_books_matrix_two[:10].iterrows():
for index_2, j in user_books_matrix_two[index_1+1:].iterrows():
empty_matrix[index_1, index_2] = cosine_distance(i, j)
score_list = []
for i in range(10):
ID_index = []
while len(ID_index) < 11:
            if empty_matrix[i, empty_matrix[i].argmax()] >= 1:  # skip perfect (duplicate) matches
empty_matrix[i, empty_matrix[i].argmax()] = 0
else:
ID_index.append(empty_matrix[i].argmax())
empty_matrix[i, empty_matrix[i].argmax()] = 0
data = user_books_matrix_two.loc[i]
predict = user_books_matrix_two.loc[ID_index].mean()
score = data[data > 0] - predict[data > 0]
score_list.append(np.absolute(score).sum()/len(score))
print(np.array(score_list).mean())
return np.array(score_list).mean()
scores = list(map(make_score, [0,1,2,3,4,5,6,7,8,9]))
user_df[user_df["user_id"] == users_two_index[empty_matrix[0].argmax()]]
user_df[user_df["user_id"] == users_two_index[0]]
user_books_matrix_two
"""
Explanation: Building the user * user cosine similarity matrix
Number of users who read at least N books, and the time to process them:
1 book: 169,464 users (1 min 59 s)
2 books: 57,555 users (40.6 s)
3 books: 31,808 users (22.4 s)
4 books: 20,470 users (14.5 s)
5 books: 14,393 users (10.2 s)
6 books: 10,630 users (7.58 s)
7 books: 8,074 users (5.8 s)
8 books: 6,306 users (4.54 s)
9 books: 4,995 users (3.56 s)
10 books: 4,052 users (2.91 s)
End of explanation
"""
|
fonnesbeck/HealthPolicyPython | Introduction to Python.ipynb | cc0-1.0 | import numpy
"""
Explanation: Introduction to Python
(via xkcd)
What is Python?
Python is a modern, open source, object-oriented programming language, created by a Dutch programmer, Guido van Rossum. Officially, it is an interpreted scripting language (meaning that it is not compiled until it is run); in fact, the reference implementation of Python is itself coded in C (though there are other non-C implementations). Frequently, it is compared to languages like Perl and Ruby. It offers the power and flexibility of lower level (i.e. compiled) languages, without the steep learning curve, and without most of the associated programming overhead. The language is very clean and readable, and it is available for almost every modern computing platform.
Why use Python for scientific programming?
Python offers a number of advantages to scientists, both for experienced and novice programmers alike:
Powerful and easy to use
Python is simultaneously powerful, flexible and easy to learn and use (in general, these qualities are traded off for a given programming language). Anything that can be coded in C, FORTRAN, or Java can be done in Python, almost always in fewer lines of code, and with fewer debugging headaches. Its standard library is extremely rich, including modules for string manipulation, regular expressions, file compression, mathematics, profiling and debugging (to name only a few). Unnecessary language constructs, such as END statements and brackets are absent, making the code terse, efficient, and easy to read. Finally, Python is object-oriented, which is an important programming paradigm particularly well-suited to scientific programming, which allows data structures to be abstracted in a natural way.
Interactive
Python may be run interactively on the command line, in much the same way as Octave or S-Plus/R. Rather than compiling and running a particular program, commands may entered serially followed by the Return key. This is often useful for mathematical programming and debugging.
Extensible
Python is often referred to as a “glue” language, meaning that it is a useful in a mixed-language environment. Frequently, programmers must interact with colleagues that operate in other programming languages, or use significant quantities of legacy code that would be problematic or expensive to re-code. Python was designed to interact with other programming languages, and in many cases C or FORTRAN code can be compiled directly into Python programs (using utilities such as f2py or weave). Additionally, since Python is an interpreted language, it can sometimes be slow relative to its compiled cousins. In many cases this performance deficit is due to a short loop of code that runs thousands or millions of times. Such bottlenecks may be removed by coding a function in FORTRAN, C or Cython, and compiling it into a Python module.
Third-party modules
There is a vast body of Python modules created outside the auspices of the Python Software Foundation. These include utilities for database connectivity, mathematics, statistics, and charting/plotting. Some notables include:
NumPy: Numerical Python (NumPy) is a set of extensions that provides the ability to specify and manipulate array data structures. It provides array manipulation and computational capabilities similar to those found in Matlab or Octave.
SciPy: An open source library of scientific tools for Python, SciPy supplements the NumPy module. SciPy gathering a variety of high level science and engineering modules together as a single package. SciPy includes modules for graphics and plotting, optimization, integration, special functions, signal and image processing, genetic algorithms, ODE solvers, and others.
Matplotlib: Matplotlib is a python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Its syntax is very similar to Matlab.
Pandas: A module that provides high-performance, easy-to-use data structures and data analysis tools. In particular, the DataFrame class is useful for spreadsheet-like representation and manipulation of data. Also includes high-level plotting functionality.
IPython: An enhanced Python shell, designed to increase the efficiency and usability of coding, testing and debugging Python. It includes both a Qt-based console and an interactive HTML notebook interface, both of which feature multiline editing, interactive plotting and syntax highlighting.
Free and open
Python is released on all platforms under an open license (Python Software Foundation License), meaning that the language and its source is freely distributable. Not only does this keep costs down for scientists and universities operating under a limited budget, but it also frees programmers from licensing concerns for any software they may develop. There is little reason to buy expensive licenses for software such as Matlab or Maple, when Python can provide the same functionality for free!
Loading libraries
We use the import statement to load non-core modules into our Python environment. For example, we can load NumPy using:
End of explanation
"""
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
"""
Explanation: Importing a library is like getting a piece of lab equipment out of a storage locker
and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space.
Once you've loaded the library,
we can ask the library to read our data file for us:
End of explanation
"""
weight_kg = 55
"""
Explanation: The expression numpy.loadtxt() is a function call
that asks Python to run the function loadtxt that belongs to the numpy library.
This dotted notation is used everywhere in Python
to refer to the parts of things as thing.component.
numpy.loadtxt has two parameters:
the name of the file we want to read,
and the delimiter that separates values on a line.
These both need to be character strings (or strings for short),
so we put them in quotes.
When we are finished typing and press Shift+Enter,
the notebook runs our command.
Since we haven't told it to do anything else with the function's output,
the notebook displays it.
In this case,
that output is the data we just loaded.
By default,
only a few rows and columns are shown
(with ... to omit elements when displaying big arrays).
To save space,
Python displays numbers as 1. instead of 1.0
when there's nothing interesting after the decimal point.
Variables
Our call to numpy.loadtxt read our file,
but didn't save the data in memory.
To do that,
we need to assign the array to a variable.
A variable is just a name for a value,
such as x, current_temperature, or subject_id.
Python's variables must begin with a letter and are case sensitive.
We can create a new variable by assigning a value to it using =.
As an illustration,
let's step back and instead of considering a table of data,
consider the simplest "collection" of data,
a single value.
The line below assigns the value 55 to a variable weight_kg:
End of explanation
"""
weight_kg
"""
Explanation: Once a variable has a value, we can print it to the screen:
End of explanation
"""
print('weight in pounds:', 2.2 * weight_kg)
"""
Explanation: and do arithmetic with it:
End of explanation
"""
weight_kg = 57.5
print('weight in kilograms is now:', weight_kg)
"""
Explanation: We can also change a variable's value by assigning it a new one:
End of explanation
"""
weight_lb = 2.2 * weight_kg
print('weight in kilograms:', weight_kg, 'and in pounds:', weight_lb)
"""
Explanation: As the example above shows,
we can print several things at once by separating them with commas.
If we imagine the variable as a sticky note with a name written on it,
assignment is like putting the sticky note on a particular value:
This means that assigning a value to one variable does not change the values of other variables.
For example,
let's store the subject's weight in pounds in a variable:
End of explanation
"""
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
"""
Explanation: and then change weight_kg:
End of explanation
"""
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
"""
Explanation: Since weight_lb doesn't "remember" where its value came from,
it isn't automatically updated when weight_kg changes.
This is different from the way spreadsheets work.
Just as we can assign a single value to a variable, we can also assign an array of values
to a variable using the same syntax. Let's re-run numpy.loadtxt and save its result:
End of explanation
"""
data
"""
Explanation: This statement doesn't produce any output because assignment doesn't display anything.
If we want to check that our data has been loaded,
we can print the variable's value:
End of explanation
"""
type(data)
"""
Explanation: Now that our data is in memory,
we can start doing things with it.
First,
let's ask what type of thing data refers to:
End of explanation
"""
data.shape
"""
Explanation: The output tells us that data currently refers to an n-dimensional array created by the NumPy library. These data correspond to arthritis patients' inflammation. The rows are the individual patients and the columns are their daily inflammation measurements.
We can see what its shape is like this:
End of explanation
"""
data[0, 0]
data[30, 20]
"""
Explanation: This tells us that data has 60 rows and 40 columns. When we created the
variable data to store our arthritis data, we didn't just create the array, we also
created information about the array, called
attributes. This extra information describes data in
the same way an adjective describes a noun.
data.shape is an attribute of data which describes the dimensions of data.
We use the same dotted notation for the attributes of variables
that we use for the functions in libraries
because they have the same part-and-whole relationship.
If we want to get a single number from the array,
we must provide an index in square brackets,
just as we do in math:
End of explanation
"""
data[0:4, 0:10]
"""
Explanation: The expression data[30, 20] may not surprise you,
but data[0, 0] might.
Programming languages like Fortran and MATLAB start counting at 1,
because that's what human beings have done for thousands of years.
Languages in the C family (including C++, Java, Perl, and Python) count from 0
because that's simpler for computers to do.
As a result,
if we have an M×N array in Python,
its indices go from 0 to M-1 on the first axis
and 0 to N-1 on the second.
It takes a bit of getting used to,
but one way to remember the rule is that
the index is how many steps we have to take from the start to get the item we want.
An index like [30, 20] selects a single element of an array,
but we can select whole sections as well.
For example,
we can select the first ten days (columns) of values
for the first four patients (rows) like this:
End of explanation
"""
print(data[5:10, 0:10])
"""
Explanation: The slice 0:4 means,
"Start at index 0 and go up to, but not including, index 4."
Again,
the up-to-but-not-including takes a bit of getting used to,
but the rule is that the difference between the upper and lower bounds is the number of values in the slice.
We don't have to start slices at 0:
End of explanation
"""
small = data[:3, 36:]
print('small is:')
print(small)
"""
Explanation: We also don't have to include the upper and lower bound on the slice.
If we don't include the lower bound,
Python uses 0 by default;
if we don't include the upper,
the slice runs to the end of the axis,
and if we don't include either
(i.e., if we just use ':' on its own),
the slice includes everything:
End of explanation
"""
doubledata = data * 2.0
"""
Explanation: Arrays also know how to perform common mathematical operations on their values.
The simplest operations with data are arithmetic:
add, subtract, multiply, and divide.
When you do such operations on arrays,
the operation is done element-wise on the array.
Thus:
End of explanation
"""
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
"""
Explanation: will create a new array doubledata
whose elements have the value of two times the value of the corresponding elements in data:
End of explanation
"""
tripledata = doubledata + data
"""
Explanation: If,
instead of taking an array and doing arithmetic with a single value (as above)
you did the arithmetic operation with another array of the same shape,
the operation will be done on corresponding elements of the two arrays.
Thus:
End of explanation
"""
print('tripledata:')
print(tripledata[:3, 36:])
"""
Explanation: will give you an array where tripledata[0,0] will equal doubledata[0,0] plus data[0,0],
and so on for all other elements of the arrays.
End of explanation
"""
data.mean()
"""
Explanation: Often, we want to do more than add, subtract, multiply, and divide values of data.
Arrays also know how to do more complex operations on their values.
If we want to find the average inflammation for all patients on all days,
for example,
we can just ask the array for its mean value
End of explanation
"""
print('maximum inflammation:', data.max())
print('minimum inflammation:', data.min())
print('standard deviation:', data.std())
"""
Explanation: mean is a method of the array.
A method is simply a function that is an attribute of the array, just as shape is.
If variables are nouns, methods are verbs:
they are what the thing in question knows how to do.
We need empty parentheses for data.mean(),
even when we're not passing in any parameters,
to tell Python to go and do something for us. data.shape doesn't
need () because it is just a description but data.mean() requires the ()
because it is an action.
NumPy arrays have lots of useful methods:
End of explanation
"""
patient_0 = data[0, :] # 0 on the first axis, everything on the second
print('maximum inflammation for patient 0:', patient_0.max())
"""
Explanation: When analyzing data,
though,
we often want to look at partial statistics,
such as the maximum value per patient
or the average value per day.
One way to do this is to create a new temporary array of the data we want,
then ask it to do the calculation:
End of explanation
"""
print('maximum inflammation for patient 2:', data[2, :].max())
"""
Explanation: We don't actually need to store the row in a variable of its own.
Instead, we can combine the selection and the method call:
End of explanation
"""
data.mean(axis=0)
"""
Explanation: What if we need the maximum inflammation for all patients (as in the
next diagram on the left), or the average for each day (as in the
diagram on the right)? As the diagram below shows, we want to perform the
operation across an axis:
To support this,
most array methods allow us to specify the axis we want to be consumed by the operation.
If we ask for the average across axis 0 (rows in our 2D example),
we get:
End of explanation
"""
data.mean(axis=0).shape
"""
Explanation: As a quick check,
we can ask this array what its shape is:
End of explanation
"""
data.mean(axis=1)
"""
Explanation: The expression (40,) tells us we have a one-dimensional array with 40 entries,
so this is the average inflammation per day across all patients.
If we average across axis 1 (columns in our 2D example), we get:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data)
"""
Explanation: which is the average inflammation per patient across all days.
Plotting Data
The mathematician Richard Hamming once said,
"The purpose of computing is insight, not numbers,"
and the best way to develop insight is often to visualize data.
Visualization deserves an entire lecture (or course) of its own,
but we can explore a few features of Python's matplotlib library here.
While there is no "official" plotting library,
this package is the de facto standard.
First,
we will import the pyplot module from matplotlib
and use two of its functions to create and display a heat map of our data:
End of explanation
"""
ave_inflammation = data.mean(axis=0)
plt.plot(ave_inflammation)
"""
Explanation: Blue regions in this heat map are low values, while red shows high values.
As we can see,
inflammation rises and falls over a 40-day period.
Some IPython magic
If you're using an IPython / Jupyter notebook,
you'll need to execute the following command
in order for your matplotlib images to appear
in the notebook:
%matplotlib inline
The % indicates an IPython magic function -
a function that is only valid within the notebook environment.
Note that you only have to execute this function once per notebook.
Let's take a look at the average inflammation over time:
End of explanation
"""
plt.plot(data.max(axis=0))
plt.plot(data.min(axis=0))
"""
Explanation: Here,
we have put the average per day across all patients in the variable ave_inflammation,
then asked matplotlib.pyplot to create and display a line graph of those values.
The result is roughly a linear rise and fall,
which is suspicious:
based on other studies,
we expect a sharper rise and slower fall.
Let's have a look at two other statistics:
End of explanation
"""
fig = plt.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(data.mean(axis=0))
axes2.set_ylabel('max')
axes2.plot(data.max(axis=0))
axes3.set_ylabel('min')
axes3.plot(data.min(axis=0))
fig.tight_layout()
"""
Explanation: The maximum value rises and falls perfectly smoothly, while the minimum seems to be a step function. Neither result seems particularly likely, so either there's a mistake in our calculations or something is wrong with our data.
You can group similar plots in a single figure using subplots. The script below uses a number of new commands. The function figure() creates a space into which we will place all of our plots. The parameter figsize tells Python how big to make this space. Each subplot is placed into the figure using the add_subplot method, which takes 3 parameters: the first denotes how many total rows of subplots there are, the second refers to the total number of subplot columns, and the final parameter denotes which subplot your variable is referencing. Each subplot is stored in a different variable (axes1, axes2, axes3). Once a subplot is created, its axes can be labelled using the set_xlabel() method (or set_ylabel()). Here are our three plots side by side:
End of explanation
"""
element = 'oxygen'
print('first three characters:', element[:3])
print('last three characters:', element[3:6])
"""
Explanation: The script above tells the plotting library
how large we want the figure to be,
that we're creating three sub-plots,
what to draw for each one,
and that we want a tight layout.
(Perversely,
if we leave out that call to fig.tight_layout(),
the graphs will actually be squeezed together more closely.)
Exercise: Sorting out references
What does the following program print out?
python
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
Exercise: Slicing strings
A section of an array is called a slice.
We can take slices of character strings as well:
End of explanation
"""
word = 'lead'
print(word[0])
print(word[1])
print(word[2])
print(word[3])
"""
Explanation: What is the value of element[:4]?
What about element[4:]?
Or element[:]?
What is element[-1]?
What is element[-2]?
Given those answers,
explain what element[1:-1] does.
Repeating Actions with Loops
Above, we wrote some code that plots some values of interest from our first inflammation dataset,
and reveals some suspicious features in it.
We have a dozen data sets right now, though, and more on the way.
We want to create plots for all of our data sets with a single statement.
To do that, we'll have to teach the computer how to repeat things.
An example task that we might want to repeat is printing each character in a
word on a line of its own. One way to do this would be to use a series of print statements:
End of explanation
"""
word = 'tin'
print(word[0])
print(word[1])
print(word[2])
print(word[3])
"""
Explanation: This is a bad approach for two reasons:
It doesn't scale:
if we want to print the characters in a string that's hundreds of letters long,
we'd be better off just typing them in.
It's fragile:
if we give it a longer string,
it only prints part of the data,
and if we give it a shorter one,
it produces an error because we're asking for characters that don't exist.
End of explanation
"""
word = 'lead'
for char in word:
print(char)
"""
Explanation: Here's a better approach:
End of explanation
"""
word = 'oxygen'
for char in word:
print(char)
"""
Explanation: This is shorter---certainly shorter than something that prints every character in a hundred-letter string---and
more robust as well:
End of explanation
"""
length = 0
for vowel in 'aeiou':
length = length + 1
print('There are', length, 'vowels')
"""
Explanation: The improved version uses a for loop
to repeat an operation---in this case, printing---once for each thing in a collection.
The general form of a loop is:
for variable in collection:
do things with variable
We can call the loop variable anything we like,
but there must be a colon at the end of the line starting the loop,
and we must indent anything we want to run inside the loop. Unlike many other languages, there is no
command to end a loop (e.g. end for); what is indented after the for statement belongs to the loop.
Here's another loop that repeatedly updates a variable:
End of explanation
"""
letter = 'z'
for letter in 'abc':
print(letter)
print('after the loop, letter is', letter)
"""
Explanation: It's worth tracing the execution of this little program step by step.
Since there are five characters in 'aeiou',
the statement on line 3 will be executed five times.
The first time around,
length is zero (the value assigned to it on line 1)
and vowel is 'a'.
The statement adds 1 to the old value of length,
producing 1,
and updates length to refer to that new value.
The next time around,
vowel is 'e' and length is 1,
so length is updated to be 2.
After three more updates,
length is 5;
since there is nothing left in 'aeiou' for Python to process,
the loop finishes
and the print statement on line 4 tells us our final answer.
Note that a loop variable is just a variable that's being used to record progress in a loop.
It still exists after the loop is over,
and we can re-use variables previously defined as loop variables as well:
End of explanation
"""
len('aeiou')
"""
Explanation: Note also that finding the length of a string is such a common operation
that Python actually has a built-in function to do it called len:
End of explanation
"""
5**3
"""
Explanation: len is much faster than any function we could write ourselves,
and much easier to read than a two-line loop;
it will also give us the length of many other things that we haven't met yet,
so we should always use it when we can.
Exercise: Computing powers with loops
Exponentiation is built into Python:
End of explanation
"""
odds = [1, 3, 5, 7]
print('odds are:', odds)
"""
Explanation: Write a loop that calculates the same result as 5 ** 3 using
multiplication (and without exponentiation).
Exercise: Reverse a string
Write a loop that takes a string,
and produces a new string with the characters in reverse order,
so 'Newton' becomes 'notweN'.
Storing Multiple Values in Lists
Just as a for loop is a way to do operations many times,
a list is a way to store many values.
Unlike NumPy arrays,
lists are built into the language (so we don't have to load a library
to use them).
We create a list by putting values inside square brackets:
End of explanation
"""
print('first and last:', odds[0], odds[-1])
"""
Explanation: We select individual elements from lists by indexing them:
End of explanation
"""
for number in odds:
print(number)
"""
Explanation: and if we loop over a list,
the loop variable is assigned elements one at a time:
End of explanation
"""
names = ['Newton', 'Darwing', 'Turing'] # typo in Darwin's name
print('names is originally:', names)
names[1] = 'Darwin' # correct the name
print('final value of names:', names)
"""
Explanation: There is one important difference between lists and strings:
we can change the values in a list,
but we cannot change the characters in a string.
For example:
End of explanation
"""
name = 'Bell'
name[0] = 'b'
"""
Explanation: works, but:
End of explanation
"""
odds.append(11)
print('odds after adding a value:', odds)
del odds[0]
print('odds after removing the first element:', odds)
odds.reverse()
print('odds after reversing:', odds)
"""
Explanation: does not.
Ch-Ch-Ch-Changes
Data which can be modified in place is called mutable,
while data which cannot be modified is called immutable.
Strings and numbers are immutable. This does not mean that variables with string or number values are constants,
but when we want to change the value of a string or number variable, we can only replace the old value
with a completely new value.
Lists and arrays, on the other hand, are mutable: we can modify them after they have been created. We can
change individual elements, append new elements, or reorder the whole list. For some operations, like
sorting, we can choose whether to use a function that modifies the data in place or a function that returns a
modified copy and leaves the original unchanged.
Be careful when modifying data in place. If two variables refer to the same list, and you modify the list
value, it will change for both variables! If you want variables with mutable values to be independent, you
must make a copy of the value when you assign it.
Because of pitfalls like this, code which modifies data in place can be more difficult to understand. However,
it is often far more efficient to modify a large data structure in place than to create a modified copy for
every small change. You should consider both of these aspects when writing your code.
There are many ways to change the contents of lists besides assigning new values to
individual elements:
End of explanation
"""
odds = [1, 3, 5, 7]
primes = odds
primes += [2]
print('primes:', primes)
print('odds:', odds)
"""
Explanation: While modifying in place, it is useful to remember that python treats lists in a slightly counterintuitive way.
If we make a list and (attempt to) copy it then modify in place, we can cause all sorts of trouble:
End of explanation
"""
odds = [1, 3, 5, 7]
# remember what this does!
primes = odds[:]
primes += [2]
print('primes:', primes)
print('odds:', odds)
"""
Explanation: This is because python stores a list in memory, and then can use multiple names to refer to the same list.
If all we want to do is copy a (simple) list, we can index the values into a new list, so we do not modify a list we did not mean to:
End of explanation
"""
["h", "e", "l", "l", "o"]
"""
Explanation: Exercise: Turn a string into a list
Use a for-loop to convert the string "hello" into a list of letters:
End of explanation
"""
my_list = []
"""
Explanation: Hint: You can create an empty list like this:
End of explanation
"""
(34, 90, 56)    # Tuple with three elements
(15,)           # Tuple with one element
(12, 'foobar')  # Mixed tuple
"""
Explanation: Tuples
If we wish to create an immutable, ordered sequence of elements, we can use a tuple. These elements may be of arbitrary and mixed types. The tuple is specified by a comma-separated sequence of items, enclosed by parentheses:
End of explanation
"""
foo = (5, 7, 2, 8, 2, -1, 0, 4)
foo[4]
"""
Explanation: As with lists, individual elements in a tuple can be accessed by indexing.
End of explanation
"""
tuple('foobar')
"""
Explanation: The tuple function can be used to cast any sequence into a tuple:
End of explanation
"""
my_dict = {'a':16,
'b':(4,5),
'foo':'''(noun) a term used as a universal substitute
for something real, especially when discussing technological ideas and
problems'''}
my_dict
my_dict['b']
"""
Explanation: Dictionaries
One of the more flexible built-in data structures is the dictionary. A dictionary maps a set of keys to associated values. These mappings are mutable and, unlike lists or tuples, are not accessed by position: rather than using a sequence index to return elements of the collection, the corresponding key must be used. Dictionaries are specified by a comma-separated sequence of key:value pairs, with each key separated from its value by a colon. The dictionary is enclosed by curly braces.
For example:
End of explanation
"""
len(my_dict)
"""
Explanation: Notice that the key a maps to an integer, b to a tuple, and foo to a string. Hence, a dictionary is a sort of associative array. Some languages refer to such a structure as a hash or key-value store.
As with lists, dictionaries are mutable and have a variety of methods; there are also built-in functions that accept dictionaries as arguments. For example, len returns the number of key/value pairs:
End of explanation
"""
'a' in my_dict
"""
Explanation: We can also check an object for membership in a dictionary using the in expression:
End of explanation
"""
# Returns a view of the key/value pairs
my_dict.items()
# Returns a view of the keys
my_dict.keys()
# Returns a view of the values
my_dict.values()
"""
Explanation: Some useful dictionary methods are:
End of explanation
"""
my_dict['c']
"""
Explanation: When we try to look up a key that does not exist, Python raises a KeyError.
End of explanation
"""
my_dict.get('c')
"""
Explanation: If we would rather not get the error, we can use the get method, which returns None if the key is not present.
End of explanation
"""
my_dict.get('c', -1)
"""
Explanation: A custom default value can be specified with a second argument.
End of explanation
"""
my_dict.popitem()
my_dict
my_dict.clear()
my_dict
"""
Explanation: It is easy to remove items from a dictionary.
End of explanation
"""
import glob
"""
Explanation: Analyzing Data from Multiple Files
We now have almost everything we need to process all our data files.
The only thing that's missing is a library with a rather unpleasant name:
End of explanation
"""
from glob import glob
glob('*.html')
"""
Explanation: The glob module provides a function, also called glob,
that finds files whose names match a pattern.
We provide those patterns as strings:
the character * matches zero or more characters,
while ? matches any one character.
We can use this to get the names of all the HTML files in the current directory:
End of explanation
"""
filenames = glob('data/inflammation*.csv')
filenames = filenames[0:3]
for f in filenames:
print(f)
data = numpy.loadtxt(fname=f, delimiter=',')
fig = plt.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(data.mean(axis=0))
axes2.set_ylabel('max')
axes2.plot(data.max(axis=0))
axes3.set_ylabel('min')
axes3.plot(data.min(axis=0))
fig.tight_layout()
"""
Explanation: As these examples show,
glob.glob's result is a list of strings,
which means we can loop over it
to do something with each filename in turn.
In our case,
the "something" we want to do is generate a set of plots for each file in our inflammation dataset.
Let's test it by analyzing the first three files in the list:
End of explanation
"""
num = 37
if num > 100:
print('greater')
else:
print('not greater')
print('done')
"""
Explanation: Sure enough,
the maxima of the first two data sets show exactly the same ramp as the first,
and their minima show the same staircase structure;
a different situation has been revealed in the third dataset,
where the maxima are a bit less regular, but the minima are consistently zero.
Conditionals
We can ask Python to take different actions, depending on a condition, with an if statement:
End of explanation
"""
num = 53
print('before conditional...')
if num > 100:
print('53 is greater than 100')
print('...after conditional')
"""
Explanation: The second line of this code uses the keyword if to tell Python that we want to make a choice.
If the test that follows the if statement is true,
the body of the if
(i.e., the lines indented underneath it) are executed.
If the test is false,
the body of the else is executed instead.
Only one or the other is ever executed.
Conditional statements don't have to include an else.
If there isn't one,
Python simply does nothing if the test is false:
End of explanation
"""
num = -3
if num > 0:
print(num, "is positive")
elif num == 0:
print(num, "is zero")
else:
print(num, "is negative")
"""
Explanation: We can also chain several tests together using elif,
which is short for "else if".
The following Python code uses elif to print the sign of a number.
End of explanation
"""
if (1 > 0) and (-1 > 0):
print('both parts are true')
else:
print('at least one part is not true')
"""
Explanation: One important thing to notice in the code above is that we use a double equals sign == to test for equality
rather than a single equals sign
because the latter is used to mean assignment.
We can also combine tests using and and or.
and is only true if both parts are true:
End of explanation
"""
if (1 < 0) or (-1 < 0):
print('at least one test is true')
"""
Explanation: while or is true if at least one part is true:
End of explanation
"""
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
data = numpy.loadtxt(fname='data/inflammation-03.csv', delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
"""
Explanation: Checking our Data
Now that we've seen how conditionals work,
we can use them to check for the suspicious features we saw in our inflammation data.
In the first couple of plots, the maximum inflammation per day
seemed to rise like a straight line, one unit per day.
We can check for this inside the for loop we wrote with the following conditional:
python
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
print('Suspicious looking maxima!')
We also saw a different problem in the third dataset;
the minima per day were all zero (looks like a healthy person snuck into our study).
We can also check for this with an elif condition:
python
elif data.min(axis=0).sum() == 0:
print('Minima add up to zero!')
And if neither of these conditions are true, we can use else to give the all-clear:
python
else:
print('Seems OK!')
Let's test that out:
End of explanation
"""
if '':
print('empty string is true')
if 'word':
print('word is true')
if []:
print('empty list is true')
if [1, 2, 3]:
print('non-empty list is true')
if 0:
print('zero is true')
if 1:
print('one is true')
"""
Explanation: In this way,
we have asked Python to do something different depending on the condition of our data.
Here we printed messages in all cases,
but we could also imagine not using the else catch-all
so that messages are only printed when something is wrong,
freeing us from having to manually examine every plot for features we've seen before.
What is truth?
True and False are special words in Python called booleans which represent true
and false statements. However, they aren't the only values in Python that are true and false.
In fact, any value can be used in an if or elif.
After reading and running the code below,
explain what the rule is for which values are considered true and which are considered false.
End of explanation
"""
x = 1 # original value
x += 1 # add one to x, assigning result back to x
x *= 3 # multiply x by 3
x
"""
Explanation: In-place operators
Python (and most other languages in the C family) provides in-place operators
that work like this:
End of explanation
"""
def fahr_to_kelvin(temp):
return ((temp - 32) * (5/9)) + 273.15
"""
Explanation: Writing Functions
At this point,
we've written code to draw some interesting features in our inflammation data,
loop over all our data files to quickly draw these plots for each of them,
and have Python make decisions based on what it sees in our data.
But, our code is getting pretty long and complicated;
what if we had thousands of datasets,
and didn't want to generate a figure for every single one?
Commenting out the figure-drawing code is a nuisance.
Also, what if we want to use that code again,
on a different dataset or at a different point in our program?
Cutting and pasting it is going to make our code get very long and very repetitive,
very quickly.
We'd like a way to package our code so that it is easier to reuse,
and Python provides for this by letting us define things called functions -
a shorthand way of re-executing longer pieces of code.
Let's start by defining a function fahr_to_kelvin that converts temperatures from Fahrenheit to Kelvin:
End of explanation
"""
print('freezing point of water:', fahr_to_kelvin(32))
print('boiling point of water:', fahr_to_kelvin(212))
"""
Explanation: The function definition opens with the word def,
which is followed by the name of the function
and a parenthesized list of parameter names.
The body of the function --- the
statements that are executed when it runs --- is indented below the definition line,
typically by four spaces.
When we call the function,
the values we pass to it are assigned to those variables
so that we can use them inside the function.
Inside the function,
we use a return statement to send a result back to whoever asked for it.
Let's try running our function.
Calling our own function is no different from calling any other function:
End of explanation
"""
def kelvin_to_celsius(temp):
return temp - 273.15
print('absolute zero in Celsius:', kelvin_to_celsius(0.0))
"""
Explanation: We've successfully called the function that we defined,
and we have access to the value that we returned.
Integer division
We are using Python 3, where division always returns a floating point number:
$ python3 -c "print(5/9)"
0.5555555555555556
Unfortunately, this wasn't the case in Python 2:
```
5/9
0
```
If you are using Python 2 and want to keep the fractional part of division
you need to convert one or the other number to floating point:
```
5.0/9
0.555555555556
5/9.0
0.555555555556
```
And if you want an integer result from division in Python 3,
use a double-slash:
```
3//2
1
```
Composing Functions
Now that we've seen how to turn Fahrenheit into Kelvin,
it's easy to turn Kelvin into Celsius:
End of explanation
"""
def fahr_to_celsius(temp):
temp_k = fahr_to_kelvin(temp)
result = kelvin_to_celsius(temp_k)
return result
print('freezing point of water in Celsius:', fahr_to_celsius(32.0))
"""
Explanation: What about converting Fahrenheit to Celsius?
We could write out the formula,
but we don't need to.
Instead,
we can compose the required function, based on the two functions we have already created:
End of explanation
"""
def analyze(filename):
data = numpy.loadtxt(fname=filename, delimiter=',')
fig = plt.figure(figsize=(10.0, 3.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
axes1.plot(data.mean(axis=0))
axes2.set_ylabel('max')
axes2.plot(data.max(axis=0))
axes2.set_title(filename[:-4])
axes3.set_ylabel('min')
axes3.plot(data.min(axis=0))
fig.tight_layout()
"""
Explanation: This is our first taste of how larger programs are built:
we define basic operations,
then combine them in ever-larger chunks to get the effect we want.
Real-life functions will usually be larger than the ones shown here --- typically half a dozen to a few dozen lines --- but
they shouldn't ever be much longer than that,
or the next person who reads it won't be able to understand what's going on.
Tidying up
Now that we know how to wrap bits of code up in functions,
we can make our inflammation analysis easier to read and easier to reuse.
First, let's make an analyze function that generates our plots:
End of explanation
"""
def detect_problems(filename):
data = numpy.loadtxt(fname=filename, delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
print('Minima add up to zero!')
else:
print('Seems OK!')
"""
Explanation: and another function called detect_problems that checks for those systematics
we noticed:
End of explanation
"""
for f in filenames[:3]:
print('\nOpening file', f)
analyze(f)
detect_problems(f)
"""
Explanation: Notice that rather than jumbling this code together in one giant for loop,
we can now read and reuse both ideas separately.
We can reproduce the previous analysis with a much simpler for loop:
End of explanation
"""
def center(data, desired):
return (data - data.mean()) + desired
"""
Explanation: By giving our functions human-readable names,
we can more easily read and understand what is happening in the for loop.
Even better, if at some later date we want to use either of those pieces of code again,
we can do so in a single line.
Testing and Documenting
Once we start putting things in functions so that we can re-use them,
we need to start testing that those functions are working correctly.
To see how to do this,
let's write a function to center a dataset around a particular value:
End of explanation
"""
z = numpy.zeros((2,2))
print(center(z, 3))
"""
Explanation: We could test this on our actual data,
but since we don't know what the values ought to be,
it will be hard to tell if the result was correct.
Instead,
let's use NumPy to create a matrix of 0's
and then center that around 3:
End of explanation
"""
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(center(data, 0))
"""
Explanation: That looks right,
so let's try center on our real data:
End of explanation
"""
print('original min, mean, and max are:', data.min(), data.mean(), data.max())
centered = center(data, 0)
print('min, mean, and max of centered data are:', centered.min(), centered.mean(), centered.max())
"""
Explanation: It's hard to tell from the default output whether the result is correct,
but there are a few simple tests that will reassure us:
End of explanation
"""
print('std dev before and after:', data.std(), centered.std())
"""
Explanation: That seems almost right:
the original mean was about 6.1,
so the lower bound from zero is now about -6.1.
The mean of the centered data isn't quite zero --- we'll explore why not in the challenges --- but it's pretty close.
We can even go further and check that the standard deviation hasn't changed:
End of explanation
"""
print('difference in standard deviations before and after:', data.std() - centered.std())
"""
Explanation: Those values look the same,
but we probably wouldn't notice if they were different in the sixth decimal place.
Let's do this instead:
End of explanation
"""
# center(data, desired): return a new array containing the original data centered around the desired value.
def center(data, desired):
return (data - data.mean()) + desired
"""
Explanation: Again,
the difference is very small.
It's still possible that our function is wrong,
but it seems unlikely enough that we should probably get back to doing our analysis.
We have one more task first, though:
we should write some documentation for our function
to remind ourselves later what it's for and how to use it.
The usual way to put documentation in software is to add comments like this:
End of explanation
"""
def center(data, desired):
'''Return a new array containing the original data centered around the desired value.'''
return (data - data.mean()) + desired
"""
Explanation: There's a better way, though.
If the first thing in a function is a string that isn't assigned to a variable,
that string is attached to the function as its documentation:
End of explanation
"""
help(center)
"""
Explanation: This is better because we can now ask Python's built-in help system to show us the documentation for the function:
End of explanation
"""
def center(data, desired):
'''Return a new array containing the original data centered around the desired value.
Example: center([1, 2, 3], 0) => [-1, 0, 1]'''
return (data - data.mean()) + desired
help(center)
"""
Explanation: A string like this is called a docstring.
We don't need to use triple quotes when we write one,
but if we do,
we can break the string across multiple lines:
End of explanation
"""
numpy.loadtxt('data/inflammation-01.csv', delimiter=',')
"""
Explanation: Defining Defaults
We have passed parameters to functions in two ways:
directly, as in type(data),
and by name, as in numpy.loadtxt(fname='something.csv', delimiter=',').
In fact,
we can pass the filename to loadtxt without the fname=:
End of explanation
"""
numpy.loadtxt('data/inflammation-01.csv', ',')
"""
Explanation: but we still need to say delimiter=:
End of explanation
"""
def center(data, desired=0.0):
'''Return a new array containing the original data centered around the desired value (0 by default).
Example: center([1, 2, 3], 0) => [-1, 0, 1]'''
return (data - data.mean()) + desired
"""
Explanation: To understand what's going on,
and make our own functions easier to use,
let's re-define our center function like this:
End of explanation
"""
test_data = numpy.zeros((2, 2))
print(center(test_data, 3))
"""
Explanation: The key change is that the second parameter is now written desired=0.0 instead of just desired.
If we call the function with two arguments,
it works as it did before:
End of explanation
"""
more_data = 5 + numpy.zeros((2, 2))
print('data before centering:')
print(more_data)
print('centered data:')
print(center(more_data))
"""
Explanation: But we can also now call it with just one parameter,
in which case desired is automatically assigned the default value of 0.0:
End of explanation
"""
def display(a=1, b=2, c=3):
print('a:', a, 'b:', b, 'c:', c)
print('no parameters:')
display()
print('one parameter:')
display(55)
print('two parameters:')
display(55, 66)
"""
Explanation: This is handy:
if we usually want a function to work one way,
but occasionally need it to do something else,
we can allow people to pass a parameter when they need to
but provide a default to make the normal case easier.
The example below shows how Python matches values to parameters:
End of explanation
"""
print('only setting the value of c')
display(c=77)
"""
Explanation: As this example shows,
parameters are matched up from left to right,
and any that haven't been given a value explicitly get their default value.
We can override this behavior by naming the value as we pass it in:
End of explanation
"""
help(numpy.loadtxt)
"""
Explanation: With that in hand,
let's look at the help for numpy.loadtxt:
End of explanation
"""
numpy.loadtxt('data/inflammation-01.csv', ',')
"""
Explanation: There's a lot of information here,
but the most important part is the first couple of lines:
loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None,
unpack=False, ndmin=0)
This tells us that loadtxt has one parameter called fname that doesn't have a default value,
and eight others that do.
If we call the function like this:
End of explanation
"""
f = 0
k = 0
def f2k(f):
k = ((f-32)*(5.0/9.0)) + 273.15
return k
f2k(8)
f2k(41)
f2k(32)
print(k)
"""
Explanation: then the filename is assigned to fname (which is what we want),
but the delimiter string ',' is assigned to dtype rather than delimiter,
because dtype is the second parameter in the list. However, ',' isn't a known dtype, so
our code produced an error message when we tried to run it.
When we call loadtxt we don't have to provide fname= for the filename because it's the
first item in the list, but if we want the ',' to be assigned to the variable delimiter,
we do have to write delimiter= explicitly, since delimiter is not
the second parameter in the list.
Exercise: Combining strings
"Adding" two strings produces their concatenation:
'a' + 'b' is 'ab'.
Write a function called fence that takes two parameters called original and wrapper
and returns a new string that has the wrapper character at the beginning and end of the original.
A call to your function should look like this:
print(fence('name', '*'))
*name*
Exercise: Rescaling an array
Write a function rescale that takes an array as input
and returns a corresponding array of values scaled to lie in the range 0.0 to 1.0.
Exercise: Variables inside and outside functions
What does the following piece of code display when run - and why?
End of explanation
"""
import errors_01
errors_01.favorite_ice_cream()
"""
Explanation: Errors and Exceptions
Every programmer encounters errors,
both those who are just beginning,
and those who have been programming for years.
Encountering errors and exceptions can be very frustrating at times,
and can make coding feel like a hopeless endeavour.
However,
understanding what the different types of errors are
and when you are likely to encounter them can help a lot.
Once you know why you get certain types of errors,
they become much easier to fix.
Errors in Python have a very specific form,
called a traceback.
Let's examine one:
End of explanation
"""
def some_function()
msg = "hello, world!"
print(msg)
return msg
"""
Explanation: This particular traceback has two levels.
You can determine the number of levels by looking for the number of arrows on the left hand side.
In this case:
The first shows code from the cell above,
with an arrow pointing to Line 2 (which is favorite_ice_cream()).
The second shows some code in another function (favorite_ice_cream, located in the file errors_01.py),
with an arrow pointing to Line 7 (which is print(ice_creams[3])).
The last level is the actual place where the error occurred.
The other level(s) show what function the program executed to get to the next level down.
So, in this case, the program first performed a function call to the function favorite_ice_cream.
Inside this function,
the program encountered an error on Line 7, when it tried to run the code print(ice_creams[3]).
Long Tracebacks
Sometimes, you might see a traceback that is very long -- sometimes they might even be 20 levels deep!
This can make it seem like something horrible happened,
but really it just means that your program called many functions before it ran into the error.
Most of the time,
you can just pay attention to the bottom-most level,
which is the actual place where the error occurred.
So what error did the program actually encounter?
In the last line of the traceback,
Python helpfully tells us the category or type of error (in this case, it is an IndexError)
and a more detailed error message (in this case, it says "list index out of range").
If you encounter an error and don't know what it means,
it is still important to read the traceback closely.
That way,
if you fix the error,
but encounter a new one,
you can tell that the error changed.
Additionally,
sometimes just knowing where the error occurred is enough to fix it,
even if you don't entirely understand the message.
If you do encounter an error you don't recognize,
try looking at the official documentation on errors.
However,
note that you may not always be able to find the error there,
as it is possible to create custom errors.
In that case,
hopefully the custom error message is informative enough to help you figure out what went wrong.
Syntax Errors
When you forget a colon at the end of a line,
accidentally add one space too many when indenting under an if statement,
or forget a parentheses,
you will encounter a syntax error.
This means that Python couldn't figure out how to read your program.
This is similar to forgetting punctuation in English:
this text is difficult to read there is no punctuation there is also no capitalization
why is this hard because you have to figure out where each sentence ends
you also have to figure out where each sentence begins
to some extent it might be ambiguous if there should be a sentence break or not
People can typically figure out what is meant by text with no punctuation,
but people are much smarter than computers.
If Python doesn't know how to read the program,
it will just give up and inform you with an error.
For example:
End of explanation
"""
def some_function():
msg = "hello, world!"
print(msg)
return msg
"""
Explanation: Here, Python tells us that there is a SyntaxError on line 1,
and even puts a little arrow in the place where there is an issue.
In this case the problem is that the function definition is missing a colon at the end.
Actually, the function above has two issues with syntax.
If we fix the problem with the colon,
we see that there is also an IndentationError,
which means that the lines in the function definition do not all have the same indentation:
End of explanation
"""
print(a)
"""
Explanation: Both SyntaxError and IndentationError indicate a problem with the syntax of your program,
but an IndentationError is more specific:
it always means that there is a problem with how your code is indented.
Tabs and Spaces
A quick note on indentation errors:
they can sometimes be insidious,
especially if you are mixing spaces and tabs.
Because they are both whitespace,
it is difficult to visually tell the difference.
The IPython notebook actually gives us a bit of a hint,
but not all Python editors will do that.
In the following example,
the first two lines are using a tab for indentation,
while the third line uses four spaces:
python
def some_function():
msg = "hello, world!"
print(msg)
return msg
File "<ipython-input-5-653b36fbcd41>", line 4
return msg
^
IndentationError: unindent does not match any outer indentation level
By default, one tab is equivalent to eight spaces,
so the only way to mix tabs and spaces is to make it look like this.
In general, it is better to just never use tabs and always use spaces,
because it can make things very confusing.
Variable Name Errors
Another very common type of error is called a NameError,
and occurs when you try to use a variable that does not exist.
For example:
End of explanation
"""
print(hello)
"""
Explanation: Variable name errors come with some of the most informative error messages,
which are usually of the form "name 'the_variable_name' is not defined".
Why does this error message occur?
That's a harder question to answer,
because it depends on what your code is supposed to do.
However,
there are a few very common reasons why you might have an undefined variable.
The first is that you meant to use a string, but forgot to put quotes around it:
End of explanation
"""
for number in range(10):
count = count + number
print("The count is: " + str(count))
"""
Explanation: The second is that you just forgot to create the variable before using it.
In the following example,
count should have been defined (e.g., with count = 0) before the for loop:
End of explanation
"""
Count = 0
for number in range(10):
count = count + number
print("The count is: " + str(count))
"""
Explanation: Finally, the third possibility is that you made a typo when you were writing your code.
Let's say we fixed the error above by adding the line Count = 0 before the for loop.
Frustratingly, this actually does not fix the error.
Remember that variables are case-sensitive,
so the variable count is different from Count. We still get the same error, because we still have not defined count:
End of explanation
"""
letters = ['a', 'b', 'c']
print("Letter #1 is " + letters[0])
print("Letter #2 is " + letters[1])
print("Letter #3 is " + letters[2])
print("Letter #4 is " + letters[3])
"""
Explanation: Index Errors
Next up are errors having to do with containers (like lists and dictionaries) and the items within them.
If you try to access an item in a list or a dictionary that does not exist,
then you will get an error.
This makes sense:
if you asked someone what day they would like to get coffee,
and they answered "caturday",
you might be a bit annoyed.
Python gets similarly annoyed if you try to ask it for an item that doesn't exist:
End of explanation
"""
%matplotlib inline
from urllib.request import urlopen
import numpy as np
np.set_printoptions(precision=4, suppress=True)
import pandas as pd
pd.set_option("display.width", 100)
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols
from statsmodels.graphics.api import interaction_plot, abline_plot
from statsmodels.stats.anova import anova_lm
try:
salary_table = pd.read_csv('salary.table')
except: # recent pandas can read URL without urlopen
url = 'http://stats191.stanford.edu/data/salary.table'
fh = urlopen(url)
salary_table = pd.read_table(fh)
salary_table.to_csv('salary.table')
E = salary_table.E
M = salary_table.M
X = salary_table.X
S = salary_table.S
"""
Explanation: Interactions and ANOVA
Note: This script is based heavily on Jonathan Taylor's class notes https://web.stanford.edu/class/stats191/notebooks/Interactions.html
Download and format data:
End of explanation
"""
plt.figure(figsize=(6,6))
symbols = ['D', '^']
colors = ['r', 'g', 'blue']
factor_groups = salary_table.groupby(['E','M'])
for values, group in factor_groups:
i,j = values
plt.scatter(group['X'], group['S'], marker=symbols[j], color=colors[i-1],
s=144)
plt.xlabel('Experience');
plt.ylabel('Salary');
"""
Explanation: Take a look at the data:
End of explanation
"""
formula = 'S ~ C(E) + C(M) + X'
lm = ols(formula, salary_table).fit()
print(lm.summary())
"""
Explanation: Fit a linear model:
End of explanation
"""
lm.model.exog[:5]
"""
Explanation: Have a look at the created design matrix:
End of explanation
"""
lm.model.data.orig_exog[:5]
"""
Explanation: Or since we initially passed in a DataFrame, we have a DataFrame available in
End of explanation
"""
lm.model.data.frame[:5]
"""
Explanation: We keep a reference to the original untouched data in
End of explanation
"""
infl = lm.get_influence()
print(infl.summary_table())
"""
Explanation: Influence statistics
End of explanation
"""
df_infl = infl.summary_frame()
df_infl[:5]
"""
Explanation: or get a dataframe
End of explanation
"""
resid = lm.resid
plt.figure(figsize=(6,6));
for values, group in factor_groups:
i,j = values
group_num = i*2 + j - 1 # for plotting purposes
x = [group_num] * len(group)
plt.scatter(x, resid[group.index], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('Group');
plt.ylabel('Residuals');
"""
Explanation: Now plot the residuals within the groups separately:
End of explanation
"""
interX_lm = ols("S ~ C(E) * X + C(M)", salary_table).fit()
print(interX_lm.summary())
"""
Explanation: Now we will test some interactions using anova or f_test
End of explanation
"""
from statsmodels.stats.api import anova_lm
table1 = anova_lm(lm, interX_lm)
print(table1)
interM_lm = ols("S ~ X + C(E)*C(M)", data=salary_table).fit()
print(interM_lm.summary())
table2 = anova_lm(lm, interM_lm)
print(table2)
"""
Explanation: Do an ANOVA check
End of explanation
"""
interM_lm.model.data.orig_exog[:5]
"""
Explanation: The design matrix as a DataFrame
End of explanation
"""
interM_lm.model.exog
interM_lm.model.exog_names
infl = interM_lm.get_influence()
resid = infl.resid_studentized_internal
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], resid[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X');
plt.ylabel('standardized resids');
"""
Explanation: The design matrix as an ndarray
End of explanation
"""
drop_idx = abs(resid).argmax()
print(drop_idx) # zero-based index
idx = salary_table.index.drop(drop_idx)
lm32 = ols('S ~ C(E) + X + C(M)', data=salary_table, subset=idx).fit()
print(lm32.summary())
print('\n')
interX_lm32 = ols('S ~ C(E) * X + C(M)', data=salary_table, subset=idx).fit()
print(interX_lm32.summary())
print('\n')
table3 = anova_lm(lm32, interX_lm32)
print(table3)
print('\n')
interM_lm32 = ols('S ~ X + C(E) * C(M)', data=salary_table, subset=idx).fit()
table4 = anova_lm(lm32, interM_lm32)
print(table4)
print('\n')
"""
Explanation: Looks like one observation is an outlier.
End of explanation
"""
resid = interM_lm32.get_influence().summary_frame()['standard_resid']
plt.figure(figsize=(6,6))
resid = resid.reindex(X.index)
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X.loc[idx], resid.loc[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
plt.xlabel('X[~[32]]');
plt.ylabel('standardized resids');
"""
Explanation: Replot the residuals
End of explanation
"""
lm_final = ols('S ~ X + C(E)*C(M)', data = salary_table.drop([drop_idx])).fit()
mf = lm_final.model.data.orig_exog
lstyle = ['-','--']
plt.figure(figsize=(6,6))
for values, group in factor_groups:
i,j = values
idx = group.index
plt.scatter(X[idx], S[idx], marker=symbols[j], color=colors[i-1],
s=144, edgecolors='black')
# drop NA because there is no idx 32 in the final model
fv = lm_final.fittedvalues.reindex(idx).dropna()
x = mf.X.reindex(idx).dropna()
plt.plot(x, fv, ls=lstyle[j], color=colors[i-1])
plt.xlabel('Experience');
plt.ylabel('Salary');
"""
Explanation: Plot the fitted values
End of explanation
"""
U = S - X * interX_lm32.params['X']
plt.figure(figsize=(6,6))
interaction_plot(E, M, U, colors=['red','blue'], markers=['^','D'],
markersize=10, ax=plt.gca())
"""
Explanation: From our first look at the data, the difference between Master's and PhD in the management group is different than in the non-management group. This is an interaction between the two qualitative variables management,M and education,E. We can visualize this by first removing the effect of experience, then plotting the means within each of the 6 groups using interaction.plot.
End of explanation
"""
try:
jobtest_table = pd.read_table('jobtest.table')
except: # do not have data already
url = 'http://stats191.stanford.edu/data/jobtest.table'
jobtest_table = pd.read_table(url)
factor_group = jobtest_table.groupby(['MINORITY'])
fig, ax = plt.subplots(figsize=(6,6))
colors = ['purple', 'green']
markers = ['o', 'v']
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST');
ax.set_ylabel('JPERF');
min_lm = ols('JPERF ~ TEST', data=jobtest_table).fit()
print(min_lm.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
ax.set_xlabel('TEST')
ax.set_ylabel('JPERF')
fig = abline_plot(model_results = min_lm, ax=ax)
min_lm2 = ols('JPERF ~ TEST + TEST:MINORITY',
data=jobtest_table).fit()
print(min_lm2.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm2.params['Intercept'],
slope = min_lm2.params['TEST'] + min_lm2.params['TEST:MINORITY'],
ax=ax, color='green');
min_lm3 = ols('JPERF ~ TEST + MINORITY', data = jobtest_table).fit()
print(min_lm3.summary())
fig, ax = plt.subplots(figsize=(6,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm3.params['Intercept'],
slope = min_lm3.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm3.params['Intercept'] + min_lm3.params['MINORITY'],
slope = min_lm3.params['TEST'], ax=ax, color='green');
min_lm4 = ols('JPERF ~ TEST * MINORITY', data = jobtest_table).fit()
print(min_lm4.summary())
fig, ax = plt.subplots(figsize=(8,6));
for factor, group in factor_group:
ax.scatter(group['TEST'], group['JPERF'], color=colors[factor],
marker=markers[factor], s=12**2)
fig = abline_plot(intercept = min_lm4.params['Intercept'],
slope = min_lm4.params['TEST'], ax=ax, color='purple');
fig = abline_plot(intercept = min_lm4.params['Intercept'] + min_lm4.params['MINORITY'],
slope = min_lm4.params['TEST'] + min_lm4.params['TEST:MINORITY'],
ax=ax, color='green');
# is there any effect of MINORITY on slope or intercept?
table5 = anova_lm(min_lm, min_lm4)
print(table5)
# is there any effect of MINORITY on intercept
table6 = anova_lm(min_lm, min_lm3)
print(table6)
# is there any effect of MINORITY on slope
table7 = anova_lm(min_lm, min_lm2)
print(table7)
# is it just the slope or both?
table8 = anova_lm(min_lm2, min_lm4)
print(table8)
"""
Explanation: Minority Employment Data
End of explanation
"""
try:
rehab_table = pd.read_csv('rehab.table')
except:
url = 'http://stats191.stanford.edu/data/rehab.csv'
rehab_table = pd.read_table(url, delimiter=",")
rehab_table.to_csv('rehab.table')
fig, ax = plt.subplots(figsize=(8,6))
fig = rehab_table.boxplot('Time', 'Fitness', ax=ax, grid=False)
rehab_lm = ols('Time ~ C(Fitness)', data=rehab_table).fit()
table9 = anova_lm(rehab_lm)
print(table9)
print(rehab_lm.model.data.orig_exog)
print(rehab_lm.summary())
"""
Explanation: One-way ANOVA
End of explanation
"""
try:
kidney_table = pd.read_table('./kidney.table')
except:
url = 'http://stats191.stanford.edu/data/kidney.table'
kidney_table = pd.read_csv(url, delim_whitespace=True)
"""
Explanation: Two-way ANOVA
End of explanation
"""
kidney_table.head(10)
"""
Explanation: Explore the dataset
End of explanation
"""
kt = kidney_table
plt.figure(figsize=(8,6))
fig = interaction_plot(kt['Weight'], kt['Duration'], np.log(kt['Days']+1),
colors=['red', 'blue'], markers=['D','^'], ms=10, ax=plt.gca())
"""
Explanation: Balanced panel
End of explanation
"""
kidney_lm = ols('np.log(Days+1) ~ C(Duration) * C(Weight)', data=kt).fit()
table10 = anova_lm(kidney_lm)
print(anova_lm(ols('np.log(Days+1) ~ C(Duration) + C(Weight)',
data=kt).fit(), kidney_lm))
print(anova_lm(ols('np.log(Days+1) ~ C(Duration)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
print(anova_lm(ols('np.log(Days+1) ~ C(Weight)', data=kt).fit(),
ols('np.log(Days+1) ~ C(Duration) + C(Weight, Sum)',
data=kt).fit()))
"""
Explanation: You have things available in the calling namespace available in the formula evaluation namespace
End of explanation
"""
sum_lm = ols('np.log(Days+1) ~ C(Duration, Sum) * C(Weight, Sum)',
data=kt).fit()
print(anova_lm(sum_lm))
print(anova_lm(sum_lm, typ=2))
print(anova_lm(sum_lm, typ=3))
nosum_lm = ols('np.log(Days+1) ~ C(Duration, Treatment) * C(Weight, Treatment)',
data=kt).fit()
print(anova_lm(nosum_lm))
print(anova_lm(nosum_lm, typ=2))
print(anova_lm(nosum_lm, typ=3))
"""
Explanation: Sum of squares
Illustrates the use of different types of sums of squares (I, II, III)
and how the Sum contrast can be used to produce the same output between
the 3.
Types I and II are equivalent under a balanced design.
Do not use Type III with non-orthogonal contrasts - i.e., Treatment
"""
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
"""
Explanation: LAB 4b: Create Keras DNN model.
Learning Objectives
Set CSV Columns, label column, and column defaults
Make dataset of features and label from CSV files
Create input layers for raw features
Create feature columns for inputs
Create DNN dense hidden layers and output layer
Create custom evaluation metric
Build DNN model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
End of explanation
"""
%%bash
ls *.csv
%%bash
head -5 *.csv
"""
Explanation: Verify CSV files exist
In the seventh lab of this series 4a_sample_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
End of explanation
"""
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = [
"weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks",
]
# Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
"""
Explanation: Create Keras model
Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
End of explanation
"""
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # Prefetch the next batch to overlap the input pipeline with training
    # (tf.data.AUTOTUNE can tune the buffer size automatically)
dataset = dataset.prefetch(buffer_size=1)
return dataset
"""
Explanation: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
End of explanation
"""
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
inputs = {
colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
inputs.update(
{
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string"
)
for colname in ["is_male", "plurality"]
}
)
return inputs
"""
Explanation: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
End of explanation
"""
def categorical_fc(name, values):
"""Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Indicator column of categorical feature.
"""
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values
)
return tf.feature_column.indicator_column(categorical_column=cat_column)
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
feature_columns = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
feature_columns["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"]
)
feature_columns["plurality"] = categorical_fc(
"plurality",
[
"Single(1)",
"Twins(2)",
"Triplets(3)",
"Quadruplets(4)",
"Quintuplets(5)",
"Multiple(2+)",
],
)
return feature_columns
"""
Explanation: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
End of explanation
"""
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
    # Create two hidden layers of [64, 32] just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs)
h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1)
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(units=1, activation="linear", name="weight")(
h2
)
return output
"""
Explanation: Create DNN dense hidden layers and output layer.
So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers, beginning with our inputs and ending with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
End of explanation
"""
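To make the layer arithmetic concrete, here is a minimal pure-Python sketch (not the Keras implementation) of a dense layer with a ReLU activation, i.e. y = relu(Wx + b):

```python
def relu(x):
    """Element-wise rectified linear unit."""
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """Compute Wx + b, with the weight matrix given as a list of rows."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# a toy 2-unit hidden layer acting on a 3-dimensional input
W = [[1.0, 0.0, -1.0],
     [0.5, 0.5, 0.5]]
b = [0.0, -2.0]
h = relu(dense([1.0, 2.0, 3.0], W, b))
print(h)  # [0.0, 1.0]
```

A linear output layer for regression is the same computation without the `relu` wrapper.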
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
"""
Explanation: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own using the true and predicted labels.
End of explanation
"""
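The same metric can be checked in plain Python (a sketch of the math, independent of TensorFlow tensors):

```python
import math

def rmse_py(y_true, y_pred):
    """Root-mean-squared error over two equal-length sequences."""
    se = [(p - t) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(se) / len(se))

print(rmse_py([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2) = sqrt(12.5)
```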
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values()
)(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
"""
Explanation: Build DNN model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
End of explanation
"""
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR"
)
"""
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
trainds = load_dataset(
pattern="train*",
batch_size=TRAIN_BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN,
)
evalds = load_dataset(
pattern="eval*", batch_size=1000, mode=tf.estimator.ModeKeys.EVAL
).take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1
)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback],
)
"""
Explanation: Run and evaluate model
Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
End of explanation
"""
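The arithmetic behind these "virtual epochs" can be checked directly: each of the NUM_EVALS evaluation passes sees NUM_TRAIN_EXAMPLES / NUM_EVALS examples, consumed in batches of TRAIN_BATCH_SIZE (a sketch of the bookkeeping, using the constants above):

```python
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5
NUM_EVALS = 5

# batches per virtual epoch; integer division floors the result
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
print(steps_per_epoch)  # 312

# total examples actually consumed across all virtual epochs
print(steps_per_epoch * TRAIN_BATCH_SIZE * NUM_EVALS)  # 49920 (< 50000 due to flooring)
```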
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx + 1)
plt.plot(history.history[key])
plt.plot(history.history[f"val_{key}"])
plt.title(f"model {key}")
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
"""
Explanation: Visualize loss curve
End of explanation
"""
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")
)
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH
) # with default serving function
print(f"Exported trained model to {EXPORT_PATH}")
!ls $EXPORT_PATH
"""
Explanation: Save the model
End of explanation
"""
thempel/adaptivemd | examples/rp/test_worker.ipynb | lgpl-2.1
import sys, os, time
"""
Explanation: AdaptiveMD
Example 1 - Setup
0. Imports
End of explanation
"""
# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')
os.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'
"""
Explanation: We want to stop RP from reporting all sorts of stuff for this example so we set a specific environment variable to tell RP to do so. If you want to see what RP reports change it to REPORT.
End of explanation
"""
from adaptivemd import Project
from adaptivemd import OpenMMEngine
from adaptivemd import PyEMMAAnalysis
from adaptivemd import File, Directory, WorkerScheduler
from adaptivemd import DT
"""
Explanation: We will import the appropriate parts from AdaptiveMD as we go along so it is clear what is needed at what stage. Usually you will have the block of imports at the beginning of your script or notebook, as suggested in PEP8.
End of explanation
"""
# Project.delete('test')
project = Project('test')
"""
Explanation: Let's open a project with a UNIQUE name. This will be the name used in the DB so make sure it is new and not too short. Opening a project will always create a project if it does not exist and reopen an existing one. You cannot choose between opening modes as you would with a file. This is a precaution against accidentally deleting your project.
End of explanation
"""
from adaptivemd import LocalJHP, LocalSheep, AllegroCluster
resource_id = 'local.jhp'
if resource_id == 'local.jhp':
project.initialize(LocalJHP())
elif resource_id == 'local.sheep':
project.initialize(LocalSheep())
elif resource_id == 'fub.allegro':
project.initialize(AllegroCluster())
"""
Explanation: Now we have a handle for our project. First thing is to set it up to work on a resource.
1. Set the resource
What is a resource? A Resource specifies a shared filesystem with one or more clusters attached to it. This can be your local machine, a regular cluster, or even a group of clusters that can access the same FS (like Titan, Eos, and Rhea do).
Once you have chosen your place to store your results, it is set for the project and cannot (or at least should not) be altered, since all file references are made to match this resource. Currently you can use the FU Berlin Allegro cluster or run locally. There are two specific local adaptations that already include the path to your conda installation. This simplifies the use of openmm or pyemma.
Let us pick a local resource on a laptop for now.
End of explanation
"""
pdb_file = File('file://../files/alanine/alanine.pdb').named('initial_pdb').load()
"""
Explanation: TaskGenerators
TaskGenerators are instances whose purpose is to create tasks to be executed. This is similar to the
way Kernels work. A TaskGenerator will generate Task objects for you which will be translated into a ComputeUnitDescription and executed. In simple terms:
The task generator creates the bash scripts for you that run a simulation or run pyemma.
A task generator will be initialized with all parameters needed to make it work, and it will know what needs to be staged to be used.
The engine
A task generator that will create jobs to run simulations. Currently it uses a little python script that will execute OpenMM. It requires conda to be added to the PATH variable or at least openmm to be installed on the cluster. If you set up your resource correctly, this should all happen automatically.
First we define a File object. These are used to represent files anywhere, on the cluster or your local application. File like any complex object in adaptivemd can have a .name attribute that makes them easier to find later.
End of explanation
"""
engine = OpenMMEngine(
pdb_file=pdb_file,
system_file=File('file://../files/alanine/system.xml').load(),
integrator_file=File('file://../files/alanine/integrator.xml').load(),
args='-r --report-interval 1 -p CPU --store-interval 1'
).named('openmm')
"""
Explanation: Here we used a special prefix that can point to specific locations.
file:// points to files on your local machine.
unit:// specifies files in the current working directory of the executing node. Usually these are temporary files for a single execution.
shared:// specifies the root shared FS directory (e.g. NO_BACKUP/ on Allegro) Use this to import and export files that are already on the cluster.
staging:// a special scheduler-specific directory where files are moved after they are completed on a node and should be kept for later. Use this to refer to files that should be stored or reused. After one execution is done, you usually move all important files to this place.
sandbox:// this should not concern you and is a special RP folder where all pilot/session folders are located.
So let's do an example for an OpenMM engine. This is simply a small python script that makes OpenMM look like an executable. It runs a simulation given an initial frame, OpenMM-specific system.xml and integrator.xml files, and some additional parameters like the platform name, how often to store simulation frames, etc.
End of explanation
"""
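As an illustration only (this is not the adaptivemd implementation), such location prefixes can be split into a drive and a path much like URL schemes:

```python
def split_location(location):
    """Split 'drive://path' into its drive prefix and path part."""
    drive, _, path = location.partition('://')
    return drive, path

print(split_location('file://../files/alanine/alanine.pdb'))
# ('file', '../files/alanine/alanine.pdb')
print(split_location('staging:///trajs/0.dcd'))
# ('staging', '/trajs/0.dcd')
```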
engine.name
"""
Explanation: To explain this we have now an OpenMMEngine which uses the previously made pdb File object and uses the location defined in there. The same some Files for the OpenMM XML files and some args to store each frame (to keep it fast) and run using the CPU kernel.
Last we name the engine openmm to find it later.
End of explanation
"""
modeller = PyEMMAAnalysis(
pdb_file=pdb_file
).named('pyemma')
"""
Explanation: The modeller
The instance to compute an MSM model of existing trajectories that you pass it. It is initialized with a .pdb file that is used to create features between the $c_\alpha$ atoms. This implementation requires a PDB, but in general this is not necessary. It is specific to my PyEMMAAnalysis showcase.
End of explanation
"""
project.generators.add(engine)
project.generators.add(modeller)
project.files.one
sc = WorkerScheduler(project.resource)
sc.enter(project)
t = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True)).extend(50).extend(100)
sc(t)
import radical.pilot as rp
rp.TRANSFER
sc.advance()
for f in project.trajectories:
print f.basename, f.length, DT(f.created).time
for t in project.tasks:
print t.stderr.objs['worker']
print project.generators
t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100, restart=True))
t2 = t1.extend(100)
t2.trajectory.restart
project.tasks.add(t2)
for f in project.trajectories:
print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)
for f in project.files:
print f.drive, f.path, f.created, f.__time__, f.exists, hex(f.__uuid__)
w = project.workers.last
print w.state
print w.command
for t in project.tasks:
print t.state, t.worker.hostname if t.worker else 'None'
sc.advance()
t1 = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100))
t2 = t1.extend(100)
project.tasks.add(t2)
# from adaptivemd.engine import Trajectory
# t3 = engine.task_run_trajectory(Trajectory('staging:///trajs/0.dcd', pdb_file, 100)).extend(100)
# t3.dependencies = []
# def get_created_files(t, s):
# if t.is_done():
# print 'done', s
# return s - set(t.added_files)
# else:
# adds = set(t.added_files)
# rems = set(s.required[0] for s in t._pre_stage)
# print '+', adds
# print '-', rems
# q = set(s) - adds | rems
# if t.dependencies is not None:
# for d in t.dependencies:
# q = get_created_files(d, q)
# return q
# get_created_files(t3, {})
for w in project.workers:
print w.hostname, w.state
w = project.workers.last
print w.state
print w.command
w.command = 'shutdown'
for t in project.tasks:
print t.state, t.worker.hostname if t.worker else 'None'
for f in project.trajectories:
print f.drive, f.basename, len(f), f.created, f.__time__, f.exists, hex(f.__uuid__)
project.trajectories.one[0]
t = engine.task_run_trajectory(project.new_trajectory(project.trajectories.one[0], 100))
project.tasks.add(t)
print project.files
print project.tasks
t = modeller.execute(list(project.trajectories))
project.tasks.add(t)
from uuid import UUID
# incomplete query from the original notebook (the filter value was never filled in);
# commented out so the cell does not raise a SyntaxError
# project.storage.tasks._document.find_one({'_dict': {'generator': {'_dict': }}})
genlist = ['openmm']
scheduler = sc
prefetch = 1
while True:
scheduler.advance()
if scheduler.is_idle:
for _ in range(prefetch):
tasklist = scheduler(project.storage.tasks.consume_one())
if len(tasklist) == 0:
break
time.sleep(2.0)
"""
Explanation: Again we name it pyemma for later reference.
Add generators to project
Next step is to add these to the project for later usage. We pick the .generators store and just add it. Consider a store to work like a set() in python. It contains objects only once and is not ordered. Therefore we need a name to find the objects later. Of course you can always iterate over all objects, but the order is not given.
To be precise there is an order in the time of creation of the object, but it is only accurate to seconds and it really is the time it was created and not stored.
End of explanation
"""
scheduler = project.get_scheduler(cores=1)
"""
Explanation: Note, that you cannot add the same engine twice. But if you create a new engine it will be considered different and hence you can store it again.
Create one intial trajectory
Finally we are ready to run a first trajectory that we will store as a point of reference in the project. Also it is nice to see how it works in general.
1. Open a scheduler
a job on the cluster to execute tasks
the .get_scheduler function delegates to the resource and uses the get_scheduler functions from there. This is merely a convenience since a Scheduler has the responsibility to open queues on the resource for you.
You have the same options as the queue has in the resource. This is often the number of cores and walltime, but can be additional ones, too.
Let's open the default queue and use a single core for it since we only want to run one simulation.
End of explanation
"""
trajectory = project.new_trajectory(engine['pdb_file'], 100)
trajectory
"""
Explanation: Next we create the parameters for the engine to run the simulation. Since it seemed appropriate, we use a Trajectory object (a special File with initial frame and length) as the input. You could of course pass these things separately, but this way we can actually reference the not-yet-existing trajectory and do stuff with it.
A Trajectory should have a unique name and so there is a project function to get you one. It uses numbers and makes sure that this number has not been used yet in the project.
End of explanation
"""
task = engine.task_run_trajectory(trajectory)
"""
Explanation: This says, initial is alanine.pdb run for 100 frames and is named xxxxxxxx.dcd.
Now, we want this trajectory to actually exist, so we have to make it (on the cluster, which is waiting for things to do). So we need a Task object to run a simulation. Since Task objects are very flexible, there are helper functions to get them to do what you want, like the ones we already created just before. Let's use the openmm engine to create an openmm task
End of explanation
"""
scheduler(task)
"""
Explanation: That's it: just take a trajectory description and turn it into a task that contains the shell commands, needed files, etc.
Last step is to really run the task. You can just use a scheduler as a function or call the .submit() method.
End of explanation
"""
scheduler.is_idle
print scheduler.generators
"""
Explanation: Now we have to wait. To see, if we are done, you can check the scheduler if it is still running tasks.
End of explanation
"""
# scheduler.wait()
"""
Explanation: or you wait until it becomes idle using .wait()
End of explanation
"""
print project.files
print project.trajectories
"""
Explanation: If all went as expected we will now have our first trajectory.
End of explanation
"""
scheduler.exit()
"""
Explanation: Excellent, so cleanup and close our queue
End of explanation
"""
project.close()
"""
Explanation: and close the project.
End of explanation
"""
WNoxchi/Kaukasos | pytorch/PyTorch60MinBlitz.ipynb | mit
import torch
"""
Explanation: PyTorch 60 Minute Blitz
Wnixalo
2018/2/18
I. What is PyTorch
It's a Python based scientific computing package targeted at two sets of audiences:
A replacement for NumPy to use the power of GPUs
A Deep Learning research platform that provides maximum flexibility and speed.
1. Getting Started
a. Tensors
Tensors are similar to NumPy's ndarrays, with the addition being that Tensors can also be used on a GPU to accelerate computing.
End of explanation
"""
x = torch.Tensor(5, 3)
print(x)
"""
Explanation: Construct a 5x3 matrix, uninitialized:
End of explanation
"""
print(x.size())
"""
Explanation: Get its size:
End of explanation
"""
y = torch.rand(5, 3)
print(x + y)
"""
Explanation: !NOTE¡: torch.Size is in fact a tuple, so it supports all tuple operations.
b. Operations
There are multiple syntaxes for operations. In the following example, we will take a look at the addition operation.
Addition: syntax 1
End of explanation
"""
print(torch.add(x, y))
"""
Explanation: Addition: syntax 2
End of explanation
"""
result = torch.Tensor(5, 3)
torch.add(x, y, out=result)
print(result)
"""
Explanation: Addition: providing an output tensor as argument
End of explanation
"""
# adds x to y
y.add_(x)
print(y)
"""
Explanation: Addition: in-place
End of explanation
"""
print(x[:, 1])
"""
Explanation: ¡NOTE!: Any operation that mutates a tensor in-place is post-fixed with an _. For example: x.copy_(y), x.t_(), will change x.
You can use standard NumPy-like indexing with all the bells and whistles!
End of explanation
"""
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
# testing out valid resizings
for i in range(256):
for j in range(256):
try:
y = x.view(i,j)
print(y.size())
except RuntimeError:
pass
# if you make either i or j = -1, then it basically
# lists all compatible resizings
x.view(16)
"""
Explanation: Resizing: If you want to resize/reshape a tensor, you can use torch.view:
End of explanation
"""
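A sketch of the shape arithmetic behind view (pure Python, not the torch implementation): the total number of elements must be preserved, and a single -1 is inferred from the remaining dimensions:

```python
def infer_shape(numel, shape):
    """Resolve a single -1 in `shape` so the product of dims equals `numel`."""
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    return tuple(numel // known if d == -1 else d for d in shape)

# a 4x4 tensor has 16 elements
print(infer_shape(16, (-1, 8)))  # (2, 8)
print(infer_shape(16, (4, -1)))  # (4, 4)
```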
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
"""
Explanation: Read later:
100+ Tensor operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, etc., are described here.
2. NumPy Bridge
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory locations, and changing one will change the other.
a. Converting a Torch Tensor to a NumPy Array
End of explanation
"""
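The shared-memory behaviour can be mimicked in pure Python with a memoryview over a bytearray — two handles onto one buffer, just as the Tensor and the ndarray share storage (an analogy, not the actual bridge mechanism):

```python
buf = bytearray([1, 1, 1, 1, 1])
view = memoryview(buf)   # a second handle onto the same underlying buffer

buf[0] += 1              # mutate through one handle...
print(view[0])           # 2  ...and the other sees the change
```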
a.add_(1)
print(a)
print(b)
"""
Explanation: See how the numpy array changed in value
End of explanation
"""
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
"""
Explanation: ahh, cool
b. Converting NumPy Array to Torch Tensor
See how changing the NumPy array changed the Torch Tensor automatically
End of explanation
"""
# let's run this cell only if CUDA is available
if torch.cuda.is_available():
x = x.cuda()
y = y.cuda()
x + y
"""
Explanation: All the Tensors on the CPU except a CharTensor support converting to NumPy and back.
3. CUDA Tensors
Tensors can be moved onto GPU using the .cuda method.
End of explanation
"""
import torch
from torch.autograd import Variable
"""
Explanation: II. Autograd: Automatic Differentiation
Central to all neural networks in PyTorch is the autograd package. Let's first briefly visit this, and we'll then go to training our first neural network.
The autograd package provides automatic differentiation for all operations on Tensors. It's a define-by-run framework, which means that your backprop is defined by how your code is run, and that every single iteration can be different.
Let's see this in simpler terms with some examples.
1. Variable
autograd.Variable is the central class of this package. It wraps a Tensor, and supports nearly all operations defined on it. Once you finish your computation you can call .backward() and have all gradients computed automatically.
You can access the raw tensor through the .data attribute, while the gradient wrt this variable is accumulated into .grad.
<img src="http://pytorch.org/tutorials/_images/Variable.png" alt="autograd.Variable">
There's one more class which is very important for the autograd implementation: a Function.
Variable and Function are interconnected and build up an acyclic graph that encodes a complete history of computation. Each variable has a .grad_fn attribute that references a Function that has created the Variable (except for variables created by the user - their grad_fn is None).
If you want to compute the derivatives, you can call .backward() on a Variable. If the Variable is a scalar (i.e. it holds a one-element datum), you don't need to specify any arguments to backward(); however, if it has more elements, you need to specify a grad_output argument, a tensor of matching shape.
End of explanation
"""
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
print(x.grad_fn)
"""
Explanation: Create a variable:
End of explanation
"""
y = x + 2
print(y)
"""
Explanation: Do an operation on the variable:
End of explanation
"""
print(y.grad_fn)
"""
Explanation: y was created as a result of an operation, so it has a grad_fn.
End of explanation
"""
z = y * y * 3
out = z.mean()
print(z, out)
"""
Explanation: Do more operations on y:
End of explanation
"""
# x = torch.autograd.Variable(torch.ones(2,2), requires_grad=True)
# x.grad.data.zero_() # re-zero gradients of x if rerunning
# y = x + 2
# z = y**2*3
# out = z.mean()
# print(out.backward())
# print(x.grad)
out.backward()
"""
Explanation: 2. Gradients
let's backprop now: out.backward() is equivalent to doing out.backward(torch.Tensor([1.0]))
End of explanation
"""
print(x.grad)
"""
Explanation: print gradients d(out)/dx:
End of explanation
"""
x = torch.randn(3)
x = Variable(x, requires_grad=True)
y = x*2
while y.data.norm() < 1000:
y = y*2
print(y)
gradients = torch.FloatTensor([0.1, 1.0, 0.0001])
y.backward(gradients)
print(x.grad)
"""
Explanation: You should've gotten a matrix of 4.5. Let's call the out Variable "$o$". We have that: $o = \tfrac{1}{4} Σ_i z_i,z_i = 3(x_i + 2)^2$ and $z_i\bigr\rvert_{x_i=1}=27$.
$\Rightarrow$ $\tfrac{δo}{δx_i}=\tfrac{3}{2}(x_i+2)$ $\Rightarrow$ $\frac{\partial o}{\partial x_i}\bigr\rvert_{x_i=1} = \frac{9}{2} = 4.5$
You can do a lot of crazy things with autograd.
End of explanation
"""
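The hand-derived gradient can be sanity-checked numerically with finite differences in pure Python (a check of the math above, not of autograd itself):

```python
def o(xs):
    # out = mean(3 * (x_i + 2)^2) over all elements
    return sum(3.0 * (x + 2.0) ** 2 for x in xs) / len(xs)

xs = [1.0, 1.0, 1.0, 1.0]   # the 2x2 tensor of ones, flattened
eps = 1e-6
grad = [(o(xs[:i] + [xs[i] + eps] + xs[i+1:]) - o(xs)) / eps
        for i in range(len(xs))]
print(grad)  # each entry is approximately 4.5
```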
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5) # ConvNet
self.conv2 = nn.Conv2d(6, 16, 5)
# an affine operation: y = Wx + b
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10) # classification layer
## NOTE: ahh, so the forward pass in PyTorch is just you defining
# what all the activation functions are going to be?
# Im seeing this pattern of tensor_X = actvnFn(layer(tensor_X))
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
# ahh, just get total number of features by
# multiplying dimensions ('tensor "volume"')
net = Net()
print(net)
"""
Explanation: Read Later:
Documentation of Variable and Function is at http://pytorch.org/docs/autograd
III. Neural Networks
Neural networks can be constructed using the torch.nn package.
Now that you had a glimpse of autograd, nn depends on autograd to define models and differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.
For example, look at this network that classifies digit images:
<img src="http://pytorch.org/tutorials/_images/mnist.png" alt="convnet">
It's a simple feed-forward network. It takes the input, feeds it through several layers, one after the other, and then finally gives the output.
A typical training procedure for a neural network is as follows:
Define the neural network that has some learnable parameters (or weights)
Iterate over a dataset of inputs
Process input through the network
Compute the loss (how far the output is from being correct)
Propagate gradients back into the network's parameters
Update the weights of the network, typically using a simple update rule: weight = weight - learning_rate * gradient
1. Define the Network
Let's define this network:
End of explanation
"""
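The 16 * 5 * 5 input size of fc1 follows from tracing a 32x32 image through the layers. A quick pure-Python sketch of that shape arithmetic (assuming 'valid' convolutions with stride 1 and non-overlapping 2x2 max pooling):

```python
def conv_out(n, kernel):
    return n - kernel + 1   # 'valid' convolution, stride 1

def pool_out(n, window):
    return n // window      # non-overlapping max pooling

n = 32
n = pool_out(conv_out(n, 5), 2)   # conv1 + pool: 32 -> 28 -> 14
n = pool_out(conv_out(n, 5), 2)   # conv2 + pool: 14 -> 10 -> 5
print(n, 16 * n * n)              # 5 400
```

So fc1 receives 16 channels of 5x5 feature maps, i.e. 400 flattened features.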
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
"""
Explanation: You just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. --ah that makes sense-- You can use any of the Tensor operations in the forward function.
The learnable parameters of a model are returned by net.parameters()
End of explanation
"""
input = Variable(torch.randn(1, 1, 32, 32)) # batch_size x channels x height x width
out = net(input)
print(out)
"""
Explanation: The input to the forward is an autograd.Variable, and so is the output. NOTE: Expected input size to this net (LeNet) is 32x32. To use this network on MNIST data, resize the images from the dataset to 32x32.
End of explanation
"""
net.zero_grad()
out.backward(torch.randn(1, 10))
"""
Explanation: Zero the gradient buffers of all parameters and backprops with random gradients:
End of explanation
"""
output = net(input)
target = Variable(torch.arange(1, 11))  # a dummy target, for example
target = target.view(1, -1)  # reshape to match the output's (1, 10) shape
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
"""
Explanation: !NOTE¡: torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are mini-batches of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Widgth.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
Before proceeding further, let's recap all the classes we've seen so far.
Recap:
* torch.Tensor - A multi-dimensional array.
* autograd.Variable - Wraps a Tensor and records the history of operations applied to it. Has the same API as a Tensor, with some additions like backward(). Also holds the gradient wrt the tensor.
* nn.Module - Nerual network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
* nn.Parameter - A kind of Variable, that's automatically registered as a parameter when assigned as an attribute to a Module.
* autograd.Function - Implements forward and backward definitions of an autograd operation. Every Variable operation creates at least a single Function node that connects to functions that created a Variable and encodes its history.
At this point, we've covered:
* Defining a neural network
* Processing inputs and calling backward.
Still Left:
* Computing the loss
* Updating the weights of the network
2. Loss Function
A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target.
There are several different loss functions under the nn package. A simple loss is: nn.MSELoss which computes the mean-squared error between input and target.
For example:
End of explanation
"""
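The arithmetic behind nn.MSELoss is simple enough to write out in plain Python (a sketch of the default mean reduction, not the torch implementation):

```python
def mse(output, target):
    """Mean-squared error between two equal-length sequences."""
    return sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)

print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3
```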
print(loss.grad_fn) # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
"""
Explanation: Now, if you follow loss in the backward direction, using its .grad_fn attribute, you'll see a graph of computations that looks like this:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
So, when we call loss.backward(), the whole graph is differentiated wrt the loss, and all Variables in the graph will have their .grad Variable accumulated with the gradient.
For illustration, let's follow a few steps backward:
End of explanation
"""
net.zero_grad() # zeroes gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
"""
Explanation: 3. Backpropagation
To backpropagate the error all we have to do is loss.backward(). You need to clear the existing gradients though, or else gradients will be accumulated.
Now we'll call loss.backward(), and have a look at conv1's bias gradients before and after the backward.
End of explanation
"""
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
"""
Explanation: Now we've seen how to use loss functions.
Read Later:
The neural network package contains various modules and loss functions that form the building blocks of deep neural networks. A full list with documentation is here.
The only thing left to learn is:
* updating the weights of the network
4. Update the Weights
The simplest update rule used in practice is Stochastic Gradient Descent (SGD):
weight = weight - learning_rate * gradient
We can implement this in simple Python code:
End of explanation
"""
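Stripped of tensors, the update rule itself is just a few lines of plain Python (a sketch of vanilla SGD, not of torch.optim):

```python
def sgd_step(weights, grads, lr=0.01):
    """One in-place SGD update: w <- w - lr * grad."""
    for i, g in enumerate(grads):
        weights[i] -= lr * g
    return weights

w = [1.0, -0.5]
sgd_step(w, [0.5, -2.0], lr=0.1)
print(w)  # approximately [0.95, -0.3]
```

Variants like momentum-SGD or Adam only change how the step is computed from the gradient, not this basic subtract-scaled-gradient structure.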
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
"""
Explanation: However, as you use neural networks, you want to use various different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc. To enable this, we built a small package: torch.optim that implements all these methods. Using it is very simple:
End of explanation
"""
import torch
import torchvision
import torchvision.transforms as transforms
"""
Explanation: !NOTE¡: Observe how gradient buffers had to be manually set to zero using optimizer.zero_grad(). This is because gradients are accumulated as explained in the Backprop section.
IV. Training a Classifier
This is it. You've seen how to define neural networks, compute loss, and make updates to the weights of a network.
Now you might be thinking..
1. What about Data?
Generally, when you have to deal with image, text, audio, or video data, you can use standard Python packages that load data into a numpy array. Then you can convert this array into a torch.*Tensor.
For images, packages such as Pillow (PIL) and OpenCV are useful.
For audio, packages usch as scipy and librosa
For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful.
Specifically for vision, we have created a package called torchvision, that has data loaders for common datasets such as ImagNet, CIFAR10, MNIST, etc., and data transformers for images, viz., torhvision.datasets and torch.utils.data.DataLoader.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial we'll use the CIFAR10 dataset. It has the classes ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images, 32x32 pixels in size.
<img src="http://pytorch.org/tutorials/_images/cifar10.png" alt=CIFAR10 grid>
2. Training an Image Classifier
We'll do the following steps in order:
Load and normalize the CIFAR10 training and test datasets using torchvision
Define a Convolutional Neural Network
Define a loss function
Train the network on training data
Test the network on test data
2.1 Loading and Normalizing CIFAR10
Using torchvision, it's extremely easy to load CIFAR10:
End of explanation
"""
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5,0.5,0.5), (0.5,0.5,0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
"""
Explanation: The output of torchvision datasets are PIL-Image images of range [0,1]. We transform them to Tensors of normalized range [-1,1]:
End of explanation
"""
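The Normalize((0.5,0.5,0.5), (0.5,0.5,0.5)) transform computes (x - mean) / std per channel, which is what maps [0, 1] to [-1, 1]. A quick sanity check of that arithmetic (a plain-Python sketch, not PyTorch code):

```python
def normalize(x, mean=0.5, std=0.5):
    # Same per-channel arithmetic that transforms.Normalize applies
    return (x - mean) / std

low, mid, high = normalize(0.0), normalize(0.5), normalize(1.0)  # -1.0, 0.0, 1.0
```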
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # un-normalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() in older PyTorch versions
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
"""
Explanation: Let's view some of the training images:
End of explanation
"""
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # ConvNet
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10) # classification layer
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16*5*5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
"""
Explanation: 2.2 Define a Convolutional Neural Network
Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel as originally defined).
End of explanation
"""
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
"""
Explanation: 2.3 Define a Loss Function and Optimizer
We'll use Classification Cross-Entropy Loss and SGD with Momentum
End of explanation
"""
for epoch in range(2): # loop over dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
#wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch+1}, {i+1:5d}] loss: {running_loss/2000:.3f}')
running_loss = 0.0
print('Finished Training')
# Python 3.6 string formatting refresher
tmp = np.random.random()
print("%.3f" % tmp)
print("{:.3f}".format(tmp))
print(f"{tmp:.3f}")
print("%d" % 52)
print("%5d" % 52)
print(f'{52:5d}')
"""
Explanation: 2.4 Train the Network
This is when things start to get interesting. We simply have to loop over our data iterator, feed the inputs to the network, and optimize.
End of explanation
"""
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() in older PyTorch versions
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
print('GroundTruth: ', ' '.join(f'{classes[labels[j]]:>5}' for j in range(4)))
"""
Explanation: 2.5 Test the Network on Test Data
We trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.
We'll check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.
Okay, first step. Let's display an image from the test set to get familiar.
End of explanation
"""
outputs = net(Variable(images))
"""
Explanation: Okay, now let's see what the neural network thinks these examples above are:
End of explanation
"""
_, predicted = torch.max(outputs.data, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
"""
Explanation: The outputs are energies (confidences) for the 10 classes. Let's get the index of the highest confidence:
End of explanation
"""
correct = 0
total = 0
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
print(f'Accuracy of network on 10,000 test images: {np.round(100*correct/total,2)}%')
"""
Explanation: Seems pretty good.
Now to look at how the network performs on the whole dataset.
End of explanation
"""
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
for data in testloader:
images, labels = data
outputs = net(Variable(images))
_, predicted = torch.max(outputs.data, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i]
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
"""
Explanation: That looks way better than chance for 10 different choices (10%). Seems like the network learnt something.
What classes performed well and which didn't?:
End of explanation
"""
net.cuda()
"""
Explanation: Cool, so what next?
How do we run these neural networks on the GPU?
3. Training on GPU
Just like how you transfer a Tensor onto a GPU, you transfer the neural net onto the GPU. This will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
End of explanation
"""
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
"""
Explanation: Remember that you'll have to send the inputs and targets at every step to the GPU too:
End of explanation
"""
model.cuda()
"""
Explanation: Why don't we notice a MASSIVE speedup compared to CPU? Because the network is very small.
Exercise: Try increasing the width of your network (argument 2 of the first nn.Conv2d, and argument 1 of the second nn.Conv2d - they need to be the same number), see what kind of speedup you get.
Goals achieved:
* Understanding PyTorch's Tensor library and neural networks at a high level.
* Train a small neural network to classify images
4. Training on Multiple GPUs
If you want to see even more MASSIVE speedups using all your GPUs, please checkout Optional: Data Parallelism.
5. Where Next?
Train neural nets to play video games
Train a SotA ResNet network on ImageNet
Train a face generator using Generative Adversarial Networks
Train a word-level language model using Recurrent LSTM networks
More examples
More tutorials
Discuss PyTorch on the Forums
Chat with other users on Slack
V. Optional: Data Parallelism
In this tutorial we'll learn how to use multiple GPUs using DataParallel.
It's very easy to use GPUs with PyTorch. You can put the model on a GPU:
End of explanation
"""
mytensor = my_tensor.cuda()
"""
Explanation: Then, you can copy all your tensors to the GPU:
End of explanation
"""
model = nn.DataParallel(model)
"""
Explanation: Please note that just calling my_tensor.cuda() won't copy the tensor to the GPU in place. You need to assign the result to a new tensor and use that tensor on the GPU.
It's natural to execute your forward and backward propagations on multiple GPUs. However, PyTorch will only use one GPU by default. You can easily run your operations on multiple GPUs by making your model run in parallel using DataParallel:
End of explanation
"""
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader
# Parameters and DataLoaders
input_size = 5
output_size = 2
batch_size = 30
data_size = 100
"""
Explanation: That's the core behind this tutorial. We'll explore it in more detail below.
1. Imports and Parameters
Import PyTorch modules and define parameters.
End of explanation
"""
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
batch_size=batch_size, shuffle=True)
"""
Explanation: 2. Dummy DataSet
Make a dummy (random) dataset. You just need to implement the __getitem__ method.
End of explanation
"""
class Model(nn.Module):
# Our model
def __init__(self, input_size, output_size):
super(Model, self).__init__()
self.fc = nn.Linear(input_size, output_size)
def forward(self, input):
output = self.fc(input)
        print(" In Model: input size", input.size(),
"output size", output.size())
return output
"""
Explanation: 3. Simple Model
For the demo, our model just gets an input, performs a linear operation, and gives an output. However, you can use DataParallel on any model (CNN, RNN, CapsuleNet, etc.)
We've placed a print statement inside the model to monitor the size of input and output tensors. Please pay attention to what is printed at batch rank 0.
End of explanation
"""
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
model = nn.DataParallel(model)
if torch.cuda.is_available():
model.cuda()
"""
Explanation: 4. Create Model and DataParallel
This is the core part of this tutorial. First, we need to make a model instance and check if we have multiple GPUs. If we have multiple GPUs, we can wrap our model using nn.DataParallel. Then we can put our model on the GPUs with model.cuda().
End of explanation
"""
for data in rand_loader:
if torch.cuda.is_available():
input_var = Variable(data.cuda())
else:
input_var = Variable(data)
output = model(input_var)
print("Outside: input size", input_var.size(),
"output_size", output.size())
"""
Explanation: 5. Run the Model
Now we can see the sizes of input and output tensors.
End of explanation
"""
|
SSQ/Coursera-UW-Machine-Learning-Classification | Week 6 PA 1/module-9-precision-recall-assignment-blank.ipynb | mit | import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
"""
Explanation: Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression model.
Explore various evaluation metrics: accuracy, confusion matrix, precision, recall.
Explore how various metrics can be combined to produce a cost of making an error.
Explore precision and recall curves.
Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by firing up GraphLab Create.
Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
"""
products = graphlab.SFrame('amazon_baby.gl/')
"""
Explanation: Load amazon review dataset
End of explanation
"""
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
"""
Explanation: Extract word counts and sentiments
As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:
Remove punctuation.
Remove reviews with "neutral" sentiment (rating 3).
Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
End of explanation
"""
products
"""
Explanation: Now, let's remember what the dataset looks like by taking a quick peek:
End of explanation
"""
train_data, test_data = products.random_split(.8, seed=1)
"""
Explanation: Split data into training and test sets
We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
End of explanation
"""
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
"""
Explanation: Train a logistic regression classifier
We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results.
Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
End of explanation
"""
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
"""
Explanation: Model Evaluation
We will explore the advanced model evaluation concepts that were discussed in the lectures.
Accuracy
One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows:
End of explanation
"""
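The accuracy formula above is straightforward to sketch in plain Python (toy labels for illustration; the notebook itself relies on GraphLab's evaluate, and the notebook's `from __future__ import division` makes `/` true division):

```python
def accuracy(true_labels, predicted_labels):
    # fraction of data points whose prediction matches the true label
    correct = sum(1 for t, p in zip(true_labels, predicted_labels) if t == p)
    return correct / len(true_labels)

acc = accuracy([+1, -1, +1, +1], [+1, -1, -1, +1])  # 3 of 4 correct -> 0.75
```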
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
"""
Explanation: Baseline: Majority class prediction
Recall from an earlier assignment that we used the majority class classifier as a baseline (i.e. reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points.
Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
End of explanation
"""
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
"""
Explanation: Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)?
Confusion Matrix
The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:
+---------------------------------------------+
| Predicted label |
+----------------------+----------------------+
| (+1) | (-1) |
+-------+-----+----------------------+----------------------+
| True |(+1) | # of true positives | # of false negatives |
| label +-----+----------------------+----------------------+
| |(-1) | # of false positives | # of true negatives |
+-------+-----+----------------------+----------------------+
To print out the confusion matrix for a classifier, use metric='confusion_matrix':
End of explanation
"""
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
"""
Explanation: Quiz Question: How many predicted values in the test set are false positives?
Computing the cost of mistakes
Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)
Suppose you know the costs involved in each kind of mistake:
1. \$100 for each false positive.
2. \$1 for each false negative.
3. Correctly classified reviews incur no cost.
Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?
Precision and Recall
You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in:
$$
[\text{precision}] = \frac{[\text{# positive data points with positive predictions}]}{\text{[# all data points with positive predictions]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]}
$$
So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher.
First, let us compute the precision of the logistic regression classifier on the test_data.
End of explanation
"""
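The cost of mistakes described above is just a weighted sum over confusion-matrix counts. With hypothetical counts (not the actual values from this model), the calculation looks like:

```python
# Hypothetical confusion-matrix counts, for illustration only
false_positives = 10
false_negatives = 25

cost = false_positives * 100 + false_negatives * 1  # $100 per FP, $1 per FN
```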
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
"""
Explanation: Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz)
A complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:
$$
[\text{recall}] = \frac{[\text{# positive data points with positive predictions}]}{\text{[# all positive data points]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]}
$$
Let us compute the recall on the test_data.
End of explanation
"""
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
...
"""
Explanation: Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?
Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?
Precision-recall tradeoff
In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.
Varying the threshold
False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.
Write a function called apply_threshold that accepts two things
* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1).
The function should return an SArray, where each element is set to +1 or -1 depending on whether the corresponding probability exceeds threshold.
End of explanation
"""
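One way the blank above could be filled in (a sketch, not the official solution; with a GraphLab SArray the same logic would use .apply, as noted in the comment — a plain list is used here for illustration):

```python
def apply_threshold(probabilities, threshold):
    # +1 if the probability is >= threshold, -1 otherwise.
    # With an SArray this would be:
    #     probabilities.apply(lambda p: +1 if p >= threshold else -1)
    return [+1 if p >= threshold else -1 for p in probabilities]

labels = apply_threshold([0.3, 0.5, 0.95], 0.9)  # [-1, -1, +1]
```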
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
"""
Explanation: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
End of explanation
"""
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
"""
Explanation: Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9?
Exploring the associated precision and recall as the threshold varies
By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
End of explanation
"""
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
"""
Explanation: Quiz Question (variant 1): Does the precision increase with a higher threshold?
Quiz Question (variant 2): Does the recall increase with a higher threshold?
Precision-recall curve
Now, we will explore a range of threshold values, compute the precision and recall scores, and then plot the precision-recall curve.
End of explanation
"""
precision_all = []
recall_all = []
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
"""
Explanation: For each of the values of threshold, we compute the precision and recall scores.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
"""
Explanation: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
End of explanation
"""
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
"""
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier.
Evaluating specific search terms
So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.
Precision-Recall on all baby related items
From the test set, select all the reviews for all products with the word 'baby' in them.
End of explanation
"""
probabilities = model.predict(baby_reviews, output_type='probability')
"""
Explanation: Now, let's predict the probability of classifying these reviews as positive:
End of explanation
"""
threshold_values = np.linspace(0.5, 1, num=100)
"""
Explanation: Let's plot the precision-recall curve for the baby_reviews dataset.
First, let's consider the following threshold_values ranging from 0.5 to 1:
End of explanation
"""
precision_all = []
recall_all = []
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = ...
# Calculate the precision.
# YOUR CODE HERE
precision = ...
# YOUR CODE HERE
recall = ...
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
"""
Explanation: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
End of explanation
"""
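One way the loop body above could be completed is with apply_threshold plus graphlab.evaluation.precision and graphlab.evaluation.recall, exactly as in the earlier threshold sweep. The same per-threshold logic in plain Python, with toy values (a sketch, not the official solution):

```python
def precision_recall_at(probabilities, true_labels, threshold):
    # Plain-Python stand-in for graphlab.evaluation.precision/recall
    preds = [+1 if p >= threshold else -1 for p in probabilities]
    tp = sum(1 for p, t in zip(preds, true_labels) if p == +1 and t == +1)
    fp = sum(1 for p, t in zip(preds, true_labels) if p == +1 and t == -1)
    fn = sum(1 for p, t in zip(preds, true_labels) if p == -1 and t == +1)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

p, r = precision_recall_at([0.2, 0.6, 0.8, 0.95], [-1, -1, +1, +1], 0.5)
```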
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
"""
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?
Finally, let's plot the precision recall curve.
End of explanation
"""
|
LEX2016WoKaGru/pyClamster | examples/example_notebook.ipynb | gpl-3.0 | %matplotlib inline
import os
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import logging
import pyclamster
import pickle
import scipy
import scipy.misc
from skimage.feature import match_template
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
"""
Explanation: This notebook shows how a cloud camera can be calibrated with pyclamster using sun positions.
End of explanation
"""
filename = "../examples/calibration/wolf-3-calibration.pk"
calibration = pickle.load(open(filename, 'rb'))
cal_coords = calibration.create_coordinates()
cal_coords.z = 2000
plt.subplot(221)
plt.title("elevation on the image [deg]")
plt.imshow(cal_coords.elevation*360/(2*np.pi))
plt.colorbar()
plt.subplot(222)
plt.title("azimuth on the image [deg]")
plt.imshow(cal_coords.azimuth*360/(2*np.pi))
plt.colorbar()
plt.subplot(223)
plt.title("[z=2000 plane]\nreal-world x on the image [m]")
plt.imshow(cal_coords.x)
plt.colorbar()
plt.subplot(224)
plt.title("[z=2000 plane]\nreal-world y on the image [m]")
plt.imshow(cal_coords.y)
plt.colorbar()
plt.tight_layout()
"""
Explanation: Load pickled coordinates for the first Hungriger Wolf camera
End of explanation
"""
base_folder = "../"
image_directory = os.path.join(base_folder, "examples", "images", "wolf")
trained_models = os.path.join(base_folder, "trained_models")
good_angle = 45
center = int(1920/2)
good_angle_dpi = int(np.round(1920 / 180 * good_angle))
denoising_ratio = 10
#all_images = glob.glob(os.path.join(image_directory, "Image_20160531_114000_UTCp1_*.jpg"))
#print(all_images)
all_images = [
os.path.join(image_directory, "Image_20160531_114100_UTCp1_3.jpg"),
os.path.join(image_directory, "Image_20160531_114100_UTCp1_4.jpg")]
kmeans = pickle.load(open(os.path.join(trained_models, "kmeans.pk"), "rb"))
"""
Explanation: Set the paramters for the image clustering
End of explanation
"""
image = pyclamster.Image(all_images[0])
image.coordinates = cal_coords
cutted_image = image.cut([960, 960, 1460, 1460])
plt.title("The raw cutted image")
plt.imshow(cutted_image)
plt.axis('off')
image.data = pyclamster.clustering.preprocess.LCN(size=(25,25,3), scale=False).fit_transform(image.data)
image = image.cut([960, 960, 1460, 1460])
w, h, _ = original_shape = image.data.shape
raw_image = pyclamster.clustering.functions.rbDetection(image.data).reshape((w*h, -1))
"""
Explanation: Load image and preprocess it
End of explanation
"""
label = kmeans.predict(raw_image)
label.reshape((w, h), replace=True)
plt.title("The masked clouds")
plt.imshow(label.labels, cmap='gray')
plt.axis('off')
masks = label.getMaskStore()
"""
Explanation: Predict the labels with the trained model and convert it into a mask store
End of explanation
"""
masks.denoise([0], 1000)
cloud_labels, _ = masks.labelMask([0,])
plt.title("The labeled clouds")
plt.imshow(cloud_labels.labels, cmap='gray')
plt.axis('off')
cloud_store = cloud_labels.getMaskStore()
clouds = [cloud_store.getCloud(cutted_image, [k,]) for k in cloud_store.masks.keys()]
cloud1 = cloud_store.cutMask(cutted_image, [1,])
print(cloud1.data.shape)
"""
Explanation: Denoise the cloud mask and label the clouds
End of explanation
"""
image = pyclamster.Image(all_images[1])
image = image.cut([850, 850, 1460, 1460])
plt.title("The raw cutted image")
plt.imshow(image)
plt.axis('off')
"""
Explanation: Load the second image and cut it
End of explanation
"""
result = match_template(image.data, cloud1.data, pad_input=True, mode='reflect', constant_values=0)
plt.title("The matching result")
plt.imshow(result, cmap='gray')
plt.colorbar()
plt.axis('off')
#print(np.unravel_index(np.argmax(result), result.shape))
"""
Explanation: Move the cloud around to find the best matching point
End of explanation
"""
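The commented-out np.unravel_index(np.argmax(result), result.shape) line above recovers the (row, column) of the strongest match in the correlation map. What that lookup does can be sketched in plain Python:

```python
def argmax_2d(matrix):
    # (row, col) of the largest entry in a 2D list-of-lists, i.e. what
    # np.unravel_index(np.argmax(arr), arr.shape) returns for a 2D array
    best = (0, 0)
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value > matrix[best[0]][best[1]]:
                best = (i, j)
    return best

peak = argmax_2d([[0.1, 0.3], [0.9, 0.2]])  # (1, 0)
```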
image = pyclamster.Image(all_images[1])
image.coordinates = cal_coords # Fake the second coordinate
image.crop([931, 981, 1430, 1480])
cloud2 = pyclamster.matching.Cloud(image)
sCloud = pyclamster.matching.SpatialCloud(pyclamster.matching.Cloud(cloud1), cloud2)
position = sCloud._calc_position()
print(position[2])
plt.title("A faked height map")
plt.imshow(position[2], cmap='gray')
plt.colorbar()
plt.axis('off')
"""
Explanation: Example for SpatialCloud
End of explanation
"""
|
cyucheng/skimr | jupyter/2b_Fix_FullText_Cleanup.ipynb | bsd-3-clause | import os, time, re, pickle
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from datetime import timedelta, date
import urllib
import html5lib
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup, SoupStrainer
"""
Explanation: Fix FullText Cleanup
Noticed that I was getting comments as well as main text from Medium.com scraping! Fix to avoid getting comments.
End of explanation
"""
fhtml = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/fullhtml_20170606_00-54-27_isolate.txt','r')
id_ft = []
fullt = []
# article ID #1005 from 00-54-27 has blockquote highlight! (url: https://themission.co/the-pain-of-being-a-pro-1a41f802614)
# Later, look into using html tags as features?
# num = 0
for line in fhtml:
text = line.strip().split('\t')
fullh = text[2]
fullt_line = []
soup = BeautifulSoup(fullh,'lxml') #,parse_only=content)
txt0 = soup.find('div',attrs={'data-source':'post_page'}) #class_='postArticle-content')
if not txt0:
print('error! skipping '+text[1])
continue
txt1 = txt0.find_all(class_='graf')#'p',class_='graf')
id_ft.append(text[1])
for line in txt1:
txt2 = re.sub('<[^>]+>', '', str(line) )
fullt_line.append(txt2)
# num+=1
# if num == 10:
# break
# print(fullt_line)
fullt.append( fullt_line )
print(id_ft[1006])
print(fullt[1006])
print(id_ft[953])
print(fullt[953])
"""
Explanation: CLEAN UP FULLTEXT
note: fixed to avoid getting comments as well as main text
End of explanation
"""
fhtml2 = open('/Users/clarencecheng/Dropbox/~Insight/skimr/datasets/fullhtml_20170606_10-45-58_edit_isolate.txt','r')
id_ft2 = []
fullt2 = []
# num = 0
for line in fhtml2:
text = line.strip().split('\t')
fullh = text[2]
fullt_line = []
soup = BeautifulSoup(fullh,'lxml') #,parse_only=content)
txt0 = soup.find('div',attrs={'data-source':'post_page'}) #class_='postArticle-content')
if not txt0:
print('error! skipping '+text[1])
continue
txt1 = txt0.find_all(class_='graf')#'p',class_='graf')
id_ft2.append(str(int(text[1])+2384))
for line in txt1:
txt2 = re.sub('<[^>]+>', '', str(line) )
fullt_line.append(txt2)
# print(fullt_line)
fullt2.append( fullt_line )
# num+=1
# if num == 22:
# break
print(id_ft2[9])
print(fullt2[9])
print(id_ft2)
print(str(len(id_ft2)))
print(str(len(fullt2)))
"""
Explanation: CLEAN UP FULLTEXT for second dataset
End of explanation
"""
id_ft_all = id_ft + id_ft2
fullt_all = fullt + fullt2
print(str(len(id_ft)))
print(str(len(id_ft2)))
print(str(len(id_ft_all)))
print(str(len(fullt)))
print(str(len(fullt2)))
print(str(len(fullt_all)))
keys_fullt = id_ft_all
vals_fullt = fullt_all
dict_fullt = dict(zip(keys_fullt,vals_fullt))
"""
Explanation: COMBINE DATASETS 1 AND 2
End of explanation
"""
# retrieve with pickle
data_temp = pickle.load(open('/Users/clarencecheng/Dropbox/~Insight/skimr/data_pd','rb'))
print(data_temp.head())
ids = data_temp['ids']
intersect = set(ids) & set(id_ft_all)
# print(intersect)
# print(len(ids))
# print(len(id_ft_all))
# print(len(intersect))
interx = list(map(int,intersect))
interx.sort()
interx = list(map(str,interx))
# print(interx)
sorted(interx) == sorted(ids)
keys_f = []
vals_f = []
for i in interx:
keys_f.append(i)
vals_f.append(dict_fullt[i])
keys_h = data_temp['ids']
vals_h = data_temp['highlights']
print(sorted(keys_h) == sorted(keys_f))
dict_h = dict(zip(keys_h,vals_h))
dict_f = dict(zip(keys_f,vals_f))
# print(len(keys_h))
# print(len(vals_h))
# print(len(keys_f))
# print(len(vals_f))
vals_all = zip(vals_h, vals_f)
dict_all_new = dict(zip(keys_h, vals_all))
print(dict_all_new['2'])
data_new = pd.DataFrame({'ids':keys_h, 'highlights':vals_h, 'text':vals_f})
"""
Explanation: Use prior 'uniqified' ids to remove duplicates from new cleaned data
(data_pd initially saved from http://localhost:8888/notebooks/skimr/jupyter/2_Data_Exploration.ipynb)
End of explanation
"""
# save dataframe with pickle
# data = pd.DataFrame({'ids':keys_h, 'highlights':vals_h, 'text':vals_f})
fdata = open('/Users/clarencecheng/Dropbox/~Insight/skimr/data_pd_new','wb')
pickle.dump(data_new,fdata)
# # save dict_all with pickle
fdict_all = open('/Users/clarencecheng/Dropbox/~Insight/skimr/dict_all_new','wb')
pickle.dump(dict_all_new, fdict_all)
"""
Explanation: Save dataframe and dict with pickle
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/spots.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Binary with Spots
Setup
IMPORTANT NOTE: if using spots on contact systems or single stars, make sure to use 2.1.15 or later as the 2.1.15 release fixed a bug affecting spots in these systems.
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_feature('spot', component='primary', feature='spot01')
"""
Explanation: Adding Spots
Let's add one spot to each of our stars in the binary.
A spot is a feature, and needs to be attached directly to a component upon creation. Providing a tag for 'feature' is entirely optional - if one is not provided it will be created automatically.
End of explanation
"""
b.add_spot(component='secondary', feature='spot02')
"""
Explanation: As a shortcut, we can also call add_spot directly.
End of explanation
"""
print(b['spot01'])
b.set_value(qualifier='relteff', feature='spot01', value=0.9)
b.set_value(qualifier='radius', feature='spot01', value=30)
b.set_value(qualifier='colat', feature='spot01', value=45)
b.set_value(qualifier='long', feature='spot01', value=90)
"""
Explanation: Relevant Parameters
A spot is defined by the colatitude (where 0 is defined as the North (spin) Pole) and longitude (where 0 is defined as pointing towards the other star for a binary, or to the observer for a single star) of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
End of explanation
"""
b.add_dataset('mesh', times=[0,0.25,0.5,0.75,1.0], columns=['teffs'])
b.run_compute()
afig, mplfig = b.filter(component='primary', time=0.75).plot(fc='teffs', show=True)
"""
Explanation: To see the spot, add a mesh dataset and plot it.
End of explanation
"""
b.set_value('syncpar@primary', 1.5)
b.run_compute(irrad_method='none')
"""
Explanation: Spot Corotation
The positions (colat, long) of a spot are defined at t0 (note: t0@system, not necessarily t0_perpass or t0_supconj). If the stars are not synchronous, then the spots will corotate with the star. To illustrate this, let's set the syncpar > 1 and plot the mesh at three different phases from above.
End of explanation
"""
print "t0 = {}".format(b.get_value('t0', context='system'))
afig, mplfig = b.plot(time=0, y='ws', fc='teffs', ec='None', show=True)
"""
Explanation: At time=t0=0, we can see that the spot is where we defined it: 45 degrees south of the north pole and at a longitude of 90 degrees (where longitude of 0 is defined as pointing towards the companion star at t0).
End of explanation
"""
afig, mplfig = b.plot(time=0.25, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
afig, mplfig = b.plot(time=0.5, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
ax, artists = b.plot(time=0.75, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
"""
Explanation: At a later time, the spot is still technically at the same coordinates, but longitude of 0 no longer corresponds to pointing to the companion star. The coordinate system has rotated along with the asynchronous rotation of the star.
End of explanation
"""
ax, artists = b.plot(time=1.0, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
"""
Explanation: Since the syncpar was set to 1.5, one full orbit later the star (and the spot) has made an extra half-rotation.
End of explanation
"""
|
BibMartin/folium | examples/CRS comparison.ipynb | mit | import json
import sys
sys.path.insert(0,'..')
import folium
print (folium.__file__)
print (folium.__version__)
"""
Explanation: Illustration of CRS effect
Leaflet is able to handle several CRS (coordinate reference systems). It means that depending on the data you have, you may need to use the one or the other.
Don't worry; in practice, almost everyone on the web uses EPSG3857 (the default value for folium and Leaflet). Still, it can be interesting to know the possible values.
End of explanation
"""
geo_json_data = json.load(open('us-states.json'))
"""
Explanation: Let's create a GeoJSON map, and change its CRS.
End of explanation
"""
m = folium.Map([43,-100], zoom_start=4, crs='EPSG3857')
folium.GeoJson(geo_json_data).add_to(m)
m
"""
Explanation: EPSG3857 ; the standard
Provided that our tiles are computed with this projection, this map has the expected behavior.
End of explanation
"""
m = folium.Map([43,-100], zoom_start=4, crs='EPSG4326')
folium.GeoJson(geo_json_data).add_to(m)
m
"""
Explanation: EPSG4326
This projection is a common CRS among GIS enthusiasts according to Leaflet's documentation, and we can see that the result is quite different.
End of explanation
"""
m = folium.Map([43,-100], zoom_start=4, crs='EPSG3395')
folium.GeoJson(geo_json_data).add_to(m)
m
"""
Explanation: EPSG3395
This elliptical projection is almost identical to EPSG3857, though not quite the same.
End of explanation
"""
m = folium.Map([43,-100], zoom_start=4, crs='Simple')
folium.GeoJson(geo_json_data).add_to(m)
m
"""
Explanation: Simple
Finally, Leaflet also gives you the option of using no projection at all. With this, you get flat charts.
This can be useful if you want to use folium to draw non-geographical data.
End of explanation
"""
|
QCaudron/Python-Workshop | 1.BasicPython.ipynb | mit | print("He said, 'what ?'")
"""
Explanation: Introductory Python
Quentin CAUDRON <br /> <br /> Ecology and Evolutionary Biology <br /> <br /> qcaudron@princeton.edu <br /> <br /> @QuentinCAUDRON
This section moves quickly. I'm assuming that everyone speaks at least one programming language well, and / or has introductory Python experience, and so this chapter gives a lightning intro to syntax in Python. The sections are subheaded, but they really overlap quite a lot, so they're there more as a page reference...
Variables and Arithmetic
End of explanation
"""
s = "This is a string."
print(s)
print(type(s))
print(len(s))
s = 42
print(s)
print(type(s))
"""
Explanation: Strings are usually delimited by double quotes ("), but single quotes (') work too. This is useful because you can use one kind of quote inside the other, and it'll still be one big string.
End of explanation
"""
print(s * 2)
print(s + 7)
# Neither statement modifies the variable.
"""
Explanation: Variables don't need to be given a type, as Python is dynamically-typed. That means if I wanted to reuse s as an integer, Python would have no issue with that.
End of explanation
"""
s += 2**3 # s is being incremented by 2^3
print("Same as s = s + 2**3")
print(s)
"""
Explanation: Single-line comments use #.
Arithmetic uses the standard operators : +, -, *, /.
You can take powers using **.
Python also allows += syntax :
End of explanation
"""
print(s == 42)
print(s == 50)
print(s > 10)
"""
Explanation: This statement is equivalent to saying s = s + 2**3, it's just shorthand. Also works with
-=, *=, /=, **=
End of explanation
"""
x = "Blah"
print(x + x)
print(len(x))
"""
Explanation: The == operator is the comparison operator. Here, we also see Python's syntax for logical statements : True and False. As with any programming syntax, capitalisation is important. In Python, 1 is also True, and 0 is also False.
End of explanation
"""
mylist = [1, 2.41341]
mylist.append("We can mix types !")
print(mylist)
print(type(mylist))
"""
Explanation: Strings can be concatenated using the + operator. The len() function returns the length of a string.
Lists
End of explanation
"""
print(mylist, "\n")
print(mylist[0])
print(mylist[1])
print(mylist[2])
"""
Explanation: Python accesses elements in lists from 0, not from 1 as in Matlab or R. This will be familiar to C and Java users.
.append is a method of the list class - it's a function that belongs to the list object, and so you can call it directly from the object itself. This particular function appends something to the end of the list.
End of explanation
"""
print("Length is {} long.\n".format(len(mylist)))
print("There are {} ones in this list.\n".format(mylist.count(1)))
mylist.reverse()
print("Reversed ! {}".format(mylist))
"""
Explanation: Lists have several methods ( count, sort, reverse, pop, insert, remove, ... ). Here are a few.
End of explanation
"""
for i in mylist :
print(i)
print("Hello\n")
print("Finished")
"""
Explanation: Any thoughts on why len() is a global function in Python, and not a method of the list object ?
Control Structures
Python objects like lists are iterables. That is, we can directly iterate over them :
End of explanation
"""
from __future__ import braces
"""
Explanation: Note the indentation. Loops in Python don't get delimited by brackets like in C or R. Each block gets its own indentation.
Typically, people use four spaces (PEP 8's recommendation) or a tab, but you can use any amount of whitespace you want as long as you are consistent. To end the loop, simply unindent. We'll see that in a few lines.
Users of languages like C or Java, where code blocks are delimited by curly braces, sometimes ask that they be made available in Python.
for i in range {
do something to i
}
Python's __future__ module has already taken care of this.
End of explanation
"""
print(1 in mylist)
print(2 in mylist)
"""
Explanation: The keyword in can also be used to check whether something is in a container :
End of explanation
"""
for i in range(len(mylist)) :
print(i, mylist[i])
"""
Explanation: If you wanted to loop by indexing the list, we can use range(), which, in its simplest ( single-argument ) form, returns a list from 0 to that element minus 1.
End of explanation
"""
for index, value in enumerate(mylist) :
print("Element number {} in the list has the value {}".format(index, value))
"""
Explanation: Another way to do this is the enumerate function :
End of explanation
"""
x = 5
if x > 3 :
print("x is greater than 3.")
elif x == 5 :
print("We aren't going to see this. Why ?")
else :
print("x is not greater than 3.")
print("We can see this, it's not in the if statement.")
"""
Explanation: What about if statements ?
End of explanation
"""
for outer in range(1, 3) :
print("BIG CLICK, outer loop change to {}".format(outer))
for inner in range(4) :
print("*little click*, outer is still {}, and inner is {}.".format(outer, inner))
print("I'm done here.")
"""
Explanation: Notice how the contents of the while loop are indented, and then code that is outside the loop continues unindented below.
Here's a nested loop to clarify :
End of explanation
"""
myint = 2
myfloat = 3.14
print(type(myint), type(myfloat))
# Multiplying an int with a float gives a float : the int was promoted.
print(myint * myfloat)
print(type(myint * myfloat))
# A minor difference between Python 2 and Python 3 :
print(7 / 3)
# Py2 : 2
# Py3 : 2.3333
# In Python 2, operations between same type gives the same type :
print(type(7 / 3))
# Py2 : <type 'int'>
# Py3 : <class 'float'>
# Quick hack with ints to floats - there's no need to typecast, just give it a float
print(float(7) / 3)
print(7 / 3.0)
# In Python 3, this is handled "correctly"; you can use // as integer division
print(7 // 3)
# Quick note for Py2 users - see https://www.python.org/dev/peps/pep-0238/
from __future__ import division
print(7 / 3)
"""
Explanation: Here, we used range() with two arguments. In Python 2, it generates a list from the first argument to the second argument minus 1. In Python 3, it returns an immutable iterable, but you can cast it to a list by calling something like list(range(5)). Also, note that we can feed the print function several things to print, separated by a comma.
Interacting Between Different Variable Types
Beware of integer division with Python 2. Unlike R, Python 2 doesn't assume that everything is a float unless explicitly told; it recognises that 2 is an integer, and this can be good and bad. In Python 3, we don't need to worry about this; the following code was run under a Python 3 kernel, but test it under Python 2 to see the difference.
End of explanation
"""
# Create a list of integers 0, 1, 2, 3, 4
A = list(range(5))
print(A)
# Py2 vs Py3 :
# In Py2, range() returns a list already
# Let's replace the middle element
A[2] = "Naaaaah"
print(A)
"""
Explanation: More Lists : Accessing Elements
Let's go back to lists. They're a type of generic, ordered container; their elements can be accessed in several ways.
End of explanation
"""
print(A[1:4])
"""
Explanation: What are the middle three elements ? Let's use the : slicing operator. Like range(), it takes start and stop indices.
[1:4] will give us elements 1, 2, and 3, because we stop at n-1, like with range().
End of explanation
"""
print(A[:2])
print(A[2:])
"""
Explanation: We don't need to give a start or an end :
End of explanation
"""
print(A[len(A)-2:])
print(A[-2:])
"""
Explanation: Can we access the last element ? What about the last two ?
End of explanation
"""
print(list(range(0, 5, 2)))
"""
Explanation: Earlier, we saw that range() can take two arguments : range(start, finish). It can actually take a third : range(start, finish, stride).
End of explanation
"""
print(A[0:5:2])
# Here, it will give us elements 0, 2, 4.
"""
Explanation: The : operator can also do the same.
End of explanation
"""
# This will simply go from start to finish with a stride of 2
print(A[::2])
# And this one, from the second element to finish, with a stride of 2
print(A[1::2])
# So, uh... Reverse ?
print(A[::-1])
"""
Explanation: What if I don't want to explicitly remember the size of the list ?
End of explanation
"""
print(A + A)
print(A * 3)
"""
Explanation: List arithmetic ?
End of explanation
"""
pythonPoints = { "Quentin" : 1./3, "Paul" : 42, "Matthew" : 1e3 }
print(pythonPoints)
# Dictionaries associate keys with values
print(pythonPoints.keys())
print(pythonPoints.values())
# You can access them through their keys
print(pythonPoints["Paul"] * 2)
if "Ruthie" in pythonPoints : # for dicts, "in" checks the keys
print("Ruthie's here too !")
else :
pythonPoints["Ruthie"] = 0
print("Ruthie has {} mad skillz.".format(pythonPoints["Ruthie"]))
"""
Explanation: Dictionaries
Let's take a very brief look at dictionaries. These are unordered containers that you can use to pair elements in, similar to a std::map if you're a C++ coder.
End of explanation
"""
# Let's build a list of elements 1^2, 2^2, ..., 5^2
y = [i**2 for i in range(6)]
print(y)
# Want to keep your index ? Use a dictionary.
squares = { x : x**2 for x in range(6) }
for key, val in squares.items() :
print("{} squared is {}".format(key, val))
# Also useful : zip()
# for key, val in zip(squares.keys(), squares.values()) :
# print("{} : {}".format(key, val))
# We can inline if statements too
print(42 if type(42) is int else 32)
# Note this is interpreted as
# print (something if a, else print something_else)
# and not
# (print something) if a, else (do something_else)
"""
Explanation: There are a couple of other built-in containers, like tuples and sets. I won't go into them here, plainly because I have to use them so rarely that it's not worth the time during the session. If you want to read up : http://docs.python.org/2/tutorial/datastructures.html
List Comprehension and Inlines
End of explanation
"""
# Fibonacci numbers
# OH NO RECURSION
def fib(n) :
if n < 2 :
return n
else :
return fib(n-1) + fib(n-2)
print("Done defining.")
# Testing :
for i in range(10) :
print(fib(i))
"""
Explanation: Functions
End of explanation
"""
def printFib(i) :
print("The {}th number of the Fibonnaci sequence is {}.".format(i, fib(i)))
printFib(20)
"""
Explanation: Looks good. We've just defined a function that takes one argument, n, and returns something based on what n is. The Fibonacci function is quite particular because it calls itself ( recursion ), but it's a small, fun example, so why not.
End of explanation
"""
# I modified this one from Learn Python The Hard Way ( highly recommended ) :
formatstring = "Start {} {}"
print(formatstring.format(formatstring, formatstring))
"""
Explanation: The {} markers are placeholders that .format() fills in with its arguments, in order. Python also supports the older %-style codes: %d is the format code for an integer, %f is for floating point numbers ( floats ), and %s is for strings.
With %-style formatting, to pass in more than one value, you put them into round brackets. This is a tuple, we mentioned it briefly in the last notebook. It's basically just an immutable list, and %-style string formatting takes tuples.
End of explanation
"""
# Written on-the-fly, because I got mad skills
print("This is a haiku\n\tI'm awful at poetry\nWait, this really worked")
"""
Explanation: Also worth knowing are \n and \t : the newline and tab characters, respectively.
End of explanation
"""
myfile = open("example.txt", "r")
for line in myfile :
print(line.strip("\n"))
# There are other options instead of looping over each line.
# You can instead use myfile.read().
# Writing : you can dump a variable using myfile.write()
# after having opened it in "w" mode.
# There are many other ways to read and write files,
# including ways to read and write CSV directly.
"""
Explanation: File IO
A very, very quick look at file IO, because there are packages that can do a better job.
End of explanation
"""
A = list(range(4, 29, 3))
print(A)
B = [a**2 for a in A]
print(B)
B += B[::-1]
print(B)
def addflip(mylist) :
squared = [element**2 for element in mylist]
return squared + squared[::-1]
print(addflip(range(5)))
"""
Explanation: Syntax Exercises
( very easy )
Generate a list A of integers from 4 to 28 inclusive in strides of 3.
range(), and list() if you're in Python 3
Generate a new list B, squaring each element of A.
list comprehensions
** operator
Append a reversed version of B to itself
+ or +=
[::-1] - be careful with reverse, it affects the list directly !
Write a function called addflip that will do all of this and return you the new list.
def
return
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/inm-cm5-0/seaice.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'inm-cm5-0', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: INM
Source ID: INM-CM5-0
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:04
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the rheology, i.e. the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which the basal ocean heat flux is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the methodology for heat diffusion through snow in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
AllenDowney/ThinkBayes2 | examples/lions_soln.ipynb | mit | # Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
"""
Explanation: Think Bayes
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
class LionsTigersBears(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: string, one of 'L', 'T', 'B'
hypo: p1, p2, p3
"""
# Fill this in.
# Solution
class LionsTigersBears(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: string, one of 'L', 'T', 'B'
hypo: p1, p2, p3
"""
p1, p2, p3 = hypo
if data == 'L':
return p1
if data == 'T':
return p2
if data == 'B':
return p3
ps = np.linspace(0, 1, 101);
"""
Explanation: Lions and Tigers and Bears
Suppose we visit a wild animal preserve where we know that the only animals are lions and tigers and bears, but we don't know how many of each there are.
During the tour, we see 3 lions, 2 tigers and one bear. Assuming that every animal had an equal chance to appear in our sample, estimate the prevalence of each species.
What is the probability that the next animal we see is a bear?
Grid algorithm
I'll start with a grid algorithm, enumerating the space of prevalences, p1, p2, and p3, that add up to 1, and computing the likelihood of the data for each triple of prevalences.
End of explanation
"""
from itertools import product
def enumerate_triples(ps):
for p1, p2, p3 in product(ps, ps, ps):
if p1+p2+p3 == 1:
yield p1, p2, p3
"""
Explanation: Here's a simple way to find eligible triplets, but it is inefficient, and it runs into problems with floating-point approximations.
End of explanation
"""
# Solution
from itertools import product
def enumerate_triples(ps):
for p1, p2 in product(ps, ps):
if p1 + p2 > 1:
continue
p3 = 1 - p1 - p2
yield p1, p2, p3
"""
Explanation: As an exercise, write a better version of enumerate_triples.
End of explanation
"""
suite = LionsTigersBears(enumerate_triples(ps));
"""
Explanation: Now we can initialize the suite.
End of explanation
"""
def plot_marginal_pmfs(joint):
pmf_lion = joint.Marginal(0)
pmf_tiger = joint.Marginal(1)
pmf_bear = joint.Marginal(2)
thinkplot.Pdf(pmf_lion, label='lions')
thinkplot.Pdf(pmf_tiger, label='tigers')
thinkplot.Pdf(pmf_bear, label='bears')
thinkplot.decorate(xlabel='Prevalence',
ylabel='PMF')
def plot_marginal_cdfs(joint):
pmf_lion = joint.Marginal(0)
pmf_tiger = joint.Marginal(1)
pmf_bear = joint.Marginal(2)
thinkplot.Cdf(pmf_lion.MakeCdf(), label='lions')
thinkplot.Cdf(pmf_tiger.MakeCdf(), label='tigers')
thinkplot.Cdf(pmf_bear.MakeCdf(), label='bears')
thinkplot.decorate(xlabel='Prevalence',
ylabel='CDF')
"""
Explanation: Here are functions for displaying the distributions
End of explanation
"""
plot_marginal_cdfs(suite)
"""
Explanation: Here are the prior distributions
End of explanation
"""
for data in 'LLLTTB':
suite.Update(data)
"""
Explanation: Now we can do the update.
End of explanation
"""
plot_marginal_cdfs(suite)
"""
Explanation: And here are the posteriors.
End of explanation
"""
suite.Marginal(2).Mean()
"""
Explanation: To get the predictive probability of a bear, we can take the mean of the posterior marginal distribution:
End of explanation
"""
suite.Copy().Update('B')
"""
Explanation: Or we can do a pseudo-update and use the total probability of the data.
End of explanation
"""
from thinkbayes2 import Dirichlet
def DirichletMarginal(dirichlet, i):
return dirichlet.MarginalBeta(i).MakePmf()
Dirichlet.Marginal = DirichletMarginal
"""
Explanation: Using the Dirichlet object
The Dirichlet distribution is the conjugate prior for this likelihood function, so we can use the Dirichlet object to do the update.
The following is a monkey patch that gives Dirichlet objects a Marginal method.
End of explanation
"""
dirichlet = Dirichlet(3)
plot_marginal_cdfs(dirichlet)
"""
Explanation: Here are the priors:
End of explanation
"""
dirichlet.Update((3, 2, 1))
"""
Explanation: Here's the update.
End of explanation
"""
plot_marginal_pmfs(dirichlet)
"""
Explanation: Here are the posterior PDFs.
End of explanation
"""
plot_marginal_cdfs(dirichlet)
"""
Explanation: And the CDFs.
End of explanation
"""
thinkplot.PrePlot(6)
plot_marginal_cdfs(dirichlet)
plot_marginal_cdfs(suite)
"""
Explanation: And we can confirm that we get the same results as the grid algorithm.
End of explanation
"""
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pymc3 as pm
"""
Explanation: MCMC
Exercise: Implement this model using MCMC. You might want to start with this example.
End of explanation
"""
observed = [0,0,0,1,1,2]
k = len(Pmf(observed))
a = np.ones(k)
"""
Explanation: Here's the data.
End of explanation
"""
# Solution
model = pm.Model()
with model:
ps = pm.Dirichlet('ps', a, shape=a.shape)
xs = pm.Categorical('xs', ps, observed=observed, shape=1)
model
# Solution
with model:
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(1000, start=start, step=step, tune=1000)
"""
Explanation: Here's the MCMC model:
End of explanation
"""
pm.traceplot(trace);
"""
Explanation: Check the traceplot
End of explanation
"""
def plot_trace_cdfs(trace):
rows = trace['ps'].transpose()
cdf_lion = Cdf(rows[0])
cdf_tiger = Cdf(rows[1])
cdf_bear = Cdf(rows[2])
thinkplot.Cdf(cdf_lion, label='lions')
thinkplot.Cdf(cdf_tiger, label='tigers')
thinkplot.Cdf(cdf_bear, label='bears')
thinkplot.decorate(xlabel='Prevalence',
ylabel='CDF')
plot_trace_cdfs(trace)
"""
Explanation: And let's see the results.
End of explanation
"""
thinkplot.PrePlot(6)
plot_marginal_cdfs(dirichlet)
plot_trace_cdfs(trace)
"""
Explanation: And compare them to what we got with Dirichlet:
End of explanation
"""
animals = ['lions', 'tigers', 'bears']
c = np.array([3, 2, 1])
a = np.array([1, 1, 1])
warnings.simplefilter('ignore', UserWarning)
with pm.Model() as model:
# Probabilities for each species
ps = pm.Dirichlet('ps', a=a, shape=3)
# Observed data is a multinomial distribution with 6 trials
xs = pm.Multinomial('xs', n=6, p=ps, shape=3, observed=c)
model
with model:
# Sample from the posterior
trace = pm.sample(draws=1000, tune=1000)
pm.traceplot(trace);
thinkplot.PrePlot(6)
plot_marginal_cdfs(dirichlet)
plot_trace_cdfs(trace)
"""
Explanation: Using a Multinomial distribution
Here's another solution that uses a Multinomial distribution instead of a Categorical. In this case, we represent the observed data using just the counts, [3, 2, 1], rather than a specific sequence of observations [0,0,0,1,1,2].
I suspect that this is a better option; because it uses a less specific representation of the data (without losing any information), I would expect the probability space to be easier to search.
This solution is based on this excellent notebook from Will Koehrsen.
End of explanation
"""
summary = pm.summary(trace)
summary.index = animals
summary
"""
Explanation: The results look good. We can use summary to get the posterior means, and other summary stats.
End of explanation
"""
ax = pm.plot_posterior(trace, varnames = ['ps']);
for i, a in enumerate(animals):
ax[i].set_title(a)
"""
Explanation: We can also use plot_posterior to get a better view of the results.
End of explanation
"""
|
adam2392/paremap | paremap_nih_rotation/notebooks/exploratory analysis_old/Robust Spectrotemporal Decomposition.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import scipy as sp
%matplotlib inline
"""
Explanation: Robust Spectrotemporal Decomposition by Iteratively Reweighted Least Squares
Contributed by: Armen Gharibans
Reference: Ba, D., Babadi, B., Purdon, P. L., & Brown, E. N. (2014). Robust spectrotemporal decomposition by iteratively reweighted least squares. Proceedings of the National Academy of Sciences, 111(50), E5336-E5345.
End of explanation
"""
T = 600 #[s]
fs = 500 #[Hz]
f0 = 0.04 #[Hz]
f1 = 10 #[Hz]
f2 = 11 #[Hz]
t = np.linspace(0,T,fs*T)
signal = 10*(np.cos(2*np.pi*f0*t))**8*np.sin(2*np.pi*f1*t) + \
10*np.exp(4*(t-T)/T)*np.cos(2*np.pi*f2*t)
noise = np.random.normal(0,0.3,T*fs)
signal = signal + noise
#PLOT
fig, ax = plt.subplots(nrows=2,ncols=1,figsize=(8,6))
ax[0].plot(t,signal)
ax[0].set_ylabel('Amplitude')
ax[1].plot(t,signal)
ax[1].set_ylabel('Amplitude')
ax[1].set_xlabel('Time (s)')
ax[1].set_xlim([295,305])
ax[1].set_ylim([-12,12])
fig.tight_layout()
"""
Explanation: Toy Example
This example points out the limitations of classical techniques in analyzing time series data. We simulated noisy observations from the linear combination of two amplitude-modulated signals using the following equation:
$y_{t}=10 \cos^{8}\left(2\pi f_{o}t\right) \sin\left(2\pi f_{1}t\right)+10\exp\left(4\frac{t-T}{T}\right)\cos\left(2\pi f_{2}t\right)+v_{t}, \quad$ for $\enspace 0\leq t \leq T$
where $f_{0}=0.04$ Hz, $f_{1}=10$ Hz, $f_{2}=11$ Hz, $T=600$ s, and $\left(v_{t}\right)_{t=1}^{T}$ is independent, identically distributed, zero-mean Gaussian noise with variance set to achieve a signal-to-noise ratio (SNR) of 5 dB.
The simulated data consist of a 10 Hz oscillation whose amplitude is modulated by a slow 0.04 Hz oscillation, and an exponentially growing 11 Hz oscillation.
End of explanation
"""
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(11,4))
im1 = ax[0].specgram(signal,NFFT=1000,Fs=500,noverlap=500,interpolation='none')
ax[0].set_ylim([0,20])
ax[0].set_ylabel('Frequency (Hz)')
ax[0].set_xlabel('Time (s)')
im1[3].set_clim(-40,10)
im2 = ax[1].specgram(signal,NFFT=1000,Fs=500,noverlap=500,interpolation='none')
ax[1].set_xlim([250,350])
ax[1].set_ylim([7,13])
ax[1].set_xlabel('Time (s)')
fig.tight_layout()
im2[3].set_clim(-40,10)
cb = fig.colorbar(im2[3])
"""
Explanation: The following is a spectrogram of the simulated signal that highlights the limitations of classical frequency analysis. The analysis is unable to resolve the closely spaced signals of 10 and 11 Hz in the frequency domain.
End of explanation
"""
# Define Matrix F
numSamples = fs*T #num samples
W = 1000 #window size
K = W #frequency bands
N = numSamples//W #number of windows
F = np.zeros([W,K])
k = np.array(range(1,K//2+1))
l = np.array(range(1,W+1))
for jj in range(0,np.size(k)):
for ii in range(0,np.size(l)):
F[ii,jj] = np.cos(2*np.pi*l[ii]*(k[jj]-1)/K)
F[ii,jj+K//2] = np.sin(2*np.pi*l[ii]*(k[jj]-1)/K)
#plt.imshow(F)
#print(np.shape(F))
"""
Explanation: Robust Spectral Decomposition
The State-Space Model
In this analysis, we will consider a signal $y_{t}$ that is obtained by sampling a noisy, continuous-time signal at rate $f_{s}$ (above the Nyquist rate).
$y_{t}\rightarrow$ discrete-time signal
$t = 1,2,...,T\rightarrow$ samples
$f_{s}\rightarrow$ sampling rate
$W\rightarrow$ arbitrary window length
$N \triangleq \frac{T}{W}\rightarrow$ number of windows
$y_{n} \triangleq \left(y_{\left(n-1\right)W+1},y_{\left(n-1\right)W+2},...,y_{nW}\right)'$ for $n=1,2,...,N$
Consider the following spectrotemporal representation of $y_{n}$ as:
$$y_{n} = \tilde{F}_{n}\tilde{x}_{n} + v_{n}$$
where,
* $\left ( \tilde{F}_{n} \right )_{l,k} \triangleq \exp \left( j2\pi \left( \left(n-1 \right) W+l \right) \frac{\left( k-1 \right)}{K} \right)$
for $l=1,2,...,W$ and $k=1,2,...,K$
* $\tilde{x}_{n} \triangleq \left(\tilde{x}_{n,1},\tilde{x}_{n,2},...,\tilde{x}_{n,K} \right)' $
* $v_{n}\rightarrow$ independent, identically distributed, additive zero-mean Gaussian noise
Equivalently, we can define the linear observation model over a real vector space as follows:
$$y_{n} = F_{n}x_{n}+v_{n}$$
where,
* $\left ( F_{n} \right )_{l,k} \triangleq \cos \left( 2\pi \left( \left(n-1 \right) W+l \right) \frac{\left( k-1 \right)}{K} \right)$ for $l = 1,2,...,W$ and $k=1,2,...,\frac{K}{2}$
* $\left ( F_{n} \right )_{l,k+K/2} \triangleq \sin \left( 2\pi \left( \left(n-1 \right) W+l \right) \frac{\left( k-1 \right)}{K} \right)$ for $l = 1,2,...,W$ and $k=1,2,...,\frac{K}{2}$
* $x_{n} \triangleq \left(x_{n,1},x_{n,2},...,x_{n,K} \right)' $
End of explanation
"""
#initialize
Q = np.eye(K)*0.001
xKalman = np.zeros([K,N+1])
xPredict = np.zeros([K,N+1])
sigKalman = np.zeros([K,K,N+1])
sigPredict = np.zeros([K,K,N+1])
sigKalman[:,:,0] = np.eye(K)
#Kalman Filter
for n in range(0,N):
y = signal[n*W:(n+1)*W]
xPredict[:,n+1] = xKalman[:,n]
sigPredict[:,:,n+1] = sigKalman[:,:,n] + Q
gainK = np.dot(sigPredict[:,:,n+1],F.T).dot(np.linalg.inv(np.dot(F,sigPredict[:,:,n+1]).dot(F.T)+np.eye(K)))
xKalman[:,n+1] = xPredict[:,n+1] + np.dot(gainK,y-np.dot(F,xPredict[:,n+1]))
sigKalman[:,:,n+1] = sigPredict[:,:,n+1] - np.dot(gainK,F).dot(sigPredict[:,:,n+1])
#remove initial conditions
xKalman = xKalman[:,1:N+1]
xPredict = xPredict[:,1:N+1]
sigKalman = sigKalman[:,:,1:N+1]
sigPredict = sigPredict[:,:,1:N+1]
xEst = xKalman[0:K//2,:]-xKalman[K//2:W,:]*1j
xPSD = 10*np.log10(np.abs(xEst)**2)
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(11,4))
im1 = ax[0].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[0].set_ylim([0,20])
ax[0].set_ylabel('Frequency (Hz)')
ax[0].set_xlabel('Time (s)')
im1.set_clim(-40,10)
im2 = ax[1].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[1].set_xlim([250,350])
ax[1].set_ylim([7,13])
ax[1].set_xlabel('Time (s)')
fig.tight_layout()
im2.set_clim(-40,10)
cb = fig.colorbar(im2)
"""
Explanation: The objective is to compute an estimate $\hat{x}$ of $x$ given the data $y$. The component-wise magnitude-squared of $\hat{x}$ gives an estimate of the magnitude spectrum of $y$. By treating $\left(x_{n}\right){n=1}^{N}$ as a sequence of random variables and carefully selecting a prior distribution, a stochastic continuity constraint can be established across time. By imposing a model on the components $\left(x{n,k}\right)_{k=1}^{K}$ for each $n = 1,2,..,N$, sparsity is enforced in the frequency domain. The stochastic continuity constraint can be expressed in the form of the first-order difference equation:
$$x_{n} = x_{n-1} + w_{n}$$
where $w = \left( w_{1}',w_{2}',...,w_{N}'\right)'$ is a random vector. The following joint prior probability density function is used to enforce sparsity in the frequency domain and smoothness in time:
$$\log p_{1}\left(w_{1},w_{2},...,w_{N}\right) = -\alpha \sum_{k=1}^{K} \left( \sum_{n=1}^{N} w_{n,k}^{2} + \epsilon^{2} \right)^\frac{1}{2}+c_{1}$$
where $\alpha > 0$ is a constant and $\epsilon > 0$ is a small constant.
The Inverse Solution
Bayesian estimation is used to compute the robust spectral decomposition of $y$, where the posterior density of $x$ given $y$ fully characterizes the space of inverse solutions. This is computed by solving the following MAP estimation problem:
$$\max_{x_{1},...,x_{N}} -\sum_{n=1}^{N} \frac{1}{2\sigma^{2}} \left \| y_{n} - F_{n}x_{n} \right \|_{2}^{2} + f\left(x_{1},x_{2},...,x_{N}\right)$$
where $f\left(x_{1},x_{2},...,x_{N}\right) \triangleq \log p_{1} \left(x_{1}-x_{0},x_{2}-x_{1},...,x_{N}-x_{N-1}\right)$. This is a strictly concave optimization problem that can be solved using standard techniques. However, these techniques do not scale well with $N$ because of the batch nature of the problem.
Kalman Filter
The Kalman filter solves the least-squares estimation problem recursively, and in a computationally efficient manner. The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (corrupted with some amount of error, including random noise) is observed, these estimates are updated, with more weight being given to estimates with higher certainty.
<img src="http://i.imgur.com/s1YU6Qy.png">
Algorithm:
Initial Conditions:
$x_{0\mid 0} = \left(0,...,0\right)' \in \mathbb{R}^{K}$
$\Sigma_{0\mid 0} = I_{k} \in \mathbb{R}^{KK}$
Filter at time $n=1,2,...,N$:
* $x_{n\mid n-1}=x_{n-1 \mid n-1}$
* $\Sigma_{n\mid n-1}=\Sigma_{n-1\mid n-1}+Q^{\left(l\right)}$
* $K_{n}=\Sigma_{n\mid n-1}F_{n}^{H}\left(F_{n}\Sigma_{n\mid n-1}F_{n}^{H}+\sigma^{2}I\right)^{-1}$
* $x_{n\mid n}=x_{n\mid n-1}+K_{n}\left(y_{n}-F_{n}x_{n\mid n-1}\right)$
* $\Sigma_{n\mid n}=\Sigma_{n\mid n-1}-K_{n}F_{n}\Sigma_{n\mid n-1}$
End of explanation
"""
xSmooth = xKalman.copy() # copy so the filter estimates are not overwritten by the smoother
sigSmooth = sigKalman.copy()
for n in range(N-2,-1,-1):
B = np.dot(sigKalman[:,:,n],np.linalg.inv(sigPredict[:,:,n+1]))
xSmooth[:,n] = xKalman[:,n] + np.dot(B,(xSmooth[:,n+1]-xPredict[:,n+1]))
sigSmooth[:,:,n] = sigKalman[:,:,n] + np.dot(B,(sigSmooth[:,:,n+1]-sigPredict[:,:,n+1])).dot(B.T)
xEst = xSmooth[0:K//2,:]-xSmooth[K//2:W,:]*1j
xPSD = 10*np.log10(np.abs(xEst)**2)
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(11,4))
im1 = ax[0].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[0].set_ylim([0,20])
ax[0].set_ylabel('Frequency (Hz)')
ax[0].set_xlabel('Time (s)')
im1.set_clim(-40,10)
im2 = ax[1].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[1].set_xlim([250,350])
ax[1].set_ylim([7,13])
ax[1].set_xlabel('Time (s)')
fig.tight_layout()
im2.set_clim(-40,10)
cb = fig.colorbar(im2)
"""
Explanation: Kalman Smoother
The Kalman filter is designed for real-time applications. It estimates the properties of a system at a given time using measurments of the system up to that time. However, when a real-time estimate is not needed, the Kalman filter effectively throws away half of the measurement data. The Kalman smoother is an extension of the Kalman filter that uses measurement information from after the time at which state estimates are required as well as before that time.
Algorithm
Smoother at time $n=N-1,N-2,...,1:$
$B_{n}=\Sigma_{n\mid n}\Sigma_{n+1\mid n}^{-1}$
$x_{n\mid N}=x_{n\mid n}+B_{n}\left(x_{n+1\mid N}-x_{n+1\mid n}\right)$
$\Sigma_{n\mid N}=\Sigma_{n\mid n}+B_{n}\left(\Sigma_{n+1\mid N}-\Sigma_{n+1\mid n}\right)B_{n}^{H}$
End of explanation
"""
#Parameters
alpha = 21000
tol = 0.005
maxIter = 10
Q = np.eye(K)*0.001
iter = 1
while (iter <= maxIter):
#Step 1:
#initialize
xKalman = np.zeros([K,N+1])
xPredict = np.zeros([K,N+1])
sigKalman = np.zeros([K,K,N+1])
sigPredict = np.zeros([K,K,N+1])
sigKalman[:,:,0] = np.eye(K)
#Kalman Filter
for n in range(0,N):
y = signal[n*W:(n+1)*W]
xPredict[:,n+1] = xKalman[:,n]
sigPredict[:,:,n+1] = sigKalman[:,:,n] + Q
gainK = np.dot(sigPredict[:,:,n+1],F.T).dot(np.linalg.inv(np.dot(F,sigPredict[:,:,n+1]).dot(F.T)+np.eye(K)))
xKalman[:,n+1] = xPredict[:,n+1] + np.dot(gainK,y-np.dot(F,xPredict[:,n+1]))
sigKalman[:,:,n+1] = sigPredict[:,:,n+1] - np.dot(gainK,F).dot(sigPredict[:,:,n+1])
#remove initial conditions
xKalman = xKalman[:,1:N+1]
xPredict = xPredict[:,1:N+1]
sigKalman = sigKalman[:,:,1:N+1]
sigPredict = sigPredict[:,:,1:N+1]
#Step 2:
#initialize
    xSmooth = xKalman.copy() # copy so the filter estimates are not overwritten
    sigSmooth = sigKalman.copy()
#Kalman Smoother
for n in range(N-2,-1,-1):
B = np.dot(sigKalman[:,:,n],np.linalg.inv(sigPredict[:,:,n+1]))
xSmooth[:,n] = xKalman[:,n] + np.dot(B,(xSmooth[:,n+1]-xPredict[:,n+1]))
sigSmooth[:,:,n] = sigKalman[:,:,n] + np.dot(B,(sigSmooth[:,:,n+1]-sigPredict[:,:,n+1])).dot(B.T)
#Step 4:
if iter > 1 and np.linalg.norm(xSmooth-xPrev,'fro')/np.linalg.norm(xPrev,'fro') < tol:
break
#Step 5: Update Q
Q = np.zeros([K,K])
for k in range(0,K):
qTemp = 0
for n in range(1,N):
qTemp += (xSmooth[k,n]-xSmooth[k,n-1])**2
Q[k,k] = (qTemp + np.finfo(float).eps**2)**(1/2)/alpha
xPrev = xSmooth
iter += 1
print('number of IRLS iterations:', iter-1)
print('sampling rate fs =', fs)
xEst = xSmooth[0:K//2,:]-xSmooth[K//2:W,:]*1j
xPSD = 10*np.log10(np.abs(xEst)**2)
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(11,4))
im1 = ax[0].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[0].set_ylim([0,20])
ax[0].set_ylabel('Frequency (Hz)')
ax[0].set_xlabel('Time (s)')
im1.set_clim(-40,10)
im2 = ax[1].imshow(xPSD,origin='lower',extent=[0,N*W//fs,0,fs//2-5],aspect='auto',interpolation='none')
ax[1].set_xlim([250,350])
ax[1].set_ylim([7,13])
ax[1].set_xlabel('Time (s)')
fig.tight_layout()
im2.set_clim(-40,10)
cb = fig.colorbar(im2)
"""
Explanation: IRLS Algorithm for Spectrotemporal Pursuit
The solution to the optimization problem can be obtained as the limit of a sequence $\left(\hat{x}^{\left(l\right)}\right)_{l=0}^{\infty}$ whose $l^{th}$ element is the solution to the Gaussian MAP estimation problem (constrained least-squares program) of the form:
$$ \max_{x_{1},...,x_{N}} -\sum_{n=1}^{N} \frac{1}{2\sigma^{2}} \left \| y_{n} - F_{n}x_{n} \right \|_{2}^{2} - \sum_{k=1}^{K} \sum_{n=1}^{N} \frac{\left(x_{n,k}-x_{n-1,k}\right)^{2}}{2\left(Q^{\left(l\right)}\right)_{k,k}} $$
where,
$$ \left(Q^{\left(l\right)}\right)_{k,k} = \frac{\left(\sum_{n=1}^{N}\left(\hat{x}_{n,k}^{\left(l-1\right)}-\hat{x}_{n-1,k}^{\left(l-1\right)} \right)^{2}+\epsilon^{2}\right)^{\frac{1}{2}}}{\alpha} \rightarrow K\times K \textrm{ diagonal matrix}$$
This is a quadratic program with strictly concave objective function and block-tridiagonal Hessian. It can be solved iteratively using the following steps:
Input:
* $y \rightarrow$ observations
* $\hat{x}^{\left(0\right)} \in \mathbb{R}^{KN}\rightarrow$ initial guess
* $Q^{\left(0\right)} \rightarrow$ initial state-noise covariance
* $x_{0\mid0}, \Sigma_{0\mid0} \rightarrow$ intial conditions
* $tol \in \left(0,0.01\right) \rightarrow$ tolerance
* $L_{max} \in \mathbb{N}^{+} \rightarrow$ maximum number of iterations
Step 0. Initialize iteration number $l$ to 1
Step 1. Kalman Filter at time $n=1,2,...,N$:
$x_{n\mid n-1}=x_{n-1 \mid n-1}$
$\Sigma_{n\mid n-1}=\Sigma_{n-1\mid n-1}+Q^{\left(l\right)}$
$K_{n}=\Sigma_{n\mid n-1}F_{n}^{H}\left(F_{n}\Sigma_{n\mid n-1}F_{n}^{H}+\sigma^{2}I\right)^{-1}$
$x_{n\mid n}=x_{n\mid n-1}+K_{n}\left(y_{n}-F_{n}x_{n\mid n-1}\right)$
$\Sigma_{n\mid n}=\Sigma_{n\mid n-1}-K_{n}F_{n}\Sigma_{n\mid n-1}$
Step 2. Smoother at time $n=N-1,N-2,...,1$:
$B_{n}=\Sigma_{n\mid n}\Sigma_{n+1\mid n}^{-1}$
$x_{n\mid N}=x_{n\mid n}+B_{n}\left(x_{n+1\mid N}-x_{n+1\mid n}\right)$
$\Sigma_{n\mid N}=\Sigma_{n\mid n}+B_{n}\left(\Sigma_{n+1\mid N}-\Sigma_{n+1\mid n}\right)B_{n}^{H}$
Step 3. Let $\hat{x}_{n}^{\left(l\right)}=x_{n\mid N}$, $n=1,...,N$ and $\hat{x}^{\left(l\right)}=\left(\hat{x}_{1}^{\left(l\right)'},...,\hat{x}_{N}^{\left(l\right)'}\right)'$
Step 4. Stop if $\frac{\left \| \hat{x}^{\left(l \right )}-\hat{x}^{\left(l-1 \right) } \right \|_{2}}{\left \| \hat{x}^{\left(l-1 \right) }\right \|_{2}}<tol$ or $l=L_{max}$
Step 5. Let $l=l+1$, and update the state covariance $Q_n^{\left(l\right)}$
Step 6. Go back to Step 1
Output: $\hat{x}^{\left(L\right)}$, where $L\leq L_{max}$ is the number of the last iteration of the algorithm
End of explanation
"""
|
oroszl/szamprob | notebooks/Package10/mintapelda10.ipynb | gpl-3.0 | # a szokásos rutinok betöltése
%pylab inline
from scipy.integrate import * # load the integration routines
"""
Explanation: Even more scipy ...
In this notebook we get acquainted with two topics which, besides being important in their own right, play a key role in the numerical solution of other problems. In the area of numerical integration we examine a few simple integrals with the quad function of the scipy package. The second topic, the solution of differential equations, can help in the study of numerous physical problems. Just think of it: Newton's laws of classical mechanics, the Maxwell equations of electrodynamics, the Schrödinger equation of quantum mechanics, and the Einstein equations of general relativity are all differential equations!
Numerical integration
In many practical applications one needs the integral of a function, i.e. the area under the function's curve. The integral of many analytic functions can be determined in closed form. In practice, however, the analytic form of the function is often not known. Think, for example, of a noisy measurement. In such cases the integral of the function must somehow be determined from the available measurement points $x_0,x_1,x_2,…x_j,…$ and measured values $f_0,f_1,f_2,…f_j,…$. The value of the integral is then usually obtained by a finite sum, where the summands are some functions of the measurement points and the measured quantities. Below we get to know the quad function of the scipy package, which is suitable for the numerical integration of functions.
1D integral
End of explanation
"""
def f(x):
return (x**2+3*x+2)
"""
Explanation: Let us examine the following simple integral: $$ \int_{-1}^1 (x^2+3x +2)\mathrm{d}x .$$
With a little algebra its value is $14/3\approx 4.66666$. Do we get the same result with the quad function?
First, let us define the function to be integrated.
End of explanation
"""
quad(f,-1,1)
"""
Explanation: Now we can call quad. The first argument is the function to be integrated, and the second and third are the integration limits. The output is two numbers: the first is the estimated value of the integral, the second is the estimated error of the algorithm.
End of explanation
"""
# define the integrand
def gauss(x):
return exp(-x**2)
"""
Explanation: As the result shows, the analytic calculation and the numerical integral agree.
It may happen that the integration limits are infinite. For example, let us examine the area under the Gaussian curve:
$$ \int_{-\infty}^\infty \mathrm{e}^{-x^2}\mathrm{d}x =\sqrt{\pi}$$
End of explanation
"""
quad(gauss,-inf,inf)
sqrt(pi)
"""
Explanation: Infinite integration limits can be specified for the quad function with the inf notation.
End of explanation
"""
def h(x):
return ((x-1.0)**(-2))**(1.0/3.0)
quad(h,0,2)
"""
Explanation: quad coped with the two examples examined above without any particular difficulty. It can happen, however, that the integration runs into problems. A good example of this is when the integrand becomes singular at a point of the integration domain. This does not necessarily mean that the integral does not exist! We can explicitly draw quad's attention to such points with the points keyword.
Let us examine the function $$h(x)=\sqrt[3]{\frac{1}{(x-1)^2}}$$ which diverges at $x=1$:
End of explanation
"""
quad(h,0,2,points=[1.0])
"""
Explanation: quad indeed runs into difficulties when it has to take the reciprocal of 0! Let us now specify $x=1$ as a problematic point:
End of explanation
"""
# define the integrand
def func(x,y):
return cos(x)*exp(-x**2-y**2)
"""
Explanation: This way the integral can now be evaluated nicely.
2D integral
The two-dimensional version of the quad function is dblquad. Let us look at two simple examples of two-dimensional integrals as well!
Integrate the function $$ \cos(x) e^{-(x^2+y^2)} $$ over the following two integration domains:
- a square of unit side length centered at the origin
- the unit disk centered at the origin!
End of explanation
"""
dblquad(func, -1/2, 1/2, lambda x:-1/2, lambda x:1/2)
"""
Explanation: The first parameter of dblquad is again the function to be integrated. The second and third input parameters give the limits of the first variable of the integrand. The fourth and fifth input parameters are the limits of the second integration variable, expressed as functions of the first integration variable. In the simplest case these are constant functions.
The integral over the first domain can therefore be computed as follows:
End of explanation
"""
dblquad(func,-1,1,lambda x:-sqrt(1-x**2),lambda x:sqrt(1-x**2))
"""
Explanation: For the second integration domain, the extreme values of the variable $y$ must be parameterized as functions of the variable $x$. If the integration domain is the disk of unit radius, the lower limit is given by $y(x)=-\sqrt{1-x^2}$ and the upper limit by $y(x)=\sqrt{1-x^2}$:
End of explanation
"""
from scipy.integrate import * # this is needed for the differential equations as well!!
"""
Explanation: Differential equations
As mentioned in the introduction, a significant part of the laws of physics is formulated in the language of differential equations.
Below we get to know the odeint routine, which enables the numerical solution of differential equations.
End of explanation
"""
# define the parameters
epsilon=1
R=1.0e6
C=1.0e-7
"""
Explanation: An equation is called a differential equation if derivatives of the function to be determined appear in it. We arrive at a simple differential equation, for example, by examining how the charge on a capacitor changes in time!
<img src="http://fizipedia.bme.hu/images/e/e0/RC_k%C3%B6r.JPG" width=300></img>
To determine the charge $Q$ accumulated on the capacitor, let us start from Kirchhoff's voltage law, written for the circuit shown in the figure above:
$$ \varepsilon= \underbrace{I R}_{U_R}+ \underbrace{\frac{Q}{C}}_{U_C} = \frac{\mathrm{d}Q}{\mathrm{d}t}R+\frac{Q}{C}$$
The differential equation to be solved is therefore:
$$ \frac{\mathrm{d}Q}{\mathrm{d}t}= \frac{\varepsilon}{R}-\frac{Q}{RC} $$
Let us assume that the capacitor was initially uncharged, i.e. $Q(0)=0$, and let us compute with the parameters $\varepsilon=1\mathrm{V}$, $R=1\mathrm{M}\Omega$ and $C=100\mathrm{nF}$!
End of explanation
"""
def RCkor(q,t):
return epsilon/R-q/(R*C)
"""
Explanation: Besides the parameters, calling the odeint function requires the increment function (i.e. the right-hand side of the equation above); let us define it now:
End of explanation
"""
t=linspace(0,1,1000) # the time instants of interest
q0=0 # initial value of the charge
q=odeint(RCkor,q0,t) # the differential equation is solved here
"""
Explanation: Now we are ready to call the differential equation solver! odeint essentially expects three input parameters. The first is the increment function defined above, the second is the initial value of the function to be determined, and the third is the set of time points where we want to know the solution. The return value of the function is the sought data series itself.
End of explanation
"""
plot(t,q/C,lw=3)
xlabel(r'$t$[s]',fontsize=20)
ylabel(r'$Q/C$[V]',fontsize=20)
grid()
"""
Explanation: Now all that remains is to plot it!
End of explanation
"""
def f(u, t):
    x=u[0] # the first component of u is the displacement
    v=u[1] # the second component of u is the velocity
    return [v,-x] # this is the evaluation of the increment itself
"""
Explanation: Let us look at another example! Let it be a body attached to a spring, which can move along a line without friction.
<img src="https://i1.wp.com/www.paroc.com/knowhow/sound/~/media/Images/Knowhow/Sound/The-ideal-mass-spring-system-3244099.ashx" width=300></img>
According to Newton, the equation of this motion is
$$m\frac{\mathrm{d}^2x}{\mathrm{d}t^2}(t)=-kx(t)$$
Let us rewrite this differential equation, which is second order in time, as two differential equations that are first order in time!
$$\frac{\mathrm{d}x}{\mathrm{d}t}(t)=v(t)$$
$$m \frac{\mathrm{d}v}{\mathrm{d}t}(t)=-k x(t)$$
In general, the order of any higher-order differential equation can be reduced in a similar way. That is, even in the most general case, by introducing new unknown functions we can reduce our problem to the solution of a system of first-order differential equations!
Let us examine the case $m=k=1$!
Now our increment function has to handle a two-element vector!
End of explanation
"""
t=linspace(0,20,1000); # the time interval
u0 = [1,0] # initial values for x and v
u=odeint(f,u0,t) # the differential equation is solved here
plot(t,u[:,0],label=r'$x(t)$ position')
plot(t,u[:,1],label=r'$v(t)$ velocity')
legend(fontsize=20)
xlabel(r'$t$[s]',fontsize=20)
grid()
"""
Explanation: Let us solve the equations so that the initial displacement is 1 and the initial velocity is zero!
End of explanation
"""
|
indranilsinharoy/PyZDDE | Examples/IPNotebooks/03 Generation of Speckle using Zemax Grid Sag Surface.ipynb | mit | from __future__ import division, print_function
import os as os
import collections as co
import numpy as np
import math as math
import scipy.stats as sps
import scipy.optimize as opt
import matplotlib.pyplot as plt
from IPython.display import Image as ipImage
import pyzdde.zdde as pyz
import pyzdde.zfileutils as zfu
# The following python modules are available at
# 1. https://github.com/indranilsinharoy/iutils/blob/master/optics/fourier.py
# 2. https://github.com/indranilsinharoy/iutils/blob/master/optics/beam.py
import iutils.optics.fourier as fou
import iutils.optics.beam as bou
%matplotlib inline
zmxfile = 'SpeckleUsingPOP_GridSagSurf.zmx'
lensPath = os.path.join(os.getcwd().split('Examples')[0], 'ZMXFILES')
lensFile = os.path.join(lensPath, zmxfile)
ln = pyz.createLink()
ln.zLoadFile(lensFile)
# Surfaces in the LDE @ Zemax.
ln.ipzGetLDE()
# Define surface number constants to remember
SURF_BEAMSTART = 1
SURF_DIFFSMOOTHFACE = 2 # Smooth face of the diffuser
SURF_GRIDSAG = 3 # Rough face of the diffuser
SURF_IMA = 4
# Get wavelength (Zemax returns in units of microns)
wavelength = ln.zGetWave(ln.zGetPrimaryWave()).wavelength/1000.0
print(u'Wavelength, \u03BB = {:.3e} mm'.format(wavelength))
# Set sigma, sampling, and semi-diameter of the grid sag surface
# the semi-diameter must match that of the grid sag surface in LDE
#sigma = 5*wavelength # set sigma later
nx, ny = 401, 401
semidia = 5.0
# Start out with a zero height profile surface
comment = 'zero height profile sag'
filename = os.path.join(os.path.expandvars("%userprofile%"), 'Documents',
'Zemax\\Objects\\Grid Files', 'gridsag_zeroheight.DAT')
# the function randomGridSagFile() in pyzdde/zfileutils generates grid sag ASCII
# file with Gaussian distributed sag profile
z, sagfile = zfu.randomGridSagFile(mu=0, sigma=np.inf, semidia=semidia, nx=nx,
ny=ny, fname=filename, comment=comment)
# load the zero height grid sag surface file in to the extra data editor
ln.zImportExtraData(surfNum=SURF_GRIDSAG, fileName=sagfile)
"""
Explanation: Generation of speckle pattern using Zemax's Grid sag surface
<img src="https://raw.githubusercontent.com/indranilsinharoy/PyZDDE/master/Doc/Images/articleBanner_03_speckleGridSag.png" height="230">
Please feel free to e-mail any corrections, comments and suggestions to the author (Indranil Sinharoy)
Last updated: 12/27/2015
License: Creative Commons Attribution 4.0 International
References
Statistical properties of Laser Speckle, J. Goodman
Laser Doppler and time-varying speckle: a reconciliation, J. David Briers
End of explanation
"""
# Function to set the POP analysis parameters
def set_POP(ln, data='irr', wide=50.0, waist=1.0, start=1, end=None):
"""helper function to set POP
Parameters
----------
ln : object
data : string
the display data type. 'irr' or 'phase'
wide : float
initial width and height of the region to display. See Note 2.
waist : float
beam radius at the waist (in mm)
start : integer
start surface
end : integer
end surface
Return
------
settinsfilename : string
CFG settings file name
Note
----
1. Use the same name for the CFG settings file. This helps POPD to return the correct
values of parameters (mostly)
2. The ``auto`` parameter in the function ``zSetPOPSettings()`` does not seem to work
as expected. Hence, we need to provide the ``widex`` and ``widey`` parameters
explicitly. In order to get the appropriate values for these parameters, use the
"Automatic" button in Zemax POP analysis window for the particular design file
"""
setfile = ln.zGetFile().lower().replace('.zmx', '.CFG')
datatype = 1 if data == 'phase' else 0
GAUSS_WAIST, WAIST_X, WAIST_Y = 0, 1, 2
S_1024, S_2048 = 6, 7
cfgfile = ln.zSetPOPSettings(data=datatype, settingsFile=setfile,
startSurf=start, endSurf=end, field=1, wave=1,
beamType=GAUSS_WAIST,
paramN=((WAIST_X, WAIST_Y), (waist, waist)),
sampx=S_2048, sampy=S_2048, widex=wide, widey=wide)
return cfgfile
# Helper functions to display POP display data
def plot_pop_display_data(popdata, height, width, title):
"""plot pop display data retrieved from Zemax application
using `zGetPOP()` function
Parameters
----------
popdata : list
list of speckle patterns or pop display data arrays
height : list
list of height of the speckle patterns
width : list
list of width of the speckle patterns
Returns
-------
None
Notes
-----
The labels of the plot extents are not guaranteed to be exact
"""
numPatts = len(popdata)
figHeight = 5
figWidth = 1.3*figHeight*numPatts if numPatts > 1 else figHeight
fig = plt.figure(figsize=(figWidth, figHeight))
for i, (pat, h, w, t) in enumerate(zip(popdata, height, width, title), 1):
ax = fig.add_subplot(1, numPatts, i)
ax.imshow(pat, cmap=plt.cm.plasma, extent=(-w/2, w/2, h/2, -h/2))
ax.set_xlabel('mm'); ax.set_ylabel('mm')
ax.set_title(t, y=1.02)
fig.tight_layout()
plt.show()
# helper function to zoom
def zoom(img, amount=2):
"""simple function for cropping the image data for display
Parameters
----------
img : ndarray
2-dim ndarray
amount : float
amount of zooming
"""
r, c = img.shape
newR = r//amount
newC = c//amount
startR, startC = (r - newR)//2, (c - newC)//2
return img[startR:startR+newR, startC:startC+newC]
# the `wide` value was determined from the Zemax main applicaiton's POP analysis
# setting by clicking on "Automatic"
beamRadius = 1.0
cfgfile = set_POP(ln, data='irr', wide=80.0, waist=beamRadius, start=SURF_BEAMSTART, end=SURF_IMA)
"""
Explanation: Set up POP analysis
End of explanation
"""
def rayleigh_fraunhofer(beamRadius, wavelength):
"""print the rayleigh range and Fraunhofer distance in mm
"""
beamDia = 2.0*beamRadius
rr = bou.GaussianBeam(waistDiameter=beamDia, wavelength=wavelength).rayleigh
fd = fou.fraunhofer_distance(d=beamDia, wavelen=wavelength)
print('Rayleigh range = {:2.2f} mm'.format(rr))
print('Fraunhofer distance (far-field) = {:2.2f} mm'.format(fd))
rayleigh_fraunhofer(beamRadius, wavelength)
# run the POP analysis and retrieve the display data. Note that this analysis
# takes a little more than a minute because of the large number of samples
popinfo, irrdata = ln.zGetPOP(settingsFile=cfgfile, displayData=True, timeout=3*60)
popinfo
mmPerPxY, mmPerPxX = popinfo.widthY/popinfo.gridY, popinfo.widthX/popinfo.gridX
irradiance = zoom(np.array(irrdata), amount=5.5)
pxY, pxX = irradiance.shape
h, w = mmPerPxY*pxY, mmPerPxX*pxX
plot_pop_display_data([irradiance,], [h,], [w,], ['Initial irradiance',])
"""
Explanation: Rayleigh range and Fraunhofer distances
End of explanation
"""
# Helper function to see the surface statistics
def sag_statistics(sag, sigma=1, wave=1, nbins=100):
"""dispaly basic statistics of the sag profile
"""
h, w = sag.shape
absMax, meanSag = np.max(np.abs(sag)), np.mean(sag)
varSag, stdSag = np.var(sag), np.std(sag)
print(u'sag absolute max: {:.4f} mm ({:.4f}\u03BB)'
.format(absMax, absMax/wave))
print(u'sag mean value: {:.4e} mm ({:.4f}\u03BB)'
.format(meanSag, meanSag/wave))
print(u'sag std deviation: {:.4e} ({:.4f}\u03BB)'
.format(stdSag, stdSag/wave))
hist, binEdges = np.histogram(sag, bins=nbins,
range=(-5*sigma, 5*sigma), density=True)
binCenters = (binEdges[:-1] + binEdges[1:])/2
#
def gauss(x, mu, sigma):
"""gaussian distribution
"""
a = 1.0/(sigma*np.sqrt(2.0*np.pi))
return a*np.exp(-(x - mu)**2/(2.0*sigma**2))
# figures
fig = plt.figure(figsize=(8, 4))
ax0 = fig.add_axes([0.00, 0.00, 0.40, 0.95])
ax1 = fig.add_axes([0.49, 0.00, 0.46, 1.00])
ax2 = fig.add_axes([0.98, 0.05, 0.02, 0.89])
gaussDist = gauss(binCenters, mu=0, sigma=sigma)
ax0.plot(binCenters/wave, gaussDist, lw=6, alpha=0.4,
label='Gaussian dist')
ax0.plot(binCenters/wave, hist, label='Fluctuation hist')
ax0.set_xlim(-5*sigma/wave, 5*sigma/wave)
ax0.yaxis.set_ticks([])
ax0.legend(fontsize=8)
ax0.set_xlabel(r'$\lambda$', fontsize=15)
ax0.set_title('Sag fluctuation histogram', y=1.01)
im = ax1.imshow(sag, cmap=plt.cm.jet, vmin=-absMax, vmax=absMax,
interpolation='none')
ax1.set_title('Sag surface profile', y=1.01)
plt.colorbar(im, ax2)
plt.show()
# Create a rough surface and display the surface roughness statistics
sigma = 5.0*wavelength # surface roughness
comment = 'gauss random dist of grid sag for speckle generation'
print('Diffuser semi-diameter = {:2.3f} mm'.format(semidia))
print('Nx = {:d}, Ny = {:d}'.format(nx, ny))
print('delx = {:.5f} mm'
.format(2.0*semidia/(nx-1)))
print('dely = {:.5f} mm'
.format(2.0*semidia/(ny-1)))
z, sagfile = zfu.randomGridSagFile(mu=0, sigma=sigma, semidia=semidia, nx=nx, ny=ny)
sag_statistics(z.reshape(ny, nx), sigma, wavelength)
# load the Grid sag surface file in to the extra data editor
ln.zImportExtraData(surfNum=SURF_GRIDSAG, fileName=sagfile)
"""
Explanation: Create and import rough sag surface into Zemax
End of explanation
"""
popinfo, irrdata = ln.zGetPOP(settingsFile=cfgfile, displayData=True, timeout=3*60)
popinfo
"""
Explanation: Retrieve and plot the speckle data generated in Zemax
End of explanation
"""
mmPerPxY, mmPerPxX = popinfo.widthY/popinfo.gridY, popinfo.widthX/popinfo.gridX
speckle = zoom(np.array(irrdata), amount=5.5)
pxY, pxX = speckle.shape
h, w = mmPerPxY*pxY, mmPerPxX*pxX
plot_pop_display_data([speckle,], [h,], [w,], ['speckle pattern',])
"""
Explanation: NOTE: If the Zemax error message "The reference rays cannot be traced or are too close together" appears, check the maximum absolute height of the grid sag surface. It is probably much larger (by more than an order of magnitude) than the wavelength of the source.
End of explanation
"""
# histogram of speckle
numBins = 100
hist, binEdges = np.histogram(speckle.flatten(), bins=numBins, density=True)
binCenters = (binEdges[:-1] + binEdges[1:])/2
fig, ax = plt.subplots(1, 1)
ax.plot(binCenters, hist, label='speckle data')
# fit an exponential curve. Since the data is noise-free, we will use
# the method provided by Scipy
loc, scale = sps.expon.fit(speckle.flatten(), floc=0)
y = sps.expon.pdf(binCenters, loc=loc, scale=scale)
ax.plot(binCenters, y, linestyle='dashed', label='fitted expo curve')
ax.set_xlim(0, np.max(binCenters))
ax.legend(fontsize=12)
ax.set_xlabel('(scaled) intensity')
print(u'Rate parameter, \u03BB = {:.3f}'.format(1.0/scale))
plt.show()
"""
Explanation: First order speckle statistics
Distribution of speckle intensity
The speckle intensity of the ideal Gaussian speckle pattern follows a negative exponential probability-density function [1]. The (negative) exponential probability density function is given as:
$$
P(x \,|\, \lambda) = \left\{ \begin{array}{ll}
\lambda e^{-\lambda x} & x \geq 0, \\
0 & x < 0,
\end{array}\right.
$$
End of explanation
"""
speckleContrast = np.std(speckle.flatten())/ np.mean(speckle.flatten())
print('Speckle contrast = {:2.5f}'.format(speckleContrast))
"""
Explanation: Speckle contrast
For ideal Gaussian speckle pattern, the standard deviation, $\sigma$ of the intensity is equal to the mean intensity, $\langle I \rangle$, where strictly, $\langle \, \rangle$ stands for ensemble average. Here, we will assume that the ensemble average equals the sample average.
End of explanation
"""
# Expected speckle size
def set_small_values_to_zero(tol, *values):
"""helper function to set infinitesimally small values to zero
Parameters
----------
tol : float
threshold. All numerical values below abs(tol) are set to zero
*values : unflattened sequence of values
Returns
-------
"""
return [0.0 if abs(value) < tol else value for value in values]
#TODO!! Move this function to PyZDDE!! (could rename to specify it's a POP analysis helper function)
def get_beam_centroid_radius(ln, surf, update=True):
"""returns the beam width and position at surface ``surf`` using
POP analysis
Parameters
----------
surf : integer
surface number. 0 implies last surface
update : bool
if `True`, then Zemax will recompute all pupil positions and solves, etc
and the data in the LDE will be updated before retrieving the POPD
values.
Returns
-------
para : namedtuple
beam parameters (cx, cy, rx, ry) where cx, cy are the coordinates
of the centroid of the beam w.r.t. the chief ray
"""
CENTROID_X, CENTROID_Y, BEAM_RADIUS_X, BEAM_RADIUS_Y = 21, 22, 23, 24
wave, field, xtr1, xtr2 = 0, 0, 0, 0
if update:
ln.zGetUpdate()
cx = ln.zOperandValue('POPD', surf, wave, field, CENTROID_X, xtr1, xtr2)
cy = ln.zOperandValue('POPD', surf, wave, field, CENTROID_Y, xtr1, xtr2)
rx = ln.zOperandValue('POPD', surf, wave, field, BEAM_RADIUS_X, xtr1, xtr2)
ry = ln.zOperandValue('POPD', surf, wave, field, BEAM_RADIUS_Y, xtr1, xtr2)
cx, cy, rx, ry = set_small_values_to_zero(1e-12, cx, cy, rx, ry)
beam = co.namedtuple('beam', ['cx', 'cy', 'rx', 'ry'])
return beam(cx, cy, rx, ry)
beamDiameterAtDiff = 2.0*get_beam_centroid_radius(ln, SURF_DIFFSMOOTHFACE).rx
THICKNESS = 3
diffScDist = ln.zGetSurfaceData(surfNum=SURF_IMA - 1, code=THICKNESS) # note this is not general
theorySpeckleWidth = wavelength*diffScDist/beamDiameterAtDiff
print('Beam diameter @ diffuser = {} mm'.format(beamDiameterAtDiff))
print('Distance between diffuser and obs. screen = {} mm'.format(diffScDist))
print(u'Theoretical speckle width = {:.5f} mm ({:.3E} \u03BCm)'
.format(theorySpeckleWidth, theorySpeckleWidth/wavelength))
"""
Explanation: Second order speckle statistics
The width of the autocorrelation of the intensity of the speckle distribution gives a reasonable measure of the "average" width of a speckle in the pattern [1].
End of explanation
"""
# Helper functions for estimating the speckle size
# Most of the ideas for determining the speckle size is from
# "Speckle Size via Autocorrelation", by Joel
# mathworks.com/matlabcentral/fileexchange/
# 25046-speckle-size-via-autocorrelation/content//SpeckleSize.m
def xcov(x, y=None, scale='none'):
"""returns the cross-covariance of two discrete-time
sequences, `x` and `y`.
Parameters
----------
x : ndarray
1-dim ndarray
y : ndarray, optional
1-dim ndarray. If y is 'None', the autocovariance of
the sequence `x` is returned
scale : string, optional
specifies a normalization option for the cross-
covariance
Returns
-------
crosscov : ndarray
1-dim ndarray of the cross-covariance sequence. Length
of `crosscov` is `2*m - 1`, where `m` is the length of
`x` (and `y` is passed)
Notes
-----
`xcov` emulates Matlab's `xcov()` function in a
limited way. For details see _[1]
References
----------
.. [1] http://www.mathworks.com/help/signal/ref/xcov.html
"""
m = len(x)
y = x if y is None else y
assert m == len(y), \
'Sequences x and y must be of same length.'
raw = np.correlate(x - np.mean(x),
y - np.mean(y), 'full')
if scale == 'coeff':
crosscov = raw/np.max(raw)
elif scale == 'biased':
crosscov = raw/m
elif scale == 'unbiased':
maxlag = m - 1
k = np.arange(-maxlag, maxlag + 1)
crosscov = raw/(m - np.abs(k))
else:
crosscov = raw
return crosscov
def avg_autocov(x, axis=0, scale='coeff'):
"""returns the "average" autocovariance of x along
the `axis` specified
Parameters
----------
x : ndarray
2-dim ndarray
axis : integer
0 = average auto-covariance along the first dimension;
1 = average auto-covariance along the second dimension
scale : string, optional
specifies a normalization option for the cross-
covariance
Returns
-------
aCorr : ndarray
1-dim ndarray of average auto-covariance along the
`axis`, normalized such that the maximum is 1.
"""
x = x if axis else x.T
r, c = x.shape
avgAcov = np.zeros(2*c - 1)
for row in x:
avgAcov = avgAcov + xcov(row, scale=scale)
return avgAcov/np.max(avgAcov)
def gauss(x, a, mu, sigma):
"""gaussian model function for curve fitting
"""
return a*np.exp((-(x - mu)**2)/(2.0*sigma**2))
def gauss_fit(data, expectedSize=10):
"""helper function for curve gaussian curve fitting
Parameters
----------
data : ndarray
1-dim ndarray consisting of the data
expectedSize : int
expected size of the speckle, in pixels (see TO DO note below)
Returns
-------
a : float
mu : float
mean of the gaussian curve
sigma : float
standard deviation of the gaussian curve
TO DO!!!
What is a good `expectedSize`?
probably should use some standard deviation of speckle estimate,
and pixel size ... based on the beam width and wavelength
"""
# clean the data by simple thresholding
y = data.copy()
upper, lower = 1.0, 0.005
y[y > upper] = 1.0
y[y < lower] = 0.0
m = len(y)
x = np.arange(0, m) # index
p0 = [1.0, m/2, expectedSize] # initial guess
pEst, pCov = opt.curve_fit(gauss, x, y, p0=p0)
stdErr = np.sqrt(np.diag(pCov))
return pEst[0], pEst[1], pEst[2], stdErr
def width_FWHM(a, sigma):
return 2.0*sigma*np.sqrt(2.0*np.log(a/0.5))
def width_oneOESqu(a, sigma):
return 2.0*sigma*np.sqrt(2.0*np.log(a/.1353353))
def plot_avg_intensity_autocovariance(x, acR, acRFit, acC, acCFit):
"""helper function to plot the average intensity autocovariances
and the fit curves for visual inspection
Parameters
----------
x : ndarray
indices
acR, acC : ndarray
1-dim ndarray of the average autocovariance along the rows/columns
acRFit, acCFit : ndarray
1-dim ndarray of the fitted gaussian curve along the rows/columns
"""
fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(12, 7))
ax0.plot(x, acR, label='avg acov horizontal')
ax0.plot(x, acRFit, '--', label='fitted gauss')
ax0.legend()
ax0.autoscale(tight=True)
ax1.plot(x, acC, label='avg acov vertical')
ax1.plot(x, acCFit, '--', label='fitted gauss')
ax1.legend()
ax1.autoscale(tight=True)
plt.show()
def estimate_mean_speckle_size(intPat, mmPerPxY, mmPerPxX):
"""function to estimate the mean speckle intensity
Parameters
----------
intPat : ndarray
2-dim ndarray of the intensity pattern of the speckle
mmPerPxY : float
millimeter per pixel in y direction (in the POP display)
mmPerPxX : float
millimeter per pixel in x direction (in the POP display)
Returns
-------
None
"""
r, c = intPat.shape
# average auto-covariance along the rows
acR = avg_autocov(intPat, axis=1, scale='coeff')
# average auto-covariance along the columns
acC = avg_autocov(intPat, axis=0, scale='coeff')
# fit a Gaussian curve to acR and acC
x = np.arange(0, len(acR))
aR, muR, stdR, _ = gauss_fit(acR)
acRFit = gauss(x, aR, muR, stdR)
aC, muC, stdC, _ = gauss_fit(acC)
acCFit = gauss(x, aC, muC, stdC)
print('Gaussian fit parameters:')
print('aR = {:2.4f}, muR = {:2.2f}, stdR = {:2.4f}'.format(aR, muR, stdR))
print('aC = {:2.4f}, muC = {:2.2f}, stdC = {:2.4f}'.format(aC, muC, stdC))
print('\nPlot of the average autocovariances and fitted Gaussian curve:')
plot_avg_intensity_autocovariance(x, acR, acRFit, acC, acCFit)
# Estimate the FWHM and 1/e^2 widths
fwhm_x = width_FWHM(aR, stdR)
fwhm_y = width_FWHM(aC, stdC)
oneOESqu_x = width_oneOESqu(aR, stdR)
oneOESqu_y = width_oneOESqu(aC, stdC)
print('\nSpeckle size estimates:')
print('----------------------')
print('FWHM: Wx = {:2.4f} pixels ({:2.4f} mm), Wy = {:2.4f} pixels ({:2.4f} mm)'
.format(fwhm_x, fwhm_x*mmPerPxX, fwhm_y, fwhm_y*mmPerPxY))
print(u'1/e\u00B2: Wx = {:2.4f} pixels ({:2.4f} mm), Wy = {:2.4f} pixels ({:2.4f} mm)'
.format(oneOESqu_x, oneOESqu_x*mmPerPxX, oneOESqu_y, oneOESqu_y*mmPerPxY))
estimate_mean_speckle_size(speckle, mmPerPxY, mmPerPxX)
#print(np.mean(speckle), np.std(speckle))
print(u'Theoretical speckle width = {:.5f} mm ({:.3E} \u03BCm)'
.format(theorySpeckleWidth, theorySpeckleWidth/wavelength))
ln.close()
"""
Explanation: After fitting a Gaussian distribution the $\text{FWHM}$ and $1/e^2$ widths may be estimated as follows:
If $F(x)$ is the Gaussian curve, and $F(x)\Big|_{x=x^+} = \frac{1}{2}$ and $F(x)\Big|_{x=x^-} = \frac{1}{2}$ on either side of the mean, then the $\text{FWHM}$ width is given by $x^+ - x^-$. Similarly, the $1/e^2$ width may be estimated by taking the difference between the abscissae where $F(x)=1/e^2=0.135335$
$$
\begin{array}{cl}
F_x & = & a e^{- \frac{(x - \mu)^2}{2\sigma^2} } \\
ln(F_x) & = & ln(a) - \frac{(x - \mu)^2}{2\sigma^2} \\
\frac{(x - \mu)^2}{2\sigma^2} & = & ln \left( \frac{a}{F_x} \right)\\
x & = & \mu + \sigma \sqrt{ \left[ 2 \, ln \left( \frac{a}{F_x} \right) \right]}
\end{array}
$$
If we represent
$$
x^{\pm} = \mu \pm \sigma \sqrt{ \left[ 2 \, ln \left( \frac{a}{F_x} \right) \right]}
$$
then,
$$
\Delta x = 2 \, \sigma \sqrt{ \left[ 2 \, ln \left( \frac{a}{F_x} \right) \right]}
$$
End of explanation
"""
|
roebius/deeplearning_keras2 | nbs2/taxi_data_prep_and_mlp.ipynb | apache-2.0 | data_path = "data/taxi/"
"""
Explanation: Below path is a shared directory, swap to own
End of explanation
"""
meta = pd.read_csv(data_path+'metaData_taxistandsID_name_GPSlocation.csv', header=0)
meta.head()
train = pd.read_csv(data_path+'train/train.csv', header=0)
train.head()
train['ORIGIN_CALL'] = pd.Series(pd.factorize(train['ORIGIN_CALL'])[0]) + 1
train['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in train["ORIGIN_STAND"]])
train['TAXI_ID'] = pd.Series(pd.factorize(train['TAXI_ID'])[0]) + 1
# train['DAY_TYPE'] = pd.Series([ord(x[0]) - ord('A') for x in train['DAY_TYPE']])
train['DAY_TYPE'] = pd.Series([(ord(x[0]) - ord('A')) for x in train['DAY_TYPE']]) # - correct
"""
Explanation: Replication of 'csv_to_hdf5.py'
The original repo used some bizarre tuple method of reading in data to save in an HDF5 file using fuel. The following takes the same approach as that module, only using pandas and saving in bcolz format (w/ training data as an example)
End of explanation
"""
polyline = pd.Series([ast.literal_eval(x) for x in train['POLYLINE']])
"""
Explanation: The array of long/lat coordinates per trip (row) is read in as a string. The function ast.literal_eval(x) evaluates the string into the expression it represents (safely). This happens below
End of explanation
"""
train['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline])
train['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline])
utils2.save_array(data_path+'train/train.bc', train.as_matrix())
utils2.save_array(data_path+'train/meta_train.bc', meta.as_matrix())
"""
Explanation: Split into latitude/longitude
End of explanation
"""
train = pd.DataFrame(utils2.load_array(data_path+'train/train.bc'), columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE'])
train.head()
"""
Explanation: Further Feature Engineering
After converting 'csv_to_hdf5.py' functionality to pandas, I saved that array and then simply constructed the rest of the features as specified in the paper using pandas. I didn't bother seeing how the author did it as it was extremely obtuse and involved the fuel module.
End of explanation
"""
train['ORIGIN_CALL'].max()
train['ORIGIN_STAND'].max()
train['TAXI_ID'].max()
"""
Explanation: The paper discusses how many categorical variables there are per category. The following all check out
End of explanation
"""
train['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in train['TIMESTAMP']])
"""
Explanation: Self-explanatory
End of explanation
"""
train['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15)
for t in train['TIMESTAMP']])
"""
Explanation: Quarter hour of the day, i.e. 1 of the 4*24 = 96 quarter hours of the day
End of explanation
"""
train['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in train['TIMESTAMP']])
"""
Explanation: Self-explanatory
End of explanation
"""
train['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else np.nan for l in train[['LONGITUDE','LATITUDE']].iterrows()])
"""
Explanation: Target coords are the last in the sequence (final position). If there are no positions, or only 1, then mark as invalid w/ nan in order to drop later
End of explanation
"""
def start_stop_inputs(k):
result = []
for l in train[['LONGITUDE','LATITUDE']].iterrows():
if len(l[1][0]) < 2 or len(l[1][1]) < 2:
result.append(np.nan)
elif len(l[1][0][:-1]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten())
else:
l1 = np.lib.pad(l[1][0][:-1], (0,20-len(l[1][0][:-1])), mode='edge')
l2 = np.lib.pad(l[1][1][:-1], (0,20-len(l[1][1][:-1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
return pd.Series(result)
train['COORD_FEATURES'] = start_stop_inputs(5)
train.shape
train.dropna().shape
"""
Explanation: This function creates the continuous inputs, which are the concatenated k first and k last coords in a sequence, as discussed in the paper.
If there aren't at least 2*k coords excluding the target, then the k first and k last overlap. In this case the sequence (excluding the target) is padded at the end with the last coord in the sequence. The paper mentioned they padded front and back but didn't specify in what manner.
Also marks any invalid w/ na's
End of explanation
"""
train = train.dropna()
utils2.save_array(data_path+'train/train_features.bc', train.as_matrix())
"""
Explanation: Drop na's
End of explanation
"""
train = pd.read_csv(data_path+'train/train.csv', header=0)
test = pd.read_csv(data_path+'test/test.csv', header=0)
def start_stop_inputs(k, data, test):
result = []
for l in data[['LONGITUDE','LATITUDE']].iterrows():
if not test:
if len(l[1][0]) < 2 or len(l[1][1]) < 2:
result.append(np.nan)
elif len(l[1][0][:-1]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-(k+1):-1],l[1][1][0:k],l[1][1][-(k+1):-1]]).flatten())
else:
l1 = np.lib.pad(l[1][0][:-1], (0,4*k-len(l[1][0][:-1])), mode='edge')
l2 = np.lib.pad(l[1][1][:-1], (0,4*k-len(l[1][1][:-1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
else:
if len(l[1][0]) < 1 or len(l[1][1]) < 1:
result.append(np.nan)
elif len(l[1][0]) >= 2*k:
result.append(np.concatenate([l[1][0][0:k],l[1][0][-k:],l[1][1][0:k],l[1][1][-k:]]).flatten())
else:
l1 = np.lib.pad(l[1][0], (0,4*k-len(l[1][0])), mode='edge')
l2 = np.lib.pad(l[1][1], (0,4*k-len(l[1][1])), mode='edge')
result.append(np.concatenate([l1[0:k],l1[-k:],l2[0:k],l2[-k:]]).flatten())
return pd.Series(result)
"""
Explanation: End to end feature transformation
End of explanation
"""
lat_mean = 41.15731
lat_std = 0.074120656
long_mean = -8.6161413
long_std = 0.057200309
def feature_ext(data, test=False):
data['ORIGIN_CALL'] = pd.Series(pd.factorize(data['ORIGIN_CALL'])[0]) + 1
data['ORIGIN_STAND']=pd.Series([0 if pd.isnull(x) or x=='' else int(x) for x in data["ORIGIN_STAND"]])
data['TAXI_ID'] = pd.Series(pd.factorize(data['TAXI_ID'])[0]) + 1
data['DAY_TYPE'] = pd.Series([ord(x[0]) - ord('A') for x in data['DAY_TYPE']])
polyline = pd.Series([ast.literal_eval(x) for x in data['POLYLINE']])
data['LATITUDE'] = pd.Series([np.array([point[1] for point in poly],dtype=np.float32) for poly in polyline])
data['LONGITUDE'] = pd.Series([np.array([point[0] for point in poly],dtype=np.float32) for poly in polyline])
if not test:
data['TARGET'] = pd.Series([[l[1][0][-1], l[1][1][-1]] if len(l[1][0]) > 1 else np.nan for l in data[['LONGITUDE','LATITUDE']].iterrows()])
data['LATITUDE'] = pd.Series([(t-lat_mean)/lat_std for t in data['LATITUDE']])
data['LONGITUDE'] = pd.Series([(t-long_mean)/long_std for t in data['LONGITUDE']])
data['COORD_FEATURES'] = start_stop_inputs(5, data, test)
data['DAY_OF_WEEK'] = pd.Series([datetime.datetime.fromtimestamp(t).weekday() for t in data['TIMESTAMP']])
data['QUARTER_HOUR'] = pd.Series([int((datetime.datetime.fromtimestamp(t).hour*60 + datetime.datetime.fromtimestamp(t).minute)/15)
for t in data['TIMESTAMP']])
data['WEEK_OF_YEAR'] = pd.Series([datetime.datetime.fromtimestamp(t).isocalendar()[1] for t in data['TIMESTAMP']])
data = data.dropna()
return data
train = feature_ext(train)
# train["TARGET"]
train.head()
test = feature_ext(test, test=True)
test.head()
utils2.save_array(data_path+'train/train_features.bc', train.as_matrix())
utils2.save_array(data_path+'test/test_features.bc', test.as_matrix())
train.head()
"""
Explanation: Pre-calculated below on train set
End of explanation
"""
# train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
# 'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'DAY_OF_WEEK',
# 'QUARTER_HOUR', "WEEK_OF_YEAR", "TARGET", "COORD_FEATURES"])
# - Correct column order to load the Bcolz array that was saved above
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET', 'COORD_FEATURES', 'DAY_OF_WEEK',
'QUARTER_HOUR', 'WEEK_OF_YEAR'])
"""
Explanation: MEANSHIFT
Meanshift clustering as performed in the paper
End of explanation
"""
y_targ = np.vstack(train["TARGET"].as_matrix())
from sklearn.cluster import MeanShift, estimate_bandwidth
"""
Explanation: Clustering performed on the targets
End of explanation
"""
#bw = estimate_bandwidth(y_targ, quantile=.1, n_samples=1000)
bw = 0.001
"""
Explanation: You can use the commented-out code for an estimate of the bandwidth, which makes the clustering converge much quicker.
This is not mentioned in the paper but is included in the code. In order to get results similar to the paper's,
they manually chose the uncommented bandwidth.
"""
ms = MeanShift(bandwidth=bw, bin_seeding=True, min_bin_freq=5)
ms.fit(y_targ)
cluster_centers = ms.cluster_centers_
"""
Explanation: This takes some time
End of explanation
"""
cluster_centers.shape
utils2.save_array(data_path+"cluster_centers_bw_001.bc", cluster_centers)
"""
Explanation: This is very close to the number of clusters mentioned in the paper
End of explanation
"""
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', 'DAY_OF_WEEK', "QUARTER_HOUR", "WEEK_OF_YEAR"])
cluster_centers = utils2.load_array(data_path+"cluster_centers_bw_001.bc")
long = np.array([c[0] for c in cluster_centers])
lat = np.array([c[1] for c in cluster_centers])
X_train, X_val = train_test_split(train, test_size=0.2, random_state=42)
def get_features(data):
return [np.vstack(data['COORD_FEATURES'].as_matrix()), np.vstack(data['ORIGIN_CALL'].as_matrix()),
np.vstack(data['TAXI_ID'].as_matrix()), np.vstack(data['ORIGIN_STAND'].as_matrix()),
np.vstack(data['QUARTER_HOUR'].as_matrix()), np.vstack(data['DAY_OF_WEEK'].as_matrix()),
np.vstack(data['WEEK_OF_YEAR'].as_matrix()), np.array([long for i in range(0,data.shape[0])]),
np.array([lat for i in range(0,data.shape[0])])]
def get_target(data):
return np.vstack(data["TARGET"].as_matrix())
X_train_features = get_features(X_train)
X_train_target = get_target(X_train)
# utils2.save_array(data_path+'train/X_train_features.bc', get_features(X_train)) # - doesn't work - needs an array, not a list
"""
Explanation: Formatting Features for Bcolz iterator / garbage
End of explanation
"""
train = pd.DataFrame(utils2.load_array(data_path+'train/train_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
"""
Explanation: MODEL
Load training data and cluster centers
End of explanation
"""
cuts = [
1376503200, # 2013-08-14 18:00
1380616200, # 2013-10-01 08:30
1381167900, # 2013-10-07 17:45
1383364800, # 2013-11-02 04:00
1387722600 # 2013-12-22 14:30
]
print(datetime.datetime.fromtimestamp(1376503200))
train.shape
val_indices = []
for index, row in train.iterrows():
time = row['TIMESTAMP']
latitude = row['LATITUDE']
for ts in cuts:
if time <= ts and time + 15 * (len(latitude) - 1) >= ts:
val_indices.append(index)
break
X_valid = train.iloc[val_indices]
X_valid.head()
for d in X_valid['TIMESTAMP']:
print(datetime.datetime.fromtimestamp(d))
X_train = train.drop(train.index[val_indices])
cluster_centers = utils2.load_array(data_path+"cluster_centers_bw_001.bc")
long = np.array([c[0] for c in cluster_centers])
lat = np.array([c[1] for c in cluster_centers])
utils2.save_array(data_path+'train/X_train.bc', X_train.as_matrix())
utils2.save_array(data_path+'valid/X_val.bc', X_valid.as_matrix())
X_train = pd.DataFrame(utils2.load_array(data_path+'train/X_train.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
X_valid = pd.DataFrame(utils2.load_array(data_path+'valid/X_val.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE', 'TARGET',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
"""
Explanation: Validation cuts
End of explanation
"""
def equirectangular_loss(y_true, y_pred):
deg2rad = 3.141592653589793 / 180
long_1 = y_true[:,0]*deg2rad
long_2 = y_pred[:,0]*deg2rad
lat_1 = y_true[:,1]*deg2rad
lat_2 = y_pred[:,1]*deg2rad
return 6371*K.sqrt(K.square((long_1 - long_2)*K.cos((lat_1 + lat_2)/2.))
+K.square(lat_1 - lat_2))
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, embeddings_regularizer=l2(reg))(inp) # Keras 2
"""
Explanation: The equirectangular loss function mentioned in the paper.
Note: Very important that y[0] is longitude and y[1] is latitude.
The Earth-radius constant R = 6371 km only scales the loss, so it does not affect the minimization; units were not given in the paper.
End of explanation
"""
def taxi_mlp(k, cluster_centers):
shp = cluster_centers.shape[0]
nums = Input(shape=(4*k,))
center_longs = Input(shape=(shp,))
center_lats = Input(shape=(shp,))
emb_names = ['client_ID', 'taxi_ID', "stand_ID", "quarter_hour", "day_of_week", "week_of_year"]
emb_ins = [57106, 448, 64, 96, 7, 52]
emb_outs = [10 for i in range(0,6)]
regs = [0 for i in range(0,6)]
embs = [embedding_input(e[0], e[1]+1, e[2], e[3]) for e in zip(emb_names, emb_ins, emb_outs, regs)]
x = concatenate([nums] + [Flatten()(e[1]) for e in embs]) # Keras 2
x = Dense(500, activation='relu')(x)
x = Dense(shp, activation='softmax')(x)
y = concatenate([dot([x, center_longs], axes=1), dot([x, center_lats], axes=1)]) # Keras 2
return Model(inputs = [nums]+[e[0] for e in embs] + [center_longs, center_lats], outputs = y) # Keras 2
"""
Explanation: The following returns a fully-connected model as mentioned in the paper. Takes as input k as defined before, and the cluster centers.
Inputs: Embeddings for each category, concatenated w/ the 4*k continuous variables representing the first/last k coords as mentioned above.
Embeddings have no regularization, as it was not mentioned in paper, though are easily equipped to include.
Paper mentions global normalization. Didn't specify exactly how they did that, whether thay did it sequentially or whatnot. I just included a batchnorm layer for the continuous inputs.
After concatenation, 1 hidden layer of 500 neurons as called for in paper.
Finally, output layer has as many outputs as there are cluster centers, w/ a softmax activation. Call this output P.
The prediction is the weighted sum of each cluster center c_i w/ corresponding predicted prob P_i.
To facilitate this, dotted output w/ cluster latitudes and longitudes separately. (this happens at variable y), then concatenated
into single tensor.
NOTE!!: You will see that I have the cluster center coords as inputs. Ideally, this function should store the cluster longs/lats as constants in the model, but I could not figure out how. As a consequence, I pass them in as a repeated input.
End of explanation
"""
def data_iter(data, batch_size, cluster_centers):
long = [c[0] for c in cluster_centers]
lat = [c[1] for c in cluster_centers]
i = 0
N = data.shape[0]
while True:
yield ([np.vstack(data['COORD_FEATURES'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_CALL'][i:i+batch_size].as_matrix()),
np.vstack(data['TAXI_ID'][i:i+batch_size].as_matrix()), np.vstack(data['ORIGIN_STAND'][i:i+batch_size].as_matrix()),
np.vstack(data['QUARTER_HOUR'][i:i+batch_size].as_matrix()), np.vstack(data['DAY_OF_WEEK'][i:i+batch_size].as_matrix()),
np.vstack(data['WEEK_OF_YEAR'][i:i+batch_size].as_matrix()), np.array([long for i in range(0,batch_size)]),
np.array([lat for i in range(0,batch_size)])], np.vstack(data["TARGET"][i:i+batch_size].as_matrix()))
i += batch_size
# x=Lambda(thing)([x,long,lat])
"""
Explanation: As mentioned, construction of repeated cluster longs/lats for input
Iterator for the in-memory train pandas dataframe. I did this as opposed to a bcolz iterator because of the pre-processing
End of explanation
"""
del model
model = taxi_mlp(5, cluster_centers)
"""
Explanation: Of course, k in the model needs to match k from feature construction. We again use 5 as they did in the paper
End of explanation
"""
# Reduced the initial 0.001 learning rate to avoid NaN's
model.compile(optimizer=SGD(1e-6, momentum=0.9), loss=equirectangular_loss, metrics=['mse'])
# - Try also Adam optimizer
# optim = Adam(lr=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
# model.compile(optimizer=optim, loss=equirectangular_loss, metrics=['mse'])
X_train_feat = get_features(X_train)
X_train_target = get_target(X_train)
X_val_feat = get_features(X_valid)
X_val_target = get_target(X_valid)
tqdm = TQDMNotebookCallback()
# - Added verbose=1 to track improvement through epochs
checkpoint = ModelCheckpoint(verbose=1, filepath=data_path+'models/weights.{epoch:03d}.{val_loss:.8f}.hdf5', save_best_only=True)
batch_size=256
"""
Explanation: The paper used an SGD optimizer with the following parameters
End of explanation
"""
model.fit(X_train_feat, X_train_target, epochs=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.fit(X_train_feat, X_train_target, epochs=30, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
# - Load the saved best model, otherwise the training would go on from the current model
# - which is not guaranteed to be the best one
# - (check the actual file name)
model = load_model(data_path+'models/weights.028.4.29282813.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
# - trying also learning rate annealing
K.set_value(model.optimizer.lr, 5e-4)
model.fit(X_train_feat, X_train_target, epochs=100, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.save(data_path+'models/current_model.hdf5')
"""
Explanation: original
End of explanation
"""
model.fit(X_train_feat, X_train_target, epochs=1, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
# - Load again the saved best model, otherwise the training would go on from the current model
# - which is not guaranteed to be the best one
# - (check the actual file name)
model = load_model(data_path+'models/weights.000.0.73703137.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
model.fit(X_train_feat, X_train_target, epochs=400, batch_size=batch_size, validation_data=(X_val_feat, X_val_target), callbacks=[tqdm, checkpoint], verbose=0)
model.save(data_path+'models/current_model.hdf5')
len(X_val_feat[0])
"""
Explanation: new validation set
End of explanation
"""
# - Use the filename of the best model
best_model = load_model(data_path+'models/weights.308.0.03373993.hdf5', custom_objects={'equirectangular_loss':equirectangular_loss})
best_model.evaluate(X_val_feat, X_val_target)
test = pd.DataFrame(utils2.load_array(data_path+'test/test_features.bc'),columns=['TRIP_ID', 'CALL_TYPE', 'ORIGIN_CALL', 'ORIGIN_STAND', 'TAXI_ID',
'TIMESTAMP', 'DAY_TYPE', 'MISSING_DATA', 'POLYLINE', 'LATITUDE', 'LONGITUDE',
'COORD_FEATURES', "DAY_OF_WEEK", "QUARTER_HOUR", "WEEK_OF_YEAR"])
# test['ORIGIN_CALL'] = pd.read_csv(data_path+'real_origin_call.csv', header=None) # - file not available
# test['TAXI_ID'] = pd.read_csv(data_path+'real_taxi_id.csv',header=None) # # - file not available
X_test = get_features(test)
b = np.sort(X_test[1],axis=None)
test_preds = np.round(best_model.predict(X_test), decimals=6)
d = {0:test['TRIP_ID'], 1:test_preds[:,1], 2:test_preds[:,0]}
kaggle_out = pd.DataFrame(data=d)
kaggle_out.to_csv(data_path+'submission.csv', header=['TRIP_ID','LATITUDE', 'LONGITUDE'], index=False)
def hdist(a, b):
deg2rad = 3.141592653589793 / 180
lat1 = a[:, 1] * deg2rad
lon1 = a[:, 0] * deg2rad
lat2 = b[:, 1] * deg2rad
lon2 = b[:, 0] * deg2rad
dlat = abs(lat1-lat2)
dlon = abs(lon1-lon2)
al = np.sin(dlat/2)**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(dlon/2)**2)
d = np.arctan2(np.sqrt(al), np.sqrt(1-al))
hd = 2 * 6371 * d
return hd
val_preds = best_model.predict(X_val_feat)
trn_preds = model.predict(X_train_feat)
er = hdist(val_preds, X_val_target)
er.mean()
K.equal()
"""
Explanation: It works, but it seems to converge unrealistically quickly and the loss values are not the same. The paper does not mention what it uses as the "error" in its results. I assume the same equirectangular distance? Not very clear. The difference in values could be due to the missing Earth-radius factor.
Kaggle Entry
End of explanation
"""
cuts = [
1376503200, # 2013-08-14 18:00
1380616200, # 2013-10-01 08:30
1381167900, # 2013-10-07 17:45
1383364800, # 2013-11-02 04:00
1387722600 # 2013-12-22 14:30
]
np.any([train['TIMESTAMP'].map(lambda x: x in cuts)])
train['TIMESTAMP']
np.any(train['TIMESTAMP']==1381167900)
times = train['TIMESTAMP'].as_matrix()
X_train.columns
times
count = 0
for index, row in X_val.iterrows():
for ts in cuts:
time = row['TIMESTAMP']
latitude = row['LATITUDE']
if time <= ts and time + 15 * (len(latitude) - 1) >= ts:
count += 1
one = count
count + one
import h5py
h = h5py.File(data_path+'original/data.hdf5', 'r')
evrData=h['/Configure:0000/Run:0000/CalibCycle:0000/EvrData::DataV3/NoDetector.0:Evr.0/data']
c = np.load(data_path+'original/arrival-clusters.pkl')
"""
Explanation: To-do: this is simple to extend to the validation data.
Uh oh... training data not representative of test
End of explanation
"""
from fuel.utils import find_in_data_path
from fuel.datasets import H5PYDataset
original_path = '/data/bckenstler/data/taxi/original/'
train_set = H5PYDataset(original_path+'data.hdf5', which_sets=('train',),load_in_memory=True)
valid_set = H5PYDataset(original_path+'valid.hdf5', which_sets=('cuts/test_times_0',),load_in_memory=True)
print(train_set.num_examples)
print(valid_set.num_examples)
data = train_set.data_sources
data[0]
valid_data = valid_set.data_sources
valid_data[4][0]
stamps = valid_data[-3]
stamps[0]
for i in range(0,304):
print(np.any([t==int(stamps[i]) for t in X_val['TIMESTAMP']]))
type(X_train['TIMESTAMP'][0])
type(stamps[0])
check = [s in stamps for s in X_val['TIMESTAMP']]
for s in X_val['TIMESTAMP']:
print(datetime.datetime.fromtimestamp(s))
for s in stamps:
print(datetime.datetime.fromtimestamp(s))
ids = valid_data[-1]
type(ids[0])
ids
X_val
"""
Explanation: hdf5 files
End of explanation
"""
|
quiltdata/quilt | docs/walkthrough/editing-a-package.ipynb | apache-2.0 | import quilt3
p = quilt3.Package()
"""
Explanation: Data in Quilt is organized in terms of data packages. A data package is a logical group of files, directories, and metadata.
Initializing a package
To edit a new empty package, use the package constructor:
End of explanation
"""
quilt3.Package.install(
"examples/hurdat",
"s3://quilt-example",
)
"""
Explanation: To edit a preexisting package, we need to first make sure to install the package:
End of explanation
"""
p = quilt3.Package.browse('examples/hurdat')
"""
Explanation: Use browse to edit the package:
End of explanation
"""
# add entries individually using `set`
# ie p.set("foo.csv", "/local/path/foo.csv"),
# p.set("bar.csv", "s3://bucket/path/bar.csv")
# create test data
with open("data.csv", "w") as f:
f.write("id, value\na, 42")
p = quilt3.Package()
p.set("data.csv", "data.csv")
p.set("banner.png", "s3://quilt-example/imgs/banner.png")
# or grab everything in a directory at once using `set_dir`
# ie p.set_dir("stuff/", "/path/to/stuff/"),
# p.set_dir("things/", "s3://path/to/things/")
# create test directory
import os
os.mkdir("data")
p.set_dir("stuff/", "./data/")
p.set_dir("imgs/", "s3://quilt-example/imgs/")
"""
Explanation: For more information on accessing existing packages see the section "Installing a Package".
Adding data to a package
Use the set and set_dir commands to add individual files and whole directories, respectively, to a Package:
End of explanation
"""
p
"""
Explanation: The first parameter to these functions is the logical key, which will determine where the file lives within the package. So after running the commands above our package will look like this:
End of explanation
"""
# assuming data.csv is in the current directory
p = quilt3.Package()
p.set("data.csv")
"""
Explanation: The second parameter is the physical key, which states the file's actual location. The physical key may point to either a local file or a remote object (with an s3:// path).
If the physical key and the logical key are the same, you may omit the second argument:
End of explanation
"""
# switch to a test directory and create some test files
import os
%cd data/
os.mkdir("stuff")
with open("new_data.csv", "w") as f:
f.write("id, value\na, 42")
# set the contents of the package to that of the current directory
p.set_dir(".", ".")
"""
Explanation: Another useful trick. Use "." to set the contents of the package to that of the current directory:
End of explanation
"""
p.delete("data.csv")
"""
Explanation: Deleting data in a package
Use delete to remove entries from a package:
End of explanation
"""
p = quilt3.Package()
p.set("data.csv", "new_data.csv", meta={"type": "csv"})
p.set_dir("stuff/", "stuff/", meta={"origin": "unknown"})
"""
Explanation: Note that this will only remove this piece of data from the package. It will not delete the actual data itself.
Adding metadata to a package
Packages support metadata anywhere in the package. To set metadata on package entries or directories, use the meta argument:
End of explanation
"""
# set metadata on a package
p.set_meta({"package-type": "demo"})
"""
Explanation: You can also set metadata on the package as a whole using set_meta.
End of explanation
"""
|
cdawei/digbeta | dchen/tour/cv_protocol.ipynb | gpl-3.0 | % matplotlib inline
import os, sys, time
import math, random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from joblib import Parallel, delayed
"""
Explanation: Trajectory Recommendation - Test Evaluation Protocol
End of explanation
"""
%run 'ssvm.ipynb'
check_protocol = True
"""
Explanation: Run notebook ssvm.ipynb
End of explanation
"""
traj_group_test = dict()
test_ratio = 0.3
for key in sorted(TRAJ_GROUP_DICT.keys()):
group = sorted(TRAJ_GROUP_DICT[key])
num = int(test_ratio * len(group))
if num > 0:
np.random.shuffle(group)
traj_group_test[key] = set(group[:num])
if check_protocol == True:
nnrand_dict = dict()
ssvm_dict = dict()
# train set
trajid_set_train = set(trajid_set_all)
for key in traj_group_test.keys():
trajid_set_train = trajid_set_train - traj_group_test[key]
# train ssvm
poi_info = calc_poi_info(list(trajid_set_train), traj_all, poi_all)
# build POI_ID <--> POI__INDEX mapping for POIs used to train CRF
# which means only POIs in traj such that len(traj) >= 2 are included
poi_set = set()
for x in trajid_set_train:
if len(traj_dict[x]) >= 2:
poi_set = poi_set | set(traj_dict[x])
poi_ix = sorted(poi_set)
poi_id_dict, poi_id_rdict = dict(), dict()
for idx, poi in enumerate(poi_ix):
poi_id_dict[poi] = idx
poi_id_rdict[idx] = poi
# generate training data
train_traj_list = [traj_dict[x] for x in trajid_set_train if len(traj_dict[x]) >= 2]
node_features_list = Parallel(n_jobs=N_JOBS)\
(delayed(calc_node_features)\
(tr[0], len(tr), poi_ix, poi_info, poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST) for tr in train_traj_list)
edge_features = calc_edge_features(list(trajid_set_train), poi_ix, traj_dict, poi_info)
assert(len(train_traj_list) == len(node_features_list))
X_train = [(node_features_list[x], edge_features.copy(), \
(poi_id_dict[train_traj_list[x][0]], len(train_traj_list[x]))) for x in range(len(train_traj_list))]
y_train = [np.array([poi_id_dict[x] for x in tr]) for tr in train_traj_list]
assert(len(X_train) == len(y_train))
# train
sm = MyModel()
verbose = 0 #5
ssvm = OneSlackSSVM(model=sm, C=SSVM_C, n_jobs=N_JOBS, verbose=verbose)
ssvm.fit(X_train, y_train, initialize=True)
print('SSVM training finished, start predicting.'); sys.stdout.flush()
# predict for each query
for query in sorted(traj_group_test.keys()):
ps, L = query
# start should be in training set
if ps not in poi_set: continue
assert(L <= poi_info.shape[0])
# prediction of ssvm
node_features = calc_node_features(ps, L, poi_ix, poi_info, poi_clusters=POI_CLUSTERS, \
cats=POI_CAT_LIST, clusters=POI_CLUSTER_LIST)
# normalise test features
unaries, pw = scale_features_linear(node_features, edge_features, node_max=sm.node_max, node_min=sm.node_min, \
edge_max=sm.edge_max, edge_min=sm.edge_min)
X_test = [(unaries, pw, (poi_id_dict[ps], L))]
# test
y_pred = ssvm.predict(X_test)
rec = [poi_id_rdict[x] for x in y_pred[0]] # map POIs back
rec1 = [ps] + rec[1:]
ssvm_dict[query] = rec1
# prediction of nearest neighbour
candidates_id = sorted(TRAJ_GROUP_DICT[query] - traj_group_test[query])
assert(len(candidates_id) > 0)
np.random.shuffle(candidates_id)
nnrand_dict[query] = traj_dict[candidates_id[0]]
if check_protocol == True:
F1_ssvm = []; pF1_ssvm = []; Tau_ssvm = []
F1_nn = []; pF1_nn = []; Tau_nn = []
for key in sorted(ssvm_dict.keys()):
assert(key in nnrand_dict)
F1, pF1, tau = evaluate(ssvm_dict[key], traj_group_test[key])
F1_ssvm.append(F1); pF1_ssvm.append(pF1); Tau_ssvm.append(tau)
F1, pF1, tau = evaluate(nnrand_dict[key], traj_group_test[key])
F1_nn.append(F1); pF1_nn.append(pF1); Tau_nn.append(tau)
print('SSVM: F1 (%.3f, %.3f), pairsF1 (%.3f, %.3f) Tau (%.3f, %.3f)' % \
(np.mean(F1_ssvm), np.std(F1_ssvm)/np.sqrt(len(F1_ssvm)), \
np.mean(pF1_ssvm), np.std(pF1_ssvm)/np.sqrt(len(pF1_ssvm)),
np.mean(Tau_ssvm), np.std(Tau_ssvm)/np.sqrt(len(Tau_ssvm))))
print('NNRAND: F1 (%.3f, %.3f), pairsF1 (%.3f, %.3f), Tau (%.3f, %.3f)' % \
(np.mean(F1_nn), np.std(F1_nn)/np.sqrt(len(F1_nn)), \
np.mean(pF1_nn), np.std(pF1_nn)/np.sqrt(len(pF1_nn)), \
np.mean(Tau_nn), np.std(Tau_nn)/np.sqrt(len(Tau_nn))))
"""
Explanation: Sanity check for evaluation protocol
A 70/30 split for the trajectories conforming to each query.
End of explanation
"""
|
scikit-optimize/scikit-optimize.github.io | dev/notebooks/auto_examples/sampler/sampling_comparison.ipynb | bsd-3-clause | print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
"""
Explanation: Comparing initial point generation methods
Holger Nahrstaedt 2020
.. currentmodule:: skopt
Bayesian optimization or sequential model-based optimization uses a surrogate
model to model the expensive to evaluate function func. There are several
choices for what kind of surrogate model to use. This notebook compares the
performance of:
Halton sequence,
Hammersly sequence,
Sobol' sequence and
Latin hypercube sampling
as initial points. The purely random point generation is used as
a baseline.
End of explanation
"""
from skopt.benchmarks import hart6 as hart6_
# redefined `hart6` to allow adding arbitrary "noise" dimensions
def hart6(x, noise_level=0.):
return hart6_(x[:6]) + noise_level * np.random.randn()
from skopt.benchmarks import branin as _branin
def branin(x, noise_level=0.):
return _branin(x) + noise_level * np.random.randn()
from matplotlib.pyplot import cm
import time
from skopt import gp_minimize, forest_minimize, dummy_minimize
def plot_convergence(result_list, true_minimum=None, yscale=None, title="Convergence plot"):
ax = plt.gca()
ax.set_title(title)
ax.set_xlabel("Number of calls $n$")
ax.set_ylabel(r"$\min f(x)$ after $n$ calls")
ax.grid()
if yscale is not None:
ax.set_yscale(yscale)
colors = cm.hsv(np.linspace(0.25, 1.0, len(result_list)))
for results, color in zip(result_list, colors):
name, results = results
n_calls = len(results[0].x_iters)
iterations = range(1, n_calls + 1)
mins = [[np.min(r.func_vals[:i]) for i in iterations]
for r in results]
ax.plot(iterations, np.mean(mins, axis=0), c=color, label=name)
#ax.errorbar(iterations, np.mean(mins, axis=0),
# yerr=np.std(mins, axis=0), c=color, label=name)
if true_minimum:
ax.axhline(true_minimum, linestyle="--",
color="r", lw=1,
label="True minimum")
ax.legend(loc="best")
return ax
def run(minimizer, initial_point_generator,
n_initial_points=10, n_repeats=1):
return [minimizer(func, bounds, n_initial_points=n_initial_points,
initial_point_generator=initial_point_generator,
n_calls=n_calls, random_state=n)
for n in range(n_repeats)]
def run_measure(initial_point_generator, n_initial_points=10):
start = time.time()
# n_repeats must set to a much higher value to obtain meaningful results.
n_repeats = 1
res = run(gp_minimize, initial_point_generator,
n_initial_points=n_initial_points, n_repeats=n_repeats)
duration = time.time() - start
# print("%s %s: %.2f s" % (initial_point_generator,
# str(init_point_gen_kwargs),
# duration))
return res
"""
Explanation: Toy model
We will use the :class:benchmarks.hart6 function as toy model for the expensive function.
In a real world application this function would be unknown and expensive
to evaluate.
End of explanation
"""
from functools import partial
example = "hart6"
if example == "hart6":
func = partial(hart6, noise_level=0.1)
bounds = [(0., 1.), ] * 6
true_minimum = -3.32237
n_calls = 40
n_initial_points = 10
yscale = None
title = "Convergence plot - hart6"
else:
func = partial(branin, noise_level=2.0)
bounds = [(-5.0, 10.0), (0.0, 15.0)]
true_minimum = 0.397887
n_calls = 30
n_initial_points = 10
yscale="log"
title = "Convergence plot - branin"
from skopt.utils import cook_initial_point_generator
# Random search
dummy_res = run_measure("random", n_initial_points)
lhs = cook_initial_point_generator(
"lhs", lhs_type="classic", criterion=None)
lhs_res = run_measure(lhs, n_initial_points)
lhs2 = cook_initial_point_generator("lhs", criterion="maximin")
lhs2_res = run_measure(lhs2, n_initial_points)
sobol = cook_initial_point_generator("sobol", randomize=False,
min_skip=1, max_skip=100)
sobol_res = run_measure(sobol, n_initial_points)
halton_res = run_measure("halton", n_initial_points)
hammersly_res = run_measure("hammersly", n_initial_points)
grid_res = run_measure("grid", n_initial_points)
"""
Explanation: Objective
The objective of this example is to find one of these minima in as
few iterations as possible. One iteration is defined as one call
to the :class:benchmarks.hart6 function.
We will evaluate each model several times using a different seed for the
random number generator. Then compare the average performance of these
models. This makes the comparison more robust against models that get
"lucky".
End of explanation
"""
plot = plot_convergence([("random", dummy_res),
("lhs", lhs_res),
("lhs_maximin", lhs2_res),
("sobol'", sobol_res),
("halton", halton_res),
("hammersly", hammersly_res),
("grid", grid_res)],
true_minimum=true_minimum,
yscale=yscale,
title=title)
plt.show()
"""
Explanation: Note that this can take a few minutes.
End of explanation
"""
lhs2 = cook_initial_point_generator("lhs", criterion="maximin")
lhs2_15_res = run_measure(lhs2, 12)
lhs2_20_res = run_measure(lhs2, 14)
lhs2_25_res = run_measure(lhs2, 16)
"""
Explanation: This plot shows the value of the minimum found (y axis) as a function
of the number of iterations performed so far (x axis). The dashed red line
indicates the true value of the minimum of the :class:benchmarks.hart6
function.
Test with different n_random_starts values
End of explanation
"""
plot = plot_convergence([("random - 10", dummy_res),
("lhs_maximin - 10", lhs2_res),
("lhs_maximin - 12", lhs2_15_res),
("lhs_maximin - 14", lhs2_20_res),
("lhs_maximin - 16", lhs2_25_res)],
true_minimum=true_minimum,
yscale=yscale,
title=title)
plt.show()
"""
Explanation: n_random_starts = 10 produces the best results
End of explanation
"""
|
tarashor/vibrations | py/notebooks/draft/All geometries.ipynb | mit | from sympy import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
x1, x2, x3 = symbols("x_1 x_2 x_3")
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3")
R, L, ga, gv = symbols("R L g_a g_v")
init_printing()
"""
Explanation: Shells
Init symbols for sympy
End of explanation
"""
a1 = pi / 2 + (L / 2 - alpha1)/R
x = R * cos(a1)
y = alpha2
z = R * sin(a1)
r = x*N.i + y*N.j + z*N.k
"""
Explanation: Cylindrical coordinates
End of explanation
"""
r
"""
Explanation: The mid-surface is defined by the following vector $\vec{r}=\vec{r}(\alpha_1, \alpha_2)$
End of explanation
"""
r1 = r.diff(alpha1)
r2 = r.diff(alpha2)
k1 = trigsimp(r1.magnitude())
k2 = trigsimp(r2.magnitude())
r1 = r1/k1
r2 = r2/k2
r1
r2
"""
Explanation: Tangent to curve
End of explanation
"""
n = r1.cross(r2)
n = trigsimp(n.normalize())
n
"""
Explanation: Normal to curve
End of explanation
"""
dr1=r1.diff(alpha1)
k1 = trigsimp(r1.cross(dr1).magnitude()/k1**3)
k1
dr2=r2.diff(alpha2)
k2 = trigsimp(r2.cross(dr2).magnitude()/k2**3)
k2
"""
Explanation: Curvature
End of explanation
"""
n.diff(alpha1)
"""
Explanation: Derivative of base vectors
Let's find
$\frac { d\vec{n} } { d\alpha_1}$
$\frac { d\vec{v} } { d\alpha_1}$
$\frac { d\vec{n} } { d\alpha_2}$
$\frac { d\vec{v} } { d\alpha_2}$
End of explanation
"""
r1.diff(alpha1)
"""
Explanation: $ \frac { d\vec{n} } { d\alpha_1} = -\frac {1}{R} \vec{v} = -k \vec{v} $
End of explanation
"""
R_alpha=r+alpha3*n
R_alpha
R1=R_alpha.diff(alpha1)
R2=R_alpha.diff(alpha2)
R3=R_alpha.diff(alpha3)
trigsimp(R1)
R2
R3
"""
Explanation: $ \frac { d\vec{v} } { d\alpha_1} = \frac {1}{R} \vec{n} = k \vec{n} $
Derivative of vectors
$ \vec{u} = u_v \vec{v} + u_n\vec{n} $
$ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u_v\vec{v}) } { d\alpha_1} + \frac { d(u_n\vec{n}) } { d\alpha_1} =
\frac { du_n } { d\alpha_1} \vec{n} + u_n \frac { d\vec{n} } { d\alpha_1} + \frac { du_v } { d\alpha_1} \vec{v} + u_v \frac { d\vec{v} } { d\alpha_1} = \frac { du_n } { d\alpha_1} \vec{n} - u_n k \vec{v} + \frac { du_v } { d\alpha_1} \vec{v} + u_v k \vec{n}$
Then
$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du_v } { d\alpha_1} - u_n k \right) \vec{v} + \left( \frac { du_n } { d\alpha_1} + u_v k \right) \vec{n}$
$ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u_n\vec{n}) } { d\alpha_2} + \frac { d(u_v\vec{v}) } { d\alpha_2} =
\frac { du_n } { d\alpha_2} \vec{n} + u_n \frac { d\vec{n} } { d\alpha_2} + \frac { du_v } { d\alpha_2} \vec{v} + u_v \frac { d\vec{v} } { d\alpha_2} = \frac { du_n } { d\alpha_2} \vec{n} + \frac { du_v } { d\alpha_2} \vec{v} $
Base Vectors $\vec{R}_1, \vec{R}_2, \vec{R}_3$
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm
x = R * cos(a1)
z = R * sin(a1)
alpha1_x = lambdify([R, L, alpha1], x, "numpy")
alpha3_z = lambdify([R, L, alpha1], z, "numpy")
R_num = 1/0.8
L_num = 2
x1_start = 0
x1_end = L_num
x3_start = -0.05
x3_end = 0.05
plot_x1_elements = 100
dx1 = (x1_end - x1_start) / plot_x1_elements
X_init = []
Y_init = []
x2 = 0
x3 = 0
for i in range(plot_x1_elements + 1):
x1 = x1_start + i * dx1
x=alpha1_x(R_num, L_num, x1)
z=alpha3_z(R_num, L_num, x1)
X_init.append(x)
Y_init.append(z)
plt.plot(X_init, Y_init, "r", label="initial configuration")
geometry_title = "K={}".format(1/R_num)
plot_title = r"Panel shape $L={}, h={}$".format(x1_end - x1_start, x3_end - x3_start)
if (len(geometry_title) > 0):
    plot_title = r"Panel shape $L={}, h={}, {}$".format(x1_end - x1_start, x3_end - x3_start, geometry_title)
plt.title(plot_title)
plt.axes().set_aspect('equal', 'datalim')
plt.legend(loc='best')
plt.xlabel(r"$x_1$, m", fontsize=12)
plt.ylabel(r"$x_3$, m", fontsize=12)
plt.grid()
plt.show()
"""
Explanation: Draw
End of explanation
"""
eps=trigsimp(R1.dot(R2.cross(R3)))
R_1=simplify(trigsimp(R2.cross(R3)/eps))
R_2=simplify(trigsimp(R3.cross(R1)/eps))
R_3=simplify(trigsimp(R1.cross(R2)/eps))
eps
R_1
"""
Explanation: Base Vectors $\vec{R}^1, \vec{R}^2, \vec{R}^3$
End of explanation
"""
R_2
R_3
dx1da1=R1.dot(N.i)
dx1da2=R2.dot(N.i)
dx1da3=R3.dot(N.i)
dx2da1=R1.dot(N.j)
dx2da2=R2.dot(N.j)
dx2da3=R3.dot(N.j)
dx3da1=R1.dot(N.k)
dx3da2=R2.dot(N.k)
dx3da3=R3.dot(N.k)
A=Matrix([[dx1da1, dx1da2, dx1da3], [dx2da1, dx2da2, dx2da3], [dx3da1, dx3da2, dx3da3]])
simplify(A)
A_inv = trigsimp(A**-1)
simplify(trigsimp(A_inv))
trigsimp(A.det())
"""
Explanation: Jacobi matrix:
$ A = \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3} \
\end{array}
\right)$
$ \left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot \left(
\begin{array}{ccc}
\frac{\partial x_1}{\partial \alpha_1} & \frac{\partial x_1}{\partial \alpha_2} & \frac{\partial x_1}{\partial \alpha_3} \\
\frac{\partial x_2}{\partial \alpha_1} & \frac{\partial x_2}{\partial \alpha_2} & \frac{\partial x_2}{\partial \alpha_3} \\
\frac{\partial x_3}{\partial \alpha_1} & \frac{\partial x_3}{\partial \alpha_2} & \frac{\partial x_3}{\partial \alpha_3}
\end{array}
\right) = \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] \cdot A$
$ \left[
\begin{array}{ccc}
\vec{e}_1 & \vec{e}_2 & \vec{e}_3
\end{array}
\right] =\left[
\begin{array}{ccc}
\vec{R}_1 & \vec{R}_2 & \vec{R}_3
\end{array}
\right] \cdot A^{-1}$
End of explanation
"""
g11=R_1.dot(R_1)
g12=R_1.dot(R_2)
g13=R_1.dot(R_3)
g21=R_2.dot(R_1)
g22=R_2.dot(R_2)
g23=R_2.dot(R_3)
g31=R_3.dot(R_1)
g32=R_3.dot(R_2)
g33=R_3.dot(R_3)
G=Matrix([[g11, g12, g13],[g21, g22, g23], [g31, g32, g33]])
G=trigsimp(G)
G
"""
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
"""
g_11=R1.dot(R1)
g_12=R1.dot(R2)
g_13=R1.dot(R3)
g_21=R2.dot(R1)
g_22=R2.dot(R2)
g_23=R2.dot(R3)
g_31=R3.dot(R1)
g_32=R3.dot(R2)
g_33=R3.dot(R3)
G_con=Matrix([[g_11, g_12, g_13],[g_21, g_22, g_23], [g_31, g_32, g_33]])
G_con=trigsimp(G_con)
G_con
G_inv = G**-1
G_inv
"""
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
"""
dR1dalpha1 = trigsimp(R1.diff(alpha1))
dR1dalpha1
"""
Explanation: Derivatives of vectors
Derivative of base vectors
End of explanation
"""
dR1dalpha2 = trigsimp(R1.diff(alpha2))
dR1dalpha2
dR1dalpha3 = trigsimp(R1.diff(alpha3))
dR1dalpha3
"""
Explanation: $ \frac { d\vec{R_1} } { d\alpha_1} = -\frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} $
End of explanation
"""
dR2dalpha1 = trigsimp(R2.diff(alpha1))
dR2dalpha1
dR2dalpha2 = trigsimp(R2.diff(alpha2))
dR2dalpha2
dR2dalpha3 = trigsimp(R2.diff(alpha3))
dR2dalpha3
dR3dalpha1 = trigsimp(R3.diff(alpha1))
dR3dalpha1
"""
Explanation: $ \frac { d\vec{R_1} } { d\alpha_3} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
"""
dR3dalpha2 = trigsimp(R3.diff(alpha2))
dR3dalpha2
dR3dalpha3 = trigsimp(R3.diff(alpha3))
dR3dalpha3
"""
Explanation: $ \frac { d\vec{R_3} } { d\alpha_1} = \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} $
End of explanation
"""
u1=Function('u^1')
u2=Function('u^2')
u3=Function('u^3')
q=Function('q') # q(alpha3) = 1+alpha3/R
K = Symbol('K') # K = 1/R
u1_nabla1 = u1(alpha1, alpha2, alpha3).diff(alpha1) + u3(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla1 = u2(alpha1, alpha2, alpha3).diff(alpha1)
u3_nabla1 = u3(alpha1, alpha2, alpha3).diff(alpha1) - u1(alpha1, alpha2, alpha3) * K * q(alpha3)
u1_nabla2 = u1(alpha1, alpha2, alpha3).diff(alpha2)
u2_nabla2 = u2(alpha1, alpha2, alpha3).diff(alpha2)
u3_nabla2 = u3(alpha1, alpha2, alpha3).diff(alpha2)
u1_nabla3 = u1(alpha1, alpha2, alpha3).diff(alpha3) + u1(alpha1, alpha2, alpha3) * K / q(alpha3)
u2_nabla3 = u2(alpha1, alpha2, alpha3).diff(alpha3)
u3_nabla3 = u3(alpha1, alpha2, alpha3).diff(alpha3)
# $\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
grad_u = Matrix([[u1_nabla1, u2_nabla1, u3_nabla1],[u1_nabla2, u2_nabla2, u3_nabla2], [u1_nabla3, u2_nabla3, u3_nabla3]])
grad_u
G_s = Matrix([[q(alpha3)**2, 0, 0],[0, 1, 0], [0, 0, 1]])
grad_u_down=grad_u*G_s
expand(simplify(grad_u_down))
"""
Explanation: $ \frac { d\vec{R_3} } { d\alpha_3} = \vec{0} $
Derivative of vectors
$ \vec{u} = u^1 \vec{R_1} + u^2\vec{R_2} + u^3\vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_1} = \frac { d(u^1\vec{R_1}) } { d\alpha_1} + \frac { d(u^2\vec{R_2}) } { d\alpha_1}+ \frac { d(u^3\vec{R_3}) } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_1} + \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_1} = \frac { du^1 } { d\alpha_1} \vec{R_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \vec{R_3} + \frac { du^2 } { d\alpha_1} \vec{R_2}+ \frac { du^3 } { d\alpha_1} \vec{R_3} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1}$
Then
$ \frac { d\vec{u} } { d\alpha_1} = \left( \frac { du^1 } { d\alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_1} \vec{R_2} + \left( \frac { du^3 } { d\alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) \right) \vec{R_3}$
$ \frac { d\vec{u} } { d\alpha_2} = \frac { d(u^1\vec{R_1}) } { d\alpha_2} + \frac { d(u^2\vec{R_2}) } { d\alpha_2}+ \frac { d(u^3\vec{R_3}) } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_2} = \frac { du^1 } { d\alpha_2} \vec{R_1} + \frac { du^2 } { d\alpha_2} \vec{R_2} + \frac { du^3 } { d\alpha_2} \vec{R_3} $
$ \frac { d\vec{u} } { d\alpha_3} = \frac { d(u^1\vec{R_1}) } { d\alpha_3} + \frac { d(u^2\vec{R_2}) } { d\alpha_3}+ \frac { d(u^3\vec{R_3}) } { d\alpha_3} =
\frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac { d\vec{R_1} } { d\alpha_3} + \frac { du^2 } { d\alpha_3} \vec{R_2} + u^2 \frac { d\vec{R_2} } { d\alpha_3} + \frac { du^3 } { d\alpha_3} \vec{R_3} + u^3 \frac { d\vec{R_3} } { d\alpha_3} = \frac { du^1 } { d\alpha_3} \vec{R_1} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3} $
Then
$ \frac { d\vec{u} } { d\alpha_3} = \left( \frac { du^1 } { d\alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}} \right) \vec{R_1} + \frac { du^2 } { d\alpha_3} \vec{R_2}+ \frac { du^3 } { d\alpha_3} \vec{R_3}$
Gradient of vector
$\nabla_1 u^1 = \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_1 u^2 = \frac { \partial u^2 } { \partial \alpha_1} $
$\nabla_1 u^3 = \frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {1}{R} \left( 1+\frac{\alpha_3}{R} \right) $
$\nabla_2 u^1 = \frac { \partial u^1 } { \partial \alpha_2}$
$\nabla_2 u^2 = \frac { \partial u^2 } { \partial \alpha_2}$
$\nabla_2 u^3 = \frac { \partial u^3 } { \partial \alpha_2}$
$\nabla_3 u^1 = \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {1}{R} \frac {1}{1+\frac{\alpha_3}{R}}$
$\nabla_3 u^2 = \frac { \partial u^2 } { \partial \alpha_3} $
$\nabla_3 u^3 = \frac { \partial u^3 } { \partial \alpha_3}$
$ \nabla \vec{u} = \left(
\begin{array}{ccc}
\nabla_1 u^1 & \nabla_1 u^2 & \nabla_1 u^3 \\
\nabla_2 u^1 & \nabla_2 u^2 & \nabla_2 u^3 \\
\nabla_3 u^1 & \nabla_3 u^2 & \nabla_3 u^3
\end{array}
\right)$
End of explanation
"""
B = zeros(9, 12)
B[0,1] = (1+alpha3/R)**2
B[0,8] = (1+alpha3/R)/R
B[1,2] = (1+alpha3/R)**2
B[2,0] = (1+alpha3/R)/R
B[2,3] = (1+alpha3/R)**2
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[6,0] = -(1+alpha3/R)/R
B[7,10] = S(1)
B[8,11] = S(1)
B
B_con = zeros(9, 12)
B_con[0,1] = 1
B_con[0,8] = (1+alpha3/R)/R
B_con[1,2] = 1
B_con[2,0] = -1/(R*(1+alpha3/R))
B_con[2,3] = 1
B_con[3,5] = S(1)
B_con[4,6] = S(1)
B_con[5,7] = S(1)
B_con[6,0] = -1/(R*(1+alpha3/R))
B_con[6,9] = S(1)
B_con[7,10] = S(1)
B_con[8,11] = S(1)
B_con
q=(1+alpha3/R)
g_=(q*q)
koef=g_.diff(alpha3)
u_down_to_up=eye(12)
u_down_to_up[0,0]=g_
u_down_to_up[1,1]=g_
u_down_to_up[2,2]=g_
u_down_to_up[3,3]=g_
u_down_to_up[3,0]=koef
u_down_to_up
simplify(B_con*u_down_to_up)
"""
Explanation: $
\left(
\begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3
\end{array}
\right)
=
\left(
\begin{array}{c}
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_1} + u^3 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_2} \\
\left( 1+\frac{\alpha_3}{R} \right)^2 \frac { \partial u^1 } { \partial \alpha_3} + u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\
\frac { \partial u^2 } { \partial \alpha_1} \\
\frac { \partial u^2 } { \partial \alpha_2} \\
\frac { \partial u^2 } { \partial \alpha_3} \\
\frac { \partial u^3 } { \partial \alpha_1} - u^1 \frac {\left( 1+\frac{\alpha_3}{R} \right)}{R} \\
\frac { \partial u^3 } { \partial \alpha_2} \\
\frac { \partial u^3 } { \partial \alpha_3}
\end{array}
\right)
$
$
\left(
\begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u^1 \\
\frac { \partial u^1 } { \partial \alpha_1} \\
\frac { \partial u^1 } { \partial \alpha_2} \\
\frac { \partial u^1 } { \partial \alpha_3} \\
u^2 \\
\frac { \partial u^2 } { \partial \alpha_1} \\
\frac { \partial u^2 } { \partial \alpha_2} \\
\frac { \partial u^2 } { \partial \alpha_3} \\
u^3 \\
\frac { \partial u^3 } { \partial \alpha_1} \\
\frac { \partial u^3 } { \partial \alpha_2} \\
\frac { \partial u^3 } { \partial \alpha_3} \\
\end{array}
\right)
$
End of explanation
"""
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
N = 3
grad_u = zeros(N,N)
for i in range(N):
for j in range(N):
grad_u[i,j] = Symbol(r'\nabla_{} u_{}'.format(i+1,j+1))
grad_u
metric_tensor_marix_symbol = MatrixSymbol('g', N, N)
metric_tensor_marix = Matrix(metric_tensor_marix_symbol)
metric_tensor_marix
e_nl_matrix = S(1)/S(2)*(grad_u*metric_tensor_marix*grad_u.T)
e_nl_matrix
e_nl=zeros(6, 1)
e_nl[0,0] = e_nl_matrix[0,0]
e_nl[1,0] = e_nl_matrix[1,1]
e_nl[2,0] = e_nl_matrix[2,2]
e_nl[3,0] = 2*e_nl_matrix[0,1]
e_nl[4,0] = 2*e_nl_matrix[0,2]
e_nl[5,0] = 2*e_nl_matrix[1,2]
e_nl
grad_u_g_symbol = MatrixSymbol('a', N, N)
grad_u_g = Matrix(grad_u_g_symbol)
grad_u_g
a_nl_matrix = grad_u_g*grad_u.T
a_nl=zeros(6, 1)
a_nl[0,0] = a_nl_matrix[0,0]
a_nl[1,0] = a_nl_matrix[1,1]
a_nl[2,0] = a_nl_matrix[2,2]
a_nl[3,0] = 2*a_nl_matrix[0,1]
a_nl[4,0] = 2*a_nl_matrix[0,2]
a_nl[5,0] = 2*a_nl_matrix[1,2]
a_nl
N = 3
grad_u_v = zeros(N*N, 1)
print(grad_u_v.shape)
for i in range(N):
for j in range(N):
index = i*N+j
# print("i = {}, j = {}, index = {}".format(i,j,index))
grad_u_v[index,0] = grad_u[j,i]
grad_u_v
E_NL = zeros(6,9)
E_NL[0,0] = grad_u_g[0,0]
E_NL[0,3] = grad_u_g[0,1]
E_NL[0,6] = grad_u_g[0,2]
E_NL[1,1] = grad_u_g[1,0]
E_NL[1,4] = grad_u_g[1,1]
E_NL[1,7] = grad_u_g[1,2]
E_NL[2,2] = grad_u_g[2,0]
E_NL[2,5] = grad_u_g[2,1]
E_NL[2,8] = grad_u_g[2,2]
E_NL[3,1] = 2*grad_u_g[0,0]
E_NL[3,4] = 2*grad_u_g[0,1]
E_NL[3,7] = 2*grad_u_g[0,2]
E_NL[4,2] = 2*grad_u_g[0,0]
E_NL[4,5] = 2*grad_u_g[0,1]
E_NL[4,8] = 2*grad_u_g[0,2]
E_NL[5,2] = 2*grad_u_g[1,0]
E_NL[5,5] = 2*grad_u_g[1,1]
E_NL[5,8] = 2*grad_u_g[1,2]
E_NL
E_NL*grad_u_v
a_values=S(1)/S(2)*(grad_u*metric_tensor_marix)
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,2] = 2*a_values[0,0]
E_NL[4,5] = 2*a_values[0,1]
E_NL[4,8] = 2*a_values[0,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
E_NL
Q=E*B
Q=simplify(Q)
Q
"""
Explanation: Deformation tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \\
\varepsilon_{22} \\
\varepsilon_{33} \\
2\varepsilon_{12} \\
2\varepsilon_{13} \\
2\varepsilon_{23} \\
\end{array}
\right)
=
E \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3 \\
\end{array}
\right)$
End of explanation
"""
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
Q=E*B*T
Q=simplify(Q)
Q
"""
Explanation: Tymoshenko theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u^1 \\
\frac { \partial u^1 } { \partial \alpha_1} \\
\frac { \partial u^1 } { \partial \alpha_2} \\
\frac { \partial u^1 } { \partial \alpha_3} \\
u^2 \\
\frac { \partial u^2 } { \partial \alpha_1} \\
\frac { \partial u^2 } { \partial \alpha_2} \\
\frac { \partial u^2 } { \partial \alpha_3} \\
u^3 \\
\frac { \partial u^3 } { \partial \alpha_1} \\
\frac { \partial u^3 } { \partial \alpha_2} \\
\frac { \partial u^3 } { \partial \alpha_3} \\
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \\
\frac { \partial u } { \partial \alpha_1} \\
\gamma \\
\frac { \partial \gamma } { \partial \alpha_1} \\
w \\
\frac { \partial w } { \partial \alpha_1} \\
\end{array}
\right) $
End of explanation
"""
from sympy import MutableDenseNDimArray
C_x = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x[i,j,k,l] = el
C_x
"""
Explanation: Elasticity tensor (stiffness tensor)
General form
End of explanation
"""
C_x_symmetry = MutableDenseNDimArray.zeros(3, 3, 3, 3)
def getCIndecies(index):
if (index == 0):
return 0, 0
elif (index == 1):
return 1, 1
elif (index == 2):
return 2, 2
elif (index == 3):
return 0, 1
elif (index == 4):
return 0, 2
elif (index == 5):
return 1, 2
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
elem_index = 'C^{{{}{}{}{}}}'.format(i+1, j+1, k+1, l+1)
el = Symbol(elem_index)
C_x_symmetry[i,j,k,l] = el
C_x_symmetry[i,j,l,k] = el
C_x_symmetry[j,i,k,l] = el
C_x_symmetry[j,i,l,k] = el
C_x_symmetry[k,l,i,j] = el
C_x_symmetry[k,l,j,i] = el
C_x_symmetry[l,k,i,j] = el
C_x_symmetry[l,k,j,i] = el
C_x_symmetry
"""
Explanation: Include symmetry
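The loop below encodes the standard minor and major symmetries of the elasticity tensor, which reduce the 81 components to 21 independent ones:
$C^{ijkl} = C^{jikl} = C^{ijlk} = C^{klij}$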
End of explanation
"""
C_isotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_isotropic_matrix = zeros(6)
mu = Symbol('mu')
la = Symbol('lambda')
for s in range(6):
for t in range(s, 6):
if (s < 3 and t < 3):
if(t != s):
C_isotropic_matrix[s,t] = la
C_isotropic_matrix[t,s] = la
else:
C_isotropic_matrix[s,t] = 2*mu+la
C_isotropic_matrix[t,s] = 2*mu+la
elif (s == t):
C_isotropic_matrix[s,t] = mu
C_isotropic_matrix[t,s] = mu
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_isotropic_matrix[s, t]
C_isotropic[i,j,k,l] = el
C_isotropic[i,j,l,k] = el
C_isotropic[j,i,k,l] = el
C_isotropic[j,i,l,k] = el
C_isotropic[k,l,i,j] = el
C_isotropic[k,l,j,i] = el
C_isotropic[l,k,i,j] = el
C_isotropic[l,k,j,i] = el
C_isotropic
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_isotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_isotropic, A_inv, i, j, k, l)
C_isotropic_alpha[i,j,k,l] = c
C_isotropic_alpha[0,0,0,0]
C_isotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha[s,t] = C_isotropic_alpha[i,j,k,l]
C_isotropic_matrix_alpha
"""
Explanation: Isotropic material
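For an isotropic material the stiffness tensor is determined by the two Lamé parameters $\lambda$ and $\mu$; in Cartesian coordinates
$C^{ijkl} = \lambda \delta^{ij} \delta^{kl} + \mu \left( \delta^{ik} \delta^{jl} + \delta^{il} \delta^{jk} \right)$
which yields the $2\mu+\lambda$ diagonal entries, $\lambda$ off-diagonal entries and $\mu$ shear entries built in the matrix below.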
End of explanation
"""
C_orthotropic = MutableDenseNDimArray.zeros(3, 3, 3, 3)
C_orthotropic_matrix = zeros(6)
for s in range(6):
for t in range(s, 6):
elem_index = 'C^{{{}{}}}'.format(s+1, t+1)
el = Symbol(elem_index)
if ((s < 3 and t < 3) or t == s):
C_orthotropic_matrix[s,t] = el
C_orthotropic_matrix[t,s] = el
for s in range(6):
for t in range(s, 6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
el = C_orthotropic_matrix[s, t]
C_orthotropic[i,j,k,l] = el
C_orthotropic[i,j,l,k] = el
C_orthotropic[j,i,k,l] = el
C_orthotropic[j,i,l,k] = el
C_orthotropic[k,l,i,j] = el
C_orthotropic[k,l,j,i] = el
C_orthotropic[l,k,i,j] = el
C_orthotropic[l,k,j,i] = el
C_orthotropic
"""
Explanation: Orthotropic material
End of explanation
"""
def getCalpha(C, A, q, p, s, t):
res = S(0)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
res += C[i,j,k,l]*A[q,i]*A[p,j]*A[s,k]*A[t,l]
return simplify(trigsimp(res))
C_orthotropic_alpha = MutableDenseNDimArray.zeros(3, 3, 3, 3)
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
c = getCalpha(C_orthotropic, A_inv, i, j, k, l)
C_orthotropic_alpha[i,j,k,l] = c
C_orthotropic_alpha[0,0,0,0]
C_orthotropic_matrix_alpha = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha[s,t] = C_orthotropic_alpha[i,j,k,l]
C_orthotropic_matrix_alpha
"""
Explanation: Orthotropic material in shell coordinates
End of explanation
"""
P=eye(12,12)
P[0,0]=1/(1+alpha3/R)
P[1,1]=1/(1+alpha3/R)
P[2,2]=1/(1+alpha3/R)
P[3,0]=-1/(R*(1+alpha3/R)**2)
P[3,3]=1/(1+alpha3/R)
P
Def=simplify(E*B*P)
Def
rows, cols = Def.shape
D_p=zeros(rows, cols)
q = 1+alpha3/R
for i in range(rows):
ratio = 1
if (i==0):
ratio = q*q
elif (i==3 or i == 4):
ratio = q
for j in range(cols):
D_p[i,j] = Def[i,j] / ratio
D_p = simplify(D_p)
D_p
"""
Explanation: Physical coordinates
$u^1=\frac{u_{[1]}}{1+\frac{\alpha_3}{R}}$
$\frac{\partial u^1} {\partial \alpha_3}=\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} + u_{[1]} \frac{\partial} {\partial \alpha_3} \left( \frac{1}{1+\frac{\alpha_3}{R}} \right) =\frac{1}{1+\frac{\alpha_3}{R}} \frac{\partial u_{[1]}} {\partial \alpha_3} - u_{[1]} \frac{1}{R \left( 1+\frac{\alpha_3}{R} \right)^2} $
End of explanation
"""
C_isotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_isotropic_alpha_p[i,j,k,l] = simplify(C_isotropic_alpha[i,j,k,l]*fact)
C_isotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_isotropic_matrix_alpha_p[s,t] = C_isotropic_alpha_p[i,j,k,l]
C_isotropic_matrix_alpha_p
C_orthotropic_alpha_p = MutableDenseNDimArray.zeros(3, 3, 3, 3)
q=1+alpha3/R
for i in range(3):
for j in range(3):
for k in range(3):
for l in range(3):
fact = 1
if (i==0):
fact = fact*q
if (j==0):
fact = fact*q
if (k==0):
fact = fact*q
if (l==0):
fact = fact*q
C_orthotropic_alpha_p[i,j,k,l] = simplify(C_orthotropic_alpha[i,j,k,l]*fact)
C_orthotropic_matrix_alpha_p = zeros(6)
for s in range(6):
for t in range(6):
i,j = getCIndecies(s)
k,l = getCIndecies(t)
C_orthotropic_matrix_alpha_p[s,t] = C_orthotropic_alpha_p[i,j,k,l]
C_orthotropic_matrix_alpha_p
"""
Explanation: Stiffness tensor
End of explanation
"""
D_p_T = D_p*T
K = Symbol('K')
D_p_T = D_p_T.subs(R, 1/K)
simplify(D_p_T)
"""
Explanation: Tymoshenko
End of explanation
"""
theta, h1, h2=symbols('theta h_1 h_2')
square_geom=theta/2*(R+h2)**2-theta/2*(R+h1)**2
expand(simplify(square_geom))
"""
Explanation: Area of the segment
$A=\frac {\theta}{2} \left( R + h_2 \right)^2-\frac {\theta}{2} \left( R + h_1 \right)^2$
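As a quick numerical sanity check (a standalone sketch — the values for $R$, $\theta$, $h_1$, $h_2$ below are arbitrary choices, not taken from the notebook), the closed form can be compared against a midpoint-rule evaluation of the integral used in the next cell:

```python
# Arbitrary sample values (assumed for illustration only).
R, theta, h1, h2 = 2.0, 0.7, -0.1, 0.3

# Closed form: A = theta/2 * ((R + h2)^2 - (R + h1)^2)
A_closed = theta / 2 * ((R + h2) ** 2 - (R + h1) ** 2)

# Midpoint rule for A = integral of (1 + a3/R) over a1 in [0, theta*R], a3 in [h1, h2]
n = 2000
da3 = (h2 - h1) / n
L = theta * R
A_num = sum((1 + (h1 + (i + 0.5) * da3) / R) * da3 * L for i in range(n))

print(abs(A_closed - A_num) < 1e-9)
```

The midpoint rule is exact for a linear integrand, so the two values agree up to floating-point error.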
End of explanation
"""
square_int=integrate(integrate(1+alpha3/R, (alpha3, h1, h2)), (alpha1, 0, theta*R))
expand(simplify(square_int))
"""
Explanation: ${\displaystyle A=\int_{0}^{L}\int_{h_1}^{h_2} \left( 1+\frac{\alpha_3}{R} \right) d \alpha_1 d \alpha_3}, L=R \theta$
End of explanation
"""
S = simplify(D_p.T*C_isotropic_matrix_alpha_p*D_p*(1+alpha3/R)**2)
S
"""
Explanation: Virtual work
Isotropic material physical coordinates
End of explanation
"""
W = simplify(D_p_T.T*C_isotropic_matrix_alpha_p*D_p_T*(1+K*alpha3))
W
h=Symbol('h')
E=Symbol('E')
v=Symbol('nu')
W_a3 = integrate(W, (alpha3, -h/2, h/2))
W_a3 = simplify(W_a3)
W_a3.subs(la, E*v/((1+v)*(1-2*v))).subs(mu, E/((1+v)*2))
A_M = zeros(3)
A_M[0,0] = E*h/(1-v**2)
A_M[1,1] = 5*E*h/(12*(1+v))
A_M[2,2] = E*h**3/(12*(1-v**2))
Q_M = zeros(3,6)
Q_M[0,1] = 1
Q_M[0,4] = K
Q_M[1,0] = -K
Q_M[1,2] = 1
Q_M[1,5] = 1
Q_M[2,3] = 1
W_M=Q_M.T*A_M*Q_M
W_M
"""
Explanation: Isotropic material physical coordinates - Tymoshenko
End of explanation
"""
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[1,8]=1
M=simplify(rho*P.T*B_h.T*G_con*B_h*P)
M
M_p = rho*B_h.T*B_h*(1+alpha3/R)**2
M_p
mass_matrix_func = lambdify((rho, R, alpha3), M_p, "numpy")
mass_matrix_func(100,10,20)
stiffness_matrix_func = lambdify([R, mu, la, alpha3], S, "numpy")
stiffness_matrix_func(100, 200, 300, 400)
import fem.geometry as g
import fem.model as m
import fem.material as mat
import fem.solver as s
import fem.mesh as me
import plot
def generate_layers(thickness, layers_count, material):
layer_top = thickness / 2
layer_thickness = thickness / layers_count
layers = set()
for i in range(layers_count):
layer = m.Layer(layer_top - layer_thickness, layer_top, material, i)
layers.add(layer)
layer_top -= layer_thickness
return layers
def solve(width, curvature, thickness):
layers_count = 1
layers = generate_layers(thickness, layers_count, mat.IsotropicMaterial.steel())
mesh = me.Mesh.generate(width, layers, N, M, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
# geometry = g.CorrugatedCylindricalPlate(width, curvature, corrugation_amplitude, corrugation_frequency)
geometry = g.CylindricalPlate(width, curvature)
# geometry = g.Geometry()
model = m.Model(geometry, layers, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
return s.solve(model, mesh, stiffness_matrix, mass_matrix)
def stiffness_matrix(material, geometry, x1, x2, x3):
return stiffness_matrix_func(1/geometry.curvature, material.mu(), material.lam(), x3)
def mass_matrix(material, geometry, x1, x2, x3):
return mass_matrix_func(material.rho, 1/geometry.curvature, x3)
# r=2
# width = r*2*3.14
# curvature = 1/r
width = 2
curvature = 0.8
thickness = 0.05
N = 100
M = 10
results = solve(width, curvature, thickness)
results_index = 0
plot.plot_init_and_deformed_geometry(results[results_index], 0, width, -thickness / 2, thickness / 2, 0)
#plot.plot_init_geometry(results[results_index].geometry, 0, width, -thickness / 2, thickness / 2, 0)
# plot.plot_strain(results[results_index], 0, width, -thickness / 2, thickness / 2, 0)
to_print = 20
if (len(results) < to_print):
to_print = len(results)
for i in range(to_print):
print(results[i].freq)
"""
Explanation: Mass matrix in physical coordinates
End of explanation
"""
|
cdawei/digbeta | dchen/music/MLC_baseline.ipynb | gpl-3.0 | %matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys, time
import pickle as pkl
import numpy as np
import pandas as pd
from sklearn.base import BaseEstimator
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report, make_scorer, f1_score, label_ranking_loss
import matplotlib.pyplot as plt
import seaborn as sns
sys.path.append('src')
from evaluate import avgPrecisionK, evaluatePrecision, evaluateF1, evaluateRankingLoss, f1_score_nowarn
from datasets import create_dataset, dataset_names, nLabels_dict
dataset_names
data_ix = 3
dataset_name = dataset_names[data_ix]
nLabels = nLabels_dict[dataset_name]
print(dataset_name, nLabels)
data_dir = 'data'
SEED = 918273645
fmodel_prec = os.path.join(data_dir, 'br-' + dataset_name + '-prec.pkl')
fmodel_f1 = os.path.join(data_dir, 'br-' + dataset_name + '-f1.pkl')
fmodel_base = os.path.join(data_dir, 'br-' + dataset_name + '-base.pkl')
fperf_prec = os.path.join(data_dir, 'perf-lr-prec.pkl')
fperf_f1 = os.path.join(data_dir, 'perf-lr-f1.pkl')
fperf_base = os.path.join(data_dir, 'perf-lr-base.pkl')
"""
Explanation: Multi-label classification -- binary relevance baseline
End of explanation
"""
X_train, Y_train = create_dataset(dataset_name=dataset_name, train_data=True, shuffle=True, random_state=SEED)
X_test, Y_test = create_dataset(dataset_name=dataset_name, train_data=False)
"""
Explanation: Load data.
End of explanation
"""
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_test -= X_train_mean
X_test /= X_train_std
"""
Explanation: Feature normalisation.
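The standardisation above can be sketched on synthetic data (the shapes and values below are assumptions for illustration): the mean and standard deviation are computed once on the training set and reused for the test set, with a small epsilon guarding against zero variance.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(5.0, 2.0, size=(100, 3))  # synthetic "training" features
X_test = rng.normal(5.0, 2.0, size=(20, 3))    # synthetic "test" features

X_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_std = np.std(X_train, axis=0).reshape((1, -1)) + 1e-6  # avoid division by zero

X_train = (X_train - X_mean) / X_std
X_test = (X_test - X_mean) / X_std  # same training statistics applied to test data

print(np.allclose(X_train.mean(axis=0), 0.0, atol=1e-9))
```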
End of explanation
"""
#probs = np.mean(Y_train, axis=0)
#probs
#preds = np.tile(probs, (X_test.shape[0], 1))
#evaluatePrecision(Y_test, preds, verbose=1)
#evaluateRankingLoss(Y_test, preds, n_jobs=4)
"""
Explanation: Naive baseline
Use the estimated probability for each label as the predicted score for any example.
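A minimal sketch of this baseline on a tiny synthetic label matrix (the numbers are made up for illustration): each label's score is just its empirical frequency in the training labels, repeated for every test example.

```python
import numpy as np

Y_train = np.array([[1, 0, 1],
                    [1, 0, 0],
                    [0, 0, 1],
                    [1, 1, 0]])
probs = np.mean(Y_train, axis=0)    # per-label empirical frequency
preds = np.tile(probs, (5, 1))      # the same score row for each of 5 test examples
print(probs)
```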
End of explanation
"""
class BinaryRelevance(BaseEstimator):
"""
Independent logistic regression based on OneVsRestClassifier wrapper.
"""
def __init__(self, C=1, n_jobs=-1):
assert C > 0
self.C = C
self.n_jobs = n_jobs
self.trained = False
def fit(self, X_train, Y_train):
assert X_train.shape[0] == Y_train.shape[0]
# don't make two changes at the same time
#self.estimator = OneVsRestClassifier(LogisticRegression(class_weight='balanced', C=self.C))
self.estimator = OneVsRestClassifier(LogisticRegression(C=self.C), n_jobs=self.n_jobs)
self.estimator.fit(X_train, Y_train)
self.trained = True
def decision_function(self, X_test):
assert self.trained is True
return self.estimator.decision_function(X_test)
def predict(self, X_test, binarise=False):
preds = self.decision_function(X_test)
return preds >= 0.5 if binarise is True else preds
def print_cv_results(clf):
if hasattr(clf, 'best_params_'):
print("\nBest parameters set found on development set:")
print(clf.best_params_)
if hasattr(clf, 'cv_results_'):
for mean, std, params in zip(clf.cv_results_['mean_test_score'], \
clf.cv_results_['std_test_score'], \
clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
def dump_results(predictor, X_train, Y_train, X_test, Y_test, fname):
"""
Compute and save performance results
"""
preds_train = predictor.decision_function(X_train)
preds_test = predictor.decision_function(X_test)
print('Training set:')
perf_dict_train = evaluatePrecision(Y_train, preds_train)
print()
print('Test set:')
perf_dict_test = evaluatePrecision(Y_test, preds_test)
print()
print('Training set:')
perf_dict_train.update(evaluateRankingLoss(Y_train, preds_train))
print(label_ranking_loss(Y_train, preds_train))
print()
print('Test set:')
perf_dict_test.update(evaluateRankingLoss(Y_test, preds_test))
print(label_ranking_loss(Y_test, preds_test))
F1_train = f1_score_nowarn(Y_train, preds_train >= 0.5, average='samples')
F1_test = f1_score_nowarn(Y_test, preds_test >= 0.5, average='samples')
print('\nF1 Train: %.4f, %f' % (F1_train, f1_score(Y_train, preds_train >= 0.5, average='samples')))
print('\nF1 Test : %.4f %f' % (F1_test, f1_score(Y_test, preds_test >= 0.5, average='samples')))
perf_dict_train.update({'F1': (F1_train,)})
perf_dict_test.update({'F1': (F1_test,)})
perf_dict = {'Train': perf_dict_train, 'Test': perf_dict_test}
if os.path.exists(fname):
_dict = pkl.load(open(fname, 'rb'))
if dataset_name not in _dict:
_dict[dataset_name] = perf_dict
else:
_dict = {dataset_name: perf_dict}
pkl.dump(_dict, open(fname, 'wb'))
print()
print(pkl.load(open(fname, 'rb')))
clf = BinaryRelevance(n_jobs=3)
clf.fit(X_train, Y_train)
Y_pred = clf.decision_function(X_test) >= 0
f1_score_nowarn(Y_test, Y_pred, average='samples')
f1_score_nowarn(Y_test, Y_pred, average='macro')
pkl.dump(clf, open(fmodel_base, 'wb'))
def avgF1(Y_true, Y_pred):
F1 = f1_score_nowarn(Y_true, Y_pred >= 0, average='samples')
print('\nF1: %g, #examples: %g' % (F1, Y_true.shape[0]))
return F1
C_set = [1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 0.01, 0.03, 0.1, 0.3,
1, 3, 10, 30, 100, 300, 1e3, 3e3, 1e4, 3e4, 1e5, 3e5, 1e6, 3e6]
parameters = [{'C': C_set}]
scorer = {'F1': make_scorer(avgF1)}
"""
Explanation: Binary relevance baseline
Train a logistic regression model for each label.
Note that OneVsRestClassifier can act either as a multiclass classifier or as a multilabel classifier; see this binary relevance example on the yeast dataset.
NOTE: To do cross validation (i.e. fit a GridSearchCV), one has to put OneVsRestClassifier into a class wrapper, as the constructor of OneVsRestClassifier doesn't have a parameter C.
End of explanation
"""
if os.path.exists(fmodel_f1):
clf = pkl.load(open(fmodel_f1, 'rb'))
else:
clf = GridSearchCV(BinaryRelevance(), parameters, cv=5, scoring=scorer, verbose=2, n_jobs=10, refit='F1')
clf.fit(X_train, Y_train)
pkl.dump(clf, open(fmodel_f1, 'wb'))
f1_score_nowarn(Y_test, clf.decision_function(X_test) >= 0, average='samples')
clf.best_params_
print_cv_results(clf)
dump_results(clf, X_train, Y_train, X_test, Y_test, fperf_prec)
"""
Explanation: Cross validation according to F1.
End of explanation
"""
plot_loss_of_clf(clf, X_train, Y_train, X_test, Y_test)
from evaluate import calcLoss
from matplotlib.ticker import NullFormatter
def plot_loss_of_clf(clf, X_train, Y_train, X_test, Y_test):
preds_train = clf.decision_function(X_train)
tploss_train = calcLoss(Y_train, preds_train, 'TopPush', njobs=4)
pak_train = calcLoss(Y_train, preds_train, 'Precision@K', njobs=4)
preds_test = clf.decision_function(X_test)
tploss_test = calcLoss(Y_test, preds_test, 'TopPush', njobs=4)
pak_test = calcLoss(Y_test, preds_test, 'Precision@K', njobs=4)
#plot_loss(tploss_train, pak_train, 'Training set (' + dataset_name + ')')
plot_loss(tploss_test, pak_test, 'Test set (' + dataset_name + ')')
def plot_loss(loss, pak, title):
# the data
x = loss
y = 1 - pak
print('away from diagonal portion:', np.mean(loss != 1-pak))
nullfmt = NullFormatter() # no labels
# definitions for the axes
left, width = 0.1, 0.65
bottom, height = 0.1, 0.65
bottom_h = left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.2]
rect_histy = [left_h, bottom, 0.2, height]
# start with a rectangular Figure
plt.figure(1, figsize=(8, 8))
axScatter = plt.axes(rect_scatter)
axHistx = plt.axes(rect_histx)
axHisty = plt.axes(rect_histy)
# no labels
axHistx.xaxis.set_major_formatter(nullfmt)
axHisty.yaxis.set_major_formatter(nullfmt)
# the scatter plot:
axScatter.scatter(x, y, color='b', alpha=0.5)
axScatter.plot([0, 1], [0, 1], ls='--', color='g')
axScatter.set_xlabel('Top push loss', fontdict={'fontsize': 12})
axScatter.set_ylabel('1 - precision@K', fontdict={'fontsize': 12})
# now determine nice limits by hand:
#binwidth = 0.25
#xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))])
#lim = (int(xymax/binwidth) + 1) * binwidth
#axScatter.set_xlim((-lim, lim))
#axScatter.set_ylim((-lim, lim))
#bins = np.arange(-lim, lim + binwidth, binwidth)
axHistx.hist(x, bins=10, color='g', alpha=0.3)
axHistx.set_yscale('log')
axHisty.hist(y, bins=10, color='g', alpha=0.3, orientation='horizontal')
axHisty.set_xscale('log')
#axHistx.set_xlim(axScatter.get_xlim())
#axHisty.set_ylim(axScatter.get_ylim())
axHistx.set_title(title, fontdict={'fontsize': 15}, loc='center')
%%script false
# NOTE: binary predictions (by predict()) are required for this method to work
if os.path.exists(fmodel_f1):
clf = pkl.load(open(fmodel_f1, 'rb'))
else:
scorer = make_scorer(f1_score_nowarn, average='samples')
clf = GridSearchCV(BinaryRelevance(), parameters, cv=5, scoring=scorer, verbose=2, n_jobs=6)
clf.fit(X_train, Y_train)
pkl.dump(clf, open(fmodel_f1, 'wb'))
print_cv_results(clf)
#dump_results(clf, X_train, Y_train, X_test, Y_test, fperf_f1)
"""
Explanation: Cross validation according to F1.
End of explanation
"""
if os.path.exists(fmodel_base):
clf = pkl.load(open(fmodel_base, 'rb'))
else:
clf = OneVsRestClassifier(LogisticRegression(verbose=1))
clf.fit(X_train, Y_train)
pkl.dump(clf, open(fmodel_base, 'wb'))
dump_results(clf, X_train, Y_train, X_test, Y_test, fperf_base)
"""
Explanation: Plain logistic regression.
End of explanation
"""
%%script false
allPreds_train = [ ]
allPreds_test = [ ]
allTruths_train = [ ]
allTruths_test = [ ]
coefMat = [ ]
labelIndices = [ ]
ranges = range(-6, 7)
parameters = [{'C': sorted([10**(e) for e in ranges] + [3 * 10**(e) for e in ranges])}]
scoring = 'average_precision' # 'accuracy' #'precision_macro'
for label_ix in range(nLabels):
print('Training for Label %d' % (label_ix+1))
y_train = Y_train[:, label_ix]
y_test = Y_test [:, label_ix]
allTruths_train.append(y_train)
allTruths_test.append(y_test)
assert( (not np.all(y_train == 0)) and (not np.all(y_train == 1)) )
# searching for a baseline in (Lin et al.) with:
# test F1 on bibtex 0.372, 26.8
# test F1 on bookmarks 0.307, 0.219
# test F1 on delicious 0.265, 0.102
# test F1 on bibtex: 0.3730, 0.277
# test F1 on bookmarks: 0.2912, 0.2072
# test F1 on delicious: 0.1899, 0.1268
#clf = LogisticRegression(C=100)
# test F1 on bookmarks: 0.2928, 0.2109
#clf = LogisticRegression(C=60)
# test F1 on bibtex: 0.4282
#clf = GridSearchCV(LogisticRegression(class_weight='balanced'), parameters, cv=5, scoring=scoring)
# test F1 on bibtex: < 0.3
# test F1 on bookmarks: 0.2981, 0.2281
# test F1 on delicious: 0.1756, 0.0861
#clf = LogisticRegression()
# test F1 on bibtex: 0.4342
#clf = LogisticRegression(class_weight='balanced')
# test F1 on bibtex: 0.3018
#clf = GridSearchCV(LogisticRegression(), parameters, cv=5, scoring=scoring)
# test F1 on bibtex: 0.3139
#clf = GridSearchCV(LogisticRegression(), parameters, scoring=scoring)
# test F1 on bibtex: 0.4252
#clf = GridSearchCV(LogisticRegression(class_weight='balanced'), parameters, scoring=scoring)
# test F1 on bibtex: 0.3598
#clf = LogisticRegression(C=10)
# test F1 on bibtex: 0.3670
#clf = LogisticRegression(C=30)
estimator = LogisticRegression(class_weight='balanced')#, solver='lbfgs')
clf = GridSearchCV(estimator, parameters, cv=5, scoring=scoring, n_jobs=4)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print(clf.best_params_)
print()
allPreds_train.append(clf.decision_function(X_train))
allPreds_test.append(clf.decision_function(X_test))
allTruths_train = np.array(allTruths_train).T
allTruths_test = np.array(allTruths_test).T
allPreds_train = np.array(allPreds_train).T
allPreds_test = np.array(allPreds_test).T
print(allPreds_test.shape)
print(allTruths_test.shape)
"""
Explanation: Cross validation for classifier of each label.
End of explanation
"""
#coefMat = np.array(coefMat).T
#coefMat.shape
#sns.heatmap(coefMat[:, :30])
#precisions_train = [avgPrecision(allTruths_train, allPreds_train, k) for k in range(1, nLabels+1)]
#precisions_test = [avgPrecision(allTruths_test, allPreds_test, k) for k in range(1, nLabels+1)]
#precisionK_train = avgPrecisionK(allTruths_train, allPreds_train)
#precisionK_test = avgPrecisionK(allTruths_test, allPreds_test)
%%script false
plt.figure(figsize=[10,5])
plt.plot(precisions_train, ls='--', c='r', label='Train')
plt.plot(precisions_test, ls='-', c='g', label='Test')
plt.plot([precisionK_train for k in range(nLabels)], ls='-', c='r', label='Train, Precision@K')
plt.plot([precisionK_test for k in range(nLabels)], ls='-', c='g', label='Test, Precision@K')
plt.xticks(np.arange(nLabels), np.arange(1,nLabels+1))
plt.xlabel('k')
plt.ylabel('Precision@k')
plt.legend(loc='best')
plt.title('Independent Logistic Regression on ' + dataset_name + ' dataset')
plt.savefig(dataset_name + '_lr.svg')
"""
Explanation: Result analysis
End of explanation
"""
|
catalystcomputing/DSIoT-Python-sessions | Session201811/code/11 Supervised Machine Learning - scikit learn.ipynb | apache-2.0 | import numpy as np
import matplotlib as mp
from sklearn import datasets
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
# Load the sample data set from the datasets module
dataset = datasets.load_iris()
# Display the data in the test dataset
dataset
# Species of Iris in the dataset
dataset['target_names']
"""
Explanation: Supervised Machine Learning - scikit learn
The example uses the Iris Dataset. (The Iris dataset section is adapted from an example from Analytics Vidhya.)
https://en.wikipedia.org/wiki/Iris_flower_data_set
End of explanation
"""
# Names of the type of information recorded about an Iris - called features
dataset['feature_names']
# First 10 sets of Iris data
dataset['data'][:10]
# The classification of each of the first 10 sets of Iris data - the target
dataset['target'][:10]
"""
Explanation: Iris Setosa
Iris Versicolor
Iris Virginica
End of explanation
"""
# Now we create our model
model = LogisticRegression()
# We train it by passing in the test data and the actual results
model.fit(dataset.data, dataset.target)
# We use the model to create predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# Using the metrics module we see the results of the model
metrics.accuracy_score(expected, predicted, normalize=True, sample_weight=None)
"""
Explanation: Here 0 equates to setosa the first entry in the 'target_names' array
End of explanation
"""
y_true = ["cat", "ant", "cat", "cat", "ant", "bird", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat", "bird"]
metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
"""
Explanation: Digging deeper using metrics
Accuracy score, Classification report & Confusion matix
Here we will use a simple example to show metrics you can use: accuracy, classification reports and confusion matrices.
y_true is the test data
y_pred is the prediction
End of explanation
"""
print(metrics.classification_report(y_true, y_pred,
target_names=["ant", "bird", "cat"]))
"""
Explanation: 5 correct predictions out of 7 values. 71% accuracy
End of explanation
"""
metrics.confusion_matrix(y_true, y_pred)
"""
Explanation: Here we can see that the predictions:
- precision = $2/3 = 0.67$ (2 ants in test data and matched but found an extra 1 in prediction).
- recall = $2/2 = 1$ (2 ants in test data and these matched in prediction).
- f1-score = $(0.67 + 1) / 2 = 0.8$ mean of precision and recall.
- support shows that there are 2 ants, 2 birds and 3 cats in the test data.
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
End of explanation
"""
print(metrics.classification_report(expected, predicted,target_names=dataset['target_names']))
print (metrics.confusion_matrix(expected, predicted))
"""
Explanation: In the confusion_matrix the labels give the order of the rows.
ant was correctly categorised twice and was never miss categorised
bird was correctly categorised once and was categorised as cat once
cat was correctly categorised twice and was categorised as an ant once
Back to Iris predictions
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/tensorboard/tensorboard_in_notebooks.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
"""
# Load the TensorBoard notebook extension
%load_ext tensorboard
"""
Explanation: Using TensorBoard in Notebooks
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/tensorboard_in_notebooks"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/tensorboard_in_notebooks.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
TensorBoard can be used directly within notebook experiences such as Colab and Jupyter. This can be helpful for sharing results, integrating TensorBoard into existing workflows, and using TensorBoard without installing anything locally.
Setup
Start by installing TF 2.0 and loading the TensorBoard notebook extension:
For Jupyter users: If you’ve installed Jupyter and TensorBoard into
the same virtualenv, then you should be good to go. If you’re using a
more complicated setup, like a global Jupyter installation and kernels
for different Conda/virtualenv environments, then you must ensure that
the tensorboard binary is on your PATH inside the Jupyter notebook
context. One way to do this is to modify the kernel_spec to prepend
the environment’s bin directory to PATH, as described here.
For Docker users: In case you are running a Docker image of Jupyter Notebook server using TensorFlow's nightly, it is necessary to expose not only the notebook's port, but the TensorBoard's port. Thus, run the container with the following command:
docker run -it -p 8888:8888 -p 6006:6006 \
tensorflow/tensorflow:nightly-py3-jupyter
where 6006 is the default port of TensorBoard. This command allocates a port for one TensorBoard instance; to run concurrent instances, allocate additional ports. Also, pass --bind_all to %tensorboard to expose the port outside the container.
End of explanation
"""
import tensorflow as tf
import datetime, os
"""
Explanation: Import TensorFlow, datetime, and os:
End of explanation
"""
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
"""
Explanation: TensorBoard in notebooks
Download the FashionMNIST dataset and scale it:
End of explanation
"""
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
"""
Explanation: Create a very simple model:
End of explanation
"""
def train_model():
model = create_model()
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=5,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
train_model()
"""
Explanation: Train the model using Keras and the TensorBoard callback:
End of explanation
"""
%tensorboard --logdir logs
"""
Explanation: Start TensorBoard within the notebook using magics:
End of explanation
"""
%tensorboard --logdir logs
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard.png?raw=1"/> -->
You can now view dashboards such as scalars, graphs, histograms, and others. Some dashboards are not available yet in Colab (such as the profile plugin).
The %tensorboard magic has exactly the same format as the TensorBoard command line invocation, but with a %-sign in front of it.
You can also start TensorBoard before training to monitor it in progress:
End of explanation
"""
train_model()
"""
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/notebook_tensorboard_two_runs.png?raw=1"/> -->
The same TensorBoard backend is reused by issuing the same command. If a different logs directory was chosen, a new instance of TensorBoard would be opened. Ports are managed automatically.
Start training a new model and watch TensorBoard update automatically every 30 seconds or refresh it with the button on the top right:
End of explanation
"""
from tensorboard import notebook
notebook.list() # View open TensorBoard instances
# Control TensorBoard display. If no port is provided,
# the most recently launched TensorBoard is used
notebook.display(port=6006, height=1000)
"""
Explanation: You can use the tensorboard.notebook APIs for a bit more control:
End of explanation
"""
kraemerd17/kraemerd17.github.io | courses/python/material/ipynbs/Acquiring and wrangling with data.ipynb | mit

from __future__ import division
from numpy.random import randn
import numpy as np
import os
import matplotlib.pyplot as plt
np.random.seed(12345)
plt.rc('figure', figsize=(10, 6))
from pandas import Series, DataFrame
import pandas as pd
np.set_printoptions(precision=4)
"""
Explanation: In previous sessions, we've talked about the underlying data structures in the Pandas library. We've seen how to manipulate DataFrame and Series objects in order to answer questions regarding the data. Last week, we also saw how to use matplotlib to visualize our analysis.
This week we will be diving into an important topic in Pandas: aggregating and performing operations on specified groups of data without modifying the underlying structure. According to Wes McKinney,
Categorizing a data set and applying a function to each group, whether an aggregation or transformation, is often a critical component of a data analysis workflow. After loading, merging, and preparing a data set, a familiar task is to compute group statistics or possibly pivot tables for reporting or visualization purposes. Pandas provides a flexible and high-performance groupby facility, enabling you to slice and dice, and summarize data sets in a natural way.
End of explanation
"""
pd.options.display.notebook_repr_html = False
%matplotlib inline
"""
Explanation: Data Aggregation and Group Operations
End of explanation
"""
df = DataFrame({'key1' : ['a', 'a', 'b', 'b', 'a'],
'key2' : ['one', 'two', 'one', 'two', 'one'],
'data1' : np.random.randn(5),
'data2' : np.random.randn(5)})
df
"""
Explanation: GroupBy mechanics
Pandas was designed with considerable deference to the progress made in data aggregation techniques by developers of the R programming language. The main mechanism is the split-apply-combine paradigm:
Data is split into groups based on one or more provided keys,
A function is applied to each group,
The results of all the function applications are combined into a result object.
As we will see, grouping keys are very flexible in nature. Some possible types of keys are
A list or array of values sharing the length of the grouped column
A value indicating a column name
A dict or Series giving a correspondence between the values on the axis being grouped and the group names
A function to be invoked on the axis index or the individual labels in the index
End of explanation
"""
grouped = df['data1'].groupby(df['key1'])
grouped
"""
Explanation: We can compute the mean of the column corresponding to "data1" by using the group labels from "key1". This can be done in a number of ways, but here is a straightforward example:
End of explanation
"""
grouped.mean()
"""
Explanation: Notice that grouped is its own Pandas object, just like a Series or DataFrame. Now we can compute simple statistics just like we would with other objects.
End of explanation
"""
means = df['data1'].groupby([df['key1'], df['key2']]).mean()
means
"""
Explanation: We can also pass an array of keys:
End of explanation
"""
means.unstack()
"""
Explanation: This forms a grouping using a hierarchical index. To flatten the hierarchical index, as we have seen earlier, we can call unstack().
End of explanation
"""
states = np.array(['Ohio', 'California', 'California', 'Ohio', 'Ohio'])
years = np.array([2005, 2005, 2006, 2005, 2006])
df['data1'].groupby([states, years]).mean()
"""
Explanation: In these examples, the keys refer to Series, though they could really be anything so long as the lengths match up. For example, consider the following key arrays.
End of explanation
"""
df.groupby('key1').mean()
df.groupby(['key1', 'key2']).mean()
df.groupby(['key1', 'key2']).size()
"""
Explanation: If you're just interested in the column names, you can simply pass the identifying string, or list of strings, as in the following examples:
End of explanation
"""
for name, group in df.groupby('key1'):
print(name)
print(group)
"""
Explanation: Iterating over groups
groupby() supports iteration. Specifically, groupby() produces tuples containing the group name along with the relevant data. For example:
End of explanation
"""
for (k1, k2), group in df.groupby(['key1', 'key2']):
print((k1, k2))
print(group)
"""
Explanation: If multiple keys are being passed, you can overload the name by making an $n$-tuple of keys. For example:
End of explanation
"""
pieces = dict(list(df.groupby('key1')))
pieces['b']
"""
Explanation: This process is in general quite flexible. For example, suppose you want to store the relevant DataFrame groups as a native-Python dict. In this case, we can just wrap the groupby() call in list and then dict, which stores the groups in the dict.
End of explanation
"""
df.dtypes
grouped = df.groupby(df.dtypes, axis=1)
dict(list(grouped))
"""
Explanation: By default, groupby groups along axis=0 (the rows), but you can group on any of the other axes. For example, here we group the columns by dtype using axis=1.
End of explanation
"""
df.groupby(['key1', 'key2'])[['data2']].mean()
"""
Explanation: Selecting a column or subset of columns
Indexing a GroupBy object created from a DataFrame with a column name or array of column names has the effect of selecting those columns for aggregation. This means that:
Python
df.groupby('key1')['data1']
df.groupby('key1')[['data2']]
is effectively identical to
Python
df['data1'].groupby(df['key1'])
df[['data2']].groupby(df['key1'])
Why is this useful? If you're working with a large dataset for which aggregating the entire DataFrame is out of the question, you can speed up the process by specifying the particular columns you are interested in.
Here's an example: we can group a DataFrame according to a set of keys, specify a particular column (in this case, data2), and take the resulting mean.
End of explanation
"""
s_grouped = df.groupby(['key1', 'key2'])['data2']
s_grouped
s_grouped.mean()
"""
Explanation: The object returned here is a grouped DataFrame if a list or array is passed and a grouped Series if just a column name is passed.
End of explanation
"""
people = DataFrame(np.random.randn(5, 5),
columns=['a', 'b', 'c', 'd', 'e'],
index=['Joe', 'Steve', 'Wes', 'Jim', 'Travis'])
people.ix[2:3, ['b', 'c']] = np.nan # Add a few NA values
people
"""
Explanation: Grouping with dicts and Series
Grouping information may exist in a form other than an array. Let's consider another example DataFrame.
End of explanation
"""
mapping = {'a': 'red', 'b': 'red', 'c': 'blue',
'd': 'blue', 'e': 'red', 'f' : 'orange'}
"""
Explanation: Suppose we have a group correspondence for the columns and want to sum together the columns by group.
End of explanation
"""
by_column = people.groupby(mapping, axis=1)
by_column.sum()
"""
Explanation: The groupby method can use the dict natively, allowing you to form GroupBy objects on the fly.
End of explanation
"""
map_series = Series(mapping)
map_series
people.groupby(map_series, axis=1).count()
"""
Explanation: The same holds for Series objects, which are structurally similar to dicts.
End of explanation
"""
people.groupby(len).sum()
"""
Explanation: Grouping with functions
In Python, functions are simply another data type. You can actually use groupby to isolate members of your data set according to a rule, defined by a function. For example:
End of explanation
"""
key_list = ['one', 'one', 'one', 'two', 'two']
people.groupby([len, key_list]).min()
"""
Explanation: You can mix and match functions with other data types that we have seen above.
End of explanation
"""
columns = pd.MultiIndex.from_arrays([['US', 'US', 'US', 'JP', 'JP'],
[1, 3, 5, 1, 3]],
names=['cty', 'tenor'])
hier_df = DataFrame(np.random.randn(4, 5), columns=columns)
hier_df
hier_df.groupby(level='cty', axis=1).count()
"""
Explanation: Grouping by index levels
Finally, you can use hierarchical indexing to perform groupby operations. To do this, pass the level number or name using the level keyword.
End of explanation
"""
df
"""
Explanation: Data aggregation
Wes McKinney defines data aggregation as any transformation that takes a dataset or other array and produces scalar values. For example, the simple statistical functions such as
mean
max
min
sum
are examples of operations taking arrays to numbers. Many of the aggregations that we have seen so far have been optimized for performance, but Pandas gives you the functionality to implement customized aggregators.
End of explanation
"""
grouped = df.groupby('key1')
grouped['data1'].quantile(0.9)
"""
Explanation: For example, we can use quantile(x) (not explicitly implemented for GroupBy), which determines the value at the $x$ quantile of a Series of data.
End of explanation
"""
def peak_to_peak(arr):
return arr.max() - arr.min()
grouped.agg(peak_to_peak)
"""
Explanation: More to the point, you can define your own functions and do groupby operations with them. For example, if you are interested in the range of your data sets, you can use:
End of explanation
"""
grouped.describe()
"""
Explanation: Even methods that aren't really aggregations, such as describe, still perform useful operations on GroupBy objects.
End of explanation
"""
tips = pd.read_csv('tips.csv')
# Add tip percentage of total bill
tips['tip_pct'] = tips['tip'] / tips['total_bill']
tips[:6]
"""
Explanation: We will continue with the tips dataset from previous weeks to show off some of the more advanced features of aggregation. The data set can be found on the course webpage, or here, if you're lazy (like us).
End of explanation
"""
grouped = tips.groupby(['sex', 'smoker'])
"""
Explanation: Column-wise and multiple function application
As we've seen, aggregating a Series or all of the columns of a DataFrame is a matter of using aggregate with the desired function or calling a method like mean or std. However, you may want to aggregate using a different function depending on the column or multiple functions at once. Fortunately, this is straightforward to do, which we will illustrate through a number of examples. First, let's group the tips by sex and smoker.
End of explanation
"""
grouped_pct = grouped['tip_pct']
grouped_pct.agg('mean')
"""
Explanation: Descriptive statistics, such as mean, can be passed to the aggregator as a string.
End of explanation
"""
grouped_pct.agg(['mean', 'std', peak_to_peak])
"""
Explanation: You can also pass a list of functions to do aggregation. If the function is built-in, it passes as a string. Otherwise, one can simply pass the function on its own.
End of explanation
"""
grouped_pct.agg([('foo', 'mean'), ('bar', np.std)])
"""
Explanation: To label the columns assigned by agg, pass a tuple in for each function, with the first element corresponding to the label of the column.
End of explanation
"""
functions = ['count', 'mean', 'max']
result = grouped['tip_pct', 'total_bill'].agg(functions)
result
"""
Explanation: With a DataFrame you have more options, as you can specify a list of functions to apply to all of the columns, or different functions per column.
End of explanation
"""
result['tip_pct']
"""
Explanation: Here we are using what effectively amounts to a hierarchical index, which we can then slice by choosing columns and subcolumns.
End of explanation
"""
ftuples = [('Durchschnitt', 'mean'), ('Abweichung', np.var)]
grouped['tip_pct', 'total_bill'].agg(ftuples)
"""
Explanation: As above, a list of tuples with custom names can be passed:
End of explanation
"""
grouped.agg({'tip' : np.max, 'size' : 'sum'})
"""
Explanation: You can also specify which aggregation to apply to each column of a DataFrame by passing a dict mapping column names to functions. For example,
End of explanation
"""
grouped.agg({'tip_pct' : ['min', 'max', 'mean', 'std'],
'size' : 'sum'})
"""
Explanation: You can apply multiple aggregators to a particular column by mapping that column's dict key to a list of functions rather than a single one.
End of explanation
"""
tips.groupby(['sex', 'smoker'], as_index=False).mean()
"""
Explanation: Returning aggregated data in "unindexed" form
Of course, you can unindex a hierarchically indexed GroupBy result by specifying the optional as_index parameter.
End of explanation
"""
df
"""
Explanation: Group-wise operations and transformations
Aggregation is but one kind of group operation. It is a special case of more general data transformations, taking one-dimensional arrays and reducing them to scalars. Here we will generalize this notion by showing you how to use apply and transform methods on DataFrame objects. Let's revisit our old DataFrame of random data.
End of explanation
"""
k1_means = df.groupby('key1').mean().add_prefix('mean_')
k1_means
pd.merge(df, k1_means, left_on='key1', right_index=True)
"""
Explanation: Suppose we want to add a column containing group means for each index. One way to do this is to aggregate, then merge:
End of explanation
"""
key = ['one', 'two', 'one', 'two', 'one']
people.groupby(key).mean()
people.groupby(key).transform(np.mean)
"""
Explanation: This works but is somewhat inflexible. You can think of the operation as transforming the two data columns using the np.mean function. Returning to the people DataFrame from before, we can use the transform method on GroupBy.
End of explanation
"""
def demean(arr):
return arr - arr.mean()
demeaned = people.groupby(key).transform(demean)
demeaned
"""
Explanation: What is going on here is that transform applies a function to each group, then places the results in the appropriate locations. If the function produces a scalar value, that value is simply broadcast across all the relevant locations.
Suppose instead you wanted to subtract the mean value from each group. To do so, let's define a function demean, and proceed by
End of explanation
"""
demeaned.groupby(key).mean()
"""
Explanation: You can check that demeaned now has zero group means:
End of explanation
"""
def top(df, n=5, column='tip_pct'):
return df.sort_index(by=column)[-n:]
top(tips, n=6)
"""
Explanation: We will soon see that demeaning can be achieved using apply as well.
Apply: General split-apply-combine
There are three data transformation tools that we can use to build analyses on our data. The first two, aggregate and transform, are somewhat rigid in their capabilities. On the flip side, this makes it easier for you, the data analyst, to perform data transformations. The third tool is apply, which gives you immense flexibility at the expense of some intuitiveness.
Returning to the tips.csv data set, suppose we want to select the top five tip_pct values by group. We can write a function to identify the top values of a DataFrame very easily:
End of explanation
"""
tips.groupby('smoker').apply(top)
"""
Explanation: Now if we group by smoker, say, and call apply with this function, we get
End of explanation
"""
tips.groupby(['smoker', 'day']).apply(top, n=1, column='total_bill')
"""
Explanation: top is called on each piece of the DataFrame, then the results are glued together using pandas.concat, labeling the pieces with the group names.
If you pass a function to apply that takes other arguments or keywords, you can pass these after the function:
End of explanation
"""
result = tips.groupby('smoker')['tip_pct'].describe()
result
result.unstack('smoker')
"""
Explanation: Recall that describe seems to work okay on a GroupBy object.
End of explanation
"""
tips.groupby('smoker', group_keys=False).apply(top)
"""
Explanation: What's really happening (for all you 151ers) is that when you invoke a method like describe, it is actually just a shortcut for:
Python
f = lambda x: x.describe()
grouped.apply(f)
Suppressing the group keys
One point of style: if you prefer not working with hierarchical indices, you can specify in the groupby call to treat the underlying DataFrame as flat by choosing group_keys=False.
End of explanation
"""
s = Series(np.random.randn(6))
s[::2] = np.nan
s
s.fillna(s.mean())
"""
Explanation: Example: Filling missing values with group-specific values
When cleaning up missing data, in some cases you will filter out data observations using dropna, but in others you may want to impute (fill in) the NA values using a fixed value or some value derived from the data. fillna is the right tool to use; for example, we can fill in NA values with the mean:
End of explanation
"""
states = ['Ohio', 'New York', 'Vermont', 'Florida',
'Oregon', 'Nevada', 'California', 'Idaho']
group_key = ['East'] * 4 + ['West'] * 4
data = Series(np.random.randn(8), index=states)
data[['Vermont', 'Nevada', 'Idaho']] = np.nan
data
data.groupby(group_key).mean()
"""
Explanation: Suppose you need the fill value to vary by group. As you may guess, you need only group the data and use apply with a function that calls fillna on each data chunk. Here is some sample data on some US states divided into eastern and western states.
End of explanation
"""
fill_mean = lambda g: g.fillna(g.mean())
data.groupby(group_key).apply(fill_mean)
"""
Explanation: We can fill the NA values using the group means:
End of explanation
"""
fill_values = {'East': 0.5, 'West': -1}
fill_func = lambda g: g.fillna(fill_values[g.name])
data.groupby(group_key).apply(fill_func)
"""
Explanation: In another case, you might have pre-defined fill values in your code that vary by group. Since the groups have a name attribute set internally, we can use that:
End of explanation
"""
# Hearts, Spades, Clubs, Diamonds
suits = ['H', 'S', 'C', 'D']
card_val = (list(range(1, 11)) + [10] * 3) * 4
base_names = ['A'] + list(range(2, 11)) + ['J', 'K', 'Q']
cards = []
for suit in ['H', 'S', 'C', 'D']:
cards.extend(str(num) + suit for num in base_names)
deck = Series(card_val, index=cards)
"""
Explanation: Example: Random sampling and permutation
Suppose you wanted to draw a random sample from a large dataset for Monte Carlo simulation purposes or some other application. There are a number of ways to perform the "draws"; some are much more efficient than others. One way is to select the first K elements of np.random.permutation(N), where N is the size of your complete dataset and K the desired sample size. As a more fun example, here's a way to construct a deck of English-style playing cards:
End of explanation
"""
deck[:13]
"""
Explanation: So now we have a Series of length 52 whose index contains card names and whose values are the ones used in blackjack and other games (to keep things simple, I just let the ace be 1).
End of explanation
"""
def draw(deck, n=5):
return deck.take(np.random.permutation(len(deck))[:n])
draw(deck)
"""
Explanation: Now, based on what we've just discussed, drawing a hand of five cards from the deck could be written as:
End of explanation
"""
get_suit = lambda card: card[-1] # last letter is suit
deck.groupby(get_suit).apply(draw, n=2)
# alternatively
deck.groupby(get_suit, group_keys=False).apply(draw, n=2)
"""
Explanation: Suppose you wanted two random cards from each suit. Because the suit is the last character of each card name, we can group based on this and use apply:
End of explanation
"""
df = DataFrame({'category': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
'data': np.random.randn(8),
'weights': np.random.rand(8)})
df
"""
Explanation: Example: Group weighted average and correlation
Under the split-apply-combine paradigm of groupby, operations between columns in a DataFrame or between two Series, such as a group weighted average, become a routine affair. As an example, take this dataset containing group keys, values, and some weights:
End of explanation
"""
grouped = df.groupby('category')
get_wavg = lambda g: np.average(g['data'], weights=g['weights'])
grouped.apply(get_wavg)
"""
Explanation: The group weighted average by category would then be:
End of explanation
"""
close_px = pd.read_csv('stock_px.csv', parse_dates=True, index_col=0)
close_px.info()
"""
Explanation: As a less trivial example, consider a data set from Yahoo! Finance containing end of day prices for a few stocks and the S&P 500 index (the SPX ticker):
End of explanation
"""
close_px[-4:]
rets = close_px.pct_change().dropna()
spx_corr = lambda x: x.corrwith(x['SPX'])
by_year = rets.groupby(lambda x: x.year)
by_year.apply(spx_corr)
"""
Explanation: One task of interest might be to compute a DataFrame consisting of the yearly correlations of daily returns (computed from percent changes) with SPX. Here is one way to do it:
End of explanation
"""
# Annual correlation of Apple with Microsoft
by_year.apply(lambda g: g['AAPL'].corr(g['MSFT']))
"""
Explanation: There is of course nothing to stop you from computing inter-column correlation:
End of explanation
"""
tips=pd.read_csv('tips.csv')
tips['tip_pct'] = tips['tip']/tips['total_bill']
tips.pivot_table(index=['sex', 'smoker'])
"""
Explanation: Pivot tables and Cross-tabulation
A pivot table is a data summarization tool. Pivot tables work by aggregating a table of data by keys, with the data organized rectangularly and the group keys along the rows and columns. In pandas, pivot tables are built on the groupby machinery we have been using: DataFrames have a pivot_table method, and there is also a top-level pandas.pivot_table function. Besides acting as a convenient interface to groupby, pivot_table can add partial totals, or margins.
We can use pivot_table to calculate the group means of the tips data by sex and smoker.
End of explanation
"""
tips.pivot_table(['tip_pct','size'], index=['sex', 'day'],
columns='smoker')
"""
Explanation: Suppose that now we only care about tip percentage, size of the group, and day of the week. We can put smoker in the columns and day in the rows, yielding a table showing the group averages of tip_pct and size based on smoker and day.
End of explanation
"""
tips.pivot_table(['tip_pct', 'size'], index=['sex', 'day'],
columns='smoker', margins=True)
"""
Explanation: Furthermore, if we also want data about general tipping percentages and party sizes without regard to smoking, we can use the margins argument to calculate the corresponding group statistics. Using margins=True computes partial totals for each column.
End of explanation
"""
tips.pivot_table('tip_pct', index=['sex', 'smoker'], columns='day',
aggfunc=len, margins=True)
"""
Explanation: The All columns show the average tipping percentage and size of parties without regard to smoking. To use a different aggregation function, pass it via the aggfunc argument. For example, we may use the len function to calculate the frequency of group sizes.
End of explanation
"""
tips.pivot_table('size', index=['time', 'sex', 'smoker'],
columns='day', aggfunc='sum',fill_value=0)
"""
Explanation: To replace empty values with zero, we can use the fill_value argument.
End of explanation
"""
from StringIO import StringIO
data = """\
Sample Gender Handedness
1 Female Right-handed
2 Male Left-handed
3 Female Right-handed
4 Male Right-handed
5 Male Left-handed
6 Male Right-handed
7 Female Right-handed
8 Female Left-handed
9 Male Right-handed
10 Female Right-handed"""
data = pd.read_table(StringIO(data), sep='\s+')
data
"""
Explanation: Cross-tabulations: crosstab
A cross-tabulation is a special case of a pivot table that computes group frequencies.
End of explanation
"""
pd.crosstab(data.Gender, data.Handedness, margins=True)
"""
Explanation: We could use pivot_table to do this calculation, but pandas.crosstab is more convenient.
End of explanation
"""
pd.crosstab([tips.time, tips.day], tips.smoker, margins=True)
"""
Explanation: When using crosstab, we may pass either an array, a Series, or a list of arrays.
End of explanation
"""
fec = pd.read_csv('P00000001-ALL.csv')
fec.info()
"""
Explanation: Example: 2012 Federal Election Commission Database
We will be working with data from the 2012 US Presidential Election. This dataset focuses on campaign contributions for presidential candidates. The data can be loaded from:
End of explanation
"""
fec.ix[123456]
"""
Explanation: A sample record from the DataFrame looks like this:
End of explanation
"""
unique_cands = fec.cand_nm.unique()
unique_cands
unique_cands[2]
"""
Explanation: One interesting aspect of this data set is the lack of party affiliation as a way to classify candidates. We can add this information to the dataset. The way we are going to solve this problem is to create a dictionary indicating the political party of each candidate. First, we need to find out who all of the candidates are.
End of explanation
"""
parties = {'Bachmann, Michelle': 'Republican',
'Cain, Herman': 'Republican',
'Gingrich, Newt': 'Republican',
'Huntsman, Jon': 'Republican',
'Johnson, Gary Earl': 'Republican',
'McCotter, Thaddeus G': 'Republican',
'Obama, Barack': 'Democrat',
'Paul, Ron': 'Republican',
'Pawlenty, Timothy': 'Republican',
'Perry, Rick': 'Republican',
"Roemer, Charles E. 'Buddy' III": 'Republican',
'Romney, Mitt': 'Republican',
'Santorum, Rick': 'Republican'}
"""
Explanation: We use parties to specify a dictionary over all of the candidates.
End of explanation
"""
fec.cand_nm[123456:123461]
fec.cand_nm[123456:123461].map(parties)
"""
Explanation: We can test our dictionary on a slice of the dataset: first view the candidate names, then map them to their party affiliations.
End of explanation
"""
# Add it as a column
fec['party'] = fec.cand_nm.map(parties)
fec['party'].value_counts()
"""
Explanation: To calculate the number of contributions to each party, we use the value_counts function to count the records per party.
End of explanation
"""
(fec.contb_receipt_amt > 0).value_counts()
"""
Explanation: Unfortunately, this counts both the positive and negative contributions to candidates' campaigns (negative values indicate refunds). Thus, to see the total number of donations to candidates in the 2012 US election we subset the receipt values.
End of explanation
"""
fec = fec[fec.contb_receipt_amt > 0]
fec_mrbo = fec[fec.cand_nm.isin(['Obama, Barack', 'Romney, Mitt'])]
"""
Explanation: To use just the positive contributions, we use the following code.
End of explanation
"""
fec.contbr_occupation.value_counts()[:10]
"""
Explanation: Donation statistics by occupation and employer
One interesting question is the occupation of donors for each party. For example, do lawyers donate more to Democrats or Republicans? To which party do business executives donate more money?
End of explanation
"""
occ_mapping = {
'INFORMATION REQUESTED PER BEST EFFORTS' : 'NOT PROVIDED',
'INFORMATION REQUESTED' : 'NOT PROVIDED',
'INFORMATION REQUESTED (BEST EFFORTS)' : 'NOT PROVIDED',
'C.E.O.': 'CEO'
}
# If no mapping provided, return x
f = lambda x: occ_mapping.get(x, x)
fec.contbr_occupation = fec.contbr_occupation.map(f)
emp_mapping = {
'INFORMATION REQUESTED PER BEST EFFORTS' : 'NOT PROVIDED',
'INFORMATION REQUESTED' : 'NOT PROVIDED',
'SELF' : 'SELF-EMPLOYED',
'SELF EMPLOYED' : 'SELF-EMPLOYED',
}
# If no mapping provided, return x
f = lambda x: emp_mapping.get(x, x)
fec.contbr_employer = fec.contbr_employer.map(f)
"""
Explanation: We can again use a dictionary, this time to clean up the occupation and employer fields by collapsing variant spellings into a single value.
End of explanation
"""
by_occupation = fec.pivot_table('contb_receipt_amt',
index='contbr_occupation',
columns='party', aggfunc='sum')
over_2mm = by_occupation[by_occupation.sum(1) > 2000000]
over_2mm
over_2mm.plot(kind='barh')
"""
Explanation: Using a pivot_table, we can view the occupations whose total donations exceeded \$2 million.
End of explanation
"""
def get_top_amounts(group, key, n=5):
    totals = group.groupby(key)['contb_receipt_amt'].sum()
    # Order totals in descending order and keep the top n
    return totals.order(ascending=False)[:n]
grouped = fec_mrbo.groupby('cand_nm')
grouped.apply(get_top_amounts, 'contbr_occupation', n=7)
grouped.apply(get_top_amounts, 'contbr_employer', n=10)
"""
Explanation: Alternatively, we can look at the top donor occupations and employers for the campaigns of Barack Obama and Mitt Romney. We do this by grouping by candidate name and applying the get_top_amounts function defined above.
End of explanation
"""
bins = np.array([0, 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000])
labels = pd.cut(fec_mrbo.contb_receipt_amt, bins)
labels
"""
Explanation: Bucketing donation amounts
A useful way to analyze data is to use the cut function to partition the data into comparable buckets.
End of explanation
"""
grouped = fec_mrbo.groupby(['cand_nm', labels])
grouped.size().unstack(0)
"""
Explanation: Grouping the data by name and bin, we get a histogram by donation size.
End of explanation
"""
bucket_sums = grouped.contb_receipt_amt.sum().unstack(0)
bucket_sums
normed_sums = bucket_sums.div(bucket_sums.sum(axis=1), axis=0)
normed_sums
normed_sums[:-2].plot(kind='barh', stacked=True)
"""
Explanation: The data shows that Barack Obama received significantly more contributions of smaller donation sizes. We can also sum the contribution amounts and normalize the data to view a percentage of total donations of each size by candidate:
End of explanation
"""
grouped = fec_mrbo.groupby(['cand_nm', 'contbr_st'])
totals = grouped.contb_receipt_amt.sum().unstack(0).fillna(0)
totals = totals[totals.sum(1) > 100000]
totals[:10]
"""
Explanation: Donation statistics by state
We can also aggregate donations by candidate and state:
End of explanation
"""
percent = totals.div(totals.sum(1), axis=0)
percent[:10]
"""
Explanation: Additionally, we may obtain the relative percentage of total donations by state for each candidate.
End of explanation
"""
|
danielhanchen/sciblox | sciblox (v1)/sciblox v0.01.ipynb | mit | from sciblox import *
%matplotlib inline
maxrows(5)
from jupyterthemes import jtplot
jtplot.style()
x = read("train.csv")
read("train.csv")
"""
Explanation: SciBlox v0.01 Example Code - Titanic Dataset
1. Data Analysis
Opening files - currently only CSV is supported
Use the import * method for easier calling. (Sorry classes not done yet)
MAXROWS(x) - how many rows do you want to show (default = 15)
End of explanation
"""
analyse(x)
"""
Explanation: Describing and analysing your data:
End of explanation
"""
describe(x, axis = 1)
"""
Explanation: You can also change axis to 1 (works for both ANALYSE and DESCRIBE)
End of explanation
"""
analyse(x, colour = False)
"""
Explanation: You can output the analysis to a dataframe
End of explanation
"""
varcheck(x)
"""
Explanation: You can also check the data's Frequency Ratio and Variance Thresholds.
It will try to highlight outliers.
End of explanation
"""
varcheck(x, freq = "mean", unique = 0.01)
"""
Explanation: You can specify thresholds:
End of explanation
"""
corr(x)
corr(x, table = True)
"""
Explanation: You can also output the correlation matrix:
End of explanation
"""
remcor(x, threshold = 0.5)
"""
Explanation: You can also remove correlated columns:
End of explanation
"""
plot(x = "Survived", y = "Fare", factor = "Embarked", data = x)
plot(x = "Fare", data = x)
plot(x = "Embarked", y = "Sex", data = x)
plot(x = "Age", y = "Parch", factor = "Fare", data = x)
plot(x = "Age", y = "Fare", factor = "Survived", data = x)
plot(x = "SibSp", y = "Embarked", factor = "Survived", data = x)
plot(x = "Fare", y = "Age", factor = "SibSp", data = x)
"""
Explanation: 2. Data Visualisations
Plotting is easy. (Currently X,Y,Factor supported)
End of explanation
"""
%%capture
knn = fillna(x)
knn
"""
Explanation: 3. Data Cleaning
Use the FILLNA function: (Fancy Impute package, sklearn and xgboost)
End of explanation
"""
%%capture
svd = fillna(x, method = "svd")
bpca = fillna(x, method = "bpca")
mice = fillna(x, method = "mice", mice = "boost")
fillna(x, method = "mice", mice = "tree")
fillna(x, method = "mice", mice = "linear")
mice
"""
Explanation: You can try MICE / BPCA / SVD methods
End of explanation
"""
to_cont(x)
to_cont(x, dummies = False)
codes, df = to_cont(x, dummies = False, class_max = "all", return_codes = True)
codes["Embarked"]
"""
Explanation: You can also get dummies
End of explanation
"""
maxrows(4)
get(x["Name"])
get(x["Name"], split = ", ")
"""
Explanation: 4. Data Mining
Getting strings is easy. Let's say we want to get the Mr/Mrs honorifics
Everything is sequential
End of explanation
"""
get(x["Name"], split = ", ", loc = 1, split1 = ". ", loc1 = 0, df = True)
"""
Explanation: PLEASE TYPE SPLIT1 or SPLIT2 etc when you have more than 1 SPLIT
End of explanation
"""
wordfreq(x)
wordfreq(x["Name"], first = 15)
wordfreq(x["Name"], first = 5, hist = False)
"""
Explanation: You can also get word frequencies
End of explanation
"""
getwords(x, first = 5)
"""
Explanation: You can also get new columns from wordfreq
End of explanation
"""
discretise(x["Fare"], n = 5)
discretise(x["Fare"], n = 10, codes = True, smooth = False)
"""
Explanation: You can also discretise columns:
End of explanation
"""
flatten(x["Name"], lower = False)[0:10]
"""
Explanation: You can also flatten columns:
End of explanation
"""
columns(x)
conts(x)
strs(x)
index(x)[0:5]
"""
Explanation: 5. Data Descriptions
Getting columns and indexes is easy:
End of explanation
"""
unique(x)["Embarked"]
cunique(x)["Embarked"]
punique(x)
nunique(x["Parch"])
"""
Explanation: Getting uniques is easy:
End of explanation
"""
sort(x, by = ["Name"])
sort([1,2,3,4,1,2])
"""
Explanation: You can sort a dataframe or any datatype:
End of explanation
"""
fsort(x, by = "Name")
"""
Explanation: You can also sort by frequency then length:
End of explanation
"""
tail(x)
head(x)
random(x)
shape(x)
"""
Explanation: Other methods:
End of explanation
"""
isnull(x)
notnull(x, subset = "Fare")
"""
Explanation: You can also subset NULL rows / not NULL:
End of explanation
"""
x["Pclass"] = float(x["Pclass"])
x["Pclass"]
clean(x["Pclass"])[0:10]
"""
Explanation: Cleaning columns is easy:
End of explanation
"""
inc(x, "Name")
exc(x, "Name")
"""
Explanation: 6. Data Wrangling
Excluding columns, including columns is easy:
End of explanation
"""
df = copy(x)
reverse(x["Name"])
phone = {"Daniel":1234,"Michael":32432}
reverse(phone)
(x["Survived"] == 0)
reverse(x["Survived"] == 0)
"""
Explanation: Reversing columns, reversing lists and reversing dictionaries + reversing booleans:
End of explanation
"""
df = x[conts(x)]
hcat(mean(df), median(df), iqr(df), var(df), std(df))
df = x[strs(x)]
vcat(nunique(x),freqratio(x),count(x))
"""
Explanation: Horizontal concat, Vertical concat:
End of explanation
"""
reset(x)
"""
Explanation: Resetting indexes:
End of explanation
"""
C = array([1,2,3],[1,2,3])
A = matrix([1,2,3], [1,2,4], [5,3,2])
B = matrix("1 2 3\
7 673 2\
21321 22 3")
B
T(B)
tile(C,1,2)
J(5)*Z(5)*I(5)
qnorm(95)
pnorm(1.65)
CI(q = 95, data = x["Fare"])
M(tr(A)*diag(A))
"""
Explanation: 7. Mathematics and Statistics
Easy linear algebra:
End of explanation
"""
|
AnimeshShaw/MyJupyterNotebooks | notebooks/natural-language-processing/A Gentle Introduction to TextBlob.ipynb | lgpl-3.0 | # We import the most important class TextBlob
from textblob import TextBlob
"""
Explanation: A Gentle Introduction to TextBlob
TextBlob is a Python (2 and 3) library for processing textual data. It provides a consistent API for diving into common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, and more. TextBlob objects can be treated as if they were Python strings that learned how to do Natural Language Processing. TextBlob heavily depends on Python NLTK and the pattern module by CLIPS. The corpora used by NLTK are the default corpora for TextBlob as well. For installation instructions Click Here
In this part we will have an overview of the library, focusing on the different properties and methods of the BaseBlob class.
Basics of TextBlob and Tokenization
First we import TextBlob, the most important class in the library.
End of explanation
"""
data = """
Hello, My name is Animesh Shaw and I am an undergraduate and studying Computer Science (upcoming Graduation in 2015). I Love programming and Computer science subjects of topics.
My field of interest include Computational Linguistics. I Love watching anime specially One Piece and Naruto Shippunden. Animes specially those two shows
an extravagant amount of dedication, passion love, and amibition towards achieving one's goal and aim's in life. These have always inpired me a lot.
Giving up on your own dreams to fulfill others and the same feeling that other carry along with friends or something which I call as an eternal bond.
I recommend everyone to watch Naruto and One Piece. I have learnt a lot from there. "People are not always born intelligent or powerfull but with hard work
great heights can be achieved in life." Yes its true, dedication and hard work are the key goals to success less than 1% people are born with the blessing
of being a prodigy. The world was shaken mostly by those non-prodigy people which have had a massive impact in every individuals lives.
With great goals and constant dedication and passion you can achieve
the unachievable.
"""
"""
Explanation: We will analyse the following paragraph with TextBlob's methods and functions, occasionally adding extra sentences along the way.
End of explanation
"""
tblob = TextBlob(data)
"""
Explanation: To use any functions or methods of TextBlob we first create a TextBlob object
End of explanation
"""
tags = tblob.tags #We have stored the words of the text with the respective parts of speech tags
tags[:6] # Now that the tags are stored we will display the first 6 tags.
"""
Explanation: We will store all the words of the paragraph along with their POS tags in a variable named tags. tags is a property of the TextBlob class that returns a list of tuples of the form (<word>, <POS>). All strings returned by TextBlob are Unicode.
End of explanation
"""
for tag in tags[:20]:
print(str(tag[1]) + " ")
"""
Explanation: NNP stands for proper noun, used for names, places, and so on. PRP stands for pronoun.
Now let's print the first 20 of the tags stored earlier in the tags variable.
End of explanation
"""
print("\n".join([tag[1] for tag in tags[:20]]))
"""
Explanation: We can do the above with a single line of code too.
End of explanation
"""
print(" ".join([tag[1] for tag in tags]))
# Now let us have a look at the total no of tags. We store all the tags in a variable named pos_tags
pos_tags = [tag[1] for tag in tags]
#Now we will print the length
print("No. of tags : " + str(len(pos_tags)))
#As you may have noticed above, many of the tags repeat. We would like to extract the unique tags.
#We can simply use the set() data structure, which removes the duplicates.
unique_poses = set(pos_tags)
print(" ".join([ i for i in unique_poses ]))
print("\nNo of unique POS's : " + str(len(unique_poses)))
"""
Explanation: Now let us have a look at all the tags in the data above
End of explanation
"""
# print all the noun phrases
tblob.noun_phrases
"""
Explanation: So now you can see that only 24 unique POS tags are used; the rest are repetition. Using TextBlob we can even print all the noun phrases in the text.
End of explanation
"""
tblob.words #returns the data as word tokenized form in a list.
"""
Explanation: We can also get all the words as a WordList by using the words property, as follows. WordList is a list-like collection of words; it's no different from a Python list but has additional methods.
End of explanation
"""
tblob.detect_language()
"""
Explanation: See, it's that easy. Noun phrases give us a lot of important, relevant information that can be used to further analyse meaning.
tblob.noun_phrases returns the noun phrases as a WordList, the class used to store and manipulate words.
Let's do something more.
Language Detection and Translation
Suppose you want to detect the language used in the text above. TextBlob provides a detect_language() method for this purpose.
The method uses the Google Translate API.
End of explanation
"""
#Okay lets try some more.
TextBlob("Bonjour").detect_language()
#Another one
TextBlob("Ciao").detect_language()
"""
Explanation: Let's try some more, in different ways.
End of explanation
"""
TextBlob("Thanks").translate(to="ja")
"""
Explanation: In the last two examples, "fr" stands for French and "it" stands for Italian. Now let's move on.
Now that you know TextBlob can detect languages, you might wonder whether it can also translate. As a matter of fact, it can.
Let's take a simple example: we will translate "Thanks" from English to Japanese.
End of explanation
"""
TextBlob("Hello, My name is Animesh P Shaw. I will become the Programming King").translate(to="fr")
"""
Explanation: You can see we got the translated text in Japanese.
Let's try another example with a longer sentence
End of explanation
"""
TextBlob("Bonjour , Mon nom est Animesh P. Shaw . Je vais devenir le roi de programmation").detect_language()
"""
Explanation: You might doubt whether these returned values are correct; you can always verify them with Google.
Let's see whether the French translation above is detected as French.
End of explanation
"""
tblob.raw
"""
Explanation: Ta da! The language of the sentence above has been detected as French, since fr is the French language code.
Raw Text Handling
Let's explore further. We will print the complete text as raw, which means escape characters like \r, \n, or \t are shown as well. There is a built-in property for that purpose.
End of explanation
"""
tblob.raw_sentences #
"""
Explanation: raw_sentences is another property; it returns a list of raw sentences, again with escape characters like \r, \n, or \t preserved
End of explanation
"""
tblob.sentences
"""
Explanation: Let us look at another property, sentences. This is different from raw_sentences: sentences returns a list of Sentence objects.
We will have a look at it.
End of explanation
"""
for sent in tblob.sentences:
print(sent.sentiment.polarity)
"""
Explanation: Sentiment Analysis with TextBlob
TextBlob is especially helpful for sentiment analysis, with built-in methods and properties that you can configure and extend with different taggers or analyzers.
What is Sentiment Analysis?
Sentiment analysis (also known as opinion mining) refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information in source materials. With TextBlob we can see both the polarity and subjectivity of the information in a sentence or larger body of text.
Now let's do something interesting and important; note that the following produces important results.
We will now see how to measure the polarity of a sentence. What is polarity?
Polarity is a numerical measure of whether a sentence is positive or negative: when someone says something bad about you, you feel sad (negative polarity); when someone praises you, you feel joy (positive polarity).
End of explanation
"""
for sent in tblob.sentences:
print(sent.sentiment)
"""
Explanation: A value of 0.0 indicates neutral, while 0.5 indicates positive. Note that the word "Love" signals positivity. Values between 0.4 and 0.5 are borderline, but lean positive. Consider the second-to-last value, 0.25: it is low because words like "shaken" and "massive impact" carry a negative sense.
Now let's display both the polarity and subjectivity. The sentiment property returns a namedtuple of the form Sentiment(polarity, subjectivity). The polarity score is a float within the range [-1.0, 1.0]. The subjectivity is a float within the range [0.0, 1.0], where 0.0 is very objective and 1.0 is very subjective.
End of explanation
"""
blob = TextBlob("Nico Robin is the most sexy anime character I have ever encountered.")
print(blob.json)
"""
Explanation: Dumping Data Properties as JSON
Suppose you want all the properties together in a format that is efficient and easy to parse. For such cases TextBlob provides a way to dump all the properties as JSON. For this example we will create a TextBlob instance with a shorter sentence: "Nico Robin is the most sexy anime character I have ever encountered."
End of explanation
"""
blob.serialized
"""
Explanation: If you want the JSON data in serialized form, you can do it as follows.
End of explanation
"""
blob = TextBlob("Nico Robin is the most sexy and beautiful lady anime character I have ever encountered.")
print(blob.json)
"""
Explanation: Now let's test something nice. Suppose we add " and beautiful lady " after "sexy" in the earlier sentence: what changes do you expect? Since "beautiful" is a positive word, it increases the polarity value. This technique can be used in different ways in research.
End of explanation
"""
|
NAU-CFL/Python_Learning_Source | reference_notebooks/Notes-01.ipynb | mit | type("Hello World!")
type(2)
"""
Explanation: Variables, Expressions and Statements
Values and Types
A value is one of the basic things a program works with, like a letter or a number.
"Hello World", and 2 are values with different types:
We can check their types using type() function in Python.
End of explanation
"""
type(3.2)
"""
Explanation: Another type of value is the floating-point number, such as 3.2; this type is named float.
End of explanation
"""
type("3.2")
type("13")
"""
Explanation: But if we put quotation marks around integer or float literals, they become strings:
End of explanation
"""
message = 'And now for something completely different' # variable named message stores string
n = 17 # variable named n stores value 17
pi = 3.1415926535897932 # variable named pi stores 3.14...
type(message)
type(n)
type(pi)
"""
Explanation: Variables
One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a name that refers to a value.
Assignment Statement (=) assigns values to the variables, (value holders), let's see some examples of variable creation:
End of explanation
"""
32+45 # The result is integer
32 + 45.0 # The result is float, because of the operands is float
24 - 56
5 ** 2
(5+9)*(15-7)
90 / 56 # In Python 3 The result is always float.
90 // 56 # Floor division, rounds the number down to make it an integer
"""
Explanation: Variable names and keywords
Programmers generally choose names for their variables that are meaningful—they document what the variable is used for.
Variable names can be arbitrarily long. They can contain both letters and numbers, but they have to begin with a letter. It is legal to use uppercase letters, but it is a good idea to begin variable names with a lowercase letter.
There are also rules we must follow when naming variables:
A variable name cannot start with a number.
A variable name cannot contain illegal characters.
A variable name cannot be one of the keywords of the Python language.
```Python
The illegal usages, which won't work
76trombones = 'big parade'
more@ = 1000000
class = 'Advanced Theoretical Zymurgy'
```
Keywords in Python 3 (don't use them as variable names; note that print is no longer a keyword in Python 3):
False      await      else       import     pass
None       break      except     in         raise
True       class      finally    is         return
and        continue   for        lambda     try
as         def        from       nonlocal   while
assert     del        global     not        with
async      elif       if         or         yield
Operators and Operands
Operators are special symbols that represent computations like addition and multiplication.
The values the operator is applied to are called operands.
The operators +, -, *, / and ** perform addition, subtraction, multiplication, division and exponentiation, as in the following examples:
End of explanation
"""
miles = 26.2
miles * 1.61
"""
Explanation: Expressions and Statements
An expression is a combination of values, variables, and operators. A value all by itself is considered an expression, and so is a variable, so the following are all legal expressions (assuming that the variable x has been assigned a value):
17
x
17 + x
A statement is a unit of code that the Python interpreter can execute. We have seen two kinds of statement: print and assignment.
Technically an expression is also a statement, but it is probably simpler to think of them as different things. The important difference is that an expression has a value; a statement does not.
Interactive mode and Scripting mode
Python is an interpreted language, meaning you don't have to compile your code each time you run it. This also makes interactive development environments possible; the Jupyter notebook and IPython are excellent environments for interactive development.
If you are using Python as a calculator, you might type:
End of explanation
"""
first = 'throat'
second = 'warbler'
print(first + " " + second)
"""
Explanation: But if you type the same code into a script and run it, you get no output at all. In script mode an expression, all by itself, has no visible effect. Python actually evaluates the expression, but it doesn’t display the value unless you tell it to:
Python
miles = 26.2
print(miles * 1.61)
This behavior can be confusing at first.
A script usually contains a sequence of statements. If there is more than one statement, the results appear one at a time as the statements execute.
Python
print 1
x = 2
print x
produces the output:
1
2
The assignment statement produces no output.
Try yourself!
Type the following statements in the Python interpreter to see what they do:
5
x = 5
x + 1
Now put the same statements into a script and run it. What is the output? Modify the script by transforming each expression into a print statement and then run it again.
String Operations
In general, you can’t perform mathematical operations on strings, even if the strings look like numbers, so the following are illegal:
Python
'2'-'1'
'eggs'/'easy'
'third'*'a charm'
The + operator works with strings, but it might not do what you expect: it performs concatenation, which means joining the strings by linking them end-to-end. For example:
End of explanation
"""
"spam" * 3
"""
Explanation: The * operator also works on strings; it performs repetition. For example,
End of explanation
"""
# compute the percentage of the hour that has elapsed
percentage = (24 * 100) / 60
percentage = (24 * 100) / 60 # percentage of an hour
"""
Explanation: Comments
As programs get bigger and more complicated, they get more difficult to read. Formal languages are dense, and it is often difficult to look at a piece of code and figure out what it is doing, or why.
For this reason, it is a good idea to add notes to your programs to explain in natural language what the program is doing. These notes are called comments, and they start with the # symbol:
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive/06_structured/3_keras_dnn.ipynb | apache-2.0 | # Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-east1' #'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
"""
Explanation: <h1> Create Keras DNN model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using Keras. This requires TensorFlow 2.1
</ol>
End of explanation
"""
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
"""
Explanation: Create Keras model
<p>
First, write an input_fn to read the data.
End of explanation
"""
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
return tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(name, values))
def build_dnn_model():
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
})
# feature columns from inputs
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
if False:
        # Disabled until TF Serving supports TF 2.0, so the exported model stays servable
feature_columns['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
feature_columns['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
    # two hidden layers of [64, 32], just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='babyweight')(h2)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
# note how to use strategy to do distributed training
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = build_dnn_model()
print(model.summary())
"""
Explanation: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
The others (is_male, plurality) should be categorical. (Note that in the code above the categorical columns are guarded by an if False block until TF Serving supports TF 2.0.)
End of explanation
"""
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
"""
Explanation: We can visualize the DNN using the Keras plot_model utility.
End of explanation
"""
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
"""
Explanation: Train and evaluate
End of explanation
"""
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
"""
Explanation: Visualize loss curve
End of explanation
"""
# Serving function that passes through keys
@tf.function(input_signature=[{
'is_male': tf.TensorSpec([None,], dtype=tf.string, name='is_male'),
'mother_age': tf.TensorSpec([None,], dtype=tf.float32, name='mother_age'),
'plurality': tf.TensorSpec([None,], dtype=tf.string, name='plurality'),
'gestation_weeks': tf.TensorSpec([None,], dtype=tf.float32, name='gestation_weeks'),
'key': tf.TensorSpec([None,], dtype=tf.string, name='key')
}])
def my_serve(inputs):
feats = inputs.copy()
key = feats.pop('key')
output = model(feats)
return {'key': key, 'babyweight': output}
import shutil, os, datetime
OUTPUT_DIR = './export/babyweight'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH, signatures={'serving_default': my_serve})
print("Exported trained model to {}".format(EXPORT_PATH))
os.environ['EXPORT_PATH'] = EXPORT_PATH
!find $EXPORT_PATH
"""
Explanation: Save the model
Let's wrap the model so that we can supply keyed predictions, and get the key back in our output
End of explanation
"""
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
%%bash
MODEL_NAME="babyweight"
VERSION_NAME="dnn"
MODEL_LOCATION=$EXPORT_PATH
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already the existing model $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model version
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.7 --runtime-version=2.1 \
--origin=$MODEL_LOCATION --staging-bucket=gs://$BUCKET
"""
Explanation: Deploy trained model to Cloud AI Platform
End of explanation
"""
%%writefile input.json
{"key": "b1", "is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "b2", "is_male": "True", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
{"key": "g1", "is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"key": "g2", "is_male": "False", "mother_age": 33.0, "plurality": "Single(1)", "gestation_weeks": 41}
!gcloud ai-platform predict --model babyweight --json-instances input.json --version dnn
"""
Explanation: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
End of explanation
"""
from oauth2client.client import GoogleCredentials
from googleapiclient import discovery
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)
project = PROJECT
model_name = 'babyweight'
version_name = 'dnn'
input_data = {
'instances': [
{
'key': 'b1',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Single(1)',
'gestation_weeks': 39
},
{
'key': 'g1',
'is_male': 'False',
'mother_age': 29.0,
'plurality': 'Single(1)',
'gestation_weeks': 38
},
{
'key': 'b2',
'is_male': 'True',
'mother_age': 26.0,
'plurality': 'Triplets(3)',
'gestation_weeks': 39
},
{
'key': 'u1',
'is_male': 'Unknown',
'mother_age': 29.0,
'plurality': 'Multiple(2+)',
'gestation_weeks': 38
},
]
}
parent = 'projects/%s/models/%s/versions/%s' % (project, model_name, version_name)
prediction = api.projects().predict(body=input_data, name=parent).execute()
print(prediction)
print(prediction['predictions'][0]['babyweight'][0])
"""
Explanation: main.py
This is the code that exists in serving/application/main.py, i.e. the code in the web application that accesses the ML API.
End of explanation
"""
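The actual contents of serving/application/main.py are not reproduced here, but the request-assembly step it shares with the cell above can be sketched as a small helper (the function name is illustrative, not the real main.py API):

```python
def build_predict_request(project, model_name, version_name, instances):
    """Assemble the resource name and request body for an
    AI Platform online-prediction call (illustrative helper)."""
    name = "projects/{}/models/{}/versions/{}".format(
        project, model_name, version_name)
    return name, {"instances": instances}
```

A web handler would pass the returned pair to `api.projects().predict(name=name, body=body).execute()`, exactly as the cell above does.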
|
rvperry/phys202-2015-work | assignments/assignment12/FittingModelsEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
"""
Explanation: Fitting Models Exercise 1
Imports
End of explanation
"""
a_true = 0.5
b_true = 2.0
c_true = -4.0
"""
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
"""
np.random.normal?
xdata = np.linspace(-5, 5, 30)
dy = 2.0
sigma = np.random.normal(0, dy, 30)
ydata = a_true*xdata**2 + b_true*xdata + c_true + sigma
assert True # leave this cell for grading the raw data generation and plot
"""
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
"""
def model(x,a,b,c):
y=a*x**2+b*x+c
return y
def deviation(theta, x, y, dy):
    """Return the residuals of the quadratic model, normalized by dy"""
    a, b, c = theta
    return (y - a*x**2 - b*x - c)/dy
xdata,ydata,sigma
opt.leastsq?
model_best, error_best = opt.curve_fit(model, xdata, ydata, sigma=dy*np.ones(len(xdata)))
best_fit=opt.leastsq(deviation,np.array((1,2,-5)), args=(xdata, ydata, dy), full_output=True)
theta_best=best_fit[0]
theta_cov=best_fit[1]
print('a=',theta_best[0],'+/-',np.sqrt(theta_cov[0,0]))
print('b=',theta_best[1],'+/-',np.sqrt(theta_cov[1,1]))
print('c=',theta_best[2],'+/-',np.sqrt(theta_cov[2,2]))
plt.errorbar(xdata,ydata,dy,fmt='k.')
xfit=np.linspace(-5,5,100)
yfit=theta_best[0]*xfit**2+theta_best[1]*xfit+theta_best[2]
plt.plot(xfit,yfit)
plt.ylabel('y')
plt.xlabel('x')
plt.title('Quadratic Fit')
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
"""
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation
"""
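One caveat on the printed uncertainties: the `cov_x` returned by `opt.leastsq` is the raw $(J^T J)^{-1}$ matrix, which only equals the parameter covariance when the residuals are scaled to unit variance. Since `deviation` divides by the known `dy`, reading `cov_x` directly is defensible; when `dy` is only a relative weight, `cov_x` should instead be scaled by the reduced chi-squared (which is what `curve_fit` does by default). A self-contained sketch with a fixed seed, using synthetic data matching the exercise:

```python
import numpy as np
import scipy.optimize as opt

def residuals(theta, x, y, dy):
    a, b, c = theta
    return (y - (a*x**2 + b*x + c)) / dy

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 30)
y = 0.5*x**2 + 2.0*x - 4.0 + rng.normal(0, 2.0, x.size)

theta, cov_x, info, msg, ier = opt.leastsq(
    residuals, (1.0, 1.0, 1.0), args=(x, y, 2.0), full_output=True)

# Scale cov_x by the reduced chi-squared to get the covariance
# appropriate when dy is only a relative weight.
r = residuals(theta, x, y, 2.0)
s_sq = (r**2).sum() / (x.size - theta.size)
errors = np.sqrt(np.diag(cov_x * s_sq))
```

With the known `dy`, `s_sq` should come out near 1, so the two conventions give similar error bars here.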
|