mortada/notebooks | blog/traveling_tesla_salesman.ipynb | apache-2.0

import math
math.factorial(200)
"""
Explanation: Traveling Salesman Problem
The Traveling Salesman Problem (TSP) is quite an interesting math problem. It simply asks: Given a list of cities and the distances between them, what is the shortest possible path that visits each city exactly once and returns to the origin city?
It is a very simple problem to describe and yet very difficult to solve. TSP is known to be NP-hard and a brute-force solution can be incredibly expensive computationally. Even with just $200$ cities, with the brute-force method you have this many possible permutations to check:
End of explanation
"""
from IPython.display import Image
Image(url='http://imgs.xkcd.com/comics/travelling_salesman_problem.png')
"""
Explanation: <!-- PELICAN_END_SUMMARY -->
That's actually a lot more than the total number of atoms in the universe!
Here's an obligatory xkcd for this:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.core.pylabtools import figsize
figsize(15, 5)
from bs4 import BeautifulSoup
import re
import requests
import numpy as np
import pandas as pd
# get the list of superchargers in the US
url = 'http://www.teslamotors.com/findus/list/superchargers/United+States'
rv = requests.get(url)
content = rv.text
# get link to each supercharger, each page contains the supercharger's coordinates
sc_page_urls = re.findall('(/findus/location/supercharger/\w+)', content)
# get the coordinates (latitude, longitude) for each supercharger
sc_names = []
sc_coors = {}
for sc_page_url in sc_page_urls:
url = 'http://www.teslamotors.com' + sc_page_url
rv = requests.get(url)
soup = BeautifulSoup(rv.text, 'html.parser')
sc_name = soup.find('h1').text
sc_names.append(sc_name)
directions_link = soup.find('a', {'class': 'directions-link'})['href']
lat, lng = directions_link.split('=')[-1].split(',')
lat, lng = float(lat), float(lng)
sc_coors[sc_name] = {'lat': lat, 'lng': lng}
# sort the names
sc_names = sorted(sc_names)
coords = pd.DataFrame.from_dict(sc_coors).T.reindex(sc_names)
coords.head()
"""
Explanation: Tesla Superchargers
To make the TSP even more exciting, let's make the salesman visit the awesome Tesla Superchargers. As of this writing there are $196$ superchargers in the US, and that number is quickly growing. Let's look at what the optimal path looks like for going through these superchargers as a concrete TSP example.
Optimal Path for Supercharger Traveling
I'll go through how I obtained the results in the later sections, but first I'd like to present the optimal path that I found below. You can toggle the display for the superchargers and the optimal path by clicking on the checkboxes.
The optimal path looks pretty awesome, right?
Solving the TSP
There are numerous heuristics and approximate solutions for TSP and that is on its own a vast topic. An approximate solution called Christofides's algorithm is provably within $1.5$ times of the optimum. One can also use simulated annealing or genetic algorithms to find solutions that are very close to optimal in most cases.
But here I'm most interested in finding the exact optimum, since we don't have that many nodes, and the distance metric (symmetric geometric distance) is relatively simple. After surveying the literature and searching online, I found the Concorde TSP solver that can find the exact optimal path (instead of approximations) using branch-and-bound algorithms. The basic idea is that when the algorithm branches out to search for the optimum, many of the permutations can actually be safely cut short if it is impossible for a branch to result in a value better than a known better solution. This kind of method has been shown to be the most effective for finding the exact optimum for TSP.
Fetching Coordinates
So first we need to find all the supercharger locations. One possible way to do that is to get a list of addresses for them and then geocode the addresses into coordinates. However it turns out that some of the superchargers are in remote places that aren't easily specified by a street address. They are more conveniently specified by latitudes and longitudes.
Luckily the Tesla website contains references to coordinates of all the supercharger locations. We can use simple regular expressions and BeautifulSoup to parse the pages.
End of explanation
"""
def distance_on_earth(lat1, long1, lat2, long2, radius=6378.388):
"""
Compute distance between two points on earth specified by latitude/longitude.
The earth is assumed to be a perfect sphere of given radius. The radius defaults
to 6378.388 kilometers. To convert to miles, divide by 1.60934
Reference
---------
Adopted from John D. Cook's blog post:
http://www.johndcook.com/blog/python_longitude_latitude/
"""
# Convert latitude and longitude to spherical coordinates in radians.
degrees_to_radians = np.pi / 180.0
# phi = 90 - latitude
phi1 = (90.0 - lat1) * degrees_to_radians
phi2 = (90.0 - lat2) * degrees_to_radians
# theta = longitude
theta1 = long1 * degrees_to_radians
theta2 = long2 * degrees_to_radians
# Compute spherical distance from spherical coordinates.
cos = (np.sin(phi1) * np.sin(phi2)* np.cos(theta1 - theta2) +
np.cos(phi1) * np.cos(phi2))
arc = np.arccos(cos)
rv = arc * radius
return rv
"""
Explanation: Computing Geodesic Distances
Now that we've gathered all the coordinates, we can start to compute distances. Here is a function that computes the distance between two points on earth specified by latitude-longitude pairs. This function is based on the code from John D. Cook's excellent blog post on this topic.
End of explanation
"""
# get distances between all pairs
mile_in_km = 1.60934
distances = {}
for i in range(len(sc_names)):
a = sc_names[i]
distances[a] = {}
for j in range(len(sc_names)):
b = sc_names[j]
if j == i:
distances[a][b] = 0.
elif j > i:
distances[a][b] = distance_on_earth(coords.ix[a, 'lat'],
coords.ix[a, 'lng'],
coords.ix[b, 'lat'],
coords.ix[b, 'lng'])
else:
distances[a][b] = distances[b][a]
distances = pd.DataFrame(distances) / mile_in_km
"""
Explanation: Note that we are making the simplifying assumptions that the Earth is a perfect sphere and that we use the direct great-circle distance rather than a driving distance. One can certainly plug in a different distance metric and follow the same procedure outlined here. (Update: see part two for the results using driving distances.)
We can now compute the distances between all pairs of supercharger locations:
End of explanation
"""
closest_distances = distances[distances > 0].min()
ax = closest_distances.hist(bins=25)
ax.set_title('histogram of distances to closest superchargers')
ax.set_ylabel('number of superchargers')
ax.set_xlabel('miles')
closest_distances.describe()
"""
Explanation: One interesting thing to note is that, for each supercharger in the US, on average there's another one less than $60$ miles away. That's pretty nice.
End of explanation
"""
# create input file for Concorde TSP solver
sc_id = 0
output = ''
for sc_name in sc_names:
output += '%d %f %f\n' % (sc_id, sc_coors[sc_name]['lat'], sc_coors[sc_name]['lng'])
sc_id += 1
header = """NAME : TTS
COMMENT : Traveling Tesla Salesman
TYPE : TSP
DIMENSION : %d
EDGE_WEIGHT_TYPE : GEOM
NODE_COORD_SECTION
""" % sc_id
with open('sc.tsp', 'w') as output_file:
output_file.write(header)
output_file.write(output)
"""
Explanation: Using the Concorde TSP Solver
Now we are ready to use the Concorde TSP solver. To use Concorde, you'll need to download a few things and make sure you have a working C compiler. You can find the detailed steps here. I compiled it on OSX Yosemite without issues.
Information about the input/output files for Concorde can be found here. In our particular case, the input file to Concorde can be generated as follows:
End of explanation
"""
# after running the Concorde executable, parse the output file
solution = []
f = open('../../../TSP/concorde/TSP/sc.sol', 'r')
for line in f.readlines():
tokens = line.split()
solution += [int(c) for c in tokens]
f.close()
assert solution[0] == len(sc_names)
solution = solution[1:] # first number is just the dimension
assert len(solution) == len(sc_names)
"""
Explanation: This creates a .tsp file that the concorde executable can process directly, and it outputs the solution in a .sol file in the same directory where the executable is:
End of explanation
"""
optimal_path = []
for solution_id in solution:
optimal_path.append(sc_names[solution_id])
# connect back to the starting node
optimal_path.append(sc_names[solution[0]])
optimal_path = pd.Series(optimal_path)
optimal_path.head()
optimal_path.tail()
"""
Explanation: Now we have the optimal path!
End of explanation
"""
# compute total distance in optimal path
total = 0
for i in range(len(optimal_path) - 1):
total += distances.ix[optimal_path[i], optimal_path[i + 1]]
total
"""
Explanation: We can also easily find the total length of the path:
End of explanation
"""
jayoshih/ricecooker | docs/examples/languages.ipynb | mit

from le_utils.constants import languages
# can lookup language using language code
language_obj = languages.getlang('en')
language_obj
# can lookup language using language name (the new le_utils version has not shipped yet)
language_obj = languages.getlang_by_name('English')
language_obj
# all `language` attributes (channel, nodes, and files) need to use language codes
language_obj.code
from le_utils.constants.languages import getlang_by_native_name
lang_obj = getlang_by_native_name('français')
print(lang_obj)
print(lang_obj.code)
"""
Explanation: Languages
This tutorial will explain how to set the language property for various nodes and file objects when using the ricecooker framework.
Explore language objects and language codes
First we must import the le-utils package. The languages supported by Kolibri and the Content Curation Server are provided in le_utils.constants.languages.
End of explanation
"""
from ricecooker.chefs import SushiChef
from ricecooker.classes.nodes import ChannelNode, TopicNode, DocumentNode
from ricecooker.classes.files import DocumentFile
from le_utils.constants import licenses
from le_utils.constants.languages import getlang
class MultipleLanguagesChef(SushiChef):
"""
A sushi chef that creates a channel with content in EN, FR, and SP.
"""
channel_info = {
'CHANNEL_TITLE': 'Languages test channel',
'CHANNEL_SOURCE_DOMAIN': '<yourdomain.org>', # where you got the content
'CHANNEL_SOURCE_ID': '<unique id for channel>', # channel's unique id CHANGE ME!!
'CHANNEL_LANGUAGE': getlang('mul').code, # set global language for channel
'CHANNEL_DESCRIPTION': 'This channel contains nodes in multiple languages',
'CHANNEL_THUMBNAIL': None, # (optional)
}
def construct_channel(self, **kwargs):
# create channel
channel = self.get_channel(**kwargs)
# create the English topic, add a DocumentNode to it
topic = TopicNode(
source_id="<en_topic_id>",
title="New Topic in English",
language=getlang('en').code,
)
doc_node = DocumentNode(
source_id="<en_doc_id>",
title='Some doc in English',
description='This is a sample document node in English',
files=[DocumentFile(path='samplefiles/documents/doc_EN.pdf')],
license=licenses.PUBLIC_DOMAIN,
language=getlang('en').code,
)
topic.add_child(doc_node)
channel.add_child(topic)
# create the Spanish topic, add a DocumentNode to it
topic = TopicNode(
source_id="<es_topic_id>",
title="Topic in Spanish",
language=getlang('es-MX').code,
)
doc_node = DocumentNode(
source_id="<es_doc_id>",
title='Some doc in Spanish',
description='This is a sample document node in Spanish',
files=[DocumentFile(path='samplefiles/documents/doc_ES.pdf')],
license=licenses.PUBLIC_DOMAIN,
language=getlang('es-MX').code,
)
topic.add_child(doc_node)
channel.add_child(topic)
# create the French topic, add a DocumentNode to it
topic = TopicNode(
source_id="<fr_topic_id>",
title="Topic in French",
language=languages.getlang('fr').code,
)
doc_node = DocumentNode(
source_id="<fr_doc_id>",
title='Some doc in French',
description='This is a sample document node in French',
files=[DocumentFile(path='samplefiles/documents/doc_FR.pdf')],
license=licenses.PUBLIC_DOMAIN,
language=getlang('fr').code,
)
topic.add_child(doc_node)
channel.add_child(topic)
return channel
"""
Explanation: The above language code is an internal representation that uses two-letter codes, and sometimes includes locale information, e.g., pt-BR for Brazilian Portuguese. Sometimes the internal code representation for a language is the three-letter version, e.g., zul for Zulu.
Create chef class
We now create a subclass of ricecooker.chefs.SushiChef and define its get_channel and construct_channel methods.
For the purpose of this example, we'll create three topic nodes in different languages that contain one document in each.
End of explanation
"""
mychef = MultipleLanguagesChef()
args = {
'command': 'dryrun', # use 'uploadchannel' for real run
'verbose': True,
'token': 'YOURTOKENHERE9139139f3a23232'
}
options = {}
mychef.run(args, options)
"""
Explanation: Run your chef by creating an instance of the chef class and calling its run method:
End of explanation
"""
import youtube_dl
ydl = youtube_dl.YoutubeDL({
#'quiet': True,
'no_warnings': True,
'writesubtitles': True,
'allsubtitles': True,
})
youtube_id = 'FN12ty5ztAs'
info = ydl.extract_info(youtube_id, download=False)
subtitle_languages = info["subtitles"].keys()
print(subtitle_languages)
"""
Explanation: Congratulations, you put three languages on the internet!
Example 2: YouTube video with subtitles in multiple languages
You can use the youtube_dl library to get lots of useful metadata about videos and playlists, including which subtitle languages are available for a video.
End of explanation
"""
from ricecooker.chefs import SushiChef
from ricecooker.classes import licenses
from ricecooker.classes.nodes import ChannelNode, TopicNode, VideoNode
from ricecooker.classes.files import YouTubeVideoFile, YouTubeSubtitleFile
from ricecooker.classes.files import is_youtube_subtitle_file_supported_language
import youtube_dl
ydl = youtube_dl.YoutubeDL({
'quiet': True,
'no_warnings': True,
'writesubtitles': True,
'allsubtitles': True,
})
# Define the license object with necessary info
TE_LICENSE = licenses.SpecialPermissionsLicense(
description='Permission granted by Touchable Earth to distribute through Kolibri.',
copyright_holder='Touchable Earth Foundation (New Zealand)'
)
class YoutubeVideoWithSubtitlesSushiChef(SushiChef):
"""
A sushi chef that creates a channel with a YouTube video and its subtitle files.
"""
channel_info = {
'CHANNEL_SOURCE_DOMAIN': '<yourdomain.org>', # where you got the content
'CHANNEL_SOURCE_ID': '<unique id for channel>', # channel's unique id CHANGE ME!!
'CHANNEL_TITLE': 'Youtube subtitles downloading chef',
'CHANNEL_LANGUAGE': 'en',
'CHANNEL_THUMBNAIL': 'https://edoc.coe.int/4115/postcard-47-flags.jpg',
'CHANNEL_DESCRIPTION': 'This is a test channel to make sure youtube subtitle languages lookup works'
}
def construct_channel(self, **kwargs):
# create channel
channel = self.get_channel(**kwargs)
# get all subtitles available for a sample video
youtube_id ='FN12ty5ztAs'
info = ydl.extract_info(youtube_id, download=False)
subtitle_languages = info["subtitles"].keys()
print('Found subtitle_languages = ', subtitle_languages)
# create video node
video_node = VideoNode(
source_id=youtube_id,
title='Youtube video',
license=TE_LICENSE,
derive_thumbnail=True,
files=[YouTubeVideoFile(youtube_id=youtube_id)],
)
# add subtitles in whichever languages are available.
for lang_code in subtitle_languages:
if is_youtube_subtitle_file_supported_language(lang_code):
video_node.add_file(
YouTubeSubtitleFile(
youtube_id=youtube_id,
language=lang_code
)
)
else:
print('Unsupported subtitle language code:', lang_code)
channel.add_child(video_node)
return channel
chef = YoutubeVideoWithSubtitlesSushiChef()
args = {
'command': 'dryrun', # use 'uploadchannel' for real run
'verbose': True,
'token': 'YOURTOKENHERE9139139f3a23232'
}
options = {}
chef.run(args, options)
"""
Explanation: Full sushi chef example
The YoutubeVideoWithSubtitlesSushiChef class below shows how to create a channel with a YouTube video and upload subtitle files for all available languages.
End of explanation
"""
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-out-Dex-7d.ipynb | mit

ph_sel_name = "Dex"
data_id = "7d"
# ph_sel_name = "all-ph"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:35:39 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'),
'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]
data_id, ph_sel_name
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: the donor and acceptor channels, the alternation period, and the donor and acceptor excitation windows:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check that everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurements duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)
th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)
bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
.round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3, 'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()
burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
.format(sample=data_id, th=th1, **bs_kws))
burst_fname
bursts.to_csv(burst_fname)
assert d.dir_ex == 0
assert d.leakage == 0
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print ('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
def hsm_mode(s):
"""
Half-sample mode (HSM) estimator of `s`.
`s` is a sample from a continuous distribution with a single peak.
Reference:
Bickel, Fruehwirth (2005). arXiv:math/0505419
"""
s = memoryview(np.sort(s))
i1 = 0
i2 = len(s)
while i2 - i1 > 3:
n = (i2 - i1) // 2
w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
i1 = w.index(min(w)) + i1
i2 = i1 + n
if i2 - i1 == 3:
if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
i2 -= 1
elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
i1 += 1
else:
i1 = i2 = i1 + 1
return 0.5*(s[i1] + s[i2])
E_pr_do_hsm = hsm_mode(ds_do.E[0])
print ("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
"""
Explanation: Donor Leakage fit
Half-Sample Mode
Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
End of explanation
"""
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params
res = E_fitter.fit_res[0]
res.params.pretty_print()
E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
"""
Explanation: Gaussian Fit
Fit the histogram with a gaussian:
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
"""
Explanation: KDE maximum
End of explanation
"""
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()
print('Gauss: %.2f%%\n KDE: %.2f%%\n HSM: %.2f%%' %
(E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
"""
Explanation: Leakage summary
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst size distribution
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
spectralDNS/shenfun | docs/source/functions.ipynb | bsd-2-clause

from shenfun import *
N = 8
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
"""
Explanation: <!-- File automatically generated using DocOnce (https://github.com/doconce/doconce/):
doconce format ipynb functions.do.txt -->
Demo - Working with Functions
Mikael Mortensen (email: mikaem@math.uio.no), Department of Mathematics, University of Oslo.
Date: August 7, 2020
Summary. This is a demonstration of how the Python module shenfun can be used to work with
global spectral functions in one and several dimensions.
Construction
A global spectral function $u(x)$ is represented on a one-dimensional
domain (a line) as
$$
u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(x)
$$
where $\psi_k(x)$ is the $k$'th basis function and $x$ is a
position inside the domain. $\{\hat{u}_k\}_{k=0}^{N-1}$ are the
expansion coefficients for the series, often referred to as the
degrees of freedom. There is one degree of freedom per basis function.
We can use any number of basis functions,
and the span of the chosen basis is then a function space. Also part of the
function space is the domain, which is
specified when a function space is created. To create a function space
$T=\text{span}{T_k}_{k=0}^{N-1}$ for
the first N Chebyshev polynomials of the first kind on the default domain $[-1, 1]$,
do
End of explanation
"""
u = Function(T)
"""
Explanation: The function $u(x)$ can now be created with all N coefficients
equal to zero as
End of explanation
"""
T = FunctionSpace(N, 'Chebyshev', domain=(0, 1))
"""
Explanation: When using Chebyshev polynomials the computational domain is always
$[-1, 1]$. However, we can still use a different physical domain,
like
End of explanation
"""
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
u = Function(T, val=1)
"""
Explanation: and under the hood shenfun will then map this domain to the reference
domain through
$$
u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(2(x-0.5))
$$
Approximating analytical functions
The u function above was created with only zero
valued coefficients, which is the default. Alternatively,
a Function may be initialized using a constant
value
End of explanation
"""
import sympy as sp
x = sp.Symbol('x', real=True)
u = Function(T, buffer=4*x**3-3*x)
print(u)
"""
Explanation: but that is not very useful. A third method to initialize
a Function is to interpolate using an analytical
Sympy function.
End of explanation
"""
T = FunctionSpace(0, 'Chebyshev', domain=(-1, 1))
"""
Explanation: Here the analytical Sympy function will first be evaluated
on the entire quadrature mesh of the T function space,
and then forward transformed to get the coefficients. This
corresponds to a projection to T. The projection is
Find $u_h \in T$, such that
$$
(u_h - u, v)_w = 0 \quad \forall v \in T,
$$
where $v \in \{T_j\}_{j=0}^{N-1}$ is a test function,
$u_h=\sum_{k=0}^{N-1} \hat{u}_k T_k$ is a trial function and the
notation $(\cdot, \cdot)_w$ represents a weighted inner product.
In this projection $u_h$ is the Function, $u$ is the Sympy function and we use Sympy
to exactly evaluate $u$ on all quadrature points
$\{x_j\}_{j=0}^{N-1}$. With quadrature we then have
$$
(u, v)_w = \sum_{j\in\mathcal{I}^N} u(x_j) v(x_j) w_j \quad \forall v \in T,
$$
where $\mathcal{I}^N = (0, 1, \ldots, N-1)$ and $\{w_j\}_{j\in \mathcal{I}^N}$
are the quadrature weights. The left hand side of the projection is
$$
(u_h, v)_w = \sum_{j\in\mathcal{I}^N} u_h(x_j) v(x_j) w_j \quad \forall v \in T.
$$
A linear system of equations arise when inserting for the
basis functions
$$
\left(u, T_i\right)_w = \tilde{u}_i \quad \forall i \in \mathcal{I}^N,
$$
and
$$
\begin{align}
\left(u_h, T_i \right)_w &= \Big(\sum_{k\in \mathcal{I}^N} \hat{u}_k T_k , T_i\Big)_w \\
&= \sum_{k\in \mathcal{I}^N} \left( T_k, T_i\right)_w \hat{u}_k
\end{align}
$$
with the mass matrix
$$
a_{ik} = \left( T_k, T_i\right)_w \quad \forall (i, k) \in \mathcal{I}^N \times \mathcal{I}^N,
$$
we can now solve to get the unknown
expansion coefficients. In matrix notation
$$
\hat{u} = A^{-1} \tilde{u},
$$
where $\hat{u}=\{\hat{u}_i\}_{i\in \mathcal{I}^N}$,
$\tilde{u}=\{\tilde{u}_i\}_{i \in \mathcal{I}^N}$ and
$A=\{a_{ik}\}_{(i,k) \in \mathcal{I}^N \times \mathcal{I}^N}$.
Adaptive function size
The number of basis functions can also be left open during creation
of the function space, through
End of explanation
"""
u = Function(T, buffer=sp.cos(20*x))
print(len(u))
"""
Explanation: This is useful if you want to approximate a function and
are uncertain how many basis functions are required.
For example, you may want to approximate the function $\cos(20 x)$.
You can then find the required Function using
End of explanation
"""
Tu = u.function_space()
print(Tu.N)
"""
Explanation: We see that $N=45$ is required to resolve this function. This agrees
well with what is also reported by Chebfun.
Note that in this process a new FunctionSpace() has been
created under the hood. The function space of u can be
extracted using
End of explanation
"""
T1 = FunctionSpace(0, 'Chebyshev', domain=(0, 100))
u = Function(T1, buffer=sp.besselj(0, x))
print(len(u))
"""
Explanation: To further show that shenfun is compatible with Chebfun we can also
approximate the Bessel function
End of explanation
"""
u = Function(T1, buffer=sp.besselj(0, x), reltol=1e-14)
print(len(u))
"""
Explanation: which gives 83 basis functions, in close agreement with Chebfun (89).
The difference lies only in the cut-off criterion. We cut frequencies
with a relative tolerance of 1e-12 by default, but if we make this criterion
a little bit stricter, then we will also arrive at a slightly higher number:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
Tu = u.function_space()
plt.plot(Tu.mesh(), u.backward())
"""
Explanation: Plotting the function on its quadrature points looks
a bit ragged, though:
End of explanation
"""
xj = np.linspace(0, 100, 1000)
plt.plot(xj, u(xj))
"""
Explanation: To improve the quality of this plot we can instead evaluate the
function on more points
End of explanation
"""
up = u.refine(200)
Tp = up.function_space()
plt.plot(Tp.mesh(), up.backward())
"""
Explanation: Alternatively, we can refine the function, which simply
pads zeros to $\hat{u}$
End of explanation
"""
print(up)
"""
Explanation: The padded expansion coefficients are now given as
End of explanation
"""
import numpy.polynomial.chebyshev as cheb
c = cheb.Chebyshev(u, domain=(0, 100))
"""
Explanation: More features
Since we have used a regular Chebyshev basis above, there
are many more features that could be explored simply by going through
Numpy's Chebyshev module.
For example, we can create a Chebyshev series like
End of explanation
"""
z = Tu.map_true_domain(cheb.chebroots(u))
"""
Explanation: The Chebyshev series in Numpy has a wide range of possibilities,
see here.
However, we may also work directly with the Chebyshev
coefficients already in u. To find the roots of the
polynomial that approximates the Bessel function on
domain $[0, 100]$, we can do
End of explanation
"""
z2 = z[np.where((z.imag == 0)*(z.real > 0)*(z.real < 100))].real
print(z2[:5])
"""
Explanation: Note that the roots are found on the reference domain $[-1, 1]$
and as such we need to move the result to the physical domain using
map_true_domain. The resulting roots z are both real and imaginary,
so to extract the real roots we need to filter a little bit
End of explanation
"""
Td = FunctionSpace(0, 'C', bc=(sp.besselj(0, 0), sp.besselj(0, 100)), domain=(0, 100))
ud = Function(Td, buffer=sp.besselj(0, x))
print(len(ud))
"""
Explanation: Here np.where returns the indices where the condition is true. The condition
is that the imaginary part is zero, whereas the real part is within the
true domain $[0, 100]$.
Notice.
Using directly cheb.chebroots(c) does not seem to work (even though the
series has been generated with the non-standard domain) because
Numpy only looks for roots in the reference domain $[-1, 1]$.
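The mapping itself is a simple affine transformation; the helper below is a hypothetical stand-in for what map_true_domain presumably does:

```python
import numpy as np

def map_to_physical(x_ref, a, b):
    """Affine map from the reference domain [-1, 1] to the physical domain [a, b]."""
    return 0.5 * (a + b) + 0.5 * (b - a) * x_ref

# the reference endpoints and midpoint land on the physical endpoints and midpoint
assert np.allclose(map_to_physical(np.array([-1.0, 0.0, 1.0]), 0.0, 100.0),
                   [0.0, 50.0, 100.0])
```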
We could also use a function space with boundary conditions built
in, like
End of explanation
"""
C0 = FunctionSpace(20, 'C')
C1 = FunctionSpace(20, 'C')
T = TensorProductSpace(comm, (C0, C1))
u = Function(T)
"""
Explanation: As we can see this leads to a function space of dimension
very similar to the orthogonal space.
The major advantages of working with a space with boundary conditions
built in only comes to life when solving differential equations. As
long as we are only interested in approximating functions, we may just
as well stick to the orthogonal spaces. This goes for Legendre as
well as Chebyshev.
Multidimensional functions
Multidimensional tensor product spaces are created
by taking the tensor products of one-dimensional function spaces.
For example
End of explanation
"""
y = sp.Symbol('y', real=True)
u = Function(T, buffer=sp.cos(10*x)*sp.cos(10*y))
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward())
"""
Explanation: Here $\text{T} = \text{C0} \otimes \text{C1}$, the basis function is
$T_i(x) T_j(y)$ and the Function u is
$$
u(x, y) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \hat{u}_{ij} T_i(x) T_j(y).
$$
The multidimensional Functions work more or less exactly like for the
1D case. We can here interpolate 2D Sympy functions
End of explanation
"""
X = T.mesh()
for xj in X[0]:
for yj in X[1]:
plt.plot((xj, xj), (X[1][0, 0], X[1][0, -1]), 'k')
plt.plot((X[0][0], X[0][-1]), (yj, yj), 'k')
"""
Explanation: Like for 1D the coefficients are computed through projection,
where the exact function is evaluated on all quadrature points
in the mesh.
The Cartesian mesh represents the quadrature points of the
two function spaces, and can be visualized as follows
End of explanation
"""
X = T.local_mesh(broadcast=True, uniform=True)
plt.contourf(X[0], X[1], u.backward(kind={'chebyshev': 'uniform'}))
"""
Explanation: We may alternatively plot on a uniform mesh
End of explanation
"""
r, theta = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta))
B0 = FunctionSpace(20, 'C', domain=(0, 1))
F0 = FunctionSpace(20, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
"""
Explanation: Curvilinear coordinates
With shenfun it is possible to use curvilinear coordinates,
and not necessarily with orthogonal basis vectors. With
curvilinear coordinates the computational coordinates are
always straight lines, rectangles and cubes. But the physical
coordinates can be very complex.
Consider the unit disc with polar coordinates. Here
the position vector $\mathbf{r}$ is given by
$$
\mathbf{r} = r\cos \theta \mathbf{i} + r\sin \theta \mathbf{j}
$$
The physical domain is $\Omega = \{(x, y): x^2 + y^2 < 1\}$,
whereas the computational domain is the Cartesian product
$D = \{(r, \theta) \in [0, 1] \times [0, 2 \pi]\}$.
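As a quick sanity check of the coordinate map (independent of shenfun), the position vector indeed satisfies $x^2+y^2=r^2$:

```python
import sympy as sp

r, theta = sp.symbols('r theta', real=True, positive=True)
x, y = r * sp.cos(theta), r * sp.sin(theta)

# x^2 + y^2 = r^2 (cos^2 + sin^2) = r^2
assert sp.simplify(x**2 + y**2 - r**2) == 0
```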
We create this domain in shenfun through
End of explanation
"""
u = Function(T, buffer=(1-r)*r*sp.sin(sp.cos(theta)))
"""
Explanation: Note that we are using a Fourier space for the azimuthal
direction, since the solution here needs to be periodic.
We can now create functions on the space using an
analytical function in computational coordinates
End of explanation
"""
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward(), 100)
"""
Explanation: However, when this is plotted it may not be what you expect
End of explanation
"""
X = T.local_cartesian_mesh()
plt.contourf(X[0], X[1], u.backward(), 100)
"""
Explanation: We see that the function has been plotted in computational coordinates,
and not on the disc, as you probably expected. To plot on
the disc we need the physical mesh, and not the computational
End of explanation
"""
up = u.backward()
xp, yp, up = wrap_periodic([X[0], X[1], up], axes=[1])
plt.contourf(xp, yp, up, 100)
"""
Explanation: Notice.
The periodic plot does not wrap all around the circle. This is
not wrong; we have simply not used the same point twice, but it
does not look very good. To overcome this problem we can wrap the
grid all the way around and re-plot.
End of explanation
"""
B0 = FunctionSpace(0, 'C', domain=(0, 1))
F0 = FunctionSpace(0, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
u = Function(T, buffer=((1-r)*r)**2*sp.sin(sp.cos(theta)))
print(u.shape)
"""
Explanation: Adaptive functions in multiple dimensions
If you want to find a good resolution for a function in multiple
dimensions, the procedure is exactly like in 1D. First create function
spaces with 0 quadrature points, and then call Function
End of explanation
"""
import doctest
"""
>>> from platform import python_version
>>> print(python_version())
3.8.8
"""
doctest.testmod()
"""
Explanation: PT3S
Use SIR 3S model data and SIR 3S results in pure Python.
With pandas, matplotlib and others.
For documentation, test, verification, analysis, reporting, prototyping, play.
Install Python
End of explanation
"""
### admin rights may be required:
### either in PowerShell: Start-Process powershell -Verb runAs
### or right-click the Windows icon: Windows PowerShell (Administrator)
### then (without ! when in PowerShell, in the pip directory):
#!pip uninstall --yes PT3S
#!pip install PT3S --no-cache-dir
"""
Explanation: Install PT3S to site-packages
End of explanation
"""
import logging
import os
logger = logging.getLogger('PT3S')
logFileName= r"PT3S.log"
loglevel = logging.DEBUG
logging.basicConfig(filename=logFileName
,filemode='w'
,level=loglevel
,format="%(asctime)s ; %(name)-60s ; %(levelname)-7s ; %(message)s")
fileHandler = logging.FileHandler(logFileName)
logger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logging.Formatter("%(levelname)-7s ; %(message)s"))
consoleHandler.setLevel(logging.INFO)
logger.addHandler(consoleHandler)
"""
Explanation: Logging
End of explanation
"""
#%pwd
# works only if pip install -e . is NOT active:
###from PT3S import Mx,Xm,Rm
# or if subdir is PT3S
#%cd -q ..
# ...
#%cd -q PT3S
# if pip install -e . IS active
# the local modules can be imported with:
#from PT3S
import Mx, Xm, Rm
# if %pwd is local devdir
import doctest
"""
>>> import pandas as pd
>>> pd.__version__
'1.2.4'
"""
doctest.testmod()
"""
Explanation: about from PT3S ... import ... and pip install -e .
End of explanation
"""
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
# with SIR 3S data (models + results) from version 11
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata11
# with SIR 3S data (models + results) from version 10
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata10
# with SIR 3S data (models + results) from version 09
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata09 --dotResolution NONE
"""
Explanation: Tests (optional)
Module tests
some module tests (-m 1) assume a defined "empty" initial state: leftovers from old test runs should therefore be deleted (-t before) if the module tests are to pass completely without errors
some module tests leave "leftovers" behind that should be deleted explicitly (-t after ==> -t both)
only the models (-w ...) that are actually used in the module tests need to be handled (clean up before and/or after)
all module tests are executed
the following tests produce output in the notebook (despite -q) even on an error-free run
End of explanation
"""
%run test.py -l -q -m 0 \
-s Xm\. \
-x Xm\.vKNOTexpEBES \
-x Xm\.vROHRexpEBES \
-x Xm\._vRUES -x Xm\._vRSLW -x Xm\._vRSTN -x Xm\._vRXXX -x Xm\._vREdges \
-x Xm\.MxAdd \
-t both -y yes -z no \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
%run test.py -l -q -m 0 \
-s Mx\. \
-x Mx\.FromH5 \
-x Mx\.ToH5 \
-t both -y yes -z no \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
"""
Explanation: Single tests
only the models (-w ...) that are actually used in the single tests need to be handled
the module tests could be run before the single tests: below they are switched off (-m 0)
single tests leave "leftovers" behind (unless they clean up themselves) that should be deleted explicitly (-t after)
unlike module tests, single tests get an explicit mock-up outside the actual test, i.e. the tests start (or at least can start) with "ready-made" objects; in module tests the mock-up always happens within the test sequence itself
details of the mock-up can be specified with -y (skip reading H5 y/n) and -z (write H5 y/n)
only single tests that work with the same mock-up can be run together
mock-up here: no H5 reading and no H5 writing: -y yes -z no
only with -z yes are Xm.MxSync and .MxAdd executed before writing the H5
-s selects tests via regExp
-x deselects tests (via regExp) from the -s set
the following tests produce no output in the notebook (because of -q) on an error-free run
End of explanation
"""
# test_reinerMockUpLauf
%run test.py -l -v -m 0 -t before \
-u yes \
-y yes -z yes \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
"""
Explanation: pure mock-up run
no test is specified at all; a pure mock-up run is performed instead
useful e.g. when a mock-up is time-consuming ...
... and a particular single test in a module has to be repeated frequently ...
... because the tested functional area is being worked on
mock-up here: no H5 reading but H5 writing: -y yes -z yes
-u triggers the pure mock-up run
the following run produces no output in the notebook (because of -q) on an error-free run
End of explanation
"""
%run test.py -l -q -m 0 \
-s Xm\._vRUES -s Xm\._vRSLW -s Xm\._vRSTN -s Xm\._vRXXX -s Xm\._vREdges \
-y no -z no \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
%run test.py -l -q -m 0 \
-s Xm\.MxAdd \
-y no -z no \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
%run test.py -l -q -m 0 \
-s Xm\.vKNOTexpEBES \
-y no -z no \
-w DHNetwork
%run test.py -l -q -m 0 \
-s Xm\.vROHRexpEBES \
-y no -z no \
-w DHNetwork
"""
Explanation: ### single tests based on this pure mock-up run
test runs are fastest with H5 reading (-y no) and no H5 writing (-z no)
single test and mock-up must match (i.e. if a single test starts with "ready-made" objects, the mock-up must provide them)
the same holds for the development state: the "ready-made" objects are created beforehand in a (possibly pure mock-up) run and stored as H5; such single tests can therefore only be used if the other code parts involved run through accordingly
End of explanation
"""
%run Mx.py -l -q -m 0 -s getMicrosecondsFromRefTime
"""
Explanation: ### further examples of single tests
example: running a single test in module Mx via Mx.py
the test object is a class-free function that requires no model mock-up
End of explanation
"""
%run Xm.py -l -q -m 0 -s Xm\.constructShortestPathFromNodeList -t both -y yes -z no -w GPipes -w LocalHeatingNetwork
"""
Explanation: example: running a single test in module Xm via Xm.py
the test object is a class function that requires the specified model mock-ups
End of explanation
"""
%run Rm.py -l -q -m 0 \
-s pltMakeCategoricalColors \
-s pltMakeCategoricalCmap \
-s Rm\. \
-y no -z no \
-w DHNetwork
"""
Explanation: example: further single tests
End of explanation
"""
import pandas as pd
import numpy as np
import scipy
import networkx as nx
"""
Explanation: Loading and using model and results
End of explanation
"""
path='.'
xmlFile=os.path.join(path,'testdata\LocalHeatingNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
vVBEL=xm.dataFrames['vVBEL']
vVBEL.filter(items=['BESCHREIBUNG','NAME_i','NAME_k','LAYR','L','D']).sort_index(level=1)
vVBEL.dtypes
[viewOrTable for viewOrTable in sorted(xm.dataFrames.keys())]
"""
Explanation: Model
End of explanation
"""
# Way A to get results:
# augment the model views with MX results; the model's MX result set is read implicitly and returned
mx=xm.MxSync()
# Way B to get results:
# read an MX result set completely independently of a model,
# e.g. the MX result set of the model ...
(wDir,modelDir,modelName,mx1File)=xm.getWDirModelDirModelName()
# ... read it
mx=Mx.Mx(mx1File=mx1File)
"""
Explanation: Results
End of explanation
"""
mx.df.filter(items=['ALLG~~~4639827058859487185~SNAPSHOTTYPE','KNOT~V-L~~5736262931552588702~PH'])
mx.df.filter(regex='^KNOT').filter(regex='PH$').plot()
"""
Explanation: Non-Vector
End of explanation
"""
timesReq=mx.df.index.tolist()
mxVecsFileData=mx.getMxsVecsFileData(timesReq)
for vecsFileResult in mxVecsFileData:
print(vecsFileResult.index)
vecsFileResult.filter(regex='^ROHR').filter(regex='^(?!.*VEC)')
vecsFileResult.filter(regex='^KNOT')
"""
Explanation: Vector
End of explanation
"""
mx.dfVecAggs
"""
Explanation: Vector: aggregates
TIME: MXS: SIR 3S t=0 result as an unsteady initial condition (no aggregate) - implicitly from _readMxsFile
TMIN,TMAX: MXS: SIR 3S min./max. - implicitly from _readMxsFile
or aggregates from 2 times: MIN,MAX,... - implicitly from getVecAggs
End of explanation
"""
xm.MxSync(mx=mx)
vROHR=xm.dataFrames['vROHR']
vROHR[['L','mx2NofPts','mx2Idx']]
"""
Explanation: Modell und Ergebnis "synchronisieren"
Bezuege herstellen zwischen einem Modell und einem Ergebnissatz (Stichworte mx2Idx, NAME123)
mx=xm.MxSync() synchronisiert implizit mit dem MX-Ergebnissatz des Modells (der MX-Satz wir dabei gelesen)
xm.MxSync(mx=mx): explizite Synchronisation mit irgendeinem bereits gelesenen MX-Ergebnissatz
End of explanation
"""
xm.MxAdd(mx=mx)
"""
Explanation: "Augmenting" model views with results
the model views are extended with result columns
mx=xm.MxAdd() first implicitly synchronizes with the model's MX result set (the MX set is read in the process)
mx=xm.MxAdd() is therefore the SHORTEST call path to continue working with model + results after reading a model
xm.MxAdd(mx=mx): the results are taken from the specified MX set
without a time argument, MxAdd adds the steady-state result
instead of the result at one scenario time, the model views can also be extended with result aggregates over a time span
the added result columns do not "accumulate": the result column names are the same, and equally many, after every MxAdd call
End of explanation
"""
vKNOT=xm.dataFrames['vKNOT']
vKNOT.dtypes
vVBEL=xm.dataFrames['vVBEL']
vVBEL.filter(items=['NAME_i','NAME_k','Z_i','KNOT~*~*~*~PH_i','Q']).sort_values(['Q','KNOT~*~*~*~PH_i'], ascending=[1,0])
vVBEL.dtypes
vROHR=xm.dataFrames['vROHR']
vROHR.dtypes
vFWVB=xm.dataFrames['vFWVB']
vFWVB.dtypes
"""
Explanation: Vector results
End of explanation
"""
vROHRVecResults=xm.dataFrames['vROHRVecResults']
vROHRVecResults[['pk','mx2Idx','IptIdx','ROHR~*~*~*~SVEC','ROHR~*~*~*~ZVEC','ROHR~*~*~*~MVEC']]
"""
Explanation: Pipe vector results
End of explanation
"""
vAGSN=xm.dataFrames['vAGSN']
vAGSN.dtypes
dfOneVL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
dfOneVL[['OBJTYPE','x','P','Q']]
dfOneRL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==2)]
dfOneRL[['OBJTYPE','x','P','Q']]
plt.plot(dfOneVL['x'],dfOneVL['Q']
,dfOneRL['x'],dfOneRL['Q'])
plt.plot(dfOneVL['x'],dfOneVL['P']
,dfOneRL['x'],dfOneRL['P'])
plt.ylim((2.5, 5.5))
# read the SIR 3S MIN result
xm.MxAdd(mx=mx,aggReq='TMIN')
vAGSN=xm.dataFrames['vAGSN']
dfOneVLTMIN=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
plt.plot(dfOneVL['x'],dfOneVL['P']
,dfOneRL['x'],dfOneRL['P']
,dfOneVL['x'],dfOneVLTMIN['P']
)
plt.ylim((2.5, 5.5))
"""
Explanation: Sections
End of explanation
"""
xm.ToH5()
mx.ToH5()
# next read will be faster because H5 is read instead of XML / MX if H5 is newer than XML / MX
xm=Xm.Xm(xmlFile=xmlFile)
mx=Mx.Mx(mx1File=mx1File)
#xm?
#mx?
"""
Explanation: Store in H5
End of explanation
"""
dir(Rm)
import Rm
dir(Rm)
rm=Rm.Rm(xm=xm,mx=mx)
"""
Explanation: Rm plot examples
End of explanation
"""
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes and FixedLimits default to False; RefPerc defaults to True
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp1: Prozentdarstellung - keine Klassen'
,timeDeltaToT= pd.to_timedelta('30 seconds')
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,pVICsDf=pd.DataFrame({'Kundenname': ['VIC1'],'Knotenname': ['V-K007']})
,CBShrink=1. # default: 0.3; too small here because of the model's pronounced landscape extent
,CBLabelPad=-20 # default: -50; likewise too large here
)
plt.show()
"""
Explanation: Bsp1: Prozentdarstellung - keine Klassen
End of explanation
"""
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes and FixedLimits default to False; RefPerc defaults to True
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp2: Prozentdarstellung - Klassen'
,timeDeltaToT= pd.to_timedelta('30 seconds')
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,CBShrink=1. # default: 0.3; too small here because of the model's pronounced landscape extent
,CBLabelPad=-20 # default: -50; likewise too large here
,pFWVBMeasure3Classes=True
,pFWVBMeasureCBFixedLimitHigh=0.80
,pFWVBMeasureCBFixedLimitLow=0.66
)
plt.show()
"""
Explanation: Example 2: percentage display - with classes
End of explanation
"""
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes and FixedLimits default to False; RefPerc defaults to True
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp3: keine Prozentdarstellung - Klassen'
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,CBShrink=1. # default: 0.3; too small here because of the model's pronounced landscape extent
,CBLabelPad=-20 # default: -50; likewise too large here
,pFWVBMeasure3Classes=True
,pFWVBMeasureInRefPerc=False
,pFWVBMeasure='FWVB~*~*~*~W'
,pFWVBMeasureCBFixedLimitHigh=200.
,pFWVBMeasureCBFixedLimitLow=130.
)
plt.show()
"""
Explanation: Example 3: no percentage display - with classes
End of explanation
"""
G=nx.from_pandas_edgelist(xm.dataFrames['vVBEL'], source='NAME_i', target='NAME_k', edge_attr=True,create_using=nx.MultiGraph())
for e, datadict in G.edges.items():
print(e)
print(datadict)
for n, nbrsdict in G.adj.items():
print("!{0:s}".format(n))
for nox, mgdct in nbrsdict.items():
print(" {0:s}".format(nox))
for mg,edct in mgdct.items():
print(" {0:d}: {1:s}".format(mg,str(edct)))
print(nx.dijkstra_path(G, 'V-L', 'R-L'))
max([d for n,d in nx.degree(G)])
spmtx=nx.adjacency_matrix(G) # Return type: SciPy sparse matrix
plt.spy(spmtx)
"""
Explanation: NetworkX examples
hydraulic process model
End of explanation
"""
spmtx=nx.laplacian_matrix(G)
plt.spy(spmtx)
nl=[n for n in G.nodes()]
A=nx.to_scipy_sparse_matrix(G)
nlo=scipy.sparse.csgraph.reverse_cuthill_mckee(A)
optnl=[nl[idx] for idx in nlo]
spmtx=nx.laplacian_matrix(G,nodelist=optnl)
plt.spy(spmtx)
"""
Explanation: The Laplacian matrix is defined as $L:=D-A$, where $D$ denotes the degree matrix and $A$ the adjacency matrix of the graph.
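This identity is easy to verify on a small graph (a minimal sketch, independent of the process model):

```python
import networkx as nx
import numpy as np

G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
A = nx.to_numpy_array(G)          # adjacency matrix
D = np.diag(A.sum(axis=1))        # degree matrix (node degrees on the diagonal)
L = nx.laplacian_matrix(G).toarray()
assert np.allclose(L, D - A)      # L = D - A
```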
End of explanation
"""
xm.delFiles()
mx.delFiles()
"""
Explanation: Clean Up
End of explanation
"""
xmlFile=os.path.join(path,'testdata\DHNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
mx=xm.MxAdd()
vRSTN=xm.dataFrames['vRSTN']
rstnDiePGRPStellen=vRSTN[[
'CONT'
#,'CONT_PARENT'
,'KA'
,'BESCHREIBUNG'
,'ITYP_OBJTYPE'
,'ITYP_OBJATTR'
,'Chk'
,'ik_Chk'
# ,'OBJTYPE'
,'NAME_i'
,'NAME_k'
,'CONT_i'
# ,'TABL_Chk'
# ,'TABL'
# ,'KNOT'
# ,'RART'
# ,'RART_TYP'
,'RARTPG'
# ,'RCPL'
# ,'RCPL_KNOT1'
# ,'RCPL_KNOT2'
# ,'NAME_i_PUMP'
# ,'NAME_k_PUMP'
]].sort_values(by=['ITYP_OBJTYPE','ITYP_OBJATTR','CONT','KA'])[vRSTN['ITYP_OBJTYPE']=='PGRP']
rstnDiePGRPStellen
rstnDiePGRPStellen[rstnDiePGRPStellen['NAME_i']=='R-A-SS']
# connection lines
vREdges=xm.dataFrames['vREdges']
# signal path (with node type _INFO)
G=nx.from_pandas_edgelist(vREdges, source='KnExt_Ki', target='KnExt_Kk', edge_attr=True,create_using=nx.DiGraph())
nx.shortest_path(G,'Leck_1_Ein_RSLW','KA-0008_RSTN')
# signal path (without node type)
G=nx.from_pandas_edgelist(vREdges, source='Kn_Ki', target='Kn_Kk', edge_attr=True,create_using=nx.DiGraph())
nx.shortest_path(G,'Leck_1_Ein','KA-0008')
G=nx.from_pandas_edgelist(vREdges, source='Kn_Ki', target='Kn_Kk', edge_attr=True,create_using=nx.Graph())
nl=[n for n in G.nodes()]
A=nx.to_scipy_sparse_matrix(G)
nlo=scipy.sparse.csgraph.reverse_cuthill_mckee(A)
optnl=[nl[idx] for idx in nlo]
spmtx=nx.laplacian_matrix(G,nodelist=optnl)
plt.spy(spmtx)
"""
Explanation: control-engineering signal model
End of explanation
"""
import pandas as pd
import matplotlib
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.dates as mdates
from matplotlib import colors
from matplotlib.colorbar import make_axes
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
import scipy
import networkx as nx
import re
from itertools import chain
import math
import sys
path='.'
xmlFile=os.path.join(path,'testdata\DHNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
mx=xm.MxAdd()
xm.ToH5()
mx.ToH5()
"""
Explanation: Dashboard examples
End of explanation
"""
tStartSz=mx.df.index[0]
tEndeSz=mx.df.index[-1]
tStartSz
tEndeSz
timeSpanSz=tEndeSz-tStartSz
timeSpanSz
"""
Explanation: Time-curve data
available scenario time
End of explanation
"""
tStart=tStartSz
tEnde=tEndeSz
tcData=mx.df.loc[tStart:tEnde,:]
tcData.index[0]
tcData.index[-1]
"""
Explanation: selected time range
End of explanation
"""
mx1DfFwesW=tcData.filter(regex='^FWES').filter(regex='W$')
mx1DfFwesW=mx1DfFwesW.reindex(sorted(mx1DfFwesW.columns),axis=1)
mx1DfFwesW.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfFwesW.plot()
timeSumMaxMx1DfFwesW=mx1DfFwesW.sum(axis=1).idxmax()
timeSumMaxMx1DfFwesW
timeSumMinMx1DfFwesW=mx1DfFwesW.sum(axis=1).idxmin()
timeSumMinMx1DfFwesW
"""
Explanation: Fwes W
End of explanation
"""
mx1DfPumpQ=tcData.filter(regex='^PUMP').filter(regex='QM$')
mx1DfPumpQ=mx1DfPumpQ.reindex(sorted(mx1DfPumpQ.columns),axis=1)
mx1DfPumpQ.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPumpQ.plot()
"""
Explanation: Pump Q
End of explanation
"""
mx1DfPumpNy=tcData.filter(regex='^PUMP').filter(regex='N$')
mx1DfPumpNy=mx1DfPumpNy.reindex(sorted(mx1DfPumpNy.columns),axis=1)
mx1DfPumpNy.head()
"""
Explanation: Pump Ny
End of explanation
"""
mx1DfPumpWCols=[col for col in mx.df.columns.tolist()
if
re.match(Mx.reSir3sIDcompiled,col).group('NAME1') in ['wNA','wNB','wNC']
and
re.match(Mx.reSir3sIDcompiled,col).group('ATTRTYPE') in ['XA']
and
re.match(Mx.reSir3sIDcompiled,col).group('OBJTYPE') in ['RSLW']
]
mx1DfPumpNw=tcData.filter(items=mx1DfPumpWCols)
mx1DfPumpNw=mx1DfPumpNw.reindex(sorted(mx1DfPumpNw.columns),axis=1)
mx1DfPumpNw.head()
"""
Explanation: Pump Nw
End of explanation
"""
mx1DfPumpNywPairs=pd.concat([mx1DfPumpNy,mx1DfPumpNw],axis=1)
mx1DfPumpNywPairs=mx1DfPumpNywPairs.filter(items=list(chain.from_iterable(
[x for x in zip(mx1DfPumpNy.columns.tolist(),mx1DfPumpNw.columns.tolist())]
)
)
)
mx1DfPumpNywPairs.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPumpNywPairs.plot()
"""
Explanation: Pump Nyw-Pairs
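The column interleaving used above (zip combined with chain.from_iterable) can be shown standalone; the channel names below are made up:

```python
from itertools import chain

ny_cols = ['PUMP~A~~N', 'PUMP~B~~N']         # hypothetical speed channels
nw_cols = ['RSLW~wNA~~XA', 'RSLW~wNB~~XA']   # hypothetical setpoint channels

# pair each speed channel with its setpoint channel, then flatten
interleaved = list(chain.from_iterable(zip(ny_cols, nw_cols)))
assert interleaved == ['PUMP~A~~N', 'RSLW~wNA~~XA', 'PUMP~B~~N', 'RSLW~wNB~~XA']
```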
End of explanation
"""
mx1DfPipeLIO=tcData.filter(regex='^ROHR').filter(regex='LECKEINAUS$')
mx1DfPipeLIO=mx1DfPipeLIO.reindex(sorted(mx1DfPipeLIO.columns),axis=1)
mx1DfPipeLIO.head()
mx1DfPipeLQ=tcData.filter(regex='^ROHR').filter(regex='LECKMENGE$')
mx1DfPipeLQ=mx1DfPipeLQ.reindex(sorted(mx1DfPipeLQ.columns),axis=1)
mx1DfPipeLQ.head()
mx1DfPipeLPairs=pd.concat([mx1DfPipeLIO,mx1DfPipeLQ],axis=1)
mx1DfPipeLPairs=mx1DfPipeLPairs.filter(items=list(chain.from_iterable(
[x for x in zip(mx1DfPipeLIO.columns.tolist(),mx1DfPipeLQ.columns.tolist())]
)
)
)
mx1DfPipeLPairs.describe()
# effective leak rates as ~LEAK = ~LECKEINAUS * ~LECKMENGE
colList=mx1DfPipeLPairs.columns.tolist()
for idx in range(0,len(colList),2):
col=colList[idx]
mo=re.match(Mx.reSir3sIDcompiled,col)
colNew=mo.group('OBJTYPE')
colNew=colNew+Mx.reSir3sIDSep+str(mo.group('NAME1'))
colNew=colNew+Mx.reSir3sIDSep+mo.group('NAME2')
colNew=colNew+Mx.reSir3sIDSep+mo.group('OBJTYPE_PK')
colNew=colNew+Mx.reSir3sIDSep+'LEAK'
mx1DfPipeLPairs[colNew]=mx1DfPipeLPairs.apply(lambda row: row[idx] * row[idx+1] , axis=1)
mx1DfPipeLeaks=mx1DfPipeLPairs.filter(regex='LEAK$')
mx1DfPipeLeaks.describe()
s=mx1DfPipeLeaks.max()
s=s[s>0]
s.index.tolist()
mx1DfPipeLeaks=mx1DfPipeLeaks.filter(items=s.index.tolist())
mx1DfPipeLeaks.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPipeLeaks.plot()
"""
Explanation: Leaks
End of explanation
"""
colDct={}
for col in tcData.columns.tolist():
mo=re.match(Mx.reSir3sIDcompiled,col)
colNew=mo.group('OBJTYPE')
colNew=colNew+Mx.reSir3sIDSep+str(mo.group('NAME1'))
colNew=colNew+Mx.reSir3sIDSep+mo.group('NAME2')
#colNew=colNew+Mx.reSir3sIDSep+mo.group('OBJTYPE_PK')
colNew=colNew+Mx.reSir3sIDSep+mo.group('ATTRTYPE')
colDct[col]=colNew
df=tcData.rename(columns=colDct)
mx1DfDH=pd.concat([df['RSLW~wDH_RD_A~~XA']
,df['RMES~yDH_pRL_A~~XA']
,df['RSLW~wDH_MD_A~~XA']
,df['RADD~yDH_pMD_A~~XA']
]
, axis=1)
mx1DfDH.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfDH.plot()
mx1DfDHQ=pd.concat([df['RMES~QDHGes~~XA']
,df['ALLG~~~LINEPACKRATE']
]
, axis=1)
mx1DfDHQ.head()
mx1DfDHV=pd.concat([df['ALLG~~~LINEPACKGEOM']
]
, axis=1)
mx1DfDHV=mx1DfDHV-mx1DfDHV['ALLG~~~LINEPACKGEOM'][0]
mx1DfDHV.describe()
mx1DfDHV.shape
"""
Explanation: DH
Rename (so that channels can be selected without the ID)
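The renaming drops the OBJTYPE_PK part of each channel ID. Assuming the '~'-separated layout visible in the column names, the effect can be sketched without the Mx regular expression:

```python
# assumed channel-ID layout: OBJTYPE~NAME1~NAME2~OBJTYPE_PK~ATTRTYPE, separated by '~'
col = 'KNOT~V-L~~5736262931552588702~PH'
objtype, name1, name2, pk, attrtype = col.split('~')
short = '~'.join([objtype, name1, name2, attrtype])  # drop the pk part
assert short == 'KNOT~V-L~~PH'
```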
End of explanation
"""
def mxAdd(xm,mx,timeReq=None,aggReq=None,timeReq2nd=None):
xm.MxAdd(mx=mx,timeReq=timeReq,aggReq=aggReq,timeReq2nd=timeReq2nd)
vAGSN=xm.dataFrames['vAGSN']
vKNOT=xm.dataFrames['vKNOT']
vROHR=xm.dataFrames['vROHR']
vFWVB=xm.dataFrames['vFWVB']
vVBEL=xm.dataFrames['vVBEL']
vAGSN=xm.dataFrames['vAGSN']
vAGSN['PH']=vAGSN.apply(lambda row: row.P*math.pow(10.,5.)/(row.RHO*9.81),axis=1)
vAGSN['PH']=vAGSN['PH']+vAGSN['Z'].astype('float64')
zBzg=30.
vAGSN['bBzg']=vAGSN.apply(lambda row: row.RHO*9.81/math.pow(10.,5.),axis=1)
vAGSN['zBzg']= (vAGSN['Z'].astype('float64')-zBzg)*vAGSN['bBzg']
vAGSN['zBzg0']= (vAGSN['Z'].astype('float64')-0 )*vAGSN['bBzg']
vAGSN['zBzgMin']= (vAGSN['Z'].astype('float64')-vAGSN['Z'].astype('float64').min())*vAGSN['bBzg']
vAGSN['bBzg']=vAGSN['P']+vAGSN['zBzg']
hpSL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
hpRL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==2)]
vROHR=vROHR[(vROHR.apply(lambda x: True if x.CONT_ID == '1001' else False,axis=1))]
return vAGSN,vKNOT,vROHR,vFWVB,vVBEL,hpSL,hpRL
vAGSN_TStart,vKNOT_TStart,vROHR_TStart,vFWVB_TStart,vVBEL_TStart,hpSL_Start,hpRL_Start=mxAdd(xm,mx,timeReq=mx.df.index[0])
vAGSN_TEnde,vKNOT_TEnde,vROHR_TEnde,vFWVB_TEnde,vVBEL_TEnde,hpSL_Ende,hpRL_Ende=mxAdd(xm,mx,timeReq=mx.df.index[-1])
vAGSN_SMin,vKNOT_SMin,vROHR_SMin,vFWVB_SMin,vVBEL_SMin,hpSL_SMin,hpRL_SMin=mxAdd(xm,mx,aggReq='TMIN')
vAGSN_SMax,vKNOT_SMax,vROHR_SMax,vFWVB_SMax,vVBEL_SMax,hpSL_SMax,hpRL_SMax=mxAdd(xm,mx,aggReq='TMAX')
# better way to get several times at once:
xm.MxAdd(mx=mx
,aggReq=['TIME','TMIN','TMAX','TIME'] # start, min, max, end # e.g. P P_1 P_2 P_3
,timeReq=3*[mx.df.index[0]]+[mx.df.index[-1]]
,timeReq2nd=4*[mx.df.index[-1]]
,viewList=['vAGSN','vKNOT','vROHR','vFWVB','vVBEL']
,ForceNoH5Update=True)
vAGSN=xm.dataFrames['vAGSN']
vAGSN.filter(items=['P','P_1','P_2','P_3'],axis=1).head()
"""
Explanation: for plots that work with one time or with aggregates over one time span:
start, end, min, max
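The pressure-head conversion used in mxAdd above, i.e. PH = P*10^5/(RHO*9.81) + Z, shown in isolation (pressure_head_m is a hypothetical helper; pressure in bar and g = 9.81 m/s^2 are assumed):

```python
def pressure_head_m(p_bar, rho_kg_m3, z_m):
    """Convert a pressure in bar to head in m above datum (g = 9.81 m/s^2)."""
    return p_bar * 1e5 / (rho_kg_m3 * 9.81) + z_m

# 3 bar of water at elevation 30 m corresponds to roughly 60.58 m of head
assert abs(pressure_head_m(3.0, 1000.0, 30.0) - 60.581) < 1e-2
```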
End of explanation
"""
# bar
hpCSL='red'
hpCRL='blue'
# bBzg
hpCSL2='lightcoral'
hpCRL2='cornflowerblue'
hpZ='black'
# Q
hpCSL3='salmon'
hpCRL3='lightsteelblue'
# bar Min/Max
hpCSLMax='mediumvioletred'
hpCSLMin='palevioletred'
hpCRLMax='darkcyan'
hpCRLMin='aqua'
"""
Explanation: Colors for longitudinal sections
End of explanation
"""
# line attributes for pairs of lines
lwThick=mpl.rcParams['lines.linewidth']*2 # line drawn first (light)
lsThin='--' # line drawn second (dark)
"""
Explanation: Colors for time curves
Definitions
End of explanation
"""
# for A, B, C: selection from constructed color shades
ntcCat=10
ntcCatSub=3
tcCm=Rm.pltMakeCategoricalCmap(catagoryColors=[idx for idx in range(ntcCat)],nOfSubCatsReq=ntcCatSub)
catA=0 # blue
catB=1 # orange
catC=2 # green
catFromIdx={}
catFromIdx[0]=catA
catFromIdx[1]=catB
catFromIdx[2]=catC
# DH: RD,MD,VD:
tcC_SL=plt.get_cmap("tab20b").colors[3*4+2]
tcC_RL=plt.get_cmap("tab20b").colors[0*4+2]
tcC_ML=plt.get_cmap("tab20b").colors[4*4+2]
tcC_SLl=plt.get_cmap("tab20b").colors[3*4+3]
tcC_RLl=plt.get_cmap("tab20b").colors[0*4+3]
tcC_MLl=plt.get_cmap("tab20b").colors[4*4+3]
# DH: RD,MD pairs: sequences
tcC_XL=[tcC_RLl,tcC_RL,tcC_MLl,tcC_ML]
tcLs_XL=[mpl.rcParams['lines.linestyle'],lsThin,mpl.rcParams['lines.linestyle'],lsThin]
tcLw_XL=[lwThick,mpl.rcParams['lines.linewidth'],lwThick,mpl.rcParams['lines.linewidth']]
# DH: Q,Linepackrate:
tcC_QDH=plt.get_cmap("tab20b").colors[2*4+0]
tcC_QLPRate=plt.get_cmap("tab20b").colors[2*4+3]
# DH: Q, line-pack rate pair: sequence
tcC_DH=[tcC_QDH,tcC_QLPRate]
tcLs_DH=[mpl.rcParams['lines.linestyle'],lsThin]
tcLw_DH=[lwThick,mpl.rcParams['lines.linewidth']]
# DH: V
tcC_VDH=plt.get_cmap("tab20b").colors[1*4+2]
"""
Explanation: Defining the colors
End of explanation
"""
# select 2 times
time1=timeSumMaxMx1DfFwesW
time2=timeSumMinMx1DfFwesW
# fetch the results for the 2 times
vAGSN_T1,vKNOT_T1,vROHR_T1,vFWVB_T1,vVBEL_T1,hpSL_T1,hpRL_T1=mxAdd(xm,mx,timeReq=time1)
vAGSN_T2,vKNOT_T2,vROHR_T2,vFWVB_T2,vVBEL_T2,hpSL_T2,hpRL_T2=mxAdd(xm,mx,timeReq=time2)
# assign Ref/Cmp
timeRef=time1
timeCmp=time2
hpSLRef=hpSL_T1
hpRLRef=hpRL_T1
hpSLCmp=hpSL_T2
hpRLCmp=hpRL_T2
vROHRRef=vROHR_T1
vROHRCmp=vROHR_T2
vROHR_NFD=pd.merge(vROHRRef
,hpRLRef[hpRLRef.IptIdx=='S']
,how='left'
,left_on='pk'
,right_on='OBJID'
,suffixes=('','_AGSN')).filter(items=vROHRRef.columns.tolist()+['OBJID'])
plt.close()
size_DINA3quer=(16.5, 11.7)
plt.rc('figure',figsize=size_DINA3quer)
plt.rc('figure',dpi=72)
plt.rc('savefig',dpi=72*2)
fig=plt.figure()
# .............................................................
# parametrization
# .............................................................
# left: 1 NFD and 1 NDD
# .............................................................
# the left part occupies the left half of the dashboard
leftTileRight=0.5
# right: 3 time-curve panels (ZKs)
# .............................................................
# the right part occupies the right half of the dashboard
rightTileStart=1-leftTileRight
# however, space is needed for the y-axes
rightTileYAxisXSpace=0.125
rightTileStart=rightTileStart+rightTileYAxisXSpace
rightTileH_pad=0.5
# x-axis
#majLocator=mdates.MinuteLocator(interval=5)
majLocator=mdates.MinuteLocator(byminute=[0,5,10,15,20,25,30,35,40,45,50,55])
majFormatter=mdates.DateFormatter('%d.%m.%y: %H:%M')
# .............................................................
# left: 1 NFD and 1 NDD
# .............................................................
gs1 = gridspec.GridSpec(2, 1)
axNfd = fig.add_subplot(gs1[0])
axHp = fig.add_subplot(gs1[1])
gs1.tight_layout(fig, rect=[0, 0, leftTileRight, 1])
#rect : if rect is given, it is interpreted as a rectangle
#(left, bottom, right, top) in the normalized figure coordinate that the whole subplots area (including labels) will fit into
# .............................................................
# NFD
# .............................................................
Rm.Rm.pltNetPipes(vROHR_NFD
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='2' and row.KVR_k=='2' else False
,pAx=axNfd
,pAttributeFunc=lambda row: math.fabs(row['ROHR~*~*~*~QMAV'])
,pAttributeColorMapMin=0.
,pAttributeColorMapMax=1600.
,CBLabel='Q [t/h]'
,sort_values_by=['pAttributeFunc']
,sort_values_ascending=True
,pAttributeColorMapFmask=lambda row: True if not pd.isnull(row.OBJID) else False
,pAttributeColorMap2ndFmask=lambda row: True if pd.isnull(row.OBJID) else False
,pAttributeColorMap2ndUsageStart=1./4. # not too white
,pAttributeColorMap2ndUsageEnd=1./2. # not too black
)
# .............................................................
# HP
# .............................................................
# (negative) offset of the 2nd y-axis from the plot area
yTwinedAxesPosDeltaHP=-0.100
axHp.set_ylabel('p [bar]')
axHp.set_ylim(0,16)
axHp.set_yticks(np.arange(0, 16.1, 1))
axHp.plot(hpSLRef['x'],hpSLRef['bBzg'],color=hpCSL2)
axHp.plot(hpRLRef['x'],hpRLRef['bBzg'],color=hpCRL2)
axHp.plot(hpSLRef['x'],hpSLRef['zBzgMin'],color=hpZ,ls='--')
axHp.plot(hpSLRef['x'],hpSLRef['P'],color=hpCSL)
axHp.plot(hpRLRef['x'],hpRLRef['P'],color=hpCRL)
hpSLRef = hpSLRef.apply(pd.to_numeric, errors='ignore')
hpSLCmp = hpSLCmp.apply(pd.to_numeric, errors='ignore')
axHp.fill_between(hpSLRef['x'], hpSLRef['P'], hpSLCmp['P'], color='grey', alpha=0.5)
hpRLRef = hpRLRef.apply(pd.to_numeric, errors='ignore')
hpRLCmp = hpRLCmp.apply(pd.to_numeric, errors='ignore')
axHp.fill_between(hpRLRef['x'], hpRLRef['P'], hpRLCmp['P'], color='grey', alpha=0.5)
axHp.plot(hpSL_SMax['x'],hpSL_SMax['P'],color=hpCSLMax,ls='--')
axHp.plot(hpSL_SMin['x'],hpSL_SMin['P'],color=hpCSLMin,ls='--')
axHp.plot(hpRL_SMax['x'],hpRL_SMax['P'],color=hpCRLMax,ls='--')
axHp.plot(hpRL_SMin['x'],hpRL_SMin['P'],color=hpCRLMin,ls='--')
# x-axis
ax=axHp
axHp.set_xlim(0,23000)
axHp.set_xticks(np.arange(0, 23000.1, 1000))
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.grid()
# 2nd y-axis
axHp_2nd = axHp.twinx()
axHp_2nd.spines["left"].set_position(("axes", yTwinedAxesPosDeltaHP))
Rm.pltMakePatchSpinesInvisible(axHp_2nd)
axHp_2nd.spines['left'].set_visible(True)
axHp_2nd.yaxis.set_label_position('left')
axHp_2nd.yaxis.set_ticks_position('left')
axHp_2nd.set_ylabel('Q [t/h]')
axHp_2nd.set_ylim(-1600,1600)
axHp_2nd.set_yticks(np.arange(-1600, 1600.1, 200))
axHp_2nd.plot(hpRLRef['x'],hpRLRef['Q'],color=hpCRL3,ls='-')
axHp_2nd.fill_between(hpRLRef['x'], hpRLRef['Q'], hpRLCmp['Q'], color='mediumslateblue', alpha=0.5)
# .............................................................
# right: 3 time-curve panels (ZKs)
# .............................................................
gs2 = gridspec.GridSpec(3, 1)
axTcUp = fig.add_subplot(gs2[0])
axTcMi = fig.add_subplot(gs2[1])
axTcBo = fig.add_subplot(gs2[2])
gs2.tight_layout(fig, rect=[rightTileStart, 0, 1, 1], h_pad=rightTileH_pad)
#pad : float
#padding between the figure edge and the edges of subplots, as a fraction of the font-size
#h_pad, w_pad : float
#padding (height/width) between edges of adjacent subplots
# (negative) offset of the 2nd y-axis from the plot area
yTwinedAxesPosDelta=-0.175
# .............................................................
# top time-curve panel
# .............................................................
axTcUp.stackplot(mx1DfFwesW.index.values #x
,np.row_stack([mx1DfFwesW[col].values for col in mx1DfFwesW.columns.tolist()]) #y
,colors=[tcCm.colors[catA*ntcCatSub+0],tcCm.colors[catB*ntcCatSub+0],tcCm.colors[catC*ntcCatSub+0]]
,labels=['A','B','C']
)
axTcUp.set_ylabel('W [MW]')
axTcUp.set_ylim(200000,600000)
axTcUp.set_yticks(np.arange(200000, 600001, 100000))
axTcUp.set_yticklabels(["{0:2.0f}".format(x) for x in np.arange(20, 60.1,10)])
# x-axis
ax=axTcUp
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.grid()
# because of the x-axis (shown only in the top panel) the actual drawing area must be shrunk
pos1 = ax.get_position()
pos2 = [pos1.x0, pos1.y0+0.10, pos1.width, pos1.height * 0.65]
ax.set_position(pos2)
# .............................................................
# middle time-curve panel
# .............................................................
# example plot with pltTC
yAxes,yLines,vLines,yLinesLegendLabels=Rm.Rm.pltTC(pd.concat([mx1DfPumpNywPairs,mx1DfPumpQ],axis=1)
,tcLines={
'RSLW~wNA~~XA'
:{'label':'RSLW~wNA~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[0]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-A-SS~R-A-DS~N'
:{'label':'PUMP~R-A-SS~R-A-DS~N','color':tcCm.colors[catFromIdx[0]*ntcCatSub+0]
,'linestyle':lsThin}
,'RSLW~wNB~~XA'
:{'label':'RSLW~wNB~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[1]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-B-SS~R-B-DS~N'
:{'label':'PUMP~R-B-SS~R-B-DS~N','color':tcCm.colors[catFromIdx[1]*ntcCatSub+0]
,'linestyle':lsThin}
,'RSLW~wNC~~XA'
:{'label':'RSLW~wNC~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[2]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-C-SS~R-C-DS~N'
:{'label':'PUMP~R-C-SS~R-C-DS~N','color':tcCm.colors[catFromIdx[2]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-A-SS~R-A-DS~QM'
:{'label':'PUMP~R-A-SS~R-A-DS~QM','color':tcCm.colors[catFromIdx[0]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-B-SS~R-B-DS~QM'
:{'label':'PUMP~R-B-SS~R-B-DS~QM','color':tcCm.colors[catFromIdx[1]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-C-SS~R-C-DS~QM'
:{'label':'PUMP~R-C-SS~R-C-DS~QM','color':tcCm.colors[catFromIdx[2]*ntcCatSub+0]
,'linestyle':lsThin}
}
,pAx=axTcMi
,majLocator=majLocator
,majFormatter=majFormatter
,xTicksLabelsOff=True
,yTwinedAxesPosDeltaHPStart=0
,yTwinedAxesPosDeltaHP=-0.175
,lOff=True
)
yAxes['N'].set_ylabel('N [rpm]')
yAxes['N'].set_ylim(1250,1650)
yAxes['N'].set_yticks(np.arange(1250, 1651, 50))
yAxes['QM'].set_ylabel('Q [t/h]')
yAxes['QM'].set_ylim(0,4000)
yAxes['QM'].set_yticks(np.arange(0, 4001, 500))
# .............................................................
# bottom time-curve panel
# .............................................................
axTcBo.set_ylabel('p [bar]')
axTcBo.set_ylim(4,8)
axTcBo.set_yticks(np.arange(4, 8.1, .4))
for idx,col in enumerate(mx1DfDH.columns.tolist()):
line,=axTcBo.plot(mx1DfDH.index.values,mx1DfDH[col]
,color= tcC_XL[idx]
,ls=tcLs_XL[idx]
,lw=tcLw_XL[idx]
)
ax=axTcBo
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
ax.set_xticklabels([])
ax.grid()
# turn off the x-axis labels
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# 2nd y-axis
axTcBo_2nd = axTcBo.twinx()
axTcBo_2nd.spines["left"].set_position(("axes", yTwinedAxesPosDelta))
Rm.pltMakePatchSpinesInvisible(axTcBo_2nd)
axTcBo_2nd.spines['left'].set_visible(True)
axTcBo_2nd.yaxis.set_label_position('left')
axTcBo_2nd.yaxis.set_ticks_position('left')
axTcBo_2nd.set_ylabel('Q [t/h]')
axTcBo_2nd.set_ylim(-100,100)
axTcBo_2nd.set_yticks(np.arange(-100, 101, 20))
for idx,col in enumerate(mx1DfDHQ.columns.tolist()):
line,=axTcBo_2nd.plot(mx1DfDHQ.index.values,mx1DfDHQ[col]
,color= tcC_DH[idx]
,ls=tcLs_DH[idx]
,lw=tcLw_DH[idx]
)
# turn off the x-axis labels
ax=axTcBo_2nd
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.set_xticklabels([])
ax.grid()
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# 3rd y-axis
axTcBo_3rd = axTcBo.twinx()
axTcBo_3rd.spines["left"].set_position(("axes", yTwinedAxesPosDelta*2))
Rm.pltMakePatchSpinesInvisible(axTcBo_3rd)
axTcBo_3rd.spines['left'].set_visible(True)
axTcBo_3rd.yaxis.set_label_position('left')
axTcBo_3rd.yaxis.set_ticks_position('left')
axTcBo_3rd.set_ylabel('dV [(N)m3]')
axTcBo_3rd.set_ylim(-1,1)
axTcBo_3rd.set_yticks(np.arange(-1, 1.1, .2))
line,=axTcBo_3rd.plot(mx1DfDHV.index.values,mx1DfDHV['ALLG~~~LINEPACKGEOM']
,color= tcC_VDH
,ls='-.'
)
# turn off the x-axis labels
ax=axTcBo_3rd
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.set_xticklabels([])
ax.grid()
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# draw time cursors for the 2 times in all time-curve panels
axLst=[axTcUp,axTcMi,axTcBo]
for ax in axLst:
vLinePlotted=ax.axvline(x=timeRef, ymin=0, ymax=1
,label='Zeit 1'
,color='dimgrey'
# ,linestyle=linestyle
# ,linewidth=linewidth
)
vLinePlotted=ax.axvline(x=timeCmp, ymin=0, ymax=1
,label='Zeit 2'
,color='dimgrey'
,linestyle='--'
# ,linewidth=linewidth
)
plt.savefig('Dashboard.pdf',format='pdf',bbox_inches='tight')#,pad_inches=2)
# back to the 1st time
mx=xm.MxAdd()
vROHR=xm.dataFrames['vROHR']
vROHR.info()
vKNOT=xm.dataFrames['vKNOT']
vKNOTexp=xm.vKNOTexpEBES()
import re
qsCols=[col for col in vKNOTexp.columns.tolist() if re.search('^qs_',col) != None]
qsCols
qsInfCols=[col for col in vKNOTexp.columns.tolist() if re.search('^qs[a-zA-Z0-9]+',col) != None]
qsInfCols
vROHRexp=xm.vROHRexpEBES(vKNOTexp)
vROHRexp.shape
vROHR.shape
vROHRexp['QAbs']=vROHRexp.apply(lambda row: math.fabs(row['ROHR~*~*~*~QMAV']),axis=1)
vROHRexp=vROHRexp[vROHRexp['KVR']=='1']
grpObj=vROHRexp.groupby(by=['qsigRank'],as_index=False)
d={col:'min' for col in ['qsigStr','qs_1_A','qs_2_B','qs_3_C']}
d.update({'qsigFWVB~*~*~*~W':'min'})
d.update({'qsigRank_sumL':'min'})
df=grpObj.agg(d).sort_values(by=['qsigRank'],ascending=True)
df
df[df['qsigStr']=='100']['qsigFWVB~*~*~*~W'].iloc[0]
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttribute='qsA'
,pAttributeColorMapFmask=lambda row: True if row.qsA >0 else False
,pAttributeColorMap2ndFmask=lambda row: True if row.qsA <=0 else False # since A is 0, only 1 color value is used;
# ... no distinction between pipes not supplied by A and pipes not supplied at all
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
,pAttributeColorMap2ndUsageStart=1./3
,CBBinTicks=21
,CBLabel='Versorgung durch A in %'
)
txt=axNfd.set_title('EB von EG A (nicht durch EG A oder gar nicht von EGn versorgte: grau)')
gs.tight_layout(fig)
plt.show()
# the same with a discrete colormap that states explicitly what should be magenta and what should be cyan
cmap = matplotlib.colors.ListedColormap(np.vstack(
(
Rm.pltMakeCategoricalColors(list(matplotlib.colors.to_rgb('cyan')),nOfSubColorsReq=5,reversedOrder=False),
Rm.pltMakeCategoricalColors(list(matplotlib.colors.to_rgb('magenta')),nOfSubColorsReq=15,reversedOrder=True)
)
))
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttribute='qsA'
,pAttributeColorMap=cmap
,pAttributeColorMapFmask=lambda row: True if row.qsA >0 else False
,pAttributeColorMap2ndFmask=lambda row: True if row.qsA <=0 else False # since A is 0, only 1 color value is used;
# ... no distinction between pipes not supplied by A and pipes not supplied at all
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
,pAttributeColorMap2ndUsageStart=1./3
,CBBinTicks=21
,CBLabel='Versorgung durch A in %'
)
txt=axNfd.set_title('EB von EG A (nicht durch EG A oder gar nicht von EGn versorgte: grau)')
gs.tight_layout(fig)
plt.show()
# pre-populate a discrete colormap for the QSIGs
# ------------------------------------
# number of feeder groups (EGn)
anzEG=len(qsCols)
anzQSIG_moeglich=int(math.pow(2,anzEG))
# a continuous colormap is the starting point
cMap=plt.cm.get_cmap('jet') # continuous: hasattr(cMap,'from_list')
# colors
randOffset=.3
cmQSIG=cMap.from_list('cmQSIG'
, cMap(np.linspace(0+randOffset, 1-randOffset, anzQSIG_moeglich))
, anzQSIG_moeglich)
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=anzQSIG_moeglich)
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIG,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie mögliche QSIGen')
plt.show()
# replace individual colors in it
def f(cMap,idxCol,newCol):
    colors=cMap(np.arange(cMap.N,dtype=int)) # all colors
newCol=list(matplotlib.colors.to_rgb(newCol))
newCol.extend([1.])
colors[idxCol]=newCol
return matplotlib.colors.ListedColormap(colors)
# set the color of the 0 feeder group
ZColor='lightgray'
cmQSIG=f(cmQSIG,int('000',2),ZColor)
# set the color of the 1 feeder group
ZColor='darkgray'
cmQSIG=f(cmQSIG,int('111',2),ZColor)
# set the color of feeder group A - 1st
AColColor='magenta'#'mediumorchid'
cmQSIG=f(cmQSIG,int('100',2),AColColor)
# 2.
NAColor='orange'
cmQSIG=f(cmQSIG,int('010',2),NAColor)
# 3.
cmQSIG=f(cmQSIG,int('001',2),'r')
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=anzQSIG_moeglich)
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIG,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie mögliche QSIGen')
plt.show()
vROHRexp['qsigInt']=vROHRexp.apply(lambda row: int(row.qsigStr,2) ,axis=1)
vROHRexp['qsigInt'].unique() # the value range is neither necessarily gapless,
# nor does it necessarily have as many distinct values as the colormap has colors
import numpy as np
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttributeColorMap=cmQSIG
,pAttribute='qsigInt'
,CBBinBounds=[idx for idx in range(anzQSIG_moeglich+1)]
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
# as many ticks are generated as Bounds has entries - 1 more than the number of colors
# we want fewer ticks (as many as there are colors), placed in the middle
,CBTicks=[idx+.5 for idx in range(anzQSIG_moeglich)]
,CBTickLabels=["{0:b}".format(idx).zfill(anzEG) for idx in range(anzQSIG_moeglich)]
,CBLabel='QSIG'
)
txt=axNfd.set_title('Quellsignatur ABC (alle Signaturen in Farbskala)')
gs.tight_layout(fig)
plt.show()
# fetch only the colors that actually occur
# get all colors the cmap has
ccolors = plt.get_cmap(cmQSIG)(np.arange(plt.get_cmap(cmQSIG).N,dtype=int))
# extract the desired colors
QSIGBzLst=sorted([int(idx,2) for idx in vROHRexp['qsigStr'].unique()])
ccolors=[ccolors[idx] for idx in QSIGBzLst]
cmQSIGBz = matplotlib.colors.ListedColormap(ccolors)
QSIGBzLst
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=len(QSIGBzLst))
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIGBz,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie vorkommende QSIGen')
plt.show()
QSIGBzLstIdx=[idx for idx in range(len(QSIGBzLst))]
QSIGBzLstIdx
vROHRexp['qsigIntBzIdx']=vROHRexp.apply(lambda row: QSIGBzLst.index(row.qsigInt) ,axis=1)
vROHRexp['qsigIntBzIdx'].unique()
# the value range is now gapless and has as many distinct values as the colormap has colors
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
ticksLabels=["{0:b}".format(QSIGBzLst[idx]).zfill(anzEG) for idx in range(plt.get_cmap(cmQSIGBz).N)]
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttributeColorMap=cmQSIGBz
,pAttribute='qsigIntBzIdx'
,CBBinBounds=[idx for idx in range(len(vROHRexp['qsigIntBzIdx'].unique())+1)]
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
# as many ticks are generated as Bounds has entries - 1 more than the number of colors
# we want fewer ticks (as many as there are colors), placed in the middle
,CBTicks=[idx+.5 for idx in range(len(vROHRexp['qsigIntBzIdx'].unique()))]
,CBTickLabels=["dummy" for idx in range(len(vROHRexp['qsigIntBzIdx'].unique()))]
,CBLabel='QSIG'
)
txt=axNfd.set_title('Quellsignatur ABC (nur vorkommende Signaturen in Farbskala)')
ax = plt.gca()
#print(ax.get_yticks())
#print(ax.get_yticklabels())
newTickLabels=len(vROHRexp['qsigIntBzIdx'].unique())*[""]
df=vROHRexp[['qsigIntBzIdx','qsigStr','qsigRank_L','qsigRank_sumL']].drop_duplicates().reset_index()
for index, row in df.iterrows():
idx2=row['qsigIntBzIdx']
qsigStr=row['qsigStr']
newTickLabels[idx2]=qsigStr
ax.set_yticklabels(newTickLabels,rotation=90,va='center',ha='left')
gs.tight_layout(fig)
plt.show()
"""
Explanation: Plot
End of explanation
"""
!python setup.py clean sdist
!twine upload -u PT3S -p PythonTools3S dist/*
"""
Explanation: Final steps
Doc-gen instructions
Update the version string in conf.py and setup.py
sphinx-build -b html . _build
sphinx-build -b latex . _build
in _build:
pdflatex --extra-mem-bot=10000000 PT3S.tex
generates PT3S.PDF; copy it to ..
Notes
e.g. the PDF doc generation is currently only run to check that generation works in principle, i.e. whether the comment and test sections are editorially structured so that a generator can handle them; the entry-point documentation is this notebook
Deploy
End of explanation
"""
### admin rights may be required:
### either in PowerShell: Start-Process powershell -Verb runAs
### or right-click the Windows icon: Windows PowerShell (Administrator)
### then (without the ! in PowerShell, in the pip directory):
!pip install -e .
"""
Explanation: Develop (use local PT3S under Development)
End of explanation
"""
|
wasit7/algae2 | shrimp/numpy.ipynb | gpl-2.0 | print x
type(x)
y=np.ones((2,3))
print y
"""
Explanation: A 1D array is called a vector,
a 2D array is called a matrix,
and arrays with 3 and more dimensions are called tensors.
End of explanation
"""
z=np.arange(2,8,1)
alpha=np.reshape(z,(3,2))
print(alpha)
beta=np.random.randn(3,4)
print(beta)
gamma=beta*2.0
print(gamma)
a=[3,4,5]
a=np.array(a)
type(a)
"""
Explanation: I need a matrix like this
python
[[2,3],
[4,5],
[6,7]]
End of explanation
"""
a=np.random.randint(0,10,(2,3))
b=np.random.randint(0,10,(2,3))
print(a)
print(b)
print("element-wise addition:\n%s" % (a + b))
print("element-wise multiplication:\n%s" % (a * b))
print(a)
print(b.T)
print('-----')
print(np.dot(a,b.T))
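To make the shape rule behind np.dot explicit - (m, n) dot (n, p) gives (m, p), which is why b had to be transposed above - here is a small deterministic sketch (hypothetical arrays):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # shape (2, 3)
b = np.arange(6).reshape(2, 3)  # shape (2, 3)
# np.dot(a, b) would raise: inner dimensions 3 and 2 do not match
c = np.dot(a, b.T)              # (2, 3) . (3, 2) -> (2, 2)
print(c)
```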
"""
Explanation: NumPy array operators
End of explanation
"""
a=np.random.randint(0,10,(4,5))
print(a)
a[0,2]
a[3,3]=9
print(a)
print(a[:,:3])
print(a[:3,:])
print(a[:3,:3])
print(a[-3:,-3:])
b=np.array([2,3,5,7,8])
print(b)
b[::-1]
a=np.random.randint(0,10,(4,5))
print(a)
a[::-1]
print(np.fliplr(a))
a.astype(float)
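Besides positional slicing, boolean masks are a common way to select elements; a small sketch (hypothetical array):

```python
import numpy as np

a = np.array([[3, 8, 1],
              [7, 2, 9]])
mask = a > 4       # elementwise comparison yields a boolean array
print(a[mask])     # 1D array of the elements where the mask is True
a[a < 4] = 0       # masks also work for in-place assignment
print(a)
```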
"""
Explanation: Slicing
End of explanation
"""
print(np.arange(16)[::2])
(-1,)+(1,2,3)+(4,5)
"""
Explanation: Go to the opencv/build/python/2.7 folder.
Copy cv2.pyd to C:/Python27/Lib/site-packages.<br>
http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_windows/py_setup_in_windows.html
End of explanation
"""
|
Upward-Spiral-Science/the-fat-boys | docs/Team Fatboys 5 Updates Report Part 2.ipynb | apache-2.0 | filter = (abs(synapsin1 - synapsin2) < 5) & (synapsin1 > 15) & (synapsin2 > 15)
synapsin1_sub = synapsin1[filter]
synapsin2_sub = synapsin2[filter]
sub_sample = np.random.permutation(len(synapsin1_sub))[1:100]
plt.scatter(synapsin1_sub[sub_sample], synapsin2_sub[sub_sample], alpha=0.5)
plt.xlabel("Score on Synapsin-1 channel")
plt.ylabel("Score on Synapsin-2 channel")
plt.axis([0, 80, 0, 90])
plt.show()
"""
Explanation: With a more relaxed score threshold for each synapse on the synapsin-1 and synapsin-2 raw images, we plot the score of each synapse in the two channels as a scatter plot. The scores in the two channels do not correlate much.
End of explanation
"""
import pickle
raw_data = pickle.load(open('raw_data'))
synapsin1_raw = np.array(raw_data[1])[0:1119259]
synapsin2_raw = np.array(raw_data[2])[0:1119259]
#sub_sample = np.random.permutation(len(synapsin2_raw))[1:1000]
plt.hist(synapsin1_raw)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-1 Expression Level")
plt.show()
plt.hist(synapsin2_raw)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-2 Expression Level")
plt.show()
filter = (abs(synapsin1 - synapsin2) < 5) & (synapsin1 > 15) & (synapsin2 > 15)
synapsin1_processed = synapsin1_raw[filter]
synapsin2_processed = synapsin2_raw[filter]
#sub_sample = np.random.permutation(len(synapsin2_processed))[1:1000]
plt.hist(synapsin1_processed)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-1 Expression Level After Processing")
plt.show()
plt.hist(synapsin2_processed)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-2 Expression Level After Processing")
plt.show()
"""
Explanation: We propose to build a filter that only keeps synapses whose scores, computed from the raw images, are consistent across the two synapsin channels.
End of explanation
"""
synapsin1_processed_filtered = synapsin1_processed[synapsin1_processed > 1000000 ]
synapsin2_processed_filtered = synapsin2_processed[synapsin2_processed > 1000000 ]
plt.hist(synapsin1_processed_filtered)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-1 Expression Level After Processing and Thresheolded on the Integrad Brightness ")
plt.show()
plt.hist(synapsin2_processed_filtered)
plt.ylabel("Number of Synapses")
plt.xlabel("Synapsin-2 Expression Level After Processing and Thresheolded on the Integrad Brightness")
plt.show()
"""
Explanation: The filter. Based on the histograms shown above, most called synapses with relatively high synapsin expression show high expression in only one of the two synapsin channels. Those synapses are tentatively filtered out.
End of explanation
"""
|
mcocdawc/chemcoord | Tutorial/Zmat.ipynb | lgpl-3.0 | import chemcoord as cc
import time
water = cc.Cartesian.read_xyz('water_dimer.xyz', start_index=1).get_zmat()
small = cc.Cartesian.read_xyz('MIL53_small.xyz', start_index=1).get_zmat()
"""
Explanation: Zmatrices
End of explanation
"""
water
"""
Explanation: Let's have a look at it:
End of explanation
"""
water['bond']
# or explicit label based indexing
water.loc[:, 'bond']
# or explicit integer based indexing
water.iloc[:, 2]
"""
Explanation: This is the normal zmatrix with the only difference being that the upper right triangle is filled with references to the origin and canonical unit vectors. Chemically speaking this fixes translational and rotational degrees of freedom and preserves the information about the absolute position in space.
In this tutorial we concentrate on operations you can do with zmatrices.
Keep in mind that there is a separate tutorial dedicated to the transformation from cartesian space to internal coordinates and back.
Slicing
The slicing operations are the same as for pandas.DataFrames. (http://pandas.pydata.org/pandas-docs/stable/indexing.html)
If the 'x' axis is of particular interest you can slice it out with:
End of explanation
"""
water['atom'].value_counts()
"""
Explanation: Now it is very easy to e.g. count the atoms:
End of explanation
"""
import sympy
sympy.init_printing()
d = sympy.Symbol('d')
symb_water = water.copy()
symb_water.safe_loc[4, 'bond'] = d
symb_water
symb_water.subs(d, 2)
cc.xyz_functions.view([symb_water.subs(d, i).get_cartesian() for i in range(2, 5)])
# Uncomment if viewer cannot open molden files
# for i in range(2, 5):
# symb_water.subs(d, i).get_cartesian().view()
# time.sleep(1)
"""
Explanation: Returned type
The indexing behaves like Indexing and Selecting data in
Pandas.
You can slice with Zmat.loc[key], Zmat.iloc[keys], and Zmat[key].
The only question is about the return type.
If the information in the columns is enough to draw a molecule,
an instance of the own class (e.g. Zmat)
is returned.
If the information in the columns is not enough to draw a molecule, there
are two cases to consider:
A pandas.Series instance is returned for one-dimensional slices.
A pandas.DataFrame instance is returned in all other cases.
For example:
`molecule.loc[:, ['atom', 'b', 'bond', 'a', 'angle', 'd', 'dihedral']]` returns a `Zmat`.
`molecule.loc[:, ['atom', 'b']]` returns a `pandas.DataFrame`.
`molecule.loc[:, 'atom']` returns a `pandas.Series`.
If rows are omitted, there is never a Zmat instance returned.
Assignments
There exist four different methods to perform assignments:
safe_loc, unsafe_loc, safe_iloc, and unsafe_iloc.
As in pandas safe_loc and unsafe_loc are used for label based indexing and
safe_iloc and unsafe_iloc for row based indexing.
The safe methods assert that assignments do not lead to zmatrices that can't be transformed back to cartesian coordinates. They also insert dummy atoms where necessary.
The unsafe methods are wrappers around their pandas.DataFrame counterparts and are a lot faster than the safe assignments.
Unless there is a good (performance) reason, it is recommended to use the safe assignment methods!
Symbolic evaluation
It is possible to use symbolic expressions from sympy.
End of explanation
"""
distortion = water.copy()
distortion.unsafe_loc[:, ['bond', 'angle', 'dihedral']] = 0
distortion.safe_loc[4, 'bond'] = d
water + distortion
(water + distortion).subs(d, 3)
(water + distortion).subs(d, 3).get_cartesian().view()
"""
Explanation: Binary operators
Mathematical Operations:
The general rule is that mathematical operations using the binary operators
(+ - * /) and the unary operators (+ - abs)
are only applied to the ['bond', 'angle', 'dihedral'] columns.
Addition/Subtraction/Multiplication/Division:
The most common case is to add another Zmat instance.
In this case it is tested, if the used references are the same.
Afterwards the addition in the ['bond', 'angle', 'dihedral'] columns
is performed.
If you add a scalar to a Zmat it is added elementwise onto the
['bond', 'angle', 'dihedral'] columns.
If you add a 3-dimensional vector, list, tuple... the first element of this
vector is added elementwise to the 'bond' column of the
Zmat instance and so on.
The third possibility is to add a matrix with
shape=(len(Zmat), 3) which is again added elementwise.
The same rules are true for subtraction, division and multiplication.
End of explanation
"""
cartesians = cc.xyz_functions.read_molden('MIL53_ht_lt_movement.molden')
STEPS = 5
"""
Explanation: Movements without small angular approximation
End of explanation
"""
m1, m2 = cartesians[0], cartesians[-1]
"""
Explanation: A typical approximation is the small angular approximation when calculating activation barriers for the transition from one allotrope to another.
Let's import the following allotropes:
End of explanation
"""
cc.xyz_functions.view([m1 + (m2 - m1) * i / STEPS for i in range(STEPS)])
# Uncomment if viewer cannot open molden files
# for molecule in [m1 + (m2 - m1) * i / STEPS for i in range(STEPS)]:
# molecule.view()
# time.sleep(1)
"""
Explanation: The easiest approach for obtaining structures for the movement from m1 to m2 is to linearize the movement in cartesian space.
For geometric reasons this leads to shortened bond lengths when bond angles change. The larger the change in angles, the larger this effect, which leads to an overestimation of the activation energy.
End of explanation
"""
zm1 = m1.get_zmat()
c_table = zm1.loc[:, ['b', 'a', 'd']]
zm2 = m2.get_zmat(c_table)
"""
Explanation: Another approach is to build two Zmatrices with the exact same references/construction table and linearize the movement in the space of internal coordinates.
End of explanation
"""
with cc.TestOperators(False):
zmats = [zm1 + (zm2 - zm1).minimize_dihedrals() * i / STEPS for i in range(STEPS)]
"""
Explanation: The term zm2 - zm1 is not convertible to cartesian coordinates.
For this reason we have to switch off the validity testing of zmatrices when using operators.
The minimize_dihedrals method chooses the minimal absolute value representation of an angle equivalence class.
So it will move e.g. by -30° instead of 270°.
End of explanation
"""
cc.xyz_functions.view([x.get_cartesian() for x in zmats])
# Uncomment if viewer cannot open molden files
# for molecule in [x.get_cartesian() for x in zmats]:
# molecule.view()
# time.sleep(1)
"""
Explanation: The resulting structures are a lot more reasonable for the interconversion of allotropes.
(Look for example at the C-H distance in methyl groups).
End of explanation
"""
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
"""
Explanation: Algorithms - random graphs
How do you generate a random graph? Generating a sequence of independent random numbers is a well-known and rather well-solved problem. Generating a random structure such as a graph is also easy. However, generating a random graph that satisfies a given property - here, the degree distribution - is not as simple as one might expect.
End of explanation
"""
import numpy
mat = numpy.random.random((15, 15))
mat = mat + mat.T
adja = (mat >= 1.4).astype(int)
for i in range(adja.shape[0]):
    adja[i, i] = 0
adja
"""
Explanation: Random graph - random adjacency matrix
The existence of each edge is given by a binomial variable with parameter $p$.
End of explanation
"""
import networkx
import matplotlib.pyplot as plt
fix, ax = plt.subplots(1, 1, figsize=(4, 4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
degres = adja.sum(axis=1)
degres
distrib = {}
for d in degres:
if d in distrib:
distrib[d] += 1
else:
distrib[d] = 1
distrib
"""
Explanation: Visualizing it...
End of explanation
"""
adjan = adja.copy()
conne = numpy.zeros(adja.shape)
for i in range(1, adja.shape[0]):
conne += adjan
adjan = adjan @ adja
(conne > 0).astype(int)
"""
Explanation: Graph vocabulary
edge: a link between two nodes; it can be directed or not. If directed, the two endpoints do not play the same role: the edge can only be "traversed" from the first endpoint to the second.
node (or vertex): an element of the graph
graph: a graph is defined by a set of nodes and a set of edges
adjacency matrix: a binary matrix of dimension $N \times N$, $A=(a_{ij})_{ij}$, with $a_{ij} = 1$ if there is an edge from node i to node j, and 0 otherwise.
path: a sequence of nodes and edges belonging to the graph
predecessor and successor: if a directed edge links node i to node j, then i is a predecessor of j and j is a successor of i. By extension, if i always appears with j in every possible path of the graph, i is a predecessor of j.
tree: a special case of directed graphs; there is a single node, called the root, which is a predecessor of all the others, and a node without successors is called a leaf. In a binary tree, each node has at most two direct successors.
degree of a node: the number of edges connected to a node; for directed graphs, one can distinguish the edges leaving a node from those arriving at it
connected component: within a connected component, there is always a path linking any pair of nodes.
Some properties of the adjacency matrix:
For an undirected graph, the adjacency matrix is symmetric.
The sum over row i equals the degree of node i.
The adjacency matrix is upper triangular for a tree whose nodes are numbered breadth-first (a node always has a higher number than every node at a shallower level).
Aside: the adjacency matrix raised to the power n
If $A$ is an adjacency matrix, $a^2_{ik}$ is a coefficient of $A^2$, with $a^2_{ik} = \sum_j a_{ij} a_{jk}$. Since $a_{pq} \in \{0, 1\}$, $a^2_{ik} > 0$ if there exists a $j$ such that $a_{ij} = a_{jk} = 1$. In other words, the nodes $i, j, k$ are linked. So if $a^2_{ik} > 0$, there exists a path of length 2 between nodes $i$ and $k$. By induction, $A^3_{pq}$ is positive if there exists a path of length 3 linking nodes $p$ and $q$.
We compute $\sum A = A + A^2 + A^3 + ... + A^n$ where n is the dimension of the matrix.
End of explanation
"""
mat = numpy.random.random((15, 15))
mat = mat + mat.T
adja = (mat >= 1.45).astype(int)
for i in range(adja.shape[0]):
    adja[i, i] = 0
fix, ax = plt.subplots(1, 1, figsize=(4, 4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
C = numpy.arange(adja.shape[0])
maj = 1
while maj > 0:
maj = 0
for i in range(adja.shape[0]):
for j in range(i + 1, adja.shape[1]):
if adja[i, j] > 0 and C[i] != C[j]:
maj += 1
C[i] = C[j] = min(C[i], C[j])
C
set(C)
print("Il y a %r composantes connexes." % len(set(C)))
"""
Explanation: From the previous remarks, $(\sum A)_{pq} > 0$ if there exists a path linking nodes $p$ and $q$, i.e. if they belong to the same connected component, and it is 0 if the two nodes belong to two distinct connected components.
Finding the number of connected components
We use a coloring-style algorithm. Initially, each node belongs to its own connected component, denoted $c_i$. For each edge linking two nodes i and j, we assign both nodes to the component $\min(c_i, c_j)$. We iterate as long as some node changes its connected component.
End of explanation
"""
def distribution_to_degree_list(hist):
N = int(hist.sum())
deg = numpy.zeros(N, dtype=numpy.int32)
p = 0
for i, nh in enumerate(hist):
for n in range(nh):
deg[p] = i
p += 1
return deg
dist = numpy.array([0, 4, 3, 2])
distribution_to_degree_list(dist)
"""
Explanation: Generating a random graph
Graphs built from social networks often have particular statistical properties. The distribution of node degrees frequently follows a heavy-tailed law; these are the distributions which admit an expectation but no finite variance.
So we do not want to generate just any random graph: we want to generate a random graph whose node degree distribution is known.
We take inspiration from the random graph algorithm.
Step 1: turn a distribution of node degrees into a list giving the desired degree of each node.
End of explanation
"""
import warnings
from tqdm import tqdm # pour visualiser la progression de l'algorithme
def random_graph(distribution_degree):
degrees = distribution_to_degree_list(distribution_degree)
current = numpy.zeros(degrees.shape[0], dtype=numpy.int32)
expected = degrees.sum()
adja = numpy.zeros((degrees.shape[0], degrees.shape[0]), dtype=numpy.int32)
nb = 0
    # tqdm: a loop wrapper displaying progress in a notebook
    # avoid an infinite loop by capping the number of iterations
loop = tqdm(range(expected * 5))
for n_iter in loop:
loop.set_description("sum=%r expected=%r" % (nb, expected))
nodes = [i for i, (c, d) in enumerate(zip(current, degrees))
if c < d]
if len(nodes) == 1:
i, j = 0, 0
elif len(nodes) == 2:
            di, dj = 0, 1
i, j = nodes[di], nodes[dj]
else:
di, dj = numpy.random.randint(0, len(nodes), 2)
i, j = nodes[di], nodes[dj]
if i == j or adja[i, j] == 1:
            # edge already created, or a self-loop
continue
current[i] += 1
current[j] += 1
adja[i, j] = 1
adja[j, i] = 1
nb += 2
if nb >= expected:
            # All nodes have reached the desired degree.
loop.set_description("sum=%r expected=%r" % (nb, expected))
break
if nb < expected:
warnings.warn("Graphe incomplet\ndegrees=%r\ncurrent=%r" % (degrees, current))
return adja
adja = random_graph(numpy.array([0, 5, 3, 2]))
adja
"""
Explanation: Step 2: we start from an array of the same size representing the degrees of the graph under construction; it is zero at the start. We draw nodes at random as long as their degree is below the desired one, and increment it each time an edge is created.
Version 1
End of explanation
"""
adja = random_graph(numpy.array([0, 4, 3, 2]))
adja
"""
Explanation: Note that the sum of the degrees cannot be odd, since each edge is connected to two nodes.
End of explanation
"""
adja.sum(axis=1)
from collections import Counter
Counter(adja.sum(axis=1))
"""
Explanation: Let's look at the degree distribution:
End of explanation
"""
def random_graph_remove(distribution_degree):
degrees = distribution_to_degree_list(distribution_degree)
current = numpy.zeros(degrees.shape[0], dtype=numpy.int32)
expected = degrees.sum()
adja = numpy.zeros((degrees.shape[0], degrees.shape[0]), dtype=numpy.int32)
nb = 0
loop = tqdm(range(expected * 5))
last_added = 0
n_removed = 0
edges = {i: [] for i in range(current.shape[0])}
for n_iter in loop:
loop.set_description("sum=%r expected=%r n_removed=%r" % (nb, expected, n_removed))
nodes = [i for i, (c, d) in enumerate(zip(current, degrees))
if c < d]
if len(nodes) > 1:
di, dj = numpy.random.randint(0, len(nodes), 2)
i, j = nodes[di], nodes[dj]
else:
i = j = 0
if i == j or adja[i, j] == 1:
if last_added + 5 < n_iter:
                # remove an edge
nodes = [i for i, c in enumerate(current) if c > 0]
di = (0 if len(nodes) <= 1 else
numpy.random.randint(0, len(nodes)))
i = nodes[di]
dh = (0 if len(edges[i]) <= 1 else
numpy.random.randint(0, len(edges[i])))
j = edges[i][dh]
adja[i, j] = 0
adja[j, i] = 0
edges[i].remove(j)
edges[j].remove(i)
current[i] -= 1
current[j] -= 1
nb -= 2
n_removed += 2
continue
current[i] += 1
current[j] += 1
adja[i, j] = 1
adja[j, i] = 1
nb += 2
last_added = n_iter
edges[i].append(j)
edges[j].append(i)
if nb >= expected:
            # All nodes have reached the desired degree.
loop.set_description("sum=%r expected=%r n_removed=%r" % (nb, expected, n_removed))
break
if nb < expected:
warnings.warn("Graphe incomplet\ndegrees=%r\ncurrent=%r" % (degrees, current))
return adja
adja = random_graph_remove(numpy.array([0, 4, 3, 2]))
adja
Counter(adja.sum(axis=1))
"""
Explanation: The algorithm does not seem to produce a graph satisfying the desired criterion. There are two situations in which the algorithm gets stuck. Let $A_t$ be the set of nodes whose degree is still below the desired degree at iteration t.
All the nodes in $A_t$ are already linked to each other.
The only remaining option is to create an edge between a node and itself.
To avoid this, after 5 consecutive draws that create no edge, we remove a few existing edges. The algorithm that follows is not the most efficient one, but let's first check that it works before looking for a better one.
Version 2
End of explanation
"""
def distribution_degree_realisable(distribution):
degrees = -numpy.array(sorted(-distribution_to_degree_list(distribution)))
if degrees.sum() % 2 != 0:
return False
sumdi = 0
for i in range(degrees.shape[0] - 1):
sumdi += degrees[i]
mindi = numpy.minimum(degrees[i+1:], i + 1).sum()
        if sumdi > i * (i + 1) + mindi:
return False
return True
distribution_degree_realisable(numpy.array([0, 2, 0, 0, 0, 0, 0, 0, 0, 1]))
distribution_degree_realisable(numpy.array([0, 4, 3, 2]))
fix, ax = plt.subplots(1, 1, figsize=(4, 4))
G = networkx.from_numpy_matrix(adja)
networkx.draw(G, with_labels=True, ax=ax)
"""
Explanation: This algorithm may never reach the desired result, even given a lot of time. Is that due to the strategy, or to the fact that the degree distribution is simply not realizable?
End of explanation
"""
import numpy as np
from nltk.stem.porter import PorterStemmer
import re
from nltk.corpus import stopwords
stop = stopwords.words('english')
porter = PorterStemmer()
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
text = [w for w in text.split() if w not in stop]
tokenized = [porter.stem(w) for w in text]
return tokenized
def stream_docs(path):
with open(path, 'r') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='../../data/movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='../../data/movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
"""
Explanation: Sebastian Raschka, 2015
Training a model for movie review classification
End of explanation
"""
import pickle
import os
dest = os.path.join('movieclassifier', 'pkl_objects')
if not os.path.exists(dest):
os.mkdir(dest)
pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'))
pickle.dump(porter, open(os.path.join(dest, 'porterstemmer.pkl'), 'wb'))
pickle.dump(vect, open(os.path.join(dest, 'vectorizer.pkl'), 'wb'))
pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'))
"""
Explanation: <br>
<br>
Serializing fitted scikit-learn estimators
After we trained the logistic regression model as shown above, we now save the classifier along with the stop words, Porter Stemmer, and HashingVectorizer as serialized objects to our local disk so that we can use the fitted classifier in our web application later.
End of explanation
"""
import pickle
import re
import os
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '')
text = [w for w in text.split() if w not in stop]
tokenized = [porter.stem(w) for w in text]
    return tokenized
dest = os.path.join('movieclassifier', 'pkl_objects')
stop = pickle.load(open(os.path.join(dest, 'stopwords.pkl'), 'rb'))
porter = pickle.load(open(os.path.join(dest, 'porterstemmer.pkl'), 'rb'))
vect = pickle.load(open(os.path.join(dest, 'vectorizer.pkl'), 'rb'))
clf = pickle.load(open(os.path.join(dest, 'classifier.pkl'), 'rb'))
import numpy as np
label = {0:'negative', 1:'positive'}
example = ['I love this movie']
X = vect.transform(example)
print('Prediction: %s\nProbability: %.2f%%' %\
(label[clf.predict(X)[0]], np.max(clf.predict_proba(X))*100))
"""
Explanation: After executing the preceding code cells, we can now restart the IPython notebook kernel to check whether the objects were serialized correctly.
End of explanation
"""
import sqlite3
import os
dest = os.path.join('movieclassifier', 'reviews.sqlite')
conn = sqlite3.connect(dest)
c = conn.cursor()
c.execute('CREATE TABLE review_db (review TEXT, sentiment INTEGER, date TEXT)')
example1 = 'I love this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example1, 1))
example2 = 'I disliked this movie'
c.execute("INSERT INTO review_db (review, sentiment, date) VALUES (?, ?, DATETIME('now'))", (example2, 0))
conn.commit()
conn.close()
conn = sqlite3.connect(dest)
c = conn.cursor()
c.execute("SELECT * FROM review_db WHERE date BETWEEN '2015-01-01 10:10:10' AND DATETIME('now')")
results = c.fetchall()
conn.close()
print(results)
"""
Explanation: <br>
<br>
Creating an SQLite Database
End of explanation
"""
# Let's declare some bools
spam = True
print spam
print type(spam)
eggs = False
print eggs
print type(eggs)
"""
Explanation: Booleans
End of explanation
"""
# Let's try boolean operations
print True or True
print True or False
print False or True # Boolean or. Short-circuited, so it only evaluates the second argument if the first one is False
print True and True
print True and False
print False and True # Boolean and. Short-circuited, so it only evaluates the second argument if the first one is True
print not True
print not False
# So, if all objects can be tested for truth, let's try something different
spam = [1.2345, 7, "x"]
eggs = ("a", 0x07, True)
fooo = "aeiou"
print spam or eggs
# Did you expect it to print True?
print fooo or []
print "" or eggs
print spam and eggs
print fooo and ()
print [] and eggs
print not spam
print not ""
print spam and eggs or "abcd" and False
print (spam and eggs) or ("abcd" and False)
print spam and (eggs or "abcd") and False
print spam and (eggs or "abcd" and False)
"""
Explanation: Python truth value testing
Any object can be tested for truth value
Truth value testing is used in flow control or in Boolean operations
All objects are evaluated as True except:
None (aka. null)
False
Zero of any numeric type: 0, 0L, 0.0, 0j, 0x0, 00
Any empty sequence or mapping: '', [], (), {}
Instances of user-defined classes implementing the __nonzero__ or __len__ method
and returning 0 or False
End of explanation
"""
spam = 2
eggs = 2.5
print spam == 2 # equal
print spam != eggs # not equal
print spam >= eggs # greater than or equal
print spam > eggs # strictly greater than
print spam <= eggs # less than or equal
print spam < eggs # strictly less than
print spam is 2 # object identity, useful to compare with None (discussed latter)
print spam is not None # negated object identity
"""
Explanation: Python boolean operators:
ALWAYS return one of the incoming arguments!
x or y => if x is false, then y, else x
x and y => if x is false, then x, else y
not x => if x is false, then True, else False
They are short-circuited, so the second argument is not always evaluated
Can take any object type as arguments
Even function calls, so boolean operators are used for flow control
Parentheses may be used to change the order of boolean operators or comparisons
What about comparisons?
End of explanation
"""
spam = [1, 2, 3] # True
eggs = "" # False
if spam:
print "spam is True"
else:
print "spam is False"
print "outside the conditional" # Notice that theres is no closing fi statement
if spam:
print "spam is True"
else:
print "spam is False"
print "still inside the conditional"
"""
Explanation: Flow Control
Let's start with the conditional execution
End of explanation
"""
if eggs:
print "eggs is True"
elif spam:
print "eggs is False and spam is True"
else:
print "eggs and spam are False"
if eggs:
print "eggs is True"
elif max(spam) > 5:
print "eggs is False and second condition is True"
elif len(spam) == 3 and not eggs is None:
print "third condition is true"
else:
print "everything is False"
"""
Explanation: REMEMBER:
Indentation is Python's way of grouping statements!!
Typically four spaces per indentation level
No curly brackets { } or semicolons ; used anywhere
This enforces more readable code
End of explanation
"""
spam = [1, 2, 3] # True
eggs = "" # False
print "first option" if spam else "second option"
print "first option" if eggs else "second option"
print "first option" if eggs else "second option" if spam else "last option" # We can even concatenate them
print "first option" if eggs else ("second option" if spam else "last option")
"""
Explanation: Let's see the ternary operator
End of explanation
"""
spam = [1, 2, 3]
while len(spam) > 0:
print spam.pop(0)
spam = [1, 2, 3]
idx = 0
while idx < len(spam):
print spam[idx]
idx += 1
"""
Explanation: Time for the while loop
End of explanation
"""
spam = [1, 2, 3]
for item in spam: # The for loop only iterates over the items of a sequence
print item
spam = [1, 2, 3]
for item in spam[::-1]: # As we saw, slicing may be slow. Keep it in mind
print item
eggs = "eggs"
for letter in eggs: # It can loop over characters of a string
print letter
spam = {"one": 1,
"two": 2,
"three": 3}
for key in spam: # Or even it can interate through a dictionary
print spam[key] # Note that it iterates over the keys of the dictionary
"""
Explanation: What about the for loop?
End of explanation
"""
spam = [1, 2, 3]
for item in spam:
if item == 2:
break
print item
"""
Explanation: Let's see how to interact with loop iterations
End of explanation
"""
# A bit more complicated example
spam = ["one", "two", "three"]
for item in spam: # This loop is never broken
for letter in item:
if letter in "wh": # Check if letter is either 'w' or 'h'
break # Break only the immediate inner loop
print letter
print # It prints a break line (empty line)
# A bit different example
spam = ["one", "two", "three"]
for item in spam:
for letter in item:
if letter in "whe": # Check if letter is either 'w', 'h' or 'e'
continue # Halt only current iteration, but continue the loop
print letter
print
"""
Explanation: break statement halts a loop's execution (inside while or for)
Only affects the closest inner (i.e. smallest enclosing) loop
End of explanation
"""
spam = [1, 2, 3, 4, 5, 6, 7, 8]
eggs = 5
while len(spam) > 0:
value = spam.pop()
if value == eggs:
print "Value found:", value
break
else: # Note that else has the same indentation than while
print "The right value was not found"
spam = [1, 2, 3, 4, 6, 7, 8]
eggs = 5
while len(spam) > 0:
value = spam.pop()
if value == eggs:
print "Value found:", value
break
else:
print "The right value was not found"
"""
Explanation: continue statement halts the current iteration (inside while or for)
the loop then continues its normal execution
End of explanation
"""
spam = [1, 2, 3]
for item in spam:
pass
"""
Explanation: else clause after a loop is executed only if all iterations ran without a break statement being called
End of explanation
"""
spam = [1, 2, 3]
try:
print spam[5]
except: # Use try and except to capture exceptions
print "Failed"
spam = {"one": 1, "two": 2, "three": 3}
try:
print spam[5]
except IndexError as e: # Inside the except clause 'e' will contain the exception instance
print "IndexError", e
except KeyError as e: # Use several except clauses for different types of exceptions
print "KeyError", e
try:
print 65 + "spam"
except (IndexError, KeyError) as e: # Or even group exception types
print "Index or Key Error", e
except TypeError as e:
print "TypeError", e
try:
print 65 + 2
except (IndexError, KeyError), e:
print "Index or Key Error", e
except TypeError, e:
print "TypeError", e
else:
print "No exception" # Use else clause to run code in case no exception was raised
try:
print 65 + "spam"
raise AttributeError # Use 'raise' to launch yourself exceptions
except (IndexError, KeyError), e:
print "Index or Key Error", e
except TypeError, e:
print "TypeError", e
else:
print "No exception"
finally:
print "Finally we clean up" # Use finally clause to ALWAYS execute clean up code
try:
print 65 + 2
except (IndexError, KeyError), e:
print "Index or Key Error", e
raise # Use 'raise' without arguments to relaunch the exception
except TypeError, e:
print "TypeError", e
else:
print "No exception"
finally:
print "Finally we clean up" # Use finally clause to ALWAYS execute clean up code
"""
Explanation: pass statement is Python's noop (does nothing)
Let's check exception handling
End of explanation
"""
try:
f = open("tmp_file.txt", "a")
except:
print "Exception opening file"
else:
try:
f.write("I'm writing to a file...\n")
except:
print "Can not write to a file"
finally:
f.close()
"""
Explanation: Let's see another construction
End of explanation
"""
try:
with open("tmp_file.txt", "a") as f:
f.write("I'm writing to a file...\n")
except:
print "Can not open file for writing"
"""
Explanation: Not pythonic, too much code for only three real lines
End of explanation
"""
spam = [0, 1, 2, 3, 4]
eggs = [0, 10, 20, 30]
fooo = []
for s in spam:
for e in eggs:
if s > 1 and e > 1:
fooo.append(s * e)
print fooo
"""
Explanation: Where is the file closed? What happens if an exception is raised?
Python context managers
Encapsulate common patterns used to wrap the code blocks where the real program logic runs
Usually try/except/finally patterns
Several uses:
Automatic cleanup, closing files or network or DB connections when exiting the context block
Set temporary environment, like enable/disable logging, timing, profiling...
Use the 'with' and optionally the 'as' statements to open a context manager
It is automatically closed when code execution goes outside the block
Comprehension
End of explanation
"""
spam = [0, 1, 2, 3, 4]
eggs = [0, 10, 20, 30]
fooo = [s * e for s in spam for e in eggs if s > 1 and e > 1]
print fooo
"""
Explanation: Short code, right?
End of explanation
"""
fooo = [s * s for s in spam] # This is the most basic list comprehension construction
print fooo
fooo = [s * s for s in spam if s > 1] # We can add 'if' clauses
print fooo
spam = [1, 2, 3, 4]
eggs = [0, -1, -2, -3]
fooo = [l.upper() * (s + e) for s in spam
for e in eggs
for l in "SpaM aNd eGgs aNd stuFf"
if (s + e) >= 1
if l.islower()
if ord(l) % 2 == 0] # We can add lots of 'for' and 'if' clauses
print fooo
spam = [1, 2, 3, 4]
eggs = [10, 20, 30, 40]
fooo = [[s * e for s in spam] for e in eggs] # It is possible to nest list comprehensions
print fooo
"""
Explanation: What about now?
End of explanation
"""
spam = ['monday', 'tuesday',
'wednesday', 'thursday',
'friday']
fooo = {s: len(s) for s in spam} # The syntax is a merge of list comprehension and dicts
print fooo
spam = [(0, 'monday'), (1, 'tuesday'),
(2, 'wednesday'), (3, 'thursday'),
(4, 'friday')]
fooo = {s: idx for idx, s in spam} # Tuple unpacking is useful here
print fooo
spam = ['monday', 'tuesday',
'wednesday', 'thursday',
'friday']
fooo = {s: len(s) for s in spam if s[0] in "tm"} # Ofc, you can add more 'for' and 'if' clauses
print fooo
"""
Explanation: List comprehension is faster than standard loops (low level C optimizations)
However, built-in functions are still faster (see Functional and iterables tools module)
There is also dict comprehension (2.7 or higher)
End of explanation
"""
ecell/ecell4-notebooks | en/examples/example12.ipynb | gpl-2.0 | %matplotlib inline
from ecell4.prelude import *
"""
Explanation: How to Use the Unit System
Important: The unit system is a feature under development. The following section might not work properly yet.
Here, we show some examples using the unit system in ecell4. This feature requires the Python library pint. Install pint before running this example as follows: pip install pint. See also the website https://pint.readthedocs.io/en/latest/.
End of explanation
"""
with species_attributes():
A | B | C | {'D': 1, 'radius': 0.005}
with reaction_rules():
A + B == C | (0.01, 0.3)
m = get_model()
"""
Explanation: With no units
First, imagine a very simple system only with binding and unbinding reactions like:
End of explanation
"""
show(m)
"""
Explanation: The species_attributes section defines a diffusion constant and radius of Species, A, B and C. For example, the diffusion rate of A is 1, and its dimensionality is expected to be [length**2/time]. However, what is the scale? Is it meter? Or mile?
Once the base units are determined, e.g. micrometer as [length] and second as [time], all the units must be consistent within the model. The second order rate constant must have dimensionality [1/(substance/length**3)/time], which is micrometer**3/item/second. Thus, when the parameter is given as 1/molar/min in some literature, you have to translate it yourself.
End of explanation
"""
from ecell4.extra.unit import getUnitRegistry
ureg = getUnitRegistry()
Q_ = ureg.Quantity
"""
Explanation: Introducing units
ecell4 provides the way to handle units in the modeling environment. Here is an example.
End of explanation
"""
with species_attributes():
A | B | C | {'D': Q_(1, 'um**2/s'), 'radius': Q_(0.005, 'um')}
with reaction_rules():
A + B == C | (Q_(0.01, '1/(item/um**3)/s'), Q_(0.3, '1/s'))
m = get_model()
"""
Explanation: First, create your own unit system,ureg, by using ecell4.extra.unit.getUnitRegistry. With this UnitRegistry, you can make a quantity with its unit as ureg.Quantity(value, unit). (Please be careful about the type of Quantity. It looks same with Quantity given by pint, but is slightly changed in ecell4 though all the original functionality in pint is availble even in ecell4. Please not use ureg = pint.UnitRegistry().)
End of explanation
"""
show(m)
"""
Explanation: The default base units are meter for [length], second for [time], and item (which means the number of molecules) for [substance]. When you change the default base unit, do like ureg = getUnitRegistry(length='micrometer').
End of explanation
"""
with species_attributes():
A | B | C | {'D': Q_(1e-8, 'cm**2/s'), 'radius': Q_(5, 'nm')}
with reaction_rules():
A + B == C | (Q_(6.02214129, '1/uM/s'), Q_(18, '1/min'))
m = get_model()
show(m)
"""
Explanation: Now you can provide quantities in any unit regardless of the base units.
End of explanation
"""
volume = Q_(1, 'fL')
conc = Q_(100, 'nM')
print((volume * conc).to('item'))
"""
Explanation: You can operate quantities, and make a new quantity. See https://pint.readthedocs.io/en/latest/ for more details.
End of explanation
"""
run_simulation(Q_(0.1, 'min'), y0={'C': Q_(60, 'item')}, volume=Q_(1, 'fL'), model=m, solver='ode')
"""
Explanation: In addition to the model creation, run_simulation (and ensemble_simulations) also supports the unit system.
End of explanation
"""
ureg = getUnitRegistry(length='micrometer', time='minute')
Q_ = ureg.Quantity
with species_attributes():
A | B | C | {'D': Q_(1e-8, 'cm**2/s'), 'radius': Q_(5, 'nm')}
with reaction_rules():
A + B == C | (Q_(6.02214129, '1/uM/s'), Q_(18, '1/min'))
m = get_model()
show(m)
run_simulation(Q_(0.1, 'min'), y0={'C': Q_(60, 'item')}, volume=Q_(1, 'fL'), model=m, solver='ode')
"""
Explanation: Even if you change the base units, the behavior of simulations is kept consistent. In the following example, base units are rescaled to micrometer and minute with no change in the modeling section.
End of explanation
"""
from ecell4.extra.unit import check_model, DimensionalityMismatchError
ureg = getUnitRegistry()
Q_ = ureg.Quantity
with species_attributes():
A | {'radius': Q_(0.005, 'um'), 'D': Q_(1.0, 'um/s')}
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: Checking dimensionality
<div class="alert alert-warning">
This feature is still being developed. Please report issues when getting the wrong behavior.
</div>
You can check units in the model by ecell4.extra.unit.check_model.
For example, the dimensionality of a diffusion constant must be [length**2/time]. When you give a unit with a wrong dimensionality, an exception DimensionalityMismatchError would be thrown as:
End of explanation
"""
with species_attributes():
A | {'radius': 0.005, 'D': Q_(1.0, 'um**2/s')}
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: When checking dimensionality of units in the model by check_model, you can omit no unit.
End of explanation
"""
with reaction_rules():
A + B > C | Q_(0.3, '1/s')
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: A kinetic rate constant of reactions is verified based on the order of the reaction. The first order reaction rate should have [1/time], and the second order should have [1/(substance/length**3)/time] in volume.
End of explanation
"""
with reaction_rules():
~A > A | Q_(0.3, '1/s')
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: The dimensionality of a synthetic reaction depends on the dimension which the products belongs to.
End of explanation
"""
with species_attributes():
B | {'location': 'M'}
M | {'dimension': 2}
with reaction_rules():
A + M > B | Q_(0.3, '1/s')
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: The unit of the reaction rate between a molecule and a structure is also tested.
End of explanation
"""
with reaction_rules():
S > P | Q_(1.0, 'uM/s') * S / (Q_(100, 'nM') + S)
check_model(get_model())
"""
Explanation: Additionally, rate law representations accept quantities with a unit too. See the example below:
End of explanation
"""
with reaction_rules():
S > P | Q_(1.0, 'uM/s') * S / (Q_(100, 'nM/s') + S)
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: Here, the reaction above has two quantities: Vmax = Q_(1.0, 'uM/s') and Km = Q_(100, 'nM'). First, Km must have the same dimensionality as S, namely [concentration].
End of explanation
"""
with reaction_rules():
S > P | Q_(1.0, '1/s') * S / (Q_(100, 'nM') + S)
try:
check_model(get_model())
except DimensionalityMismatchError as e:
print('{}: {}'.format(e.__class__.__name__, e))
"""
Explanation: Secondly, the dimensionality of a rate equation must be [concentration/time]. Therefore, the dimensionality of Vmax should be [concentration/time] too.
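As a sketch of that dimensional analysis (a toy hand-rolled version tracking base-dimension exponents, not the ecell4 implementation), Vmax * S / (Km + S) indeed comes out as [concentration/time]:

```python
# Toy dimensional analysis of v = Vmax * S / (Km + S).
CONC = {'substance': 1, 'length': -3}                       # [concentration]
CONC_PER_TIME = {'substance': 1, 'length': -3, 'time': -1}  # [concentration/time]

def mul(a, b):
    return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}

def div(a, b):
    return mul(a, {k: -v for k, v in b.items()})

Vmax, S = CONC_PER_TIME, CONC
# (Km + S) only makes sense if Km and S share a dimensionality; the sum then
# carries S's dimensionality, so the rate is dimensionally Vmax * S / S.
rate = div(mul(Vmax, S), S)
print(rate == CONC_PER_TIME)  # True
```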
End of explanation
"""
with reaction_rules():
S > P | 10.0 * Q_(0.1, 'uM/s') * S**2 / (Q_(100, 'nM')**2 + S**2)
m = get_model()
show(m)
check_model(m)
"""
Explanation: When you give a value with no unit, it is regarded as dimensionless.
End of explanation
"""
|
bgroveben/python3_machine_learning_projects | prepare_text_data/prepare_text_data.ipynb | mit | from sklearn.feature_extraction.text import CountVectorizer
# Create a list of text documents:
text = ["The quick brown fox jumped over the lazy dog."]
# Create the transform:
vectorizer = CountVectorizer()
# Tokenize and build vocabulary:
vectorizer.fit(text)
# Summarize:
print("vectorizer.vocabulary: {}".format(vectorizer.vocabulary_))
# Encode the document:
vector = vectorizer.transform(text)
# Summarize the encoded vector:
print("vector.shape: {}".format(vector.shape))
print("type(vector): {}".format(type(vector)))
print("vector.toarray(): {}".format(vector.toarray()))
"""
Explanation: How to Prepare Text Data for Machine Learning with scikit-learn
by Jason Brownlee on September 29, 2017 in Natural Language Processing
Introduction
Text data requires special preparation before you can start using it for predictive modeling.
The text must be parsed to extract words, a step called tokenization.
Then the words need to be encoded as integers or floating point values for use as input to a machine learning algorithm, called feature extraction (or vectorization).
The scikit-learn library offers easy-to-use tools to perform both tokenization and feature extraction of your text data.
In this tutorial, you will discover exactly how you can prepare your text data for predictive modeling in Python with scikit-learn.
After completing this tutorial, you will know:
How to convert text to word count vectors with CountVectorizer.
How to convert text to word frequency vectors with TfidfVectorizer.
How to convert text to unique integers with HashingVectorizer.
Let’s get started.
Bag-of-Words Model
We cannot work with text directly when using machine learning algorithms.
Instead, we need to convert the text to numbers.
We may want to perform classification of documents, so each document is an “input” and a class label is the “output” for our predictive algorithm.
Algorithms take vectors of numbers as input, therefore we need to convert documents to fixed-length vectors of numbers.
A simple and effective model for thinking about text documents in machine learning is called the Bag-of-Words Model, or BoW.
The model is simple in that it throws away all of the order information in the words and focuses on the occurrence of words in a document.
This can be done by assigning each word a unique number.
Then any document we see can be encoded as a fixed-length vector with the length of the vocabulary of known words.
The value in each position in the vector could be filled with a count or frequency of each word in the encoded document.
This is the bag of words model, where we are only concerned with encoding schemes that represent what words are present or the degree to which they are present in encoded documents without any information about order.
There are many ways to extend this simple method, both by better clarifying what a “word” is and in defining what to encode about each word in the vector.
The scikit-learn library provides 3 different schemes that we can use, and we will briefly look at each.
Word Counts with CountVectorizer
The CountVectorizer provides a simple way to tokenize a collection of text documents, build a vocabulary of known words, and encode new documents using that vocabulary.
You can use it as follows:
1. Create an instance of the CountVectorizer class.
2. Call the fit() function in order to learn a vocabulary from one or more documents.
3. Call the transform() function on one or more documents as needed to encode each as a vector.
An encoded vector is returned with a length of the entire vocabulary and an integer count for the number of times each word appeared in the document.
Because these vectors will contain a lot of zeros, we call them sparse.
The scipy.sparse package provides an efficient way of handling sparse vectors.
The vectors returned from a call to transform() will be sparse vectors, and you can transform them back to numpy arrays to look and better understand what is going on by calling the toarray() function.
Below is an example of using the CountVectorizer to tokenize, build a vocabulary, and then encode a document.
End of explanation
"""
print(vectorizer.vocabulary_)
"""
Explanation: Above, you can see that we access the vocabulary to see what exactly was tokenized by calling:
End of explanation
"""
print("vector.shape: {}".format(vector.shape))
print("type(vector): {}".format(type(vector)))
print("vector.toarray(): {}".format(vector.toarray()))
"""
Explanation: We can see that all words were made lowercase by default and that the punctuation was ignored.
These and other aspects of tokenizing can be configured and I encourage you to review all of the options in the API documentation.
Running the example first prints the vocabulary, then the shape of the encoded document.
We can see that there are 8 words in the vocab, and therefore encoded vectors have a length of 8.
We can then see that the encoded vector is a sparse matrix.
Finally, we can see an array version of the encoded vector showing a count of 1 for each word except "the" (at index 7), which has a count of 2.
End of explanation
"""
# Encode another sample document:
text2 = ["the puppy"]
vector = vectorizer.transform(text2)
print(vector.toarray())
"""
Explanation: Importantly, the same vectorizer can be used on documents that contain words not included in the vocabulary.
These words are ignored and no count is given in the resulting vector.
Below is an example of using the vectorizer above to encode a document with one word in the vocabulary and one word that is not.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer
# List of text documents:
text = ["The quick brown fox jumped over the lazy dog.",
"The dog.",
"The fox"]
# Create the transform:
vectorizer = TfidfVectorizer()
# Tokenize and build vocabulary:
vectorizer.fit(text)
# Summarize:
print("vectorizer.vocabulary_: {}".format(vectorizer.vocabulary_))
print("vectorizer.idf_: {}".format(vectorizer.idf_))
# Encode document:
vector = vectorizer.transform([text[0]])
# Summarize encoded vector:
print("vector.shape: {}".format(vector.shape))
print("vector.toarray(): {}".format(vector.toarray()))
"""
Explanation: Running this example prints the array version of the encoded sparse vector, showing one occurrence of the word that is in the vocabulary, while the word outside the vocabulary is ignored completely.
The encoded vectors can then be used directly with a machine learning algorithm.
Word Frequencies with TfidfVectorizer
Word counts are a good starting point, but are very basic.
One issue with simple counts is that some words like "the" will appear many times and their large counts will not be very meaningful in the encoded vectors.
An alternative is to calculate word frequencies, and by far the most popular method is called TF-IDF.
This is an acronym that stands for "Term Frequency – Inverse Document Frequency", the two components of the resulting scores assigned to each word.
- Term Frequency: This summarizes how often a given word appears within a document.
- Inverse Document Frequency: This downscales words that appear a lot across documents.
Without going into the math, TF-IDF are word frequency scores that try to highlight words that are more interesting, e.g. frequent in a document but not across documents.
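For reference, scikit-learn's default scoring (with smooth_idf=True) computes the inverse document frequency as idf(t) = ln((1 + n) / (1 + df(t))) + 1, where n is the number of documents and df(t) the number of documents containing the term. A hand-rolled sketch on the three example documents:

```python
import math

docs = ["the quick brown fox jumped over the lazy dog",
        "the dog",
        "the fox"]

def idf(term):
    # scikit-learn's smoothed formula: ln((1 + n) / (1 + df)) + 1
    n = len(docs)
    df = sum(term in doc.split() for doc in docs)
    return math.log((1 + n) / (1 + df)) + 1

print(round(idf('the'), 4))    # 1.0    -> appears in every document
print(round(idf('dog'), 4))    # 1.2877
print(round(idf('brown'), 4))  # 1.6931
```

The rarer a word, the larger its idf, which is exactly the down-weighting of common words described above.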
The TfidfVectorizer will tokenize documents, learn the vocabulary and inverse document frequency weightings, and allow you to encode new documents.
Alternately, if you already have a learned CountVectorizer, you can use it with a TfidfTransformer to just calculate the inverse document frequencies and start encoding documents.
The same create, fit, and transform process is used as with the CountVectorizer.
Below is an example of using the TfidfVectorizer to learn vocabulary and inverse document frequencies across 3 small documents and then encode one of those documents:
End of explanation
"""
from sklearn.feature_extraction.text import HashingVectorizer
# Make a list of text documents:
text = ["The quick brown fox jumped over the lazy dog."]
# Create the transform:
vectorizer = HashingVectorizer(n_features=20)
# Encode document:
vector = vectorizer.transform(text)
print("vector: \n{}".format(vector))
# Summarize encoded vector:
print("vector.shape: {}".format(vector.shape))
print("vector.toarray(): {}".format(vector.toarray()))
"""
Explanation: A vocabulary of 8 words is learned from the documents and each word is assigned a unique integer index in the output vector.
The inverse document frequencies are calculated for each word in the vocabulary, assigning the lowest score of 1.0 to the most frequently observed word: "the" at index 7.
Finally, the first document is encoded as an 8-element sparse array, and we can review the final scores of each word, with different values for "the", "fox", and "dog" than for the other words in the vocabulary.
The scores are normalized to values between 0 and 1 and the encoded document vectors can then be used directly with most machine learning algorithms.
Hashing with HashingVectorizer
Counts and frequencies can be very useful, but one limitation of these methods is that the vocabulary can become very large.
This, in turn, will require large vectors for encoding documents and impose large requirements on memory and slow down algorithms.
A clever workaround is to use a one-way hash of words to convert them to integers.
The clever part is that no vocabulary is required and you can choose an arbitrary fixed-length vector.
A downside is that the hash is a one-way function so there is no way to convert the encoding back to a word (which may not matter for many supervised learning tasks).
The HashingVectorizer class implements this approach that can be used to consistently hash words, then tokenize and encode documents as needed.
The example below demonstrates the HashingVectorizer for encoding a single document.
An arbitrary fixed-length vector size of 20 was chosen.
This corresponds to the range of the hash function, where small values (like 20) may result in hash collisions.
Remembering back to compsci classes, I believe there are heuristics that you can use to pick the hash length and probability of collision based on estimated vocabulary size.
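One such heuristic is the classic birthday-problem approximation, which estimates the chance that at least two of k distinct words collide in an n-bucket hash space (a back-of-the-envelope estimate, not something HashingVectorizer computes for you):

```python
import math

# Birthday-problem approximation: P(collision) ~= 1 - exp(-k(k-1) / (2n))
def collision_prob(k_words, n_buckets):
    return 1 - math.exp(-k_words * (k_words - 1) / (2 * n_buckets))

print(f"{collision_prob(8, 20):.3f}")     # 0.753 -> tiny spaces collide often
print(f"{collision_prob(8, 2**20):.6f}")  # 0.000027 -> large spaces rarely do
```

So n_features=20 is fine for a demo, but real vocabularies typically use 2**18 or more buckets.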
Note that this vectorizer does not require a call to fit on the training data documents.
Instead, after instantiation, it can be used directly to start encoding documents.
End of explanation
"""
|
dmolina/es_intro_python | 12-Generators.ipynb | gpl-3.0 | [n ** 2 for n in range(12)]
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub.
The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook.
<!--NAVIGATION-->
< List Comprehensions | Contents | Modules and Packages >
Generators
Here we'll take a deeper dive into Python generators, including generator expressions and generator functions.
Generator Expressions
The difference between list comprehensions and generator expressions is sometimes confusing; here we'll quickly outline the differences between them:
List comprehensions use square brackets, while generator expressions use parentheses
This is a representative list comprehension:
End of explanation
"""
(n ** 2 for n in range(12))
"""
Explanation: While this is a representative generator expression:
End of explanation
"""
G = (n ** 2 for n in range(12))
list(G)
"""
Explanation: Notice that printing the generator expression does not print the contents; one way to print the contents of a generator expression is to pass it to the list constructor:
End of explanation
"""
L = [n ** 2 for n in range(12)]
for val in L:
print(val, end=' ')
G = (n ** 2 for n in range(12))
for val in G:
print(val, end=' ')
"""
Explanation: A list is a collection of values, while a generator is a recipe for producing values
When you create a list, you are actually building a collection of values, and there is some memory cost associated with that.
When you create a generator, you are not building a collection of values, but a recipe for producing those values.
Both expose the same iterator interface, as we can see here:
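The memory difference is easy to see with sys.getsizeof: the list stores every value, while the generator object stays tiny no matter how many values it will eventually produce:

```python
import sys

big_list = [n ** 2 for n in range(1_000_000)]
big_gen  = (n ** 2 for n in range(1_000_000))

print(sys.getsizeof(big_list))  # several megabytes: every value is stored
print(sys.getsizeof(big_gen))   # a couple hundred bytes: just the recipe
```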
End of explanation
"""
from itertools import count
count()
for i in count():
print(i, end=' ')
if i >= 10: break
"""
Explanation: The difference is that a generator expression does not actually compute the values until they are needed.
This not only leads to memory efficiency, but to computational efficiency as well!
This also means that while the size of a list is limited by available memory, the size of a generator expression is unlimited!
An example of an infinite generator expression can be created using the count iterator defined in itertools:
End of explanation
"""
factors = [2, 3, 5, 7]
G = (i for i in count() if all(i % n > 0 for n in factors))
for val in G:
print(val, end=' ')
if val > 40: break
"""
Explanation: The count iterator will go on happily counting forever until you tell it to stop; this makes it convenient to create generators that will also go on forever:
End of explanation
"""
L = [n ** 2 for n in range(12)]
for val in L:
print(val, end=' ')
print()
for val in L:
print(val, end=' ')
"""
Explanation: You might see what we're getting at here: if we were to expand the list of factors appropriately, what we would have the beginnings of is a prime number generator, using the Sieve of Eratosthenes algorithm. We'll explore this more momentarily.
A list can be iterated multiple times; a generator expression is single-use
This is one of those potential gotchas of generator expressions.
With a list, we can straightforwardly do this:
End of explanation
"""
G = (n ** 2 for n in range(12))
list(G)
list(G)
"""
Explanation: A generator expression, on the other hand, is used-up after one iteration:
End of explanation
"""
G = (n**2 for n in range(12))
for n in G:
print(n, end=' ')
if n > 30: break
print("\ndoing something in between")
for n in G:
print(n, end=' ')
"""
Explanation: This can be very useful because it means iteration can be stopped and started:
End of explanation
"""
L1 = [n ** 2 for n in range(12)]
L2 = []
for n in range(12):
L2.append(n ** 2)
print(L1)
print(L2)
"""
Explanation: One place I've found this useful is when working with collections of data files on disk; it means that you can quite easily analyze them in batches, letting the generator keep track of which ones you have yet to see.
Generator Functions: Using yield
We saw in the previous section that list comprehensions are best used to create relatively simple lists, while using a normal for loop can be better in more complicated situations.
The same is true of generator expressions: we can make more complicated generators using generator functions, which make use of the yield statement.
Here we have two ways of constructing the same list:
End of explanation
"""
G1 = (n ** 2 for n in range(12))
def gen():
for n in range(12):
yield n ** 2
G2 = gen()
print(*G1)
print(*G2)
"""
Explanation: Similarly, here we have two ways of constructing equivalent generators:
End of explanation
"""
# Generate a list of candidates
L = [n for n in range(2, 40)]
print(L)
# Remove all multiples of the first value
L = [n for n in L if n == L[0] or n % L[0] > 0]
print(L)
# Remove all multiples of the second value
L = [n for n in L if n == L[1] or n % L[1] > 0]
print(L)
# Remove all multiples of the third value
L = [n for n in L if n == L[2] or n % L[2] > 0]
print(L)
"""
Explanation: A generator function is a function that, rather than using return to return a value once, uses yield to yield a (potentially infinite) sequence of values.
Just as in generator expressions, the state of the generator is preserved between partial iterations, but if we want a fresh copy of the generator we can simply call the function again.
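Both points are easy to see with next(), which pulls one value at a time from a generator:

```python
def gen():
    for n in range(12):
        yield n ** 2

G = gen()
print(next(G), next(G), next(G))  # 0 1 4
print(next(G))                    # 9 -> state is preserved between calls

G_fresh = gen()                   # calling the function again gives a fresh copy
print(next(G_fresh))              # 0
```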
Example: Prime Number Generator
Here I'll show my favorite example of a generator function: a function to generate an unbounded series of prime numbers.
A classic algorithm for this is the Sieve of Eratosthenes, which works something like this:
End of explanation
"""
def gen_primes(N):
"""Generate primes up to N"""
primes = set()
for n in range(2, N):
if all(n % p > 0 for p in primes):
primes.add(n)
yield n
print(*gen_primes(100))
"""
Explanation: If we repeat this procedure enough times on a large enough list, we can generate as many primes as we wish.
Let's encapsulate this logic in a generator function:
End of explanation
"""
|
xianjunzhengbackup/code | data science/find_underprice_apartment.ipynb | mit | df.fillna('n/a',inplace=True)
su=df[df['type_of_property'].str.contains('Apartment')]
mu=df[df['type_of_property'].str.contains('Apartments')]
print(len(mu))
print(len(su))
su['propertyinfo_value']
len(su[~(su['propertyinfo_value'].str.contains('bd') | su['propertyinfo_value'].str.contains('Studio'))])
"""
Explanation: Without the line df.fillna('n/a', inplace=True), an error would be raised, because the type_of_property column contains missing values (NaN).
End of explanation
"""
len(su[su['propertyinfo_value'].str.contains('bd') & ~su['propertyinfo_value'].str.contains('ba')])
no_baths=su[~su['propertyinfo_value'].str.contains('ba')]
sucln=su[~su.index.isin(no_baths.index)]
sucln
"""
Explanation: Listings that have bedrooms but no bathrooms
End of explanation
"""
def parse_info(row):
print(row)
br,ba,sqft=row.split('·')[:3]
#_,br=br.split('mo')[:2]
rent,br=br.split('mo')[:2]
return pd.Series({'Beds':br,'Baths':ba,'Sqft':sqft,'Rent':rent+'mo'})
attr=sucln['propertyinfo_value'].apply(parse_info)
attr
sujnd=sucln.join(attr)
sujnd.T
"""
Explanation: Here a custom function is used to parse the data. sucln['propertyinfo_value'] is a Series, and the row argument received by parse_info is one line of text from that Series.
This is where the real data wrangling happens
End of explanation
"""
|
xpharry/Udacity-DLFoudation | tutorials/transfer-learning/Transfer_Learning_Solution.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.
git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $244 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code:
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
batch_size = 100
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread  # deprecated in SciPy 1.0 and removed in 1.2; imageio.imread is a drop-in replacement
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
google/eng-edu | ml/fe/exercises/intro_to_modeling.ipynb | apache-2.0 | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 Google LLC.
End of explanation
"""
%reset -f
import numpy as np
import pandas as pd
import math
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
"""
Explanation: Intro to Modeling
Learning Objectives:
* Become familiar with pandas for handling small datasets
* Use the tf.Estimator and Feature Column API to experiment with feature transformations
* Use visualizations and run experiments to understand the value of feature transformations
Please make a copy of this Colab notebook before starting this lab. To do so, choose File->Save a copy in Drive.
Setup
Let's start by importing our dependencies.
End of explanation
"""
# Set pandas output display to have one digit for decimal places and limit it to
# printing 15 rows.
pd.options.display.float_format = '{:.2f}'.format
pd.options.display.max_rows = 15
"""
Explanation: Pandas, a helpful data analysis library for in-memory datasets
We use a package called Pandas for reading in our data, exploring it, and doing some basic processing. It is really helpful for datasets that fit in memory! And it has some nice integrations, as you will see.
First we set up some options to control how items are displayed and the maximum number of rows to show when displaying a table. Feel free to change this setup to whatever you'd like.
End of explanation
"""
# Provide the names for the columns since the CSV file with the data does
# not have a header row.
feature_names = ['symboling', 'normalized-losses', 'make', 'fuel-type',
'aspiration', 'num-doors', 'body-style', 'drive-wheels',
'engine-location', 'wheel-base', 'length', 'width', 'height', 'weight',
'engine-type', 'num-cylinders', 'engine-size', 'fuel-system', 'bore',
'stroke', 'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg',
'highway-mpg', 'price']
# Load in the data from a CSV file that is comma separated.
car_data = pd.read_csv('https://storage.googleapis.com/mledu-datasets/cars_data.csv',
sep=',', names=feature_names, header=None, encoding='latin-1')
# We'll then randomize the data, just to be sure not to get any pathological
# ordering effects that might harm the performance of Stochastic Gradient
# Descent.
car_data = car_data.reindex(np.random.permutation(car_data.index))
print("Data set loaded. Num examples: ", len(car_data))
"""
Explanation: Load the dataset with pandas
The car data set we will be using in this lab is provided as a comma separated file without a header row. In order for each column to have a meaningful header name we must provide it. We get the information about the columns from the Automobile Data Set.
We will use the features of the car, to try to predict its price.
End of explanation
"""
car_data[4:7]
LABEL = 'price'
# A list of column names (not a DataFrame slice); 'price' is the label, so it is excluded.
numeric_feature_names = ['symboling', 'normalized-losses', 'wheel-base', 'length',
'width', 'height', 'weight', 'engine-size', 'bore', 'stroke',
'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg', 'highway-mpg']
categorical_feature_names = list(set(feature_names) - set(numeric_feature_names) - set([LABEL]))
# The correct solution will pass these assert statements.
assert len(numeric_feature_names) == 15
assert len(categorical_feature_names) == 10
#@title Solution (to view code, from cell's menu, select Form -> Show Code)
numeric_feature_names = ['symboling', 'normalized-losses', 'wheel-base',
'length', 'width', 'height', 'weight', 'engine-size', 'horsepower',
'peak-rpm', 'city-mpg', 'highway-mpg', 'bore', 'stroke',
'compression-ratio']
categorical_feature_names = list(set(feature_names) - set(numeric_feature_names) - set([LABEL]))
assert len(numeric_feature_names) == 15
assert len(categorical_feature_names) == 10
# Run to inspect numeric features.
car_data[numeric_feature_names]
# Run to inspect categorical features.
car_data[categorical_feature_names]
# Coerce the numeric features to numbers. This is necessary because some
# values are not numeric, which would otherwise crash the model.
for feature_name in numeric_feature_names + [LABEL]:
car_data[feature_name] = pd.to_numeric(car_data[feature_name], errors='coerce')
# Fill missing values with 0.
# Is this an OK thing to do? You may want to come back and revisit this decision later.
car_data.fillna(0, inplace=True)
"""
Explanation: This is a really small dataset! Only 205 examples.
For simplicity in this codelab, we do not split the data further into training and validation. But you MUST do this on real datasets, or else you will overfit to your single dataset.
Task 0: Use pandas to explore and prepare the data
Use Pandas to inspect the data and manually curate a list of numeric_feature_names and categorical_feature_names.
Useful functions:
- type() called on any Python object describes the type of the object
- dataframe[4:7] pulls out rows 4, 5, 6 in a Pandas dataframe
- dataframe[['mycol1', 'mycol2']] pulls out the two requested columns into a new Pandas dataframe
- dataframe['mycol1'] returns a Pandas series -- not a dataframe!
- dataframe.describe() prints out statistics for each dataframe column
End of explanation
"""
# This code "works", but because of bad hyperparameter choices it gets NaN loss
# during training. Try fixing this.
batch_size = 16
print(numeric_feature_names)
x_df = car_data[numeric_feature_names]
y_series = car_data['price']
# Create input_fn's so that the estimator knows how to read in your data.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
num_epochs=None,
shuffle=True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
shuffle=False)
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
# Feature columns allow the model to parse the data, perform common
# preprocessing, and automatically generate an input layer for the tf.Estimator.
model_feature_columns = [
tf.feature_column.numeric_column(feature_name) for feature_name in numeric_feature_names
]
print('model_feature_columns', model_feature_columns)
est = tf.estimator.DNNRegressor(
feature_columns=model_feature_columns,
hidden_units=[64],
optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.01),
)
# TRAIN
num_print_statements = 10
num_training_steps = 10000
for _ in range(num_print_statements):
est.train(train_input_fn, steps=num_training_steps // num_print_statements)
scores = est.evaluate(eval_input_fn)
# The `scores` dictionary has several metrics automatically generated by the
# canned Estimator.
# `average_loss` is the average loss for an individual example.
# `loss` is the summed loss for the batch.
# In addition to these scalar losses, you may find the visualization functions
# in the next cell helpful for debugging model quality.
print('scores', scores)
#@title Possible solution
# Here is one possible solution:
# The only necessary change to fix the NaN training loss was the choice of optimizer.
# Changing other parameters could improve model quality, but take it with a
# grain of salt. The dataset is very small.
batch_size = 16
print(numeric_feature_names)
x_df = car_data[numeric_feature_names]
y_series = car_data['price']
train_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
num_epochs=None,
shuffle=True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
shuffle=False)
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
# Feature columns allow the model to parse the data, perform common
# preprocessing, and automatically generate an input layer for the tf.Estimator.
model_feature_columns = [
tf.feature_column.numeric_column(feature_name) for feature_name in numeric_feature_names
]
print('model_feature_columns', model_feature_columns)
est = tf.estimator.DNNRegressor(
feature_columns=model_feature_columns,
hidden_units=[64],
optimizer=tf.train.AdagradOptimizer(learning_rate=0.01),
)
# TRAIN
num_print_statements = 10
num_training_steps = 10000
for _ in range(num_print_statements):
est.train(train_input_fn, steps=num_training_steps // num_print_statements)
scores = est.evaluate(eval_input_fn)
# The `scores` dictionary has several metrics automatically generated by the
# canned Estimator.
# `average_loss` is the average loss for an individual example.
# `loss` is the summed loss for the batch.
# In addition to these scalar losses, you may find the visualization functions
# in the next cell helpful for debugging model quality.
print('scores', scores)
"""
Explanation: Task 1: Make your best model with numeric features. No normalization allowed.
Modify the model provided below to achieve the lowest eval loss. You may want to change various hyperparameters:
- learning rate
- choice of optimizer
- hidden layer dimensions -- make sure your choice here makes sense given the number of training examples
- batch size
- num training steps
- (anything else you can think of changing)
Do not use the normalizer_fn arg on numeric_column.
End of explanation
"""
from matplotlib import pyplot as plt
def scatter_plot_inference_grid(est, x_df, feature_names):
"""Plots the predictions of the model against each feature.
Args:
est: The trained tf.Estimator.
x_df: The pandas dataframe with the input data (used to create
predict_input_fn).
feature_names: An iterable of string feature names to plot.
"""
def scatter_plot_inference(axis,
x_axis_feature_name,
y_axis_feature_name,
predictions):
"""Generate one subplot."""
# Plot the real data in grey.
axis.set_ylabel(y_axis_feature_name)
axis.set_xlabel(x_axis_feature_name)
axis.scatter(car_data[x_axis_feature_name],
car_data[y_axis_feature_name],
c='grey')
# Plot the predicted data in orange.
axis.scatter(car_data[x_axis_feature_name], predictions, c='orange')
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
predictions = [
x['predictions'][0]
for x in est.predict(predict_input_fn)
]
num_cols = 3
num_rows = int(math.ceil(len(feature_names)/float(num_cols)))
f, axarr = plt.subplots(num_rows, num_cols)
size = 4.5
f.set_size_inches(num_cols*size, num_rows*size)
for i, feature_name in enumerate(feature_names):
axis = axarr[int(i/num_cols), i%num_cols]
scatter_plot_inference(axis, feature_name, 'price', predictions)
plt.show()
scatter_plot_inference_grid(est, x_df, numeric_feature_names)
"""
Explanation: Visualize your model's predictions
After you have a trained model, it may be helpful to understand how your model's inference differs from the actual data.
This helper function scatter_plot_inference_grid does that for you. Real data is in grey. Your model's predictions are in orange.
End of explanation
"""
# This 1D visualization of each numeric feature might inform your normalization
# decisions.
for feature_name in numeric_feature_names:
car_data.hist(column=feature_name)
"""
Explanation: Task 2: Take your best numeric model from earlier. Add normalization.
Add normalization to your best numeric model from earlier
You decide what type of normalization to add, and for which features
You will need to use the normalizer_fn arg on numeric_column
An example of a silly normalizer_fn that shifts inputs down by 1, and then negates the value:
normalizer_fn = lambda x: tf.negative(tf.subtract(x, 1))
You may find these pandas functions helpful:
dataframe.mean()['your_feature_name']
dataframe.std()['your_feature_name']
You will need to retune the hyperparameters from earlier.
Does normalization improve model quality on this dataset? Why or why not?
End of explanation
"""
## Your code goes here
#@title Possible solution
# This does Z-score normalization since the distributions for most features looked
# roughly normally distributed.
# Z-score normalization subtracts the mean and divides by the standard deviation,
# to give a roughly standard normal distribution (mean = 0, std = 1) under a
# normal distribution assumption. Epsilon prevents divide by zero.
# With normalization, are you able to get the model working with
# GradientDescentOptimizer? Z-score normalization doesn't seem to be able to get
# SGD working. Maybe a different type of normalization would?
batch_size = 16
print(numeric_feature_names)
x_df = car_data[numeric_feature_names]
y_series = car_data['price']
train_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
num_epochs=None,
shuffle=True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
shuffle=False)
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
# Epsilon prevents divide by zero.
epsilon = 0.000001
model_feature_columns = [
tf.feature_column.numeric_column(feature_name,
# Bind feature_name as a default argument: lambdas in a comprehension capture
# the loop variable by reference, so without this every column would be
# normalized with the statistics of the last feature.
normalizer_fn=lambda val, name=feature_name: (val - x_df.mean()[name]) / (epsilon + x_df.std()[name]))
for feature_name in numeric_feature_names
]
print('model_feature_columns', model_feature_columns)
est = tf.estimator.DNNRegressor(
feature_columns=model_feature_columns,
hidden_units=[64],
optimizer=tf.train.AdagradOptimizer(learning_rate=0.01),
)
# TRAIN
num_print_statements = 10
num_training_steps = 10000
for _ in range(num_print_statements):
est.train(train_input_fn, steps=num_training_steps // num_print_statements)
scores = est.evaluate(eval_input_fn)
# The `scores` dictionary has several metrics automatically generated by the
# canned Estimator.
# `average_loss` is the average loss for an individual example.
# `loss` is the summed loss for the batch.
# In addition to these scalar losses, you may find the visualization functions
# in the next cell helpful for debugging model quality.
print('scores', scores)
scatter_plot_inference_grid(est, x_df, numeric_feature_names)
"""
Explanation: Train your model with numeric features + normalization
End of explanation
"""
## Your code goes here
#@title Possible solution
# We have the full list of values that each feature takes on, and the list is
# relatively small so we use categorical_column_with_vocabulary_list.
batch_size = 16
x_df = car_data[categorical_feature_names]
y_series = car_data['price']
train_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
num_epochs=None,
shuffle=True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
shuffle=False)
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
model_feature_columns = [
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(
feature_name, vocabulary_list=car_data[feature_name].unique()))
for feature_name in categorical_feature_names
]
print('model_feature_columns', model_feature_columns)
est = tf.estimator.DNNRegressor(
feature_columns=model_feature_columns,
hidden_units=[64],
optimizer=tf.train.AdagradOptimizer(learning_rate=0.01),
)
# TRAIN
num_print_statements = 10
num_training_steps = 10000
for _ in range(num_print_statements):
est.train(train_input_fn, steps=num_training_steps // num_print_statements)
scores = est.evaluate(eval_input_fn)
# The `scores` dictionary has several metrics automatically generated by the
# canned Estimator.
# `average_loss` is the average loss for an individual example.
# `loss` is the summed loss for the batch.
# In addition to these scalar losses, you may find the visualization functions
# in the next cell helpful for debugging model quality.
print('scores', scores)
"""
Explanation: Task 3: Make your best model using only categorical features
Look at the possible feature columns for categorical features. Their names begin with categorical_column_with_ in the tf.feature_column module.
You may find car_data['mycol'].unique(), applied to one column at a time, helpful.
End of explanation
"""
## Your code goes here
#@title Possible solution
# This is a first pass at a model that uses all the features.
# Do you have any improvements?
batch_size = 16
x_df = car_data[numeric_feature_names + categorical_feature_names]
y_series = car_data['price']
train_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
num_epochs=None,
shuffle=True)
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
y=y_series,
batch_size=batch_size,
shuffle=False)
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
x=x_df,
batch_size=batch_size,
shuffle=False)
epsilon = 0.000001
model_feature_columns = [
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_vocabulary_list(
feature_name, vocabulary_list=car_data[feature_name].unique()))
for feature_name in categorical_feature_names
] + [
tf.feature_column.numeric_column(feature_name,
# Bind feature_name as a default argument so each lambda keeps its own
# feature's statistics (lambdas capture loop variables by reference).
normalizer_fn=lambda val, name=feature_name: (val - x_df.mean()[name]) / (epsilon + x_df.std()[name]))
for feature_name in numeric_feature_names
]
print('model_feature_columns', model_feature_columns)
est = tf.estimator.DNNRegressor(
feature_columns=model_feature_columns,
hidden_units=[64],
optimizer=tf.train.AdagradOptimizer(learning_rate=0.01),
)
# TRAIN
num_print_statements = 10
num_training_steps = 10000
for _ in range(num_print_statements):
est.train(train_input_fn, steps=num_training_steps // num_print_statements)
scores = est.evaluate(eval_input_fn)
# The `scores` dictionary has several metrics automatically generated by the
# canned Estimator.
# `average_loss` is the average loss for an individual example.
# `loss` is the summed loss for the batch.
# In addition to these scalar losses, you may find the visualization functions
# in the next cell helpful for debugging model quality.
print('scores', scores)
"""
Explanation: Task 4: Using all the features, make the best model that you can make
With all the features combined, your model should perform better than your earlier models that used numeric or categorical features alone. Tune your model until that is the case.
End of explanation
"""
|
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex25-Heatmap of Global Temperature Anomaly.ipynb | mit | import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
import calendar
%matplotlib inline
"""
Explanation: Heatmap of Global Temperature Anomaly
Global temperature anomaly (GTA, $^oC$) was downloaded from NCDC. The data come from the Global Historical Climatology Network-Monthly (GHCN-M) data set and International Comprehensive Ocean-Atmosphere Data Set (ICOADS), which have data from 1880 to the present. These two datasets are blended into a single product to produce the combined global land and ocean temperature anomalies. The available time series of global-scale temperature anomalies are calculated with respect to the 20th-century average.
The period from Jan/1958 to Mar/2018 was used in this notebook. The data are presented as a heatmap, which is a graphical representation of data where the individual values contained in a matrix are represented as colors. It is really useful for displaying a general view of numerical data, not for extracting specific data points.
1. Load all needed libraries
End of explanation
"""
gta = pd.read_csv('data/gta_1958_2018.csv',
sep=",",
skiprows=5,
names = ["Month", "GTA"])
gta['Month'] = pd.to_datetime(gta['Month'], format='%Y%m', errors='ignore')
gta.set_index('Month', inplace=True)
"""
Explanation: 2. Read global temperature anomaly
End of explanation
"""
# Note: this positional pd.pivot(index, columns, values) call works in older
# pandas; newer versions require DataFrame.pivot with keyword arguments.
gta = pd.pivot(gta.index.year, gta.index.month, gta['GTA']) \
    .rename(columns=calendar.month_name.__getitem__)
gta.index.name = None
gta.head()
"""
Explanation: 2.1 Convert to Year * Months
End of explanation
"""
def plot(returns,
title="Global Temperature Anomaly ($^\circ$C)\n",
title_color="black",
title_size=14,
annot_size=5,
vmin = -1.0,
vmax = 1.0,
figsize=None,
cmap='RdBu_r',
cbar=True,
square=False):
if figsize is None:
size = list(plt.gcf().get_size_inches())
figsize = (size[0], size[0] // 2)
plt.close()
fig, ax = plt.subplots(figsize=figsize)
ax = sns.heatmap(returns, ax=ax, annot=False, vmin=vmin, vmax=vmax, center=0,
annot_kws={"size": annot_size},
fmt="0.2f", linewidths=0.5,
square=square, cbar=cbar, cbar_kws={'fraction':0.10},
cmap=cmap)
ax.set_title(title, fontsize=title_size,
color=title_color, fontweight="bold")
fig.subplots_adjust(hspace=0)
plt.yticks(rotation=0)
plt.show()
plot(gta, figsize=[8, 25])
"""
Explanation: 3. Visualize
3.1 Heatmap
End of explanation
"""
gta.plot(title="Global Temperature Anomaly ($^\circ$C)", figsize=[15, 7])
"""
Explanation: 3.2 Classic line plots
End of explanation
"""
|
nitin-cherian/LifeLongLearning | Python/Python Morsels/3.compact/better_solution/.ipynb_checkpoints/compact-checkpoint.ipynb | mit | def compact(items):
"""Return new iterable with adjacent duplicate values removed."""
for item, prev in zip(items, [object(), *items]):
if item != prev:
yield item
"""
Explanation: Key Takeaways
object() - This gives a unique sentinel object, which can be used instead of None (None might be a legitimate value in the input)
Caveat: this solution will not work for iterator objects passed as parameters
Reason: the *items unpacking will consume the iterator before zip can read it
End of explanation
"""
import unittest
class CompactTests(unittest.TestCase):
"""Tests for compact."""
def assertIterableEqual(self, iterable1, iterable2):
self.assertEqual(list(iterable1), list(iterable2))
def test_no_duplicates(self):
self.assertIterableEqual(compact([1, 2, 3]), [1, 2, 3])
def test_adjacent_duplicates(self):
self.assertIterableEqual(compact([1, 1, 2, 2, 3]), [1, 2, 3])
def test_non_adjacent_duplicates(self):
self.assertIterableEqual(compact([1, 2, 3, 1, 2]), [1, 2, 3, 1, 2])
def test_lots_of_adjacent_duplicates(self):
self.assertIterableEqual(compact([1, 1, 1, 1, 1, 1]), [1])
def test_empty_values(self):
self.assertIterableEqual(compact([None, 0, "", []]), [None, 0, "", []])
def test_empty_list(self):
self.assertIterableEqual(compact([]), [])
# To test the Bonus part of this exercise, comment out the following line
@unittest.expectedFailure
def test_accepts_iterator(self):
nums = (n**2 for n in [1, 2, 3])
self.assertIterableEqual(compact(nums), [1, 4, 9])
# To test the Bonus part of this exercise, comment out the following line
# @unittest.expectedFailure
def test_returns_iterator(self):
output = compact([1, 2, 3])
self.assertEqual(iter(output), iter(output))
if __name__ == "__main__":
unittest.main(argv=['first-arg-is-ignored'], exit=False)
"""
Explanation: Note the neat trick of shifting the items by 1 and then using the zip function to compare elements of the iterable
End of explanation
"""
|
ldhagen/docker-jupyter | OpenCV.ipynb | mit | ! wget --no-check-certificate http://www.hobieco.com/linked_images/H18-Magnum.jpg
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
import numpy as np
import time as t
print "OpenCV Version : %s " % cv2.__version__
image = cv2.imread("H18-Magnum.jpg")
fig, ax = plt.subplots()
fig.set_size_inches(3, 3)
ax.axis([35, 150, 250, 100])
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image_rgb)
plt.show()
"""
Explanation: From http://giusedroid.blogspot.com/2015/04/blog-post.html
Quickie: Mix up OpenCV and Jupyter (iPython Notebook)
The purpose of this post is to show how to plot images acquired with opencv rather than matplotlib. Just in case.
First of all, set matplotlib inline and import the necessary stuff.
End of explanation
"""
from matplotlib.pyplot import imshow
import numpy as np
from PIL import Image
%matplotlib inline
pil_im = Image.open('H18-Magnum.jpg', 'r')
imshow(np.asarray(pil_im))
from IPython.display import Image
Image(filename='H18-Magnum.jpg')
BGRflags = [flag for flag in dir(cv2) if flag.startswith('COLOR_BGR') ]
print BGRflags
"""
Explanation: The image has been correctly loaded by openCV as a numpy array, but the color of each pixel is stored as BGR. Matplotlib's plot expects an RGB image so, for a correct display of the image, it is necessary to swap those channels. This operation can be done either by using the openCV conversion function cv2.cvtColor() or by working directly with the numpy array.
cvtColor
cvtColor is the openCV function which changes the color space of an image. It takes as input an image and a numerical flag which represents the conversion function. Let's list a few of them.
End of explanation
"""
t0 = t.time()
cv_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
t1 = t.time()
dt_cv = t1-t0
print "Conversion took %0.5f seconds" % dt_cv
plt.imshow(cv_rgb)
plt.show()
"""
Explanation: In this case it's necessary to change the image space from BGR (Blue, Green, Red) to RGB, so the correct flag is cv2.COLOR_BGR2RGB
End of explanation
"""
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
fig = plt.figure()
fig.suptitle('bold figure suptitle', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
fig.subplots_adjust(top=0.85)
ax.set_title('axes title')
ax.set_xlabel('xlabel')
ax.set_ylabel('ylabel')
ax.text(3, 8, 'boxed italics text in data coords', style='italic',
bbox={'facecolor':'red', 'alpha':0.5, 'pad':10})
ax.text(2, 6, r'an equation: $E=mc^2$', fontsize=15)
ax.text(3, 2, u'unicode: Institut f\374r Festk\366rperphysik')
ax.text(0.95, 0.01, 'colored text in axes coords',
verticalalignment='bottom', horizontalalignment='right',
transform=ax.transAxes,
color='green', fontsize=15)
ax.plot([2], [1], 'o')
ax.annotate('annotate', xy=(2, 1), xytext=(3, 4),
arrowprops=dict(facecolor='black', shrink=0.05))
ax.axis([0, 10, 0, 10])
plt.show()
"""
Explanation: The example below is from http://matplotlib.org/users/text_intro.html
End of explanation
"""
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
import matplotlib.cm as cm
image = cv2.imread("Screenshot_2016-02-23-12-47-43.png")
fig, ax = plt.subplots()
fig.set_size_inches(4, 4)
#ax.axis([1280, 1400, 400, 200])
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image_rgb)
plt.show()
image_gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(2, 2))
up_right_gray_target = image_gray[210:310, 1280:1400]
plt.imshow(up_right_gray_target, cmap = cm.gray)
plt.show()
image_gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(2, 2))
low_left_gray_target = image_gray[2412:2512,65:165]
plt.imshow(low_left_gray_target, cmap = cm.gray)
plt.show()
image_gray = cv2.imread("Screenshot_2016-02-23-12-47-43.png",0)
#targets = [up_right_gray_target,low_left_gray_target]
targets = [up_right_gray_target]
for tgt in targets:
w, h = tgt.shape[::-1]
res = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF)
res1= cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(image_gray,top_left, bottom_right, 255, 2)
#fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 14))
#plt.imshow(image_gray, cmap = cm.gray)
#plt.show()
plt.figure(figsize=(16,9))
plt.subplot(1,2,1)
plt.imshow(res,cmap=cm.gray)
plt.subplot(1,2,2)
plt.imshow(res1,cmap=cm.gray)
plt.show()
"""
Explanation: Added Friday afternoon 17 Mar 17
End of explanation
"""
%matplotlib inline
import cv2
from matplotlib import pyplot as plt
import matplotlib.cm as cm
image_gray = cv2.imread("Screenshot_2016-02-23-12-47-43.png",0)
up_right_gray_target = image_gray[210:310, 1280:1400]
#targets = [up_right_gray_target,low_left_gray_target]
targets = [up_right_gray_target]
for tgt in targets:
w, h = tgt.shape[::-1]
res_TM_CCOEFF = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF)
res_TM_CCOEFF_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF_NORMED)
res_TM_SQDIFF = cv2.matchTemplate(image_gray,tgt,cv2.TM_SQDIFF)
res_TM_SQDIFF_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_SQDIFF_NORMED)
res_TM_CORR = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCORR)
res_TM_CORR_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCORR_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res_TM_SQDIFF_NORMED)
# top_left = max_loc
top_left = min_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(image_gray,top_left, bottom_right, 255, 2)
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 14))
plt.imshow(image_gray, cmap = cm.gray)
plt.show()
fig = plt.figure(figsize=(12,29))
ax1 = fig.add_subplot(321)
plt.title('CCOEFF')
plt.imshow(res_TM_CCOEFF,cmap=cm.gray)
plt.subplot(3,2,2)
plt.title('CCOEFF_NORMED')
plt.imshow(res_TM_CCOEFF_NORMED,cmap=cm.gray)
plt.subplot(3,2,3)
plt.title('TM_SQDIFF')
plt.imshow(res_TM_SQDIFF,cmap=cm.gray)
plt.subplot(3,2,4)
plt.title('TM_SQDIFF_NORMED')
plt.imshow(res_TM_SQDIFF_NORMED,cmap=cm.gray)
plt.subplot(3,2,5)
plt.title('TM_CORR')
plt.imshow(res_TM_CORR,cmap=cm.gray)
plt.subplot(3,2,6)
plt.title('TM_CORR_NORMED')
plt.imshow(res_TM_CORR_NORMED,cmap=cm.gray)
plt.show()
"""
Explanation: Added Thursday afternoon 23 Mar 17
End of explanation
"""
! pip install --upgrade pandas
%matplotlib inline
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.dates import date2num, MonthLocator, WeekdayLocator, DateFormatter
import datetime as dt
import numpy as np
import pandas as pd
count = (dt.datetime.today() - dt.datetime(2016,11,15)).days
count
dates = [dt.datetime(2016,11,15) + dt.timedelta(days=i) for i in xrange(count)]
type(dates)
import numpy as np
dates_np = np.arange(np.datetime64('2016-11-15','D'),np.datetime64(dt.datetime.today(),'D'))
dates_np
type1 = np.random.randint(0,5,count)
type2 = np.random.randint(0,5,count)
type3 = np.random.randint(0,7,count)
type(type1)
#plt.figure(figsize=(20,7))
#plt.title('Testing', fontsize=16)
#plt.xlabel('Date', fontsize=16)
#plt.ylabel('Frequency', fontsize=16)
fig, ax = plt.subplots(1,1)
p1 = plt.bar(dates_np, type1, width=1, label='Type 1')
p2 = plt.bar(dates_np, type2, bottom = type1, width=1, label='Type 2')
p3 = plt.bar(dates_np, type3, bottom = type1 + type2, width=1, label='Type 3')
ax.xaxis_date()
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_minor_locator(WeekdayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%b %y'))
ax.set_title('Testing', fontsize=16)
ax.set_xlabel('Date')
ax.set_ylabel('Frequency')
ax.set_xlim(dates_np[0],dates_np[-1])
fig.set_size_inches(17,6)
fig.autofmt_xdate()
fig.tight_layout()
plt.legend((p1[0],p2[0],p3[0]), ('First', 'Second','Third'))
plt.show()
type(dates_np[0]),type(type1[0])
"""
Explanation: Added Friday afternoon 15 Apr 17
End of explanation
"""
! pip install --upgrade pandas
import pandas
%matplotlib inline
import numpy as np
import datetime as dt
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib.dates import date2num, MonthLocator, WeekdayLocator, DateFormatter
"""
Explanation: 20 Jun 17 test case for Matplotlib bug https://github.com/matplotlib/matplotlib/issues/7215/
behavior is fixed by importing pandas (installing it is not enough, as nothing seems to upgrade), even though it is never called
End of explanation
"""
class test_object_type:
'''Builds test objects which have random dates within a range, plus a type name and magnitudes.'''
def __init__(self, first_date, last_date):
self.full_space_list = self.generate_random_spaced_list()
self.first_date = first_date
self.last_date = last_date
self.event_date_list = self.build_obj_date_list()
self.date_value_dict = self.build_obj_dict()
self.sorted_keys = self.build_obj_sorted_keys()
self.value_list = np.asarray(self.build_value_list())
self.full_date_dict = self.build_full_date_dict()
def generate_random_spaced_list(self):
return np.random.randint(4,size=325)
def build_obj_date_list(self):
'''Makes event days based on the spacing in self.full_space_list. Several
zero spaces may occur in a row; these are not checked for before attempting
to recreate the same key, so a new entry simply overwrites the previous one.'''
obj_date_list = []
current_date = self.first_date
for x in self.full_space_list:
current_date = current_date + np.timedelta64(x,'D')
if not current_date > self.last_date:
obj_date_list.append(current_date)
else:
return obj_date_list
def build_obj_dict(self):
date_value_dict = {}
for x in self.event_date_list:
value = np.random.randint(1,5)
date_value_dict[x] = value
return date_value_dict
def build_obj_sorted_keys(self):
dict_keys = self.date_value_dict.keys()
dict_keys.sort()
return dict_keys
def build_value_list(self):
value_list = []
for x in self.sorted_keys:
value_list.append(self.date_value_dict[x])
return value_list
def build_full_date_dict(self):
full_date_list =[]
current_date = self.first_date
while not current_date > self.last_date:
full_date_list.append(current_date)
current_date = current_date + np.timedelta64(1,'D')
full_date_dict = {}
for x in full_date_list:
if x in self.date_value_dict:
full_date_dict[x] = self.date_value_dict[x]
else:
full_date_dict[x] = 0
return full_date_dict
%prun aaa = test_object_type(np.datetime64('2016-11-15','D'),np.datetime64(dt.datetime.today(),'D'))
aaa.full_date_dict.keys()[-1] - aaa.full_date_dict.keys()[0]
xxx = np.asarray(aaa.full_date_dict.keys())
yyy = np.asarray(aaa.full_date_dict.values())
type(xxx[0]),type(yyy[0])
fig, ax = plt.subplots(1,1)
p1 = plt.bar(xxx, yyy, width=1, label='Type 1')
ax.xaxis_date()
ax.xaxis.set_major_locator(MonthLocator())
ax.xaxis.set_minor_locator(WeekdayLocator())
ax.xaxis.set_major_formatter(DateFormatter('%b %y'))
ax.set_title('Testing', fontsize=16)
ax.set_xlabel('Date')
ax.set_ylabel('Frequency')
fig.set_size_inches(17,6)
fig.autofmt_xdate()
fig.tight_layout()
plt.show()
import matplotlib as mpl
mpl.__version__
"""
Explanation: The code below (dangerously) relies on the latest Python 2.7.13 dictionary preserving key order based on creation sequence. Will fix later.
End of explanation
"""
|
HeardLibrary/workshops | Jupyter/Introduction to Jupyter Notebook.ipynb | gpl-3.0 | !conda list
"""
Explanation: Jupyter Notebook
Jupyter Notebook evolved out of IPython and is aimed at providing a platform for easy sharing, interaction, and development of open-source software, standards, and services. Although primarily and originally used for Python, Jupyter lets you interact with multiple programming languages (the project originally expanded from IPython to cover Julia, Python, and R). Jupyter notebooks are a great way to create, test, and share live code, as well as equations and visualizations with markdown text.
To include more languages, check out the kernels that are supported and their installation instructions.
Installation
To get started, download and install Jupyter. Installation via Anaconda is recommended by Jupyter because it will also cover your Python and notebook installation at the same time. If you prefer, you can also install Jupyter with pip, but this is only suggested for advanced Python users. There is extensive documentation to help with installation and downloads available on the Jupyter and Anaconda pages.
Once installed, launch the Anaconda Navigator and launch Jupyter notebooks. Also open the Anaconda Prompt window. We will use this window to install extra widgets.
In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts.
conda install -c conda-forge ipyleaflet
In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts.
conda install -c conda-forge bqplot
In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts.
conda install -c conda-forge pythreejs
Check out this cheatsheet by DataCamp to help with the interface.
Help Dropdown
Spend some time going through these help features.
Markdown
Markdown can be easily used in Jupyter. Simply change the dropdown for the cell to "Markdown". If you need a refresher, here is another good cheatsheet.
Special Commands
Special commands can switch the shell used to run a cell; for example, you can run bash commands and magic commands directly within a new cell.
End of explanation
"""
%lsmagic
%matplotlib inline
"""
Explanation: Magic commands: a single percent sign means the magic's arguments come from the same line, while two percent signs mean the entire cell is used as the argument. The lsmagic command can be used to list all other magic commands.
End of explanation
"""
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# Fixing random state for reproducibility
np.random.seed(19680801)
matplotlib.rcParams['axes.unicode_minus'] = False
fig, ax = plt.subplots()
ax.plot(10*np.random.randn(100), 10*np.random.randn(100), 'o')
ax.set_title('Using hyphen instead of Unicode minus')
plt.show()
import matplotlib.pyplot as plt
from matplotlib import collections, colors, transforms
import numpy as np
nverts = 50
npts = 100
# Make some spirals
r = np.arange(nverts)
theta = np.linspace(0, 2*np.pi, nverts)
xx = r * np.sin(theta)
yy = r * np.cos(theta)
spiral = list(zip(xx, yy))
# Make some offsets
# Fixing random state for reproducibility
rs = np.random.RandomState(19680801)
xo = rs.randn(npts)
yo = rs.randn(npts)
xyo = list(zip(xo, yo))
# Make a list of colors cycling through the default series.
colors = [colors.to_rgba(c)
for c in plt.rcParams['axes.prop_cycle'].by_key()['color']]
fig, axes = plt.subplots(2, 2)
fig.subplots_adjust(top=0.92, left=0.07, right=0.97,
hspace=0.3, wspace=0.3)
((ax1, ax2), (ax3, ax4)) = axes # unpack the axes
col = collections.LineCollection([spiral], offsets=xyo,
transOffset=ax1.transData)
trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0)
col.set_transform(trans) # the points to pixels transform
# Note: the first argument to the collection initializer
# must be a list of sequences of x,y tuples; we have only
# one sequence, but we still have to put it in a list.
ax1.add_collection(col, autolim=True)
# autolim=True enables autoscaling. For collections with
# offsets like this, it is neither efficient nor accurate,
# but it is good enough to generate a plot that you can use
# as a starting point. If you know beforehand the range of
# x and y that you want to show, it is better to set them
# explicitly, leave out the autolim kwarg (or set it to False),
# and omit the 'ax1.autoscale_view()' call below.
# Make a transform for the line segments such that their size is
# given in points:
col.set_color(colors)
ax1.autoscale_view() # See comment above, after ax1.add_collection.
ax1.set_title('LineCollection using offsets')
# The same data as above, but fill the curves.
col = collections.PolyCollection([spiral], offsets=xyo,
transOffset=ax2.transData)
trans = transforms.Affine2D().scale(fig.dpi/72.0)
col.set_transform(trans) # the points to pixels transform
ax2.add_collection(col, autolim=True)
col.set_color(colors)
ax2.autoscale_view()
ax2.set_title('PolyCollection using offsets')
# 7-sided regular polygons
col = collections.RegularPolyCollection(
7, sizes=np.abs(xx) * 10.0, offsets=xyo, transOffset=ax3.transData)
trans = transforms.Affine2D().scale(fig.dpi / 72.0)
col.set_transform(trans) # the points to pixels transform
ax3.add_collection(col, autolim=True)
col.set_color(colors)
ax3.autoscale_view()
ax3.set_title('RegularPolyCollection using offsets')
# Simulate a series of ocean current profiles, successively
# offset by 0.1 m/s so that they form what is sometimes called
# a "waterfall" plot or a "stagger" plot.
nverts = 60
ncurves = 20
offs = (0.1, 0.0)
yy = np.linspace(0, 2*np.pi, nverts)
ym = np.max(yy)
xx = (0.2 + (ym - yy)/ym)**2 * np.cos(yy - 0.4)*0.5
segs = []
for i in range(ncurves):
xxx = xx + 0.02*rs.randn(nverts)
curve = list(zip(xxx, yy*100))
segs.append(curve)
col = collections.LineCollection(segs, offsets=offs)
ax4.add_collection(col, autolim=True)
col.set_color(colors)
ax4.autoscale_view()
ax4.set_title('Successive data offsets')
ax4.set_xlabel('Zonal velocity component (m/s)')
ax4.set_ylabel('Depth (m)')
# Reverse the y-axis so depth increases downward
ax4.set_ylim(ax4.get_ylim()[::-1])
plt.show()
%%HTML
<iframe src="https://giphy.com/embed/3o7qE32pRVNYJYKGBO" width="480" height="269" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/veep-3o7qE32pRVNYJYKGBO">via GIPHY</a></p>
"""
Explanation: Matplotlib examples can be found here.
End of explanation
"""
print('hello world')
print('this is neat')
print('This is cool!')
"abc"*52
"""
Explanation: Python
To get started, let's practice some very basic Python scripts. Start by trying to print "Hello World". Once you have entered your script, hit Shift+Enter. This will execute your code and create or move you to the next cell. Once you have done that, try checking out a cheatsheet to help get you running with another basic code snippet.
Don't be afraid to try something new. You won't break the shell; it will simply return an error if something is wrong.
There are many cheatsheets out there if you are uncomfortable writing your own snippet. Just do a quick Google search for a function and copy, paste, and execute it in a cell below.
End of explanation
"""
import pandas as pd
df = pd.DataFrame()
a=range(11)
b=range(10,21)
c=range(20,31)
df['a']=a
df['b']=b
df['c']=c
df
"""
Explanation: For this next example, we will import the pandas library. Most Python scripts rely on external libraries or modules, which bundle variables, classes, and functions that can be imported and used in your code.
End of explanation
"""
import math
# Input list.
values = [0.9999999, 1, 2, 3]
# Sum values in list.
r = sum(values)
print(r)
# Sum values with fsum.
r = math.fsum(values)
print(r)
import math
value1 = 9
value2 = 16
value3 = 100
# Use sqrt method.
print(math.sqrt(value1))
print(math.sqrt(value2))
print(math.sqrt(value3))
"""
Explanation: The math functions can be used by putting "import math" before your code. This way Jupyter, and any Python script for that matter, knows to use those functions in the following code.
End of explanation
"""
import numpy as np
import bqplot.pyplot as plt
size = 100
plt.figure(title='Scatter plot with colors')
plt.scatter(np.random.randn(size), np.random.randn(size), color=np.random.randn(size))
plt.show()
from ipyleaflet import Map
Map(center=[36.146956, -86.779788], zoom=10)
from ipyleaflet import Map
import json
Map(center=[36.146956, -86.779788], zoom=10)
from pythreejs import *
f = """
function f(origu,origv) {
// scale u and v to the ranges I want: [0, 2*pi]
var u = 2*Math.PI*origu;
var v = 2*Math.PI*origv;
var x = Math.sin(u);
var y = Math.cos(v);
var z = Math.cos(u+v);
return new THREE.Vector3(x,y,z)
}
"""
surf_g = ParametricGeometry(func=f);
surf = Mesh(geometry=surf_g, material=LambertMaterial(color='green', side='FrontSide'))
surf2 = Mesh(geometry=surf_g, material=LambertMaterial(color='yellow', side='BackSide'))
scene = Scene(children=[surf, surf2, AmbientLight(color='#777777')])
c = PerspectiveCamera(position=[2.5, 2.5, 2.5], up=[0, 0, 1],
children=[DirectionalLight(color='white',
position=[3, 5, 1],
intensity=0.6)])
Renderer(camera=c, scene=scene, controls=[OrbitControls(controlling=c)])
"""
Explanation: Widgets
End of explanation
"""
from IPython.display import display, Math, Latex
display(Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx'))
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
"""
Explanation: Equations
You can use LaTeX within Jupyter. Check out the LaTeX cheatsheet!
Alternatives to LaTeX include prettyPy and sympy. These are all great ways to write out equations, although the pros and cons of each need to be weighed before use. For example, prettyPy does not evaluate expressions, but you also don't have to initialize variables.
End of explanation
"""
|
cdawei/digbeta | dchen/music/aotm2011_subset_repr.ipynb | gpl-3.0 | %matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys
import gzip
import pickle as pkl
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score, average_precision_score
from scipy.sparse import lil_matrix, issparse
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
sys.path.append('src')
from BinaryRelevance import BinaryRelevance
#from PClassificationMLC import PClassificationMLC
from PCMLC import PCMLC as PClassificationMLC
from evaluate import calc_F1, calc_precisionK, calc_rank, f1_score_nowarn
data_dir = 'data/aotm-2011'
#faotm = os.path.join(data_dir, 'aotm2011-subset.pkl')
faotm = os.path.join(data_dir, 'aotm2011-user-playlist.pkl')
ffeature = 'data/msd/songID2Features.pkl.gz'
fgenre = 'data/msd/song2genre.pkl'
"""
Explanation: A representative subset of AotM-2011 Playlists with MSD Audio Features
End of explanation
"""
user_playlists = pkl.load(open(faotm, 'rb'))
print('#user :', len(user_playlists))
print('#playlist:', np.sum([len(user_playlists[u]) for u in user_playlists]))
pl_lengths = [len(pl) for u in user_playlists for pl in user_playlists[u]]
#plt.hist(pl_lengths, bins=100)
print('Average playlist length: %.1f' % np.mean(pl_lengths))
users = sorted(user_playlists.keys())
songs_user = {u: {sid for pl in user_playlists[u] for sid in pl} for u in users} # user: a set of songs
"""
Explanation: Load playlists
Load playlists.
End of explanation
"""
udf = pd.DataFrame(index=users, columns=['#playlist', '#song'])
udf['#playlist'] = [len(user_playlists[u]) for u in users]
udf['#song'] = [len(songs_user[u]) for u in users]
ax = plt.subplot(111)
udf['#playlist'].hist(bins=200, ax=ax)
ax.set_yscale('log')
u_npl = sorted([(u, len(user_playlists[u])) for u in users], key=lambda x: x[1])
#u_npl
step = 1000 # sample 0.1%
subset = [u_npl[ix] for ix in np.arange(0, len(u_npl), step)]
subset
uid_subset = [t[0] for t in subset]
#udf_subset = udf[ udf['#playlist'] == 10]
#udf_subset.head()
#uid_subset += udf_subset.index[[3,4]].tolist()
#udf_subset = udf[ udf['#playlist'] == 30]
#udf_subset.head()
#uid_subset += udf_subset.index[[1,2]].tolist()
#udf_subset = udf[ udf['#playlist'].isin(np.arange(95, 105))]
#udf_subset.sort_values(by='#playlist')
#uid_subset += udf_subset.index[[2,5]].tolist()
#udf.sort_values(by=['#playlist'], ascending=False).iloc[100:200]
#uid_subset
#udf[uid_subset] # tuple are used as multiindex in pandas
#udf[[uid_subset]]
"""
Explanation: Compute the number of playlists per user, and the number of songs covered by the user's playlists.
End of explanation
"""
playlists_subset = [pl for u in uid_subset for pl in user_playlists[u]]
len(playlists_subset)
song_set = sorted({sid for u in uid_subset for sid in songs_user[u]})
len(song_set)
"""
Explanation: Subset of data
Select a subset of users so that their playlists cover a reasonable number of playlists and songs.
End of explanation
"""
song_pl_mat = np.zeros((len(song_set), len(playlists_subset)))
songind = {sid: ix for ix, sid in enumerate(song_set)}
for j in range(len(playlists_subset)):
pl = playlists_subset[j]
ind = [songind[sid] for sid in pl]
song_pl_mat[ind, j] = 1
song_pop = np.sum(song_pl_mat, axis=1)
#plt.hist(song_pop, bins=20)
#print()
#np.arange(0, 100, 5)
sortix = np.argsort(song_pop)
step = 5 # 80/20 split
split_ix = np.arange(0, len(song_pop), step)
dev_ix = [sortix[ix] for ix in split_ix]
dev_song_set = [song_set[ix] for ix in dev_ix]
train_song_set = sorted(set(song_set) - set(dev_song_set))
"""
Explanation: Split songs for setting I
Split songs (80/20 split) such that the distributions of song popularity (the number of occurrences in playlists) in the training and dev sets are similar.
End of explanation
"""
train_song_pop = [song_pop[songind[sid]] for sid in train_song_set]
ax = plt.subplot(111)
ax.hist(train_song_pop, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(train_song_set))
"""
Explanation: Histogram of song popularity in training set.
End of explanation
"""
dev_song_pop = [song_pop[songind[sid]] for sid in dev_song_set]
ax = plt.subplot(111)
ax.hist(dev_song_pop, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(dev_song_set))
"""
Explanation: Histogram of song popularity in dev set.
End of explanation
"""
train_playlists = []
dev_playlists = []
#np.arange(0, 10, 2)
for u in uid_subset:
u_playlists = user_playlists[u]
if len(u_playlists) < 2:
train_playlists.append((u, u_playlists[0]))
continue
sorted_pl = sorted(u_playlists, key=lambda pl: len(pl))
step = 2
sample_ix = np.arange(0, len(sorted_pl), step)
if len(sample_ix) <= len(sorted_pl) / 2:
dev_ix = sample_ix
else:
dev_ix = [ix for ix in range(len(sorted_pl)) if ix not in sample_ix]
dev_playlists += [(u, sorted_pl[ix]) for ix in dev_ix]
train_playlists += [(u, sorted_pl[ix]) for ix in range(len(sorted_pl)) if ix not in dev_ix]
xmax = np.max([len(pl) for pl in playlists_subset]) + 1
"""
Explanation: Split playlists
Split playlists (50/50 split) such that the distributions of playlist length (the number of songs in a playlist) for each user in the training and dev sets are similar.
End of explanation
"""
ax = plt.subplot(111)
ax.hist([len(t[1]) for t in train_playlists], bins=50)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
print(len(train_playlists))
"""
Explanation: Histogram of playlist length in training set.
End of explanation
"""
ax = plt.subplot(111)
ax.hist([len(t[1]) for t in dev_playlists], bins=50)
ax.set_yscale('log')
ax.set_xlim(0, xmax)
print(len(dev_playlists))
"""
Explanation: Histogram of playlist length in dev set.
End of explanation
"""
num_unknown = len(playlists_subset) * len(dev_song_set) # the number of unknown in setting I
num_dev_song2 = num_unknown / len(dev_playlists)
#np.arange(0, 100, 2.5)
sortix = np.argsort(song_pop)
step = len(song_pop) / (num_dev_song2)
split_ix = np.arange(0, len(song_pop), step)
np.random.seed(123456789)
rounding_prob = step - int(step)
dev_ix = [sortix[ix] for ix in [int(x) if np.random.rand() < rounding_prob or int(x) == len(sortix)-1 \
else int(x)+1 for x in split_ix]] # avoid index out of bounds
dev_song_set2 = [song_set[ix] for ix in dev_ix]
print(num_unknown, len(dev_playlists) * len(dev_song_set2))
"""
Explanation: Split songs for setting II
Split songs such that
- the number of unknown entries is the same as that in setting I.
- the distributions of song popularity (the number of occurrences in playlists) in the training and dev sets are similar.
End of explanation
"""
ax = plt.subplot(111)
ax.hist(song_pop, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(song_set))
"""
Explanation: Histogram of song popularity in training set (all songs).
End of explanation
"""
dev_song_pop2 = [song_pop[songind[sid]] for sid in dev_song_set2]
ax = plt.subplot(111)
ax.hist(dev_song_pop2, bins=30)
ax.set_yscale('log')
ax.set_xlim(0, song_pop.max()+1)
print(len(dev_song_set2))
"""
Explanation: Histogram of song popularity in dev set.
End of explanation
"""
song2Features = pkl.load(gzip.open(ffeature, 'rb'))
"""
Explanation: Load song features
Load song_id --> feature array mapping: map a song to the audio features of one of its corresponding tracks in MSD.
End of explanation
"""
song2genre = pkl.load(open(fgenre, 'rb'))
"""
Explanation: Load genres
End of explanation
"""
np.all([sid in song2genre for sid in song_set])
"""
Explanation: Check if all songs have genre info.
End of explanation
"""
def gen_dataset_subset(playlists, song_set, features_MSD, song2genre):
"""
Create labelled dataset: rows are songs, columns are playlists.
Input:
- playlists: a set of playlists
- song_set: a set of songIDs
- features_MSD: dictionary that maps songIDs to features from MSD
- song2genre: dictionary that maps songIDs to genre
Output:
- (Feature, Label) pair (X, Y)
X: #songs by #features
Y: #songs by #playlists
"""
song_indices = {sid: ix for ix, sid in enumerate(song_set)}
N = len(song_set)
K = len(playlists)
genre_set = sorted({v for v in song2genre.values()})
genre_indices = {genre: ix for ix, genre in enumerate(genre_set)}
def onehot_genre(songID):
"""
One-hot encoding of genres.
Data imputation: one extra entry for songs without genre info.
Should try other options:
mean imputation, sampling from the distribution of genre popularity.
"""
num = len(genre_set) + 1
vec = np.zeros(num, dtype=np.float)
if songID in song2genre:
genre_ix = genre_indices[song2genre[songID]]
vec[genre_ix] = 1
else:
vec[-1] = 1
return vec
#X = np.array([features_MSD[sid] for sid in song_set]) # without using genre
X = np.array([np.concatenate([features_MSD[sid], onehot_genre(sid)], axis=-1) for sid in song_set])
Y = np.zeros((N, K), dtype=np.bool)
for k in range(K):
pl = playlists[k]
indices = [song_indices[sid] for sid in pl if sid in song_indices]
Y[indices, k] = True
return X, Y
def mean_normalised_reciprocal_rank(Y_true, Y_pred):
"""
Compute the mean of normalised reciprocal rank (reciprocal rank are normalised by the best possible ranks)
"""
normalised_reci_rank = []
npos = np.sum(Y_true, axis=0)
for k in range(Y_true.shape[1]):
ranks = calc_rank(Y_pred[:, k])[Y_true[:, k]]
if len(ranks) > 0:
ideal = np.sum([1./nk for nk in range(1, npos[k]+1)])
real = np.sum([1./r for r in ranks])
normalised_reci_rank.append(real / ideal) # normalise the reciprocal ranks by the best possible ranks
return np.mean(normalised_reci_rank)
def eval_pl(Y_true, Y_pred):
nzcol = np.nonzero(np.sum(Y_true, axis=0))[0] # columns with at least one True
print('Average over %d columns' % len(nzcol))
print('%-15s %.4f' % ('Mean P@K:', np.mean(calc_precisionK(Y_true.T, Y_pred.T))))
print('%-15s %.4f' % ('Mean AUC:', roc_auc_score(Y_true[:, nzcol], Y_pred[:, nzcol], average='macro')))
print('%-15s %.4f' % ('MAP:', average_precision_score(Y_true[:, nzcol], Y_pred[:, nzcol], average='macro')))
print('%-15s %.4f' % ('Mean NRR:', mean_normalised_reciprocal_rank(Y_true, Y_pred)))
"""
Explanation: Create song-playlist matrix
Songs as rows, playlists as columns.
End of explanation
"""
playlists1 = [t[1] for t in train_playlists + dev_playlists]
X_train, Y_train = gen_dataset_subset(playlists=playlists1, song_set=train_song_set,
features_MSD=song2Features, song2genre=song2genre)
X_dev, Y_dev = gen_dataset_subset(playlists=playlists1, song_set=dev_song_set,
features_MSD=song2Features, song2genre=song2genre)
"""
Explanation: Setting I: hold a subset of songs, use all playlists
End of explanation
"""
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_dev -= X_train_mean
X_dev /= X_train_std
print('Train: %15s %15s' % (X_train.shape, Y_train.shape))
print('Dev : %15s %15s' % (X_dev.shape, Y_dev.shape))
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
print(np.mean(np.mean(X_dev, axis=0)))
print(np.mean( np.std(X_dev, axis=0)) - 1)
#np.sum(Y_train, axis=0)
#np.sum(Y_dev, axis=0)
"""
Explanation: Feature normalisation.
End of explanation
"""
br = BinaryRelevance(C=1, n_jobs=4)
br.fit(X_train, Y_train)
"""
Explanation: M1. BR - Independent logistic regression
End of explanation
"""
print('Dev set:')
eval_pl(Y_dev, br.predict(X_dev))
print('Training set:')
eval_pl(Y_train, br.predict(X_train))
#print('Mean macro-F1:', f1_score_nowarn(Y_dev.T, Y_br.T>=0, average='macro'))
#print('P, R, F1:',precision_recall_fscore_support(Y_dev.ravel(), (Y_br>=0).ravel(),average='binary', warn_for=()))
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
%%script false
min_pos_score = []
for col in range(Y_dev.shape[1]):
val = Y_br[:,col][Y_dev[:,col]]
if len(val) > 0:
min_pos_score.append(np.min(val))
else:
min_pos_score.append(np.nan)
print(np.array(min_pos_score))
%%script false
max_neg_score = []
for col in range(Y_dev.shape[1]):
val = Y_br[:,col][np.logical_not(Y_dev[:,col])]
if len(val) > 0:
max_neg_score.append(np.max(val))
print(np.array(max_neg_score))
#print(np.array(min_pos_score)-np.array(max_neg_score))
"""
Explanation: P@K, MAP, NDCG, AUC are good options.
End of explanation
"""
pc = PClassificationMLC(C1=1, weighting='labels')
pc.fit(X_train, Y_train)
"""
Explanation: M2. PC - Multilabel p-classification
P-Classification ~ P-norm push ranking.
End of explanation
"""
print('Dev set:')
eval_pl(Y_dev, pc.predict(X_dev))
print('Training set:')
eval_pl(Y_train, pc.predict(X_train))
Y_pc = pc.predict(X_dev)  # Y_pc/Y_test were undefined here; use the dev-set scores and labels
min_pos_score = []
for col in range(Y_dev.shape[1]):
val = Y_pc[:,col][Y_dev[:,col]]
if len(val) > 0:
min_pos_score.append(np.min(val))
else:
min_pos_score.append(np.nan)
#plt.hist((np.array(min_pos_score)))
#plt.hist((np.nan_to_num(min_pos_score)), bins=30)
#print(np.array(min_pos_score))
#print()
max_neg_score = []
for col in range(Y_dev.shape[1]):
val = Y_pc[:,col][np.logical_not(Y_dev[:,col])]
if len(val) > 0:
max_neg_score.append(np.max(val))
#plt.hist(np.array(max_neg_score), bins=30)
#print()
#plt.hist(np.nan_to_num(min_pos_score)-np.array(max_neg_score), bins=30)
#print()
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
nondev_song_set2 = sorted({sid for sid in song_set if sid not in dev_song_set2})
print(len(nondev_song_set2) + len(dev_song_set2), len(song_set))
playlists2 = [t[1] for t in train_playlists + dev_playlists]
X, Y = gen_dataset_subset(playlists=playlists2, song_set=nondev_song_set2 + dev_song_set2,
features_MSD=song2Features, song2genre=song2genre)
Y_train = Y.copy().astype(np.float) # note: np.nan is float
Y_train[len(nondev_song_set2):, len(train_playlists):] = np.nan
Y_dev = Y[-len(dev_song_set2):, len(train_playlists):]
#Y_train
print(np.sum(np.isnan(Y_train)), len(dev_playlists) * len(dev_song_set2))
X_train = X
"""
Explanation: Setting II: hold a subset of playlists, use all songs
End of explanation
"""
X_train_mean = np.mean(X_train, axis=0).reshape((1, -1))
X_train_std = np.std(X_train, axis=0).reshape((1, -1)) + 10 ** (-6)
X_train -= X_train_mean
X_train /= X_train_std
X_dev = X_train[len(nondev_song_set2):]
print('Train: %15s %15s' % (X_train.shape, Y_train.shape))
print('Dev : %15s %15s' % (X_dev.shape, Y_dev.shape))
print(np.mean(np.mean(X_train, axis=0)))
print(np.mean( np.std(X_train, axis=0)) - 1)
print(np.mean(np.mean(X_dev, axis=0)))
print(np.mean( np.std(X_dev, axis=0)) - 1)
#np.sum(Y_train, axis=0)
#np.sum(Y_dev, axis=0)
"""
Explanation: Feature normalisation.
End of explanation
"""
br2 = BinaryRelevance(C=1, n_jobs=4)
br2.fit(X_train, np.nan_to_num(Y_train))
col_start = -len(dev_playlists)
"""
Explanation: M3. Independent logistic regression
End of explanation
"""
print('Dev set:')
eval_pl(Y_dev, br2.predict(X_dev)[:, col_start:])
Y_train_gt = np.nan_to_num(Y_train).astype(np.bool)
Y_train_pred = br2.predict(X_train)
Y_train_pred[:, col_start:] = 0 # remove test region
print('Training set:')
eval_pl(Y_train_gt, Y_train_pred)
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
user_of_playlists2 = [t[0] for t in train_playlists + dev_playlists]
#user_of_playlists2
same_user_mat = np.zeros((len(playlists2), len(playlists2)), dtype=np.bool)
for i in range(len(playlists2)):
for j in range(i+1, len(playlists2)):
if user_of_playlists2[i] == user_of_playlists2[j]:
same_user_mat[i, j] = True
same_user_mat[j, i] = True
#same_user_mat
pla = PClassificationMLC(C1=1, C2=1, C3=10, weighting='both', similarMat=same_user_mat)
pla.fit(X_train, Y_train)
col_start = -len(dev_playlists)
"""
Explanation: M4. Multilabel p-classification with some playlists fully observed
End of explanation
"""
eval_pl(Y_dev, pla.predict(X_dev)[:, col_start:])
Y_train_gt = np.nan_to_num(Y_train).astype(np.bool)
Y_train_pred = pla.predict(X_train)
Y_train_pred[:, col_start:] = 0 # remove test region
eval_pl(Y_train_gt, Y_train_pred)
"""
Explanation: Evaluation: normalise per playlist.
End of explanation
"""
%%script false
rows, cols = np.nonzero(same_user_mat)
for row, col in zip(rows, cols):
diff = pla.W[row] - pla.W[col]
print('%g' % np.sqrt(np.dot(pla.W[row], pla.W[row])))
print('%g' % np.sqrt(np.dot(pla.W[col], pla.W[col])))
print('%g' % np.sqrt(np.dot(diff, diff)))
print('------------------------------')
"""
Explanation: Check if the regulariser is effective
End of explanation
"""
A = np.dot(pla.W, pla.W.T)
B = np.tile(np.diag(A), (A.shape[0], 1))
M = np.sqrt(-2 * A + (B + B.T))
"""
Explanation: Compute matrix $M$ such that $M_{jk} = \sqrt{(w_j - w_k)^\top (w_j - w_k)}, \forall j, k$.
End of explanation
"""
#aa = np.arange(6).reshape(3, 2)
#np.einsum('ij,ij->i', aa, aa)
denorm = np.sqrt(np.einsum('ij,ij->i', pla.W, pla.W)) # compute the norm for each row in W
M1 = M / np.max(denorm)
plt.matshow(M1)
user_of_playlists2
rows, cols = np.nonzero(same_user_mat)
M2 = M1[rows, cols]
print(np.min(M2), np.max(M2), np.mean(M2), np.std(M2))
mat = same_user_mat.copy()
np.fill_diagonal(mat, 1) # remove the diagonal from consideration
rows, cols = np.where(mat == 0)
M3 = M1[rows, cols]
print(np.min(M3), np.max(M3), np.mean(M3), np.std(M3))
"""
Explanation: Normalise $M$ by the vector with maximum norm in $W$.
End of explanation
"""
user_set = sorted(set(user_of_playlists2))
user_set
#user_of_playlists2
Y_pla = pla.predict(X_dev)[:, col_start:]
dev_col_start = len(train_playlists)
for u in user_set:
uind = np.where(np.array(user_of_playlists2, dtype=np.object) == u)[0]
ntrain = len(uind)
if len(uind) < 2: continue # filter out users with fewer than 2 playlists
uind -= dev_col_start
uind = uind[uind >= 0]
ntest = len(uind)
#print(uind)
if len(uind) < 1: continue
print('--------------------')
print('USER:', u)
print('#train: %d, #test: %d' % (ntrain, ntest))
eval_pl(Y_dev[:, uind], Y_pla[:, uind])
print()
"""
Explanation: Check performance per user
End of explanation
"""
N, K = Y.shape
Y_nan = Y.copy().astype(np.float)
np.random.seed(8967321)
rand_num = int(0.2 * N)
ones = 0
for k in range(K):
randix = np.random.permutation(np.arange(N))[:rand_num]
Y_nan[randix, k] = np.nan
ones += Y[randix, k].sum()
"""
Explanation: M4. Multilabel p-classification with unknowns in test set
End of explanation
"""
np.sum(np.isnan(Y_nan))
"""
Explanation: The number of NaN entries.
End of explanation
"""
pc2 = PClassificationMLC(weighting=True, verticalWeighting=True)
pc2.fit(X, Y_nan)
"""
Explanation: Train: keep re-running until no overflow warning occurs.
End of explanation
"""
Y_pred2 = pc2.predict(X)
pos_index = np.nan_to_num(Y_nan).astype(bool)
nan_index = np.isnan(Y_nan)
ground_truths = Y[nan_index]
thresholds = []
preds = []
for k in range(K):
val = Y_pred2[:, k][pos_index[:, k]]
th = np.min(val)
thresholds.append(th)
preds += (Y_pred2[nan_index[:,k], k] >= th).tolist()
f1_score_nowarn(ground_truths, preds, average='binary')
precision_recall_fscore_support(ground_truths, preds, average='binary', warn_for=None)
"""
Explanation: Prediction: use the minimum of positive entry score of the same example as threshold.
Evaluation: use F1 on all unknown entries (as a 1D array).
End of explanation
"""
Y_nan_part2 = Y.copy().astype(float)
np.random.seed(8967321)
rand_num = int(0.4 * N)
ones = 0
for k in range(int(K/2), K):
randix = np.random.permutation(np.arange(N))[:rand_num]
Y_nan_part2[randix, k] = np.nan
ones += Y[randix, k].sum()
"""
Explanation: M5. Multilabel p-classification with some playlists fully observed and unknowns in the test set
End of explanation
"""
np.sum(np.isnan(Y_nan_part2))
pc4 = PClassificationMLC(weighting=True, verticalWeighting=True)
pc4.fit(X, Y_nan_part2)
"""
Explanation: The number of NaN entries.
End of explanation
"""
Y_pred4 = pc4.predict(X)
pos_index = np.nan_to_num(Y_nan_part2).astype(bool)
nan_index = np.isnan(Y_nan_part2)
ground_truths = Y[nan_index]
thresholds = []
preds = []
for k in range(int(K/2), K):
val = Y_pred4[:, k][pos_index[:, k]]
th = np.min(val)
#th = np.mean(val)
thresholds.append(th)
preds += (Y_pred4[nan_index[:,k], k] >= th).tolist()
f1_score_nowarn(ground_truths, preds, average='binary')
precision_recall_fscore_support(ground_truths, preds, average='binary', warn_for=None)
2 * np.sum(np.logical_and(ground_truths, preds)) / (np.sum(ground_truths) + np.sum(preds))
"""
Explanation: Prediction: use the minimum of positive entry score of the same example as threshold.
Evaluation: use F1 on all unknown entries (as a 1D array).
End of explanation
"""
|
wtbarnes/loops-workshop-2017-talk | notebooks/time_average_em.ipynb | mit | import os
import io
import copy
import glob
import urllib
import numpy as np
import h5py
import matplotlib.pyplot as plt
import matplotlib.colors
import seaborn as sns
import astropy.units as u
import astropy.constants as const
from scipy.ndimage import gaussian_filter
from sunpy.map import Map,GenericMap
import synthesizAR
from synthesizAR.util import EMCube
from synthesizAR.instruments import InstrumentHinodeEIS
%matplotlib inline
base1 = '/data/datadrive1/ar_forward_modeling/systematic_ar_study/noaa1109_tn{}'
base2 = '/data/datadrive2/ar_viz/systematic_ar_study/noaa1109_tn{}/'
eis = InstrumentHinodeEIS([7.5e3,1.25e4]*u.s)
frequencies = [250,750,'750-ion',2500,5000]
temperature_bin_edges = 10.**(np.arange(5.6, 7.0, 0.05))*u.K
"""
Explanation: Time-average EM Cubes
Calculate the time-averaged emission measure distributions from the exact thermodynamic results and save them to be easily reloaded and used later.
End of explanation
"""
time_averaged_ems = {'{}'.format(freq):None for freq in frequencies}
for freq in frequencies:
print('tn = {} s'.format(freq))
if type(freq) == int:
base = base1
else:
base = base2
# setup field and observer objects
field = synthesizAR.Skeleton.restore(os.path.join(base.format(freq),'field_checkpoint'))
observer = synthesizAR.Observer(field,[eis],ds=field._convert_angle_to_length(0.4*u.arcsec))
observer.build_detector_files(base.format(freq))
# iterate over time
for time in eis.observing_time:
print('t = {}'.format(time))
emcube = observer.make_emission_measure_map(time,eis,temperature_bin_edges=temperature_bin_edges)
if time_averaged_ems['{}'.format(freq)] is None:
time_averaged_ems['{}'.format(freq)] = emcube
for m in time_averaged_ems['{}'.format(freq)]:
m.data /= eis.observing_time.shape[0]
else:
for m1,m2 in zip(time_averaged_ems['{}'.format(freq)],emcube):
m1.data += m2.data/eis.observing_time.shape[0]
"""
Explanation: Iterate over all "true" emission measure distributions and time-average them over the given interval.
End of explanation
"""
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(time_averaged_ems['250'].temperature_bin_edges.shape[0]-1):
# apply a Gaussian filter matching the instrument resolution
tmp = time_averaged_ems['250'][i].submap(u.Quantity([250,500],u.arcsec),u.Quantity([150,400],u.arcsec))
tmp.data = gaussian_filter(tmp.data,
eis.channels[0]['gaussian_width']['x'].value
)
# set up axes properly and add plot
ax = fig.add_subplot(6,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,
annotate=False,
cmap=matplotlib.cm.get_cmap('magma'),
norm=matplotlib.colors.SymLogNorm(1, vmin=1e25, vmax=1e29)
)
# set title and labels
ax.set_title(r'${t0:.2f}-{t1:.2f}$ {uni}'.format(t0=np.log10(tmp.meta['temp_a']),
t1=np.log10(tmp.meta['temp_b']),uni='K'))
if i<25:
ax.coords[0].set_ticklabel_visible(False)
else:
ax.set_xlabel(r'$x$ ({})'.format(u.Unit(tmp.meta['cunit1'])))
if i%5==0:
ax.set_ylabel(r'$y$ ({})'.format(u.Unit(tmp.meta['cunit2'])))
else:
ax.coords[1].set_ticklabel_visible(False)
cbar = fig.colorbar(im,cax=cax)
"""
Explanation: Visualize the results to make sure we've averaged correctly.
End of explanation
"""
for key in time_averaged_ems:
time_averaged_ems[key].save('../data/em_cubes_true_tn{}_t7500-12500.h5'.format(key))
foo = EMCube.restore('../data/em_cubes_true_tn250_t7500-12500.h5')
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(foo.temperature_bin_edges.shape[0]-1):
# apply a Gaussian filter matching the instrument resolution
tmp = foo[i].submap(u.Quantity([250,500],u.arcsec),u.Quantity([150,400],u.arcsec))
tmp.data = gaussian_filter(tmp.data,
eis.channels[0]['gaussian_width']['x'].value
)
# set up axes properly and add plot
ax = fig.add_subplot(6,5,i+1,projection=tmp)
im = tmp.plot(axes=ax,
annotate=False,
cmap=matplotlib.cm.get_cmap('magma'),
norm=matplotlib.colors.SymLogNorm(1, vmin=1e25, vmax=1e29)
)
# set title and labels
ax.set_title(r'${t0:.2f}-{t1:.2f}$ {uni}'.format(t0=np.log10(tmp.meta['temp_a']),
t1=np.log10(tmp.meta['temp_b']),uni='K'))
if i<25:
ax.coords[0].set_ticklabel_visible(False)
else:
ax.set_xlabel(r'$x$ ({})'.format(u.Unit(tmp.meta['cunit1'])))
if i%5==0:
ax.set_ylabel(r'$y$ ({})'.format(u.Unit(tmp.meta['cunit2'])))
else:
ax.coords[1].set_ticklabel_visible(False)
cbar = fig.colorbar(im,cax=cax)
"""
Explanation: Now save the results to our local temporary data folder.
End of explanation
"""
|
Santana9937/Regression_ML_Specialization | Week_5_Lasso/assign_1_week-5-lasso.ipynb | mit | import os
import zipfile
from math import log, sqrt
import numpy as np
import pandas as pd
from sklearn import linear_model
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
"""
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, we will use LASSO to select features. You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Importing Libraries
End of explanation
"""
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_data.csv', 'wk3_kc_house_train_data.csv',
'wk3_kc_house_test_data.csv',
'wk3_kc_house_valid_data.csv']
# If upzipped file not in files_list, unzip the file
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close()
"""
Explanation: Unzipping files with house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
"""
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int,
'sqft_living15':float, 'grade':int, 'yr_renovated':int,
'price':float, 'bedrooms':float, 'zipcode':str,
'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':float, 'condition':int, 'lat':float, 'date':str,
'sqft_basement':int, 'yr_built':int, 'id':str,
'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
"""
Explanation: Load in house sales data
End of explanation
"""
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
sales['floors_square'] = sales['floors']*sales['floors']
"""
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
"""
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
"""
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
"""
model_all = linear_model.Lasso(alpha=5e2, normalize=True) # set parameters
model_all.fit(sales[all_features], sales['price']) # learn weights
"""
Explanation: Using the entire house dataset, learn regression weights using an L1 penalty of 5e2. Make sure to add "normalize=True" when creating the Lasso object.
End of explanation
"""
print(model_all.coef_)
"""
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
End of explanation
"""
for feat, weight in zip(all_features, model_all.coef_):
if weight != 0.0:
print(feat + ':', weight)
"""
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
For the model_all model, which of the features have been chosen, i.e. what features had non-zero weights?
End of explanation
"""
testing = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
training = pd.read_csv('wk3_kc_house_train_data.csv', dtype=dtype_dict)
validation = pd.read_csv('wk3_kc_house_valid_data.csv', dtype=dtype_dict)
"""
Explanation: Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
End of explanation
"""
testing['sqft_living_sqrt'] = testing['sqft_living'].apply(sqrt)
testing['sqft_lot_sqrt'] = testing['sqft_lot'].apply(sqrt)
testing['bedrooms_square'] = testing['bedrooms']*testing['bedrooms']
testing['floors_square'] = testing['floors']*testing['floors']
training['sqft_living_sqrt'] = training['sqft_living'].apply(sqrt)
training['sqft_lot_sqrt'] = training['sqft_lot'].apply(sqrt)
training['bedrooms_square'] = training['bedrooms']*training['bedrooms']
training['floors_square'] = training['floors']*training['floors']
validation['sqft_living_sqrt'] = validation['sqft_living'].apply(sqrt)
validation['sqft_lot_sqrt'] = validation['sqft_lot'].apply(sqrt)
validation['bedrooms_square'] = validation['bedrooms']*validation['bedrooms']
validation['floors_square'] = validation['floors']*validation['floors']
"""
Explanation: Make sure to create the 4 features as we did above:
End of explanation
"""
l1_pen_val = np.logspace(1, 7, num=13)
"""
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data, passing alpha=l1_penalty (and normalize=True) to the Lasso constructor.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
End of explanation
"""
models_diff_l1 = {}
"""
Explanation: Creating a dictionary to store the regression models for each L1 penalty. The key of the dictionary will be the index of the l1_pen_val array, passed as a string
End of explanation
"""
for i in range(len(l1_pen_val)):
key_val = str(i)
models_diff_l1[key_val] = linear_model.Lasso(alpha=l1_pen_val[i], normalize=True) # set parameters
models_diff_l1[key_val].fit(training[all_features], training['price']) # learn weights
"""
Explanation: Creating a regression model for each L1 penalty
End of explanation
"""
def RSS_val(output_vals, predictions):
RSS_error = sum( (output_vals - predictions)**2.0 )
return RSS_error
"""
Explanation: Making a function to compute the RSS on the validation data
End of explanation
"""
RSS_L1_vals = []
"""
Explanation: Making a list to store tuples of the form (RSS value for a L1 penalty, index of L1 penalty array)
End of explanation
"""
for i in range(len(l1_pen_val)):
key_val = str(i)
pred_vals = models_diff_l1[key_val].predict(validation[all_features])
RSS = RSS_val(validation['price'], pred_vals)
RSS_L1_vals.append( (RSS, i) )
"""
Explanation: In this loop, we use each regression model to predict output values on the validation set, compute the RSS error from the predictions and the observed outputs, and append an (RSS, index) tuple to RSS_L1_vals.
End of explanation
"""
print(l1_pen_val[ min(RSS_L1_vals)[1] ])
print('%.2e' % ( min(RSS_L1_vals)[0] ))
"""
Explanation: QUIZ QUESTIONS
Q1. What was the best value for the l1_penalty?
End of explanation
"""
print ( np.count_nonzero(models_diff_l1[ str(min(RSS_L1_vals)[1]) ].coef_) +
np.count_nonzero(models_diff_l1[ str(min(RSS_L1_vals)[1]) ].intercept_) )
"""
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
"""
max_nonzeros = 7
"""
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achive this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
"""
l1_penalty_values = np.logspace(1, 4, num=20)
"""
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
"""
list_l1_pen_n_less_nmax = []
list_l1_pen_n_larger_nmax = []
"""
Explanation: Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(1, 4, num=20):
Fit a regression model with a given l1_penalty on TRAIN data, passing alpha=l1_penalty (and normalize=True) to the Lasso constructor.
Extract the weights of the model, count the nonzeros, and record the l1_penalty in a list according to whether that count is above or below max_nonzeros.
Creating lists to store L1 penalties for models with fewer features than max_nonzeros and for models with more features than max_nonzeros
End of explanation
"""
for i in range(len(l1_penalty_values)):
mod_diff_l1_n7 = linear_model.Lasso(alpha=l1_penalty_values[i], normalize=True) # set parameters
mod_diff_l1_n7.fit(training[all_features], training['price']) # learn weights
non_0_weights = ( np.count_nonzero(mod_diff_l1_n7.coef_) +
np.count_nonzero(mod_diff_l1_n7.intercept_) )
if non_0_weights<max_nonzeros:
list_l1_pen_n_less_nmax.append(l1_penalty_values[i])
if non_0_weights>max_nonzeros:
list_l1_pen_n_larger_nmax.append(l1_penalty_values[i])
"""
Explanation: Creating a regression model for each L1 penalty, then counting its non-zero weights. If the number of non-zero weights is larger or smaller than max_nonzeros, the corresponding L1 penalty is stored in the appropriate list
End of explanation
"""
l1_penalty_min = max(list_l1_pen_n_larger_nmax)
l1_penalty_max = min(list_l1_pen_n_less_nmax)
print('l1_penalty_min: ', round(l1_penalty_min,0))
print('l1_penalty_max: ', round(l1_penalty_max,0))
"""
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
End of explanation
"""
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
"""
Explanation: Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
"""
RSS_L1_vals_ref = []
"""
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data, passing alpha=l1_penalty (and normalize=True) to the Lasso constructor.
Measure the RSS of the learned model on the VALIDATION set
Find the model that the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzero.
Creating a list to store RSS values when the number of non-zero weights equals max_nonzeros
End of explanation
"""
for i in range(len(l1_penalty_values)):
mod_diff_l1_ref = linear_model.Lasso(alpha=l1_penalty_values[i], normalize=True) # set parameters
mod_diff_l1_ref.fit(training[all_features], training['price']) # learn weights
non_0_weights = ( np.count_nonzero(mod_diff_l1_ref.coef_) +
np.count_nonzero(mod_diff_l1_ref.intercept_) )
if non_0_weights==max_nonzeros:
pred_vals = mod_diff_l1_ref.predict(validation[all_features])
RSS = RSS_val(validation['price'], pred_vals)
RSS_L1_vals_ref.append( (RSS, i) )
"""
Explanation: Creating a regression model for each L1 penalty. If the number of non-zero weights equals max_nonzeros, storing the RSS on the validation set and the index of this L1 penalty in the l1_penalty_values list
End of explanation
"""
print(round( l1_penalty_values[ min(RSS_L1_vals_ref)[1] ] , 0 ))
"""
Explanation: QUIZ QUESTIONS
Q1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
End of explanation
"""
best_L1_index = min(RSS_L1_vals_ref)[1]
mod_diff_l1_ref = linear_model.Lasso(alpha=l1_penalty_values[ best_L1_index ], normalize=True) # set parameters
mod_diff_l1_ref.fit(training[all_features], training['price']) # learn weights
"""
Explanation: Q2. What features in this model have non-zero coefficients?
Re-learning the model with this L1 penalty
End of explanation
"""
if mod_diff_l1_ref.intercept_ != 0:
print('intercept: %.2e' % (mod_diff_l1_ref.intercept_))
for feat, weight in zip(all_features, mod_diff_l1_ref.coef_):
if weight != 0.0:
print(feat + ':', weight)
"""
Explanation: Printing the features with non-zero weights and the values of the weights.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nims-kma/cmip6/models/sandbox-1/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-1
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.21/_downloads/2b710ad55cbf958235c0d74bf0b0d4ae/plot_evoked_ers_source_power.ipynb | bsd-3-clause | # Authors: Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import mne
from mne.cov import compute_covariance
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import (make_dics, apply_dics_csd, make_lcmv,
apply_lcmv_cov)
from mne.minimum_norm import (make_inverse_operator, apply_inverse_cov)
print(__doc__)
"""
Explanation: Compute evoked ERS source power using DICS, LCMV beamformer, and dSPM
Here we examine three ways of localizing event-related synchronization (ERS) of
beta-band activity in the somato dataset: DICS,
LCMV beamformer, and dSPM applied to active and
End of explanation
"""
data_path = somato.data_path()
subject = '01'
task = 'somato'
raw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',
'sub-{}_task-{}_meg.fif'.format(subject, task))
raw = mne.io.read_raw_fif(raw_fname)
# We are interested in the beta band (12-30 Hz)
raw.load_data().filter(12, 30)
# The DICS beamformer currently only supports a single sensor type.
# We'll use the gradiometers in this example.
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
# Read epochs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-1.5, tmax=2, picks=picks,
preload=True)
# Read forward operator and point to freesurfer subject directory
fname_fwd = op.join(data_path, 'derivatives', 'sub-{}'.format(subject),
'sub-{}_task-{}-fwd.fif'.format(subject, task))
subjects_dir = op.join(data_path, 'derivatives', 'freesurfer', 'subjects')
fwd = mne.read_forward_solution(fname_fwd)
"""
Explanation: Reading the raw data and creating epochs:
End of explanation
"""
active_win = (0.5, 1.5)
baseline_win = (-1, 0)
baseline_cov = compute_covariance(epochs, tmin=baseline_win[0],
tmax=baseline_win[1], method='shrunk',
rank=None)
active_cov = compute_covariance(epochs, tmin=active_win[0], tmax=active_win[1],
method='shrunk', rank=None)
# Weighted averaging is already in the addition of covariance objects.
common_cov = baseline_cov + active_cov
"""
Explanation: Compute covariances
ERS activity starts at 0.5 seconds after stimulus onset.
End of explanation
"""
def _gen_dics(active_win, baseline_win, epochs):
freqs = np.logspace(np.log10(12), np.log10(30), 9)
csd = csd_morlet(epochs, freqs, tmin=-1, tmax=1.5, decim=20)
csd_baseline = csd_morlet(epochs, freqs, tmin=baseline_win[0],
tmax=baseline_win[1], decim=20)
csd_ers = csd_morlet(epochs, freqs, tmin=active_win[0], tmax=active_win[1],
decim=20)
filters = make_dics(epochs.info, fwd, csd.mean(), pick_ori='max-power')
stc_base, freqs = apply_dics_csd(csd_baseline.mean(), filters)
stc_act, freqs = apply_dics_csd(csd_ers.mean(), filters)
stc_act /= stc_base
return stc_act
# generate lcmv source estimate
def _gen_lcmv(active_cov, baseline_cov, common_cov):
filters = make_lcmv(epochs.info, fwd, common_cov, reg=0.05,
noise_cov=None, pick_ori='max-power')
stc_base = apply_lcmv_cov(baseline_cov, filters)
stc_act = apply_lcmv_cov(active_cov, filters)
stc_act /= stc_base
return stc_act
# generate mne/dSPM source estimate
def _gen_mne(active_cov, baseline_cov, common_cov, fwd, info, method='dSPM'):
inverse_operator = make_inverse_operator(info, fwd, common_cov)
stc_act = apply_inverse_cov(active_cov, info, inverse_operator,
method=method, verbose=True)
stc_base = apply_inverse_cov(baseline_cov, info, inverse_operator,
method=method, verbose=True)
stc_act /= stc_base
return stc_act
# Compute source estimates
stc_dics = _gen_dics(active_win, baseline_win, epochs)
stc_lcmv = _gen_lcmv(active_cov, baseline_cov, common_cov)
stc_dspm = _gen_mne(active_cov, baseline_cov, common_cov, fwd, epochs.info)
"""
Explanation: Compute some source estimates
Here we will use DICS, LCMV beamformer, and dSPM.
See ex-inverse-source-power for more information about DICS.
End of explanation
"""
for method, stc in zip(['DICS', 'LCMV', 'dSPM'],
[stc_dics, stc_lcmv, stc_dspm]):
title = '%s source power in the 12-30 Hz frequency band' % method
brain = stc.plot(hemi='rh', subjects_dir=subjects_dir,
subject=subject, time_label=title)
"""
Explanation: Plot source estimates
End of explanation
"""
|
darshanbagul/ComputerVision | RegionMerging-BoundaryMelting/RegionMergingByBoundaryMelting.ipynb | gpl-3.0 | % matplotlib inline
import numpy as np
import cv2
import matplotlib.pyplot as plt
from scipy.signal import convolve2d
import scipy.ndimage as ndi
import math
"""
Explanation: Region Merging Segmentation
Problem Statement.
Region merging is an effective scheme for region-growing-based segmentation. Region growing may begin with each pixel of the image initially representing a single region. In this problem we shall use the Boundary Melting approach for merging regions, and effectively segment the given image of 'Mixed Vegetables'.
The fundamental part of this task is to create a Supergrid edge data structure. This matrix stores the crack edges between adjacent regions.
We shall state the strategy for tackling the problem below, and then follow the procedure step by step in the following sections.
Next we calculate the average pixel intensity for each region that we obtained from step 1.
Now we initialise the supergrid data structure, which is twice the size of original image, and store the average intensities per pixel calculated above, for each pixel representation in the supergrid.
Next, we calculate the difference between the pixel intensities of adjacent regions, and store it as a crack edge in the pixels between the regions.
We remove the weak crack edges if the difference is below a threshold, and replace this edge by average value of neighbouring pixels. If we have a strong edge, we replace this pixel by 0.
We then calculate the perimeter of each region, and also the lengths of the common boundaries shared by every adjacent regions.
We eliminate the common boundary between two regions if the following criterion is satisfied:
W/min(l1,l2) ≥ Threshold, where W is the number of weak edges in the common boundary and l1 and l2 are the perimeters of the adjacent regions.
We recursively remove weak edges in this manner; when no more boundaries satisfy the criterion, the process terminates.
Now we have the segmented regions and their corresponding edges, which we shall superimpose over the original image to get the region segmentation.
Let us begin, by importing all the necessary python packages.
End of explanation
"""
def plot_input(img, title):
plt.imshow(img, cmap = 'gray')
plt.title(title), plt.xticks([]), plt.yticks([])
plt.show()
"""
Explanation: Let us define a helper function to plot our images/graphs. We shall be using this function quite often, for analysing our output, hence we define this function before we begin our solution.
End of explanation
"""
original_img = cv2.imread('./MixedVegetables.jpg', 0)
plot_input(original_img, 'Original Image')
"""
Explanation: Next, let us view the given test image in grayscale. For that we shall read the image using OpenCV's image reader, and plot it using our helper function.
End of explanation
"""
gaussian_smooth_img = ndi.filters.gaussian_filter(original_img, 1.2)
plot_input(gaussian_smooth_img, 'Gaussian Smooth')
"""
Explanation: It is often beneficial to pre-process our given raw image.
In our preprocessing step, we shall smooth the given image using a Gaussian filter to remove noise. Since our crack-edge detection and region merging are highly dependent on the difference and uniformity of pixel intensities respectively, we would like to suppress extremely high frequencies in the image.
End of explanation
"""
[M,N] = original_img.shape
adj_values = np.array([ {'left': 0, 'right':0,'top':0,'down':0} for x in range(M * N + 1)])
region_label = np.array([[ 0 for x in range(N)] for y in range(M)])
avg_intensity_reg = []
for i in range(M * N +1):
avg_intensity_reg.append({'min':0,'max':0,'avg':0})
"""
Explanation: Let us now begin with our solution, for implementing the region merging via boundary melting algorithm.
The most fundamental part of the solution is the appropriate creation of the Supergrid data structure, which holds the crack edges.
Below, we have stored the original image dimensions in the variables M and N.
Next, we have defined 'adj_values', which stores, for each region, its spatial extent (leftmost/rightmost row and topmost/bottommost column indices); this is later used to limit the search while merging.
'region_label' is a matrix that stores the label of the region to which each pixel belongs. We have initialised it to 0.
'avg_intensity_reg' is a list that stores the pixel intensity values (min, max, avg) for every region. Initially, each pixel is considered to be its own region, hence we have one entry per pixel, but as the merging begins, each entry denotes the values of the region the pixel belongs to.
End of explanation
"""
crack_edge_data = np.zeros((2*M-1,2*N-1))
label_data = np.zeros((2*M-1,2*N-1))
"""
Explanation: Next, we shall initialise the Supergrid data structure, which is represented by a matrix of size $(2M-1) \times (2N-1)$, roughly twice that of the original image.
We call this matrix crack_edge_data. We begin by storing the crack edges between two regions (pixel intensity differences), and recursively eliminate weak edges in the supergrid between two regions as we merge them.
End of explanation
"""
def split_label_recursively(i,j,label,pixel):
T1 = 5 # Tested for all values from 1 to 10, the higher the values the more lenient the merging, lower values create more regions and have high noise.
if (i < 0 or i >= M or j < 0 or j >= N) or region_label[i,j] != 0 or original_img[i,j] < pixel - T1 or original_img[i,j] > pixel + T1:
return
pixel = original_img[i,j]
if avg_intensity_reg[label]['min'] > pixel:
avg_intensity_reg[label]['min'] = pixel
elif avg_intensity_reg[label]['max'] < pixel:
avg_intensity_reg[label]['max'] = pixel
region_label[i,j] = label
if adj_values[label]['left'] > i:
adj_values[label]['left'] = i
elif adj_values[label]['right'] < i:
adj_values[label]['right'] = i
if adj_values[label]['top'] > j:
adj_values[label]['top'] = j
elif adj_values[label]['down'] < j:
adj_values[label]['down'] = j
split_label_recursively(i-1,j+1,label,pixel)
split_label_recursively(i,j+1,label,pixel)
split_label_recursively(i+1,j+1,label,pixel)
split_label_recursively(i+1,j,label,pixel)
split_label_recursively(i+1,j-1,label,pixel)
label = 0
for i in range(M):
    for j in range(N):
        if region_label[i,j] == 0:
            label = label + 1
            pixel = original_img[i,j]
            # initialise the region's intensity range with the seed pixel;
            # otherwise 'min' stays at 0 and skews the region's average
            avg_intensity_reg[label]['min'] = pixel
            avg_intensity_reg[label]['max'] = pixel
            adj_values[label]['left'] = i
            adj_values[label]['right'] = i
            adj_values[label]['top'] = j
            adj_values[label]['down'] = j
            split_label_recursively(i,j,label,pixel)
fin_label = label
"""
Explanation: Next, we shall recursively iterate over each pixel and check if the label is not already set for the given pixel. If not set, we give the pixel a label and proceed to its neighbour.
Note: If a label is not set, i.e. it is 0, and a neighbouring pixel satisfies the homogeneity criterion, we propagate this pixel's label to that neighbour.
For choosing the threshold, I iteratively generated the segmentation result for T1 in [1,2,3,4,5,6,7,8,9,10].
From these experiments, we observed that high values of T1 tend to be more lenient in merging, averaging many regions into one.
<img src="experiments/3.jpg" width="350"><center> Output for T1 = 3</center>
<img src="experiments/6.jpg" width="350"><center> Output for T1 = 6</center>
<img src="experiments/9.jpg" width="350"><center> Output for T1 = 9</center>
Lower values are more stringent and create many regions, which also contain noise. Hence, we had to choose the best available threshold for our use case.
Looking at the images above, T1=9 seems a good threshold for initial filtering, but its crack-edge representation shows why we have not chosen such a large threshold.
<center><img src="experiments/crack_edge_t1_9.png" width="500"> Super grid representation for T1=9</center>
<center><img src="experiments/crack_edge_t1_3.png" width="400"> Super grid representation for T1=3</center>
<center><img src="experiments/crack_edge_t1_5.png" width="500"> Super grid representation for T1=5</center>
After examining the crack edge data and the region segmentation results for different thresholds, we decided to choose T1 = 5.
After executing the code snippet below, each pixel has a label and belongs to a region determined by the region's intensity homogeneity.
End of explanation
"""
for i in range(len(avg_intensity_reg)):
    # note: this is the midrange (not the mean) of the region's intensities,
    # used as the region's representative 'average' value
    avg_intensity_reg[i]['avg'] = (avg_intensity_reg[i]['max'] + avg_intensity_reg[i]['min'])/2
"""
Explanation: Thus we have made a first pass through the image pixels and split the image into simple regions based on the homogeneous-intensity criterion.
Next, let us calculate the average pixel intensity for each region, which shall be stored for every pixel in the region.
End of explanation
"""
for i in range(M):
for j in range(N):
crack_edge_data[2*i,2*j] = avg_intensity_reg[region_label[i,j]]['avg']
label_data[2*i,2*j] = region_label[i,j]
"""
Explanation: Next, we shall initialise the Supergrid data to hold the average intensities uniformly as per the region labels defined above.
We shall also keep track of the region labels in the Supergrid Data structure using the matrix label_data.
End of explanation
"""
def check_neighbours_crack_edge(i,j):
pix_val = 0
pixel_label = 0
count = 0
if i-1 >= 0:
if crack_edge_data[i-1,j] != 0:
pix_val = crack_edge_data[i-1,j]
pixel_label = label_data[i-1,j]
else:
count = count + 1
if j-1 >= 0:
if crack_edge_data[i,j-1] != 0:
pix_val = crack_edge_data[i,j-1]
pixel_label = label_data[i,j-1]
else:
count = count + 1
if i+1 < len(crack_edge_data):
if crack_edge_data[i+1,j] != 0:
pix_val = crack_edge_data[i+1,j]
pixel_label = label_data[i+1,j]
else:
count = count + 1
if j+1 < len(crack_edge_data[0]):
if crack_edge_data[i,j+1] != 0:
pix_val = crack_edge_data[i,j+1]
pixel_label = label_data[i,j+1]
else:
count = count + 1
if count >2:
return 0,0
else:
return pix_val,pixel_label
for i in range(M):
for j in range(N):
if 2*j < len(crack_edge_data[0]) and 2*i+2 < len(crack_edge_data):
diff = crack_edge_data[2*i,2*j] - crack_edge_data[2*i+2,2*j]
if diff != 0:
crack_edge_data[2*i+1,2*j] = 0
label_data[2*i+1,2*j] = 0
else:
crack_edge_data[2*i+1,2*j] = crack_edge_data[2*i,2*j]
label_data[2*i+1,2*j] = label_data[2*i,2*j]
if 2*j+2 < len(crack_edge_data[0]) and 2*i < len(crack_edge_data):
diff = crack_edge_data[2*i,2*j] - crack_edge_data[2*i,2*j+2]
if diff != 0:
crack_edge_data[2*i,2*j+1] = 0
label_data[2*i,2*j+1] = 0
else:
crack_edge_data[2*i,2*j+1] = crack_edge_data[2*i,2*j]
label_data[2*i,2*j+1] = label_data[2*i,2*j]
if 2*i+1 < len(crack_edge_data) and 2*j+1 < len(crack_edge_data[0]):
crack_edge_data[2*i+1,2*j+1],label_data[2*i+1,2*j+1] = check_neighbours_crack_edge(2*i+1,2*j+1)
"""
Explanation: As discussed for the Supergrid data structure, we store the differences between the pixel intensities of neighbouring pixels in the pixel between them. These are called CRACK EDGES.
The strength of these crack edges helps us determine whether we should merge two regions.
In the following section, we shall compute the crack edges and store them as stated above.
First we shall iterate all the columns and store the corresponding crack edges, and then move vertically through the pixels for storing the crack edges. Next, we check how many neighbours have a crack edge and if current pixel is a crossing point of edges, by checking the adjacent pixels.
End of explanation
"""
def adjacent_reg(i,j):
lab1 = 0
lab2 = 0
test_labels = [0,0,0,0]
if i-1 >= 0:
if crack_edge_data[i-1,j] != 0 :
test_labels[0] = label_data[i-1,j]
if j-1 >= 0:
if crack_edge_data[i,j-1] != 0 :
test_labels[1] = label_data[i,j-1]
if i+1 < len(crack_edge_data):
if crack_edge_data[i+1,j] != 0 :
test_labels[2] = label_data[i+1,j]
if j+1 < len(crack_edge_data[0]):
if crack_edge_data[i,j+1] != 0 :
test_labels[3] = label_data[i,j+1]
lab1 = test_labels[0]
if lab1 == 0:
lab1 = test_labels[1]
else:
lab2 = test_labels[1]
if lab1 == 0:
lab1 = test_labels[2]
else:
lab2 = test_labels[2]
if lab1 == 0:
lab1 = test_labels[3]
else:
lab2 = test_labels[3]
return lab1,lab2
def update_perimeters():
common_perimeter = np.array([[ 0 for i in range(fin_label+1)] for j in range(fin_label+1)])
region_perimeter = np.array([ 0 for i in range(fin_label+1)])
for i in range(len(crack_edge_data)):
for j in range(len(crack_edge_data[0])):
R1 = 0
R2 = 0
if crack_edge_data[i,j] == 0:
[R1,R2] = adjacent_reg(i,j)
if R1 != 0:
region_perimeter[int(R1)] = region_perimeter[int(R1)] + 1
if R2 != 0:
region_perimeter[int(R2)] = region_perimeter[int(R2)] + 1
if R1 != 0:
common_perimeter[int(R1),int(R2)] = common_perimeter[int(R1),int(R2)] + 1
common_perimeter[int(R2),int(R1)] = common_perimeter[int(R2),int(R1)] + 1
return common_perimeter, region_perimeter
"""
Explanation: In the section below, we have defined two helper functions.
1. 'adjacent_reg' - The first one returns the labels assigned to the two adjacent regions which enclose a crack-edge pixel.
2. 'update_perimeters' - The second one computes the updated perimeters of the obtained regions. If two regions share a common boundary, this function also updates the length of shared boundary between two regions, along with the perimeters of individual regions.
These functions are useful in the merge process, after we have removed the weak edges, which we shall do in the following section. The perimeters calculated by these functions are used to check the condition while merging two regions.
End of explanation
"""
def merge_reg(R1, R2):
reg = 0
if avg_intensity_reg[int(R1)]['avg'] < avg_intensity_reg[int(R2)]['avg']:
reg = avg_intensity_reg[int(R1)]['avg']
replaceByReg = avg_intensity_reg[int(R2)]['avg']
R = R2
else:
reg = avg_intensity_reg[int(R2)]['avg']
replaceByReg = avg_intensity_reg[int(R1)]['avg']
R = R1
for i in range(2* adj_values[int(reg)]['left'],2*adj_values[int(reg)]['right']):
for j in range(2*adj_values[int(reg)]['top'],2*adj_values[int(reg)]['down']):
if crack_edge_data[i,j] == reg:
crack_edge_data[i,j] = replaceByReg
label_data[i,j] = R
if crack_edge_data[i,j] == 0:
[R3,R4] = adjacent_reg(i,j)
if (R3 == R1 and R4 == R2) or (R3 == R2 and R4 == R1):
crack_edge_data[i,j] = replaceByReg
label_data[i,j] = R
"""
Explanation: Now that we have the crack edge data structure well defined, and our helper functions to identify regions, calculate their perimeters and length of common boundary are in place, let us define the function to merge two given regions.
The merge function is called after we have decided to eliminate an edge between two regions. It replaces the region with the lower average pixel intensity with the one having the higher average of the two.
The function implementation is shown below:
End of explanation
"""
def elim_weak_edges(common_perimeter, region_perimeter):
edge_rem_threshold = 0.8
for i in range(len(crack_edge_data)):
for j in range(len(crack_edge_data[0])):
R1 = 0
R2 = 0
if crack_edge_data[i,j] == 0:
[R1,R2] = adjacent_reg(i,j)
if R1 != 0 and R2 != 0:
W = common_perimeter[int(R1),int(R2)]
L1 = region_perimeter[int(R1)]
L2 = region_perimeter[int(R2)]
if L1 <= L2:
min_perimeter = L1
else:
min_perimeter = L2
score = W/min_perimeter
if score >= edge_rem_threshold:
merge_reg(R1,R2)
"""
Explanation: We have defined the function for merging two regions, but when do we decide if we have to merge two adjacent regions?
Common boundaries of adjacent regions R1 and R2 are removed if:
W/min(l1,l2) ≥ threshold
where, W is the the number of weak edges in common boundary and l1, l2 are the perimeters of R1 and R2 respectively.
We use this criterion for removing weak edges between two adjacent regions; the threshold needs to be fixed by tuning.
In the function below, we have implemented a function to eliminate weak edges from adjacent regions R1 and R2, if the above criterion is satisfied.
End of explanation
"""
def superimpose_edges():
for i in range(len(crack_edge_data)):
for j in range(len(crack_edge_data[0])):
if crack_edge_data[i,j] == 0:
original_img[int((i+1)/2),int((j+1)/2)] = 255
common_perimeter, region_perimeter = update_perimeters()
elim_weak_edges(common_perimeter, region_perimeter)
superimpose_edges()
"""
Explanation: The threshold above can be set by tuning for particular value which helps us obtain ideal segmentation. I experimented over various values of threshold between 0.1 to 0.9, and set this threshold by observation.
Next we overlay the obtained region edges over the original image, to view the various segmented regions in the original image provided.
End of explanation
"""
plot_input(crack_edge_data, 'Super Grid- Crack Edge Data')
"""
Explanation: Results
In the following section, we shall visualise our results. We start by plotting our supergrid data structure, which holds the crack edges.
End of explanation
"""
plot_input(original_img, 'Region Segmentation - Boundary Melting')
plt.imshow(original_img)
plt.show()
"""
Explanation: Below, we can see the segmented regions in the image. Pixels within a region share the same average intensity value.
The colored plot of the segmentation provides a much more understandable visualisation, as shown below.
End of explanation
"""
|
geektoni/shogun | doc/ipython-notebooks/logdet/logdet.ipynb | bsd-3-clause | %matplotlib inline
from scipy.sparse import eye
from scipy.io import mmread
import numpy as np
from matplotlib import pyplot as plt
import os
import shogun as sg
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
matFile=os.path.join(SHOGUN_DATA_DIR, 'logdet/apache2.mtx.gz')
M = mmread(matFile)
rows = M.shape[0]
cols = M.shape[1]
A = M + eye(rows, cols) * 10000.0
plt.title("A")
plt.spy(A, precision = 1e-2, marker = '.', markersize = 0.01)
plt.show()
"""
Explanation: Implement estimators of large-scale sparse Gaussian densities
by Soumyajit De (email: heavensdevil6909@gmail.com, soumyajitde@cse.iitb.ac.in. Github: <a href="https://github.com/lambday">lambday</a>)<br/> Many many thanks to my mentor Heiko Strathmann, Sergey Lisitsyn, Sören Sonnenburg, Viktor Gal
This notebook illustrates large-scale sparse Gaussian density likelihood estimation. It first introduces the reader to the mathematical background and then shows how one can do the estimation with Shogun on a number of real-world data sets.
<h2>Theoretical introduction</h2>
<p><i>Multivariate Gaussian distributions</i>, i.e. some random vector $\mathbf{x}\in\mathbb{R}^n$ having probability density function
$$p(\mathbf{x}|\boldsymbol\mu, \boldsymbol\Sigma)=(2\pi)^{-n/2}\text{det}(\boldsymbol\Sigma)^{-1/2} \exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{T}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)\right)$$
$\boldsymbol\mu$ being the mean vector and $\boldsymbol\Sigma$ being the covariance matrix, arise in numerous occassions involving large datasets. Computing <i>log-likelihood</i> in these requires computation of the log-determinant of the covariance matrix
$$\mathcal{L}(\mathbf{x}|\boldsymbol\mu,\boldsymbol\Sigma)=-\frac{n}{2}\log(2\pi)-\frac{1}{2}\log(\text{det}(\boldsymbol\Sigma))-\frac{1}{2}(\mathbf{x}-\boldsymbol\mu)^{T}\boldsymbol\Sigma^{-1}(\mathbf{x}-\boldsymbol\mu)$$
The covariance matrix and its inverse are symmetric positive definite (spd) and are often sparse, e.g. due to conditional independence properties of Gaussian Markov Random Fields (GMRF). Therefore they can be stored efficiently even for large dimension $n$.</p>
<p>The usual technique for computing the log-determinant term in the likelihood expression relies on <i><a href="http://en.wikipedia.org/wiki/Cholesky_factorization">Cholesky factorization</a></i> of the matrix, i.e. $\boldsymbol\Sigma=\mathbf{LL}^{T}$, ($\mathbf{L}$ is the lower triangular Cholesky factor) and then using the diagonal entries of the factor to compute $\log(\text{det}(\boldsymbol\Sigma))=2\sum_{i=1}^{n}\log(\mathbf{L}_{ii})$. However, for sparse matrices, as covariance matrices usually are, the Cholesky factors often suffer from <i>fill-in</i> phenomena - they turn out to be not so sparse themselves. Therefore, for large dimensions this technique becomes infeasible because of a massive memory requirement for storing all these irrelevant non-diagonal co-efficients of the factor. While ordering techniques have been developed to permute the rows and columns beforehand in order to reduce fill-in, e.g. <i><a href="http://en.wikipedia.org/wiki/Minimum_degree_algorithm">approximate minimum degree</a></i> (AMD) reordering, these techniques depend largely on the sparsity pattern and therefore not guaranteed to give better result.</p>
<p>Recent research shows that using a number of techniques from complex analysis, numerical linear algebra and greedy graph coloring, we can, however, approximate the log-determinant up to an arbitrary precision [<a href="http://link.springer.com/article/10.1007%2Fs11222-012-9368-y">Aune et. al., 2012</a>]. The main trick lies within the observation that we can write $\log(\text{det}(\boldsymbol\Sigma))$ as $\text{trace}(\log(\boldsymbol\Sigma))$, where $\log(\boldsymbol\Sigma)$ is the matrix-logarithm. Computing the log-determinant then requires extracting the trace of the matrix-logarithm as
$$\text{trace}(\log(\boldsymbol\Sigma))=\sum_{j=1}^{n}\mathbf{e}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{e}_{j}$$
where each $\mathbf{e}_{j}$ is a unit basis vector having a 1 in its $j^{\text{th}}$ position while rest are zeros and we assume that we can compute $\log(\boldsymbol\Sigma)\mathbf{e}_{j}$ (explained later). For large dimension $n$, this approach is still costly, so one needs to rely on sampling the trace. For example, using stochastic vectors we can obtain a <i><a href="http://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo estimator</a></i> for the trace -
$$\text{trace}(\log(\boldsymbol\Sigma))=\mathbb{E}_{\mathbf{v}}(\mathbf{v}^{T}\log(\boldsymbol\Sigma)\mathbf{v})\approx \sum_{j=1}^{k}\mathbf{s}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{s}_{j}$$
where the source vectors ($\mathbf{s}_{j}$) have zero mean and unit variance (e.g. $\mathbf{s}_{j}\sim\mathcal{N}(\mathbf{0}, \mathbf{I}), \forall j\in[1\cdots k]$). But since this is a Monte Carlo method, we need many many samples to get sufficiently accurate approximation. However, by a method suggested in Aune et. al., we can reduce the number of samples required drastically by using <i>probing-vectors</i> that are obtained from <a href="http://en.wikipedia.org/wiki/Graph_coloring">coloring of the adjacency graph</a> represented by the power of the sparse-matrix, $\boldsymbol\Sigma^{p}$, i.e. we can obtain -
$$\mathbb{E}_{\mathbf{v}}(\mathbf{v}^{T}\log(\boldsymbol\Sigma)\mathbf{v})\approx \sum_{j=1}^{m}\mathbf{w}^{T}_{j}\log(\boldsymbol\Sigma)\mathbf{w}_{j}$$
with $m\ll n$, where $m$ is the number of colors used in the graph coloring. For a particular color $j$, the probing vector $\mathbb{w}_{j}$ is obtained by filling with $+1$ or $-1$ uniformly randomly for entries corresponding to nodes of the graph colored with $j$, keeping the rest of the entries as zeros. Since the matrix is sparse, the number of colors used is usually very small compared to the dimension $n$, promising the advantage of this approach.</p>
<p>There are two main issues in this technique. First, computing $\boldsymbol\Sigma^{p}$ is computationally costly, but experiments show that directly applying a <i>d-distance</i> coloring algorithm on the sparse matrix itself also results in a pretty good approximation. Second, computing the exact matrix-logarithm is often infeasible because its is not guaranteed to be sparse. Aune et. al. suggested that we can rely on rational approximation of the matrix-logarithm times vector using an approach described in <a href="http://eprints.ma.man.ac.uk/1136/01/covered/MIMS_ep2007_103.pdf">Hale et. al [2008]</a>, i.e. writing $\log(\boldsymbol\Sigma)\mathbf{w}_{j}$ in our desired expression using <i><a href="http://en.wikipedia.org/wiki/Cauchy's_integral_formula">Cauchy's integral formula</a></i> as -
$$log(\boldsymbol\Sigma)\mathbf{w}_{j}=\frac{1}{2\pi i}\oint_{\Gamma}log(z)(z\mathbf{I}-\boldsymbol\Sigma)^{-1}\mathbf{w}_{j}dz\approx \frac{-8K(\lambda_{m}\lambda_{M})^{\frac{1}{4}}}{k\pi N} \boldsymbol\Sigma\Im\left(-\sum_{l=1}^{N}\alpha_{l}(\boldsymbol\Sigma-\sigma_{l}\mathbf{I})^{-1}\mathbf{w}_{j}\right)$$
$K$, $k \in \mathbb{R}$, $\alpha_{l}$, $\sigma_{l} \in \mathbb{C}$ are coming from <i><a href="http://en.wikipedia.org/wiki/Jacobi_elliptic_functions">Jacobi elliptic functions</a></i>, $\lambda_{m}$ and $\lambda_{M}$ are the minimum/maximum eigenvalues of $\boldsymbol\Sigma$ (they have to be real-positive), respectively, $N$ is the number of contour points in the quadrature rule of the above integral and $\Im(\mathbf{x})$ represents the imaginary part of $\mathbf{x}\in\mathbb{C}^{n}$.</p>
<p>The problem then finally boils down to solving the shifted family of linear systems $(\boldsymbol\Sigma-\sigma_{l}\mathbf{I})\mathbf{x}_{j}=\mathbf{w}_{j}$. Since $\boldsymbol\Sigma$ is sparse, matrix-vector products are not very costly and therefore these systems can be solved with a low memory requirement using <i>Krylov subspace iterative solvers</i> like <i><a href="http://en.wikipedia.org/wiki/Conjugate_gradient_method">Conjugate Gradient</a></i> (CG). Since the shifted matrices have complex entries along their diagonal, the appropriate method to choose is <i>Conjugate Orthogonal Conjugate Gradient</i> (COCG) [<a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=106415&tag=1">H.A. van der Vorst et. al., 1990.</a>]. Alternatively, these systems can be solved at once using the <i>CG-M</i> solver [<a href="http://arxiv.org/abs/hep-lat/9612014">Jegerlehner, 1996.</a>], which solves $(\mathbf{A}+\sigma\mathbf{I})\mathbf{x}=\mathbf{b}$ for all values of $\sigma$ using as many matrix-vector products in the CG-iterations as required to solve one single shifted system. This algorithm shows reliable convergence behavior for systems with reasonable condition number.</p>
<p>One interesting property of this approach is that once the graph coloring information and the shifts/weights are known, all the computational components - solving the linear systems and computing the final vector-vector products - are independent of one another. Computation can therefore be sped up by running them in parallel. To exploit this, a computation framework for Shogun was developed, and the whole log-det computation runs on top of it.</p>
<h2>An example of using this approach in Shogun</h2>
<p>We demonstrate the usage of this technique to estimate the log-determinant of a real-valued spd sparse matrix of dimension $715,176\times 715,176$ with $4,817,870$ non-zero entries, <a href="http://www.cise.ufl.edu/research/sparse/matrices/GHS_psdef/apache2.html">apache2</a>, which is obtained from <a href="http://www.cise.ufl.edu/research/sparse/matrices/">The University of Florida Sparse Matrix Collection</a>. Cholesky factorization with AMD for this sparse matrix gives rise to factors with $353,843,716$ non-zero entries (as reported by the source). We use the CG-M solver to solve the shifted systems. Since the original matrix is badly conditioned, we add a ridge along its diagonal to reduce the condition number so that the CG-M solver converges within a reasonable time. Please note that for a high condition number, the number of iterations has to be set very high.</p>
End of explanation
"""
op = sg.RealSparseMatrixOperator(A.tocsc())
# Lanczos iterative Eigensolver to compute the min/max Eigenvalues which is required to compute the shifts
eigen_solver = sg.LanczosEigenSolver(op)
# we set the iteration limit high to compute the eigenvalues more accurately, default iteration limit is 1000
eigen_solver.set_max_iteration_limit(2000)
# computing the eigenvalues
eigen_solver.compute()
print('Minimum Eigenvalue:', eigen_solver.get_min_eigenvalue())
print('Maximum Eigenvalue:', eigen_solver.get_max_eigenvalue())
"""
Explanation: First, to keep the notion of a Krylov subspace, we view the matrix as a linear operator that applies to a vector, resulting in a new vector. We use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SparseMatrixOperator.html">RealSparseMatrixOperator</a>, which is suitable for this example. All the solvers work with <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearOperator.html">LinearOperator</a> type objects. For computing the eigenvalues, we use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LanczosEigenSolver.html">LanczosEigenSolver</a> class. Although the computation of the eigenvalues is done internally within the log-determinant estimator itself (see below), here we explicitly precompute them.
End of explanation
"""
# We can specify the power of the sparse matrix to be used for coloring; by default a
# 2-distance greedy graph coloring algorithm is applied to the sparse matrix itself.
# The matrix power, if specified, is computed in O(log p)
trace_sampler = sg.ProbingSampler(op)
# apply the graph coloring algorithm and generate the number of colors, i.e. number of trace samples
trace_sampler.precompute()
print('Number of colors used:', trace_sampler.get_num_samples())
"""
Explanation: Next, we use the <a href="http://www.shogun-toolbox.org/doc/en/latest/ProbingSampler_8h_source.html">ProbingSampler</a> class, which uses the external library <a href="http://www.cscapes.org/coloringpage/">ColPack</a>. Again, the number of colors used is precomputed here for demonstration purposes, although it is computed internally inside the log-determinant estimator.
End of explanation
"""
cgm = sg.CGMShiftedFamilySolver()
# setting the iteration limit (set this to higher value for higher condition number)
cgm.set_iteration_limit(100)
# accuracy determines the number of contour points in the rational approximation (i.e. number of shifts in the systems)
accuracy = 1E-15
# we create a operator-log-function using the sparse matrix operator that uses CG-M to solve the shifted systems
op_func = sg.LogRationalApproximationCGM(op, eigen_solver, cgm, accuracy)
op_func.precompute()
print('Number of shifts:', op_func.get_num_shifts())
"""
Explanation: <p>This corresponds to averaging over 13 source vectors rather than one, but with much lower variance than using 13 Gaussian source vectors. A comparison between the convergence behavior of the probing sampler and the Gaussian sampler is presented later.</p>
<p>Then we define the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLogRationalApproximationCGM.html">LogRationalApproximationCGM</a> operator function class, which internally uses the Eigensolver to compute the eigenvalues, uses <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CJacobiEllipticFunctions.html">JacobiEllipticFunctions</a> to compute the complex shifts, weights and the constant multiplier in the rational approximation expression, takes the probing vector generated by the trace sampler, and then uses the CG-M solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMShiftedFamilySolver.html">CGMShiftedFamilySolver</a>) to solve the shifted systems. Precomputing is not strictly necessary here either.</p>
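To see why the coloring helps, here is a tiny sketch of probing-based trace estimation for a tridiagonal matrix: a 2-coloring of the underlying path graph (even vs. odd indices) guarantees that two same-colored nodes are never adjacent, so just two probing vectors recover the trace exactly. For $\text{tr}(\log\mathbf{A})$, whose matrix function is denser than $\mathbf{A}$, probing only reduces variance, which is why coloring a power of the matrix can be useful:

```python
import numpy as np

n = 10
rng = np.random.RandomState(2)
diag = rng.randn(n)
off = rng.randn(n - 1)
a = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)  # tridiagonal matrix

# 2-coloring of the path graph: same-colored nodes are never adjacent
colors = np.arange(n) % 2
probing_vectors = [(colors == c).astype(float) for c in (0, 1)]

# probing trace estimate: cross terms vanish since A_ij = 0 for same-colored i != j
trace_est = sum(v @ a @ v for v in probing_vectors)
print(trace_est, np.trace(a))
```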
End of explanation
"""
# number of log-det samples (use a higher number to get better estimates)
# (each estimate already averages over as many probing vectors as there are colors,
# so usually a small number of estimates is enough)
num_samples = 5
log_det_estimator = sg.LogDetEstimator(trace_sampler, op_func)
estimates = log_det_estimator.sample(num_samples)
estimated_logdet = np.mean(estimates)
print('Estimated log(det(A)):', estimated_logdet)
"""
Explanation: Finally, we use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LogDetEstimator.html">LogDetEstimator</a> class to sample the log-determinant of the matrix.
End of explanation
"""
# the following method requires massive amount of memory, for demonstration purpose
# the following code is commented out and direct value obtained from running it once is used
# from shogun import Statistics
# actual_logdet = Statistics.log_det(A)
actual_logdet = 7120357.73878
print('Actual log(det(A)):', actual_logdet)
plt.hist(estimates)
plt.plot([actual_logdet, actual_logdet], [0,len(estimates)], linewidth=3)
plt.show()
"""
Explanation: To verify the accuracy of the estimate, we compute exact log-determinant of A using Cholesky factorization using <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Statistics.html#a9931a4ea72310b239efdc05503442525">Statistics::log_det</a> method.
End of explanation
"""
import os
from scipy.io import mmread
from scipy.sparse import csc_matrix
from scipy.sparse import identity
m = mmread(os.path.join(SHOGUN_DATA_DIR, 'logdet/west0479.mtx'))
# computing a spd with added ridge
B = csc_matrix(m.transpose() * m + identity(m.shape[0]) * 1000.0)
fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(1,2,1)
ax.set_title('B')
ax.spy(B, precision = 1e-5, marker = '.', markersize = 2.0)
ax = fig.add_subplot(1,2,2)
ax.set_title('lower Cholesky factor')
dense_matrix = B.todense()
L = np.linalg.cholesky(dense_matrix)
ax.spy(csc_matrix(L), precision = 1e-5, marker = '.', markersize = 2.0)
plt.show()
op = sg.RealSparseMatrixOperator(B)
eigen_solver = sg.LanczosEigenSolver(op)
# computing log-det estimates using probing sampler
probing_sampler = sg.ProbingSampler(op)
cgm.set_iteration_limit(500)
op_func = sg.LogRationalApproximationCGM(op, eigen_solver, cgm, 1E-5)
log_det_estimator = sg.LogDetEstimator(probing_sampler, op_func)
num_probing_estimates = 100
probing_estimates = log_det_estimator.sample(num_probing_estimates)
# computing log-det estimates using Gaussian sampler
from shogun import Statistics
num_colors = probing_sampler.get_num_samples()
normal_sampler = sg.NormalSampler(op.get_dimension())
log_det_estimator = sg.LogDetEstimator(normal_sampler, op_func)
num_normal_estimates = num_probing_estimates * num_colors
normal_estimates = log_det_estimator.sample(num_normal_estimates)
# average in groups of n_effective_samples
effective_estimates_normal = np.zeros(num_probing_estimates)
for i in range(num_probing_estimates):
idx = i * num_colors
effective_estimates_normal[i] = np.mean(normal_estimates[idx:(idx + num_colors)])
actual_logdet = Statistics.log_det(B)
print('Actual log(det(B)):', actual_logdet)
print('Estimated log(det(B)) using probing sampler:', np.mean(probing_estimates))
print('Estimated log(det(B)) using Gaussian sampler:', np.mean(effective_estimates_normal))
print('Variance using probing sampler:', np.var(probing_estimates))
print('Variance using Gaussian sampler:', np.var(effective_estimates_normal))
fig = plt.figure(figsize=(15, 4))
ax = fig.add_subplot(1,3,1)
ax.set_title('Probing sampler')
ax.plot(np.cumsum(probing_estimates)/(np.arange(len(probing_estimates))+1))
ax.plot([0,len(probing_estimates)], [actual_logdet, actual_logdet])
ax.legend(["Probing", "True"])
ax = fig.add_subplot(1,3,2)
ax.set_title('Gaussian sampler')
ax.plot(np.cumsum(effective_estimates_normal)/(np.arange(len(effective_estimates_normal))+1))
ax.plot([0,len(probing_estimates)], [actual_logdet, actual_logdet])
ax.legend(["Gaussian", "True"])
ax = fig.add_subplot(1,3,3)
ax.hist(probing_estimates)
ax.hist(effective_estimates_normal)
ax.plot([actual_logdet, actual_logdet], [0,len(probing_estimates)], linewidth=3)
plt.show()
"""
Explanation: <h2>Statistics</h2>
We use a smaller sparse matrix, <a href="http://www.cise.ufl.edu/research/sparse/matrices/HB/west0479.html">'west0479'</a>, in this section to demonstrate the benefits of using probing vectors over standard Gaussian vectors for sampling the trace of the matrix-logarithm. In the following we can easily observe the fill-in phenomenon described earlier. Again, a ridge has been added to reduce the runtime for demonstration purposes.
End of explanation
"""
from scipy.io import loadmat
from scipy.sparse import eye
def get_Q_y_A(kappa):
# read the ozone data and create the matrix Q
ozone = loadmat(os.path.join(SHOGUN_DATA_DIR, 'logdet/ozone_data.mat'))
GiCG = ozone["GiCG"]
G = ozone["G"]
C0 = ozone["C0"]
Q = GiCG + 2 * (kappa ** 2) * G + (kappa ** 4) * C0
# also, added a ridge here
Q = Q + eye(Q.shape[0], Q.shape[1]) * 10000.0
plt.spy(Q, precision = 1e-5, marker = '.', markersize = 1.0)
plt.show()
# read y and A
y = ozone["y_ozone"]
A = ozone["A"]
return Q, y, A
def log_det(A):
op = sg.RealSparseMatrixOperator(A)
eigen_solver = sg.LanczosEigenSolver(op)
probing_sampler = sg.ProbingSampler(op)
cgm = sg.CGMShiftedFamilySolver()
cgm.set_iteration_limit(100)
op_func = sg.LogRationalApproximationCGM(op, eigen_solver, cgm, 1E-5)
log_det_estimator = sg.LogDetEstimator(probing_sampler, op_func)
num_estimates = 1
return np.mean(log_det_estimator.sample(num_estimates))
def log_likelihood(tau, kappa):
Q, y, A = get_Q_y_A(kappa)
n = len(y)
AtA = A.T.dot(A)
M = Q + tau * AtA
# computing log-determinants
logdet1 = log_det(Q)
logdet2 = log_det(M)
first = 0.5 * logdet1 + 0.5 * n * np.log(tau) - 0.5 * logdet2
# computing the rest of the likelihood
second_a = -0.5 * tau * (y.T.dot(y))
second_b = np.array(A.T.dot(y))
from scipy.sparse.linalg import spsolve
second_b = spsolve(M, second_b)
second_b = A.dot(second_b)
second_b = y.T.dot(second_b)
second_b = 0.5 * (tau ** 2) * second_b
log_det_part = first
quadratic_part = second_a + second_b
const_part = -0.5 * n * np.log(2 * np.pi)
log_marginal_lik = const_part + log_det_part + quadratic_part
return log_marginal_lik
L = log_likelihood(1.0, 15.0)
print('Log-likelihood estimate:', L)
"""
Explanation: <h2>A motivational example - likelihood of the Ozone dataset</h2>
<p>In <a href="http://arxiv.org/abs/1306.4032">Lyne et al. (2013)</a>, an interesting scenario is discussed in which the log-likelihood of a model involving a large spatial dataset is considered. The data, collected by a satellite, consist of $N=173,405$ ozone measurements around the globe. The data are modelled in a three-stage hierarchical way -
$$y_{i}|\mathbf{x},\kappa,\tau\sim\mathcal{N}(\mathbf{Ax},\tau^{−1}\mathbf{I})$$
$$\mathbf{x}|\kappa\sim\mathcal{N}(\mathbf{0}, \mathbf{Q}(\kappa))$$
$$\kappa\sim\log_{2}\mathcal{N}(0, 100), \tau\sim\log_{2}\mathcal{N}(0, 100)$$
where the precision matrix, $\mathbf{Q}$, of a Matern SPDE model, defined on a fixed triangulation of the globe, is sparse, and the parameter $\kappa$ controls the range at which correlations in the field are effectively zero (see Girolami et al. for details). The log-likelihood estimate of the posterior under this model is
$$2\mathcal{L}=2\log \pi(\mathbf{y}|\kappa,\tau)=C+\log(\text{det}(\mathbf{Q}(\kappa)))+N\log(\tau)−\log(\text{det}(\mathbf{Q}(\kappa)+\tau \mathbf{A}^{T}\mathbf{A}))− \tau\mathbf{y}^{T}\mathbf{y}+\tau^{2}\mathbf{y}^{T}\mathbf{A}(\mathbf{Q}(\kappa)+\tau\mathbf{A}^{T}\mathbf{A})^{−1}\mathbf{A}^{T}\mathbf{y}$$
In the expression, we have two terms involving log-determinants of large sparse matrices. The rational approximation approach described in the previous section is readily applicable to estimating this likelihood. The following computation shows the usage of Shogun's log-determinant estimator for this purpose (the code has been adapted from an open source library, <a href="https://github.com/karlnapf/ozone-roulette.git">ozone-roulette</a>, written by Heiko Strathmann, one of the authors of the original paper).
<b>Please note that we again added a ridge along the diagonal for faster execution of this example. Since the original matrix is badly conditioned, one needs to set the iteration limits very high for both the Eigen solver and the linear solver in the absence of preconditioning.</b>
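The algebra behind the expression can be checked numerically on a tiny dense stand-in problem (all matrices below are random placeholders, not the ozone data): by Sylvester's determinant identity and the Woodbury formula, the log-determinant form above equals the directly evaluated Gaussian marginal $\mathbf{y}\sim\mathcal{N}(\mathbf{0},\,\mathbf{A}\mathbf{Q}^{-1}\mathbf{A}^{T}+\tau^{-1}\mathbf{I})$:

```python
import numpy as np

rng = np.random.RandomState(3)
n_obs, n_latent, tau = 8, 5, 2.0
A = rng.randn(n_obs, n_latent)   # stand-in observation matrix
m = rng.randn(n_latent, n_latent)
Q = m @ m.T + np.eye(n_latent)   # stand-in spd precision matrix
y = rng.randn(n_obs)

def slogdet(x):
    return np.linalg.slogdet(x)[1]

# log-determinant form of the likelihood from the text
M = Q + tau * A.T @ A
log_det_part = 0.5 * (slogdet(Q) + n_obs * np.log(tau) - slogdet(M))
quad_part = (-0.5 * tau * y @ y
             + 0.5 * tau ** 2 * y @ A @ np.linalg.solve(M, A.T @ y))
loglik = -0.5 * n_obs * np.log(2 * np.pi) + log_det_part + quad_part

# direct evaluation of the same Gaussian marginal likelihood
S = A @ np.linalg.solve(Q, A.T) + np.eye(n_obs) / tau
direct = (-0.5 * n_obs * np.log(2 * np.pi)
          - 0.5 * slogdet(S) - 0.5 * y @ np.linalg.solve(S, y))
print(loglik, direct)
```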
End of explanation
"""
dim = 5
np.random.seed(10)
# create a random valued sparse matrix linear operator
A = csc_matrix(np.random.randn(dim, dim))
op = sg.RealSparseMatrixOperator(A)
# creating a random vector
np.random.seed(1)
b = np.array(np.random.randn(dim))
v = op.apply(b)
print('A.apply(b)=',v)
# create a dense matrix linear operator
B = np.array(np.random.randn(dim, dim)).astype(complex)
op = sg.ComplexDenseMatrixOperator(B)
print('Dimension:', op.get_dimension())
"""
Explanation: <h2>Useful components</h2>
<p>As part of the implementation of the log-determinant estimator, a number of classes have been developed which may come in useful on several other occasions as well.</p>
<h3>1. <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearOperator.html">Linear Operators</a></h3>
All the linear solvers and Eigen solvers work with linear operators. Both real-valued and complex-valued dense/sparse matrix linear operators are supported.
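The same abstraction exists in SciPy, which may help convey what these classes provide: a Krylov solver only ever needs the action of the matrix on a vector, never the matrix entries themselves. A minimal sketch:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator

rng = np.random.RandomState(5)
a = csc_matrix(rng.randn(4, 4))

# expose the matrix only through its matrix-vector product
op = LinearOperator(a.shape, matvec=lambda v: a @ v)
b = rng.randn(4)
print(op.matvec(b))
```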
End of explanation
"""
from scipy.sparse import csc_matrix
from scipy.sparse import identity
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = sg.RealSparseMatrixOperator(a)
# creating a random vector
y = np.array(np.random.randn(dim))
# solve the system Qx=y
# the argument is set as True to gather convergence statistics (default is False)
cg = sg.ConjugateGradientSolver(True)
cg.set_iteration_limit(20)
x = cg.solve(Q,y)
print('x:',x)
# verifying the result
print('y:', y)
print('Qx:', Q.apply(x))
residuals = cg.get_residuals()
plt.plot(residuals)
plt.show()
"""
Explanation: <h3>2. <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LinearSolver.html">Linear Solvers</a></h3>
<p>Conjugate Gradient based iterative solvers, which construct the Krylov subspace in their iterations by computing matrix-vector products, are most useful for solving sparse linear systems. Here is an overview of the CG-based solvers that are currently available in Shogun.</p>
<h4> <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CConjugateGradientSolver.html">Conjugate Gradient Solver</a></h4>
This solver solves the system $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is a real-valued spd linear operator (e.g. a dense/sparse matrix operator) and $\mathbf{y}$ is a real vector.
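For reference, the CG iteration itself is short; this minimal NumPy version (a sketch, not Shogun's implementation) makes explicit that only matrix-vector products with $\mathbf{Q}$ are needed:

```python
import numpy as np

def conjugate_gradient(q, y, tol=1e-10, max_iter=200):
    """Minimal CG for a real spd matrix q; only q @ p products are used."""
    x = np.zeros_like(y)
    r = y - q @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        qp = q @ p
        alpha = rs_old / (p @ qp)
        x = x + alpha * p
        r = r - alpha * qp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.RandomState(4)
dim = 5
m = rng.randn(dim, dim)
q = m @ m.T + np.eye(dim)  # spd matrix
y = rng.randn(dim)
x = conjugate_gradient(q, y)
print(x)
```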
End of explanation
"""
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
a = a.astype(complex)
# adding a complex entry along the diagonal
for i in range(0, dim):
a[i,i] += complex(np.random.randn(), np.random.randn())
Q = sg.ComplexSparseMatrixOperator(a)
z = np.array(np.random.randn(dim))
# solve for the system Qx=z
cocg = sg.ConjugateOrthogonalCGSolver(True)
cocg.set_iteration_limit(20)
x = cocg.solve(Q, z)
print('x:',x)
# verifying the result
print('z:',z)
print('Qx:',np.real(Q.apply(x)))
residuals = cocg.get_residuals()
plt.plot(residuals)
plt.show()
"""
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1ConjugateOrthogonalCGSolver.html">Conjugate Orthogonal CG Solver</a></h4>
Solves systems $\mathbf{Qx}=\mathbf{z}$, where $\mathbf{Q}$ is symmetric but non-Hermitian (i.e. it has complex entries on its diagonal) and $\mathbf{z}$ is a real-valued vector.
End of explanation
"""
cgm = sg.CGMShiftedFamilySolver()
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = sg.RealSparseMatrixOperator(a)
# creating a random vector
v = np.array(np.random.randn(dim))
# number of shifts (will be equal to the number of contour points)
num_shifts = 3
# generating some random shifts
shifts = []
for i in range(0, num_shifts):
shifts.append(complex(np.random.randn(), np.random.randn()))
sigma = np.array(shifts)
print('Shifts:', sigma)
# generating some random weights
weights = []
for i in range(0, num_shifts):
weights.append(complex(np.random.randn(), np.random.randn()))
alpha = np.array(weights)
print('Weights:',alpha)
# solve for the systems
cgm = sg.CGMShiftedFamilySolver(True)
cgm.set_iteration_limit(20)
x = cgm.solve_shifted_weighted(Q, v, sigma, alpha)
print('x:',x)
residuals = cgm.get_residuals()
plt.plot(residuals)
plt.show()
# verifying the result with cocg
x_s = np.array([0+0j] * dim)
for i in range(0, num_shifts):
a_s = a.astype(complex)
for j in range(0, dim):
# moving the complex shift inside the operator
a_s[j,j] += sigma[i]
Q_s = sg.ComplexSparseMatrixOperator(a_s)
# multiplying the result with weight
x_s += alpha[i] * cocg.solve(Q_s, v)
print('x\':', x_s)
"""
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMShiftedFamilySolver.html">CG-M Shifted Family Solver</a></h4>
Solves systems with real-valued spd matrices and complex shifts. For use with the log-determinant estimator, an option to specify the weight of each solution is also provided. The solve_shifted_weighted method returns $\sum_{l}\alpha_{l}\mathbf{x}_{l}$ where $\mathbf{x}_{l}=(\mathbf{A}+\sigma_{l}\mathbf{I})^{-1}\mathbf{y}$, with $\sigma_{l},\alpha_{l}\in\mathbb{C}$ and $\mathbf{y}\in\mathbb{R}^{n}$.
End of explanation
"""
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = csc_matrix(np.random.randn(dim, dim))
a = m.transpose() * m + csc_matrix(np.identity(dim))
Q = sg.RealSparseMatrixOperator(a)
# creating a random vector
y = np.array(np.random.randn(dim))
# solve the system Qx=y
chol = sg.DirectSparseLinearSolver()
x = chol.solve(Q,y)
print('x:',x)
# verifying the result
print('y:', y)
print('Qx:', Q.apply(x))
"""
Explanation: Apart from the iterative solvers, a few direct solvers based on triangular factorizations are also available.
<h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDirectSparseLinearSolver.html">Direct Sparse Linear Solver</a></h4>
This uses a sparse Cholesky factorization to solve linear systems $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is a real-valued spd linear operator (e.g. a dense/sparse matrix operator) and $\mathbf{y}$ is a real vector.
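SciPy offers a comparable direct route. The sketch below uses a sparse LU factorization (SciPy ships no built-in sparse Cholesky), standing in for the Cholesky-based Shogun solver:

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

rng = np.random.RandomState(6)
dim = 5
m = csc_matrix(rng.randn(dim, dim))
q = (m.T @ m + identity(dim)).tocsc()  # sparse spd matrix
y = rng.randn(dim)

# factor once; the factorization can be reused for many right-hand sides
lu = splu(q)
x = lu.solve(y)
print(x)
```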
End of explanation
"""
# creating a random spd matrix
dim = 5
np.random.seed(10)
m = np.array(np.random.randn(dim, dim))
a = m.transpose().dot(m) + np.identity(dim)
a = a.astype(complex)
# adding a complex entry along the diagonal
for i in range(0, dim):
a[i,i] += complex(np.random.randn(), np.random.randn())
Q = sg.ComplexDenseMatrixOperator(a)
z = np.array(np.random.randn(dim))
# solve for the system Qx=z
solver = sg.DirectLinearSolverComplex()
x = solver.solve(Q, z)
print('x:',x)
# verifying the result
print('z:',z)
print('Qx:',np.real(Q.apply(x)))
"""
Explanation: <h4><a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDirectLinearSolverComplex.html">Direct Linear Solver for Complex</a></h4>
This solves linear systems $\mathbf{Qx}=\mathbf{y}$, where $\mathbf{Q}$ is a complex-valued dense matrix linear operator and $\mathbf{y}$ is a real vector.
End of explanation
"""
|
idekerlab/cyrest-examples | notebooks/cookbook/Python-cookbook/Import.ipynb | mit | # import
from py2cytoscape.data.cyrest_client import CyRestClient
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Empty network
empty1 = cy.network.create()
# With name
empty2 = cy.network.create(name='Created in Jupyter Notebook')
# With name and collection name
empty3 = cy.network.create(name='Also created in Jupyter', collection='New network collection')
"""
Explanation: Import
In this section, you can learn how to import network data from various types of file formats, for example local files, URLs, web services, and data you create yourself.
Supported formats
Cytoscape.js
NetworkX
Pandas DataFrame
igraph
Numpy adjacency matrix (binary or weighted)
Table of contents
In this section, we load data from various formats and data types. In addition to the supported formats mentioned above, we will import data from several Python data structures and update a network table.
Create empty network
Load networks from files, URLs or web service
Create networks from various types of data
Update Table
Create empty network
End of explanation
"""
# import data from url
from py2cytoscape.data.cyrest_client import CyRestClient
import pandas as pd
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Load a sample network
#network1 = cy.network.create_from('http://chianti.ucsd.edu/~kono/data/galFiltered.sif')
# Load a single local file
network2 = cy.network.create_from('../sampleData/galFiltered.json')
network3 = cy.network.create_from('../sampleData/sample_yeast_network.xgmml', collection='My Collection')
network4 = cy.network.create_from('../sampleData/galFiltered.gml', collection='My Collection')
# Load from multiple locations
network_locations = [
# File1
'../sampleData/galFiltered.json',
# File2
'../sampleData/sample_yeast_network.xgmml',
# File3
'../sampleData/galFiltered.gml'
]
# This returns a pandas Series
networks = cy.network.create_from(network_locations)
pd.DataFrame(networks, columns=['CyNetwork'])
# Apply layout to the cytoscape network object
cy.layout.apply(network = network2)
# Show it!!
Image(network2.get_png(height=400))
"""
Explanation: Load networks from files, URLs or web services
Loading network data from files, URLs or web services is a common task. The following code shows how to import data in each of these ways.
"""
# import
from py2cytoscape.data.cyrest_client import CyRestClient
import py2cytoscape.util.cytoscapejs as cyjs
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Cytoscape.js JSON
network = cy.network.create(data=cyjs.get_empty_network(), name='Created from Cytoscape.js JSON')
# You can get empty network
Image(network.get_png(height=400))
"""
Explanation: Create networks from various types of data
Currently, py2cytoscape accepts the following data as input:
- Cytoscape.js
- NetworkX
- Pandas DataFrame
- igraph
- Numpy adjacency matrix (binary or weighted)
The following sections show examples of how to import each of these data types.
From Cytoscape.js JSON
End of explanation
"""
# import
from py2cytoscape.data.cyrest_client import CyRestClient
import networkx as nx
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Create NetworkX data object
nx_graph = nx.scale_free_graph(100)
# Generate cytoscape network object from networkx
scale_free100 = cy.network.create_from_networkx(nx_graph, collection='Generated by NetworkX')
# Apply layout to the cytoscape network object
cy.layout.apply(network = scale_free100)
# Show it!!
Image(scale_free100.get_png(height=400))
"""
Explanation: From NetworkX
End of explanation
"""
# import
from py2cytoscape.data.cyrest_client import CyRestClient
import pandas as pd
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Import Pandas DataFrame from a simple text table
df_from_sif = pd.read_csv('../sampleData/galFiltered.sif', names=['source', 'interaction', 'target'], sep=' ')
# Get the network object
network1 = cy.network.create_from_dataframe(df_from_sif)
# Apply layout
cy.layout.apply(network = network1)
# Show it!!
Image(network1.get_png(height=400))
"""
Explanation: From Pandas DataFrame
End of explanation
"""
# import
from py2cytoscape.data.cyrest_client import CyRestClient
import igraph as ig
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# igraph
ig_data = ig.Graph.Tree(400, 5)
# Get the network object from igraph data
network_igraph = cy.network.create_from_igraph(ig_data)
# Apply layout to network
cy.layout.apply(network = network_igraph)
# Show it!!
Image(network_igraph.get_png(height=400))
"""
Explanation: igraph
End of explanation
"""
# Import
from py2cytoscape.data.cyrest_client import CyRestClient
import numpy as np
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Prepare ndarray data
matrix1 = np.array([
[0, 1, 1, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 0, 0, 0]
])
# Generate cytoscape network object from ndarray
net1 = cy.network.create_from_ndarray(matrix1, name='binary sample')
# Apply layout
cy.layout.apply(network=net1)
cy.layout.fit(network=net1)
# Show it!!
Image(net1.get_png(height=400))
"""
Explanation: Numpy adjacency matrix (binary or weighted)
End of explanation
"""
# Small utility function to convert ID sets
from py2cytoscape.data.cyrest_client import CyRestClient
from util.util_uniprot import *
import pandas as pd
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Load data from a simple text table as pandas data format
df_from_sif = pd.read_csv('../sampleData/galFiltered.sif', names=['source', 'interaction', 'target'], sep=' ')
# Create network from pandas data frame
network = cy.network.create_from_dataframe(df_from_sif, name='Yeast network created from pandas DataFrame')
# Apply layout
cy.layout.apply(network=network)
#Fit an existing network view to current window.
cy.layout.fit(network=network)
# Show it!!
Image(network.get_png(height=400))
network
# Get node table from Cytoscape
network_node_table = network.get_node_table()
# Show it
network_node_table.head()
# From KEGG ID to UniprotKB ID
query1 = ' '.join(network_node_table['name'].map(lambda gene_id: 'sce:' + gene_id).values)
id_map_kegg2uniprot = uniprot_id_mapping_service(query1, from_id='KEGG_ID', to_id='ID')
id_map_kegg2uniprot.columns = ['kegg', 'uniprot']
# Show it
id_map_kegg2uniprot.head()
# From UniprotKB to SGD
query2 = ' '.join(id_map_kegg2uniprot['uniprot'].values)
id_map_uniprot2sgd = uniprot_id_mapping_service(query2, from_id='ID', to_id='SGD_ID')
id_map_uniprot2sgd.columns = ['uniprot', 'sgd']
# Show it
id_map_uniprot2sgd.head()
# From UniprotKB to Entrez Gene ID
query3 = ' '.join(id_map_kegg2uniprot['uniprot'].values)
id_map_uniprot2ncbi = uniprot_id_mapping_service(query3, from_id='ID', to_id='P_ENTREZGENEID')
id_map_uniprot2ncbi.columns = ['uniprot', 'entrez']
# Show it
id_map_uniprot2ncbi.head()
# Merge them
merged = pd.merge(id_map_kegg2uniprot, id_map_uniprot2sgd, on='uniprot')
merged = pd.merge(merged, id_map_uniprot2ncbi, on='uniprot')
# Show it
merged.head()
# Add key column by removing prefix
merged['name'] = merged['kegg'].map(lambda kegg_id : kegg_id[4:])
# Show it
merged.head()
# Now update existing node table with the data frame above.
network.update_node_table(merged, network_key_col='name', data_key_col='name')
# Check the table is actually updated
network.get_node_table().head()
"""
Explanation: Update Table
Let's do something a bit more realistic. You can update any table by using DataFrame objects.
In this example, we will use the UniProt ID mapping web service to add more information to the existing yeast network in the current session.
"""
|
mne-tools/mne-tools.github.io | 0.13/_downloads/plot_sensor_permutation_test.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.stats import permutation_t_test
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation T-test on sensor data
This tests whether the signal significantly deviates from 0
during a fixed time window of interest. Here the computation
is performed on the MNE sample dataset between 40 and 60 ms.
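As a minimal illustration of the principle (a sketch only - `mne.stats.permutation_t_test` additionally handles many sensors at once and corrects for multiple comparisons), here is a one-sample sign-flip permutation t-test on synthetic data:

```python
import numpy as np

def permutation_t_test_1samp(x, n_permutations=1000, seed=0):
    """One-sample permutation t-test: flip signs under H0 that the mean is 0."""
    rng = np.random.RandomState(seed)
    n = len(x)

    def t_stat(v):
        return v.mean() / (v.std(ddof=1) / np.sqrt(n))

    t_obs = t_stat(x)
    t_perm = np.empty(n_permutations)
    for i in range(n_permutations):
        signs = rng.choice([-1.0, 1.0], size=n)  # random sign flips
        t_perm[i] = t_stat(signs * x)
    # two-sided p-value with the +1 correction for the observed statistic
    p = (np.sum(np.abs(t_perm) >= np.abs(t_obs)) + 1.0) / (n_permutations + 1.0)
    return t_obs, p

rng = np.random.RandomState(1)
x = rng.randn(30) + 1.0  # synthetic data with a clearly non-zero mean
t_obs, p = permutation_t_test_1samp(x)
print(t_obs, p)
```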
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = 1
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: MEG + STI 014 - bad channels (modify to your needs)
include = [] # or stim channel ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# pick MEG Gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
include=include, exclude='bads')
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))
data = epochs.get_data()
times = epochs.times
temporal_mask = np.logical_and(0.04 <= times, times <= 0.06)
data = np.mean(data[:, :, temporal_mask], axis=2)
n_permutations = 50000
T0, p_values, H0 = permutation_t_test(data, n_permutations, n_jobs=1)
significant_sensors = picks[p_values <= 0.05]
significant_sensors_names = [raw.ch_names[k] for k in significant_sensors]
print("Number of significant sensors : %d" % len(significant_sensors))
print("Sensors names : %s" % significant_sensors_names)
"""
Explanation: Set parameters
End of explanation
"""
evoked = mne.EvokedArray(-np.log10(p_values)[:, np.newaxis],
epochs.info, tmin=0.)
# Extract mask and indices of active sensors in layout
stats_picks = mne.pick_channels(evoked.ch_names, significant_sensors_names)
mask = p_values[:, np.newaxis] <= 0.05
evoked.plot_topomap(ch_type='grad', times=[0], scale=1,
time_format=None, cmap='Reds', vmin=0., vmax=np.max,
unit='-log10(p)', cbar_fmt='-%0.1f', mask=mask,
size=3, show_names=lambda x: x[4:] + ' ' * 20)
"""
Explanation: View location of significantly active sensors
End of explanation
"""
|
paris-saclay-cds/python-workshop | Day_2_Software_engineering_best_practices/solutions/02_docstring.ipynb | bsd-3-clause | def read_spectra(path_csv):
"""Read and parse data in pandas DataFrames.
Parameters
----------
path_csv : str
Path to the CSV file to read.
Returns
-------
s : pandas DataFrame, shape (n_spectra, n_freq_point)
DataFrame containing all Raman spectra.
c : pandas Series, shape (n_spectra,)
Series containing the concentration of the molecule.
m : pandas Series, shape (n_spectra,)
Series containing the type of chemotherapeutic agent.
"""
s = pandas.read_csv(path_csv)
c = s['concentration']
m = s['molecule']
s = s['spectra']
x = []
for spec in s:
x.append(numpy.fromstring(spec[1:-1], sep=','))
s = pandas.DataFrame(x)
return s, c, m
# read the frequency and get a pandas serie
f = pandas.read_csv('data/freq.csv')['freqs']
# read all data for training
filenames = ['data/spectra_{}.csv'.format(i)
for i in range(4)]
stot = []
c = []
m = []
for filename in filenames:
s_tmp, c_tmp, m_tmp = read_spectra(filename)
stot.append(s_tmp)
c.append(c_tmp)
m.append(m_tmp)
stot = pandas.concat(stot)
c = pandas.concat(c)
m = pandas.concat(m)
"""
Explanation: IO: Reading and preprocess the data
We can define a function which will read and preprocess the data.
End of explanation
"""
def _apply_axis_layout(ax, title):
"""Apply despine style and add labels to axis."""
ax.set_xlabel('Frequency')
ax.set_ylabel('Concentration')
ax.set_title(title)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines['left'].set_position(('outward', 10))
ax.spines['bottom'].set_position(('outward', 10))
def plot_spectra(f, s, title):
"""Plot a bunch of Raman spectra.
Parameters
----------
f : pandas Series, shape (n_freq_points,)
Frequencies for which the Raman spectra were acquired.
s : pandas DataFrame, shape (n_spectra, n_freq_points)
DataFrame containing all Raman spectra.
title : str
Title added to the plot.
Returns
-------
None
"""
fig, ax = pyplot.subplots()
ax.plot(f, s.T)
_apply_axis_layout(ax, title)
def plot_spectra_by_type(f, s, classes, title):
"""Plot mean spectrum with its variance for a given class.
Parameters
----------
f : pandas Series, shape (n_freq_points,)
Frequencies for which the Raman spectra were acquired.
s : pandas DataFrame, shape (n_spectra, n_freq_points)
DataFrame containing all Raman spectra.
classes : array-like, shape (n_spectra,)
Array containing the class label of each spectrum to plot.
title : str
Title added to the plot.
Returns
-------
None
"""
fig, ax = pyplot.subplots()
for c_type in numpy.unique(classes):
i = numpy.nonzero(classes == c_type)[0]
ax.plot(f, numpy.mean(s.iloc[i], axis=0), label=c_type)
ax.fill_between(f, numpy.mean(s.iloc[i], axis=0) + numpy.std(s.iloc[i], axis=0), numpy.mean(s.iloc[i], axis=0) - numpy.std(s.iloc[i], axis=0), alpha=0.2)
_apply_axis_layout(ax, title)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plot_spectra(f, stot, 'All training spectra')
plot_spectra_by_type(f, stot, m, 'Mean spectra in function of the molecules')
plot_spectra_by_type(f, stot, c, 'Mean spectra in function of the concentrations')
"""
Explanation: Plot helper functions
We can create two functions: (i) to plot all spectra and (ii) plot the mean spectra with the std intervals.
We will make a "private" function which will be used by both plot types.
End of explanation
"""
s4, c4, m4 = read_spectra('data/spectra_4.csv')
plot_spectra(f, stot, 'All training spectra')
plot_spectra_by_type(f, s4, m4, 'Mean spectra in function of the molecules')
plot_spectra_by_type(f, s4, c4, 'Mean spectra in function of the concentrations')
"""
Explanation: Reusability for new data:
End of explanation
"""
def plot_cm(cm, classes, title):
"""Plot a confusion matrix.
Parameters
----------
cm : ndarray, shape (n_classes, n_classes)
Confusion matrix.
classes : array-like, shape (n_classes,)
Array containing the different spectra classes used in the
classification problem.
title : str
Title added to the plot.
Returns
-------
None
"""
import itertools
fig, ax = pyplot.subplots()
pyplot.imshow(cm, interpolation='nearest', cmap='bwr')
pyplot.title(title)
pyplot.colorbar()
tick_marks = numpy.arange(len(numpy.unique(classes)))
pyplot.xticks(tick_marks, numpy.unique(classes), rotation=45)
pyplot.yticks(tick_marks, numpy.unique(classes))
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
pyplot.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
pyplot.tight_layout()
pyplot.ylabel('True label')
pyplot.xlabel('Predicted label')
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix
for clf in [RandomForestClassifier(random_state=0),
LinearSVC(random_state=0)]:
p = make_pipeline(StandardScaler(), PCA(n_components=100, random_state=0), clf)
p.fit(stot, m)
pred = p.predict(s4)
plot_cm(confusion_matrix(m4, pred), p.classes_, 'Confusion matrix using {}'.format(clf.__class__.__name__))
print('Accuracy score: {0:.2f}'.format(p.score(s4, m4)))
"""
Explanation: Training and testing a machine learning model for classification
End of explanation
"""
def plot_regression(y_true, y_pred, title):
"""Plot actual vs. predicted scatter plot.
Parameters
----------
y_true : array-like, shape (n_samples,)
Ground truth (correct) target values.
y_pred : array-like, shape (n_samples,)
Estimated targets as returned by a regressor.
title : str
Title added to the plot.
Returns
-------
None
"""
from sklearn.metrics import r2_score, median_absolute_error
fig, ax = pyplot.subplots()
ax.scatter(y_true, y_pred)
ax.plot([0, 25000], [0, 25000], '--k')
ax.set_ylabel('Target predicted')
ax.set_xlabel('True Target')
ax.set_title(title)
ax.text(1000, 20000, r'$R^2$=%.2f, MAE=%.2f' % (
r2_score(y_true, y_pred), median_absolute_error(y_true, y_pred)))
ax.set_xlim([0, 25000])
ax.set_ylim([0, 25000])
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
for reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:
p = make_pipeline(PCA(n_components=100), reg)
p.fit(stot, c)
pred = p.predict(s4)
plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))
def fit_params(data):
"""Compute statistics for robustly scale data.
Compute the median and the variance, i.e. the difference
between the 75th and 25th percentiles.
These statistics are used later to scale data.
Parameters
----------
data : pandas DataFrame, shape (n_spectra, n_freq_point)
DataFrame containing all Raman spectra.
Returns
-------
median : ndarray, shape (n_freq_point,)
Median for each wavelength.
variance : ndarray, shape (n_freq_point,)
Variance (difference between the 75th and 25th
percentiles) for each wavelength.
"""
median = numpy.median(data, axis=0)
p_25 = numpy.percentile(data, 25, axis=0)
p_75 = numpy.percentile(data, 75, axis=0)
return median, (p_75 - p_25)
def transform(data, median, var_25_75):
"""Scale data using robust estimators.
Scale the data by subtracting the median and dividing by the
variance, i.e. the difference between the 75th and 25th percentiles.
Parameters
----------
data : pandas DataFrame, shape (n_spectra, n_freq_point)
DataFrame containing all Raman spectra.
median : ndarray, shape (n_freq_point,)
Median for each wavelength.
var_25_75 : ndarray, shape (n_freq_point,)
Variance (difference between the 75th and 25th
percentiles) for each wavelength.
Returns
-------
data_scaled : pandas DataFrame, shape (n_spectra, n_freq_point)
DataFrame containing all scaled Raman spectra.
"""
return (data - median) / var_25_75
median, var_25_75 = fit_params(stot)
stot_scaled = transform(stot, median, var_25_75)
s4_scaled = transform(s4, median, var_25_75)
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
for reg in [RidgeCV(), RandomForestRegressor(random_state=0)]:
p = make_pipeline(PCA(n_components=100), reg)
p.fit(stot_scaled, c)
pred = p.predict(s4_scaled)
plot_regression(c4, pred, 'Regression using {}'.format(reg.__class__.__name__))
"""
Explanation: Training and testing a machine learning model for regression
End of explanation
"""
|
lionell/university-labs | eco_systems/lab1.ipynb | mit | data = pd.read_csv('lab1v1.csv')
P, D, S = data['Price'].values, data['Demand'].values, data['Supply'].values
data
def plot(*args, x='Quantity', y='Price', **kw):
plt.figure(figsize=(15, 10))
plt.plot(*args)
plt.xlabel(x)
plt.ylabel(y)
plt.legend(kw['legend'])
plt.title(kw['title'])
plt.show()
plot(D, P, 'ro', S, P, 'bo', legend=['Demand', 'Supply'], title='Demand vs Supply')
"""
Explanation: Economy: Demand & Supply
<img src="https://thumbs.dreamstime.com/b/supply-demand-26790633.jpg" width='500px' height='300px'>
End of explanation
"""
def compare(x, y, fs, label, title):
plt.figure(figsize=(15, 10))
scores, cs = [], []
for i, f in enumerate(fs):
c, _ = curve_fit(f, x, y)
fy = f(x, *c)
plt.plot(fy, x, '-', label=f.__name__)
scores.append(np.sum((y - fy)**2))
cs.append(c)
plt.plot(y, x, 'o', label=label)
plt.xlabel('Quantity'); plt.ylabel('Price')
plt.legend(); plt.title(title); plt.show()
return fs[np.argmin(scores)], cs[np.argmin(scores)]
"""
Explanation: Approximation
End of explanation
"""
def inverted_linear(x, a, b, c): return a / (b*x + c)
def inverted_exponent(x, a, b, c): return a / (np.exp(b*x) + c)
d_f, d_c = compare(P, D, [inverted_linear, inverted_exponent], label='Demand', title='Demand approximation')
'Demand is better to approximate with {} function'.format(d_f.__name__)
"""
Explanation: We'll try to approximate our demand function with
$$Q_d(P) = \frac{a}{bP + c}$$
$$Q_d(P) = \frac{a}{e^{bP} + c}$$
End of explanation
"""
def exponent(x, a, b): return np.exp(a*x) + b
def logarithmic(x, a, b): return a*np.log(x) + b
s_f, s_c = compare(P, S, [exponent, logarithmic], label='Supply', title='Supply approximation')
'Supply is better to approximate with {} function'.format(s_f.__name__)
def diff(x): return abs(d_f(x, *d_c) - s_f(x, *s_c))
bounds = (P.min(), P.max())
e_p = minimize(diff, 1, bounds=[bounds]).x[0]
e_q = d_f(e_p, *d_c)
plot(d_f(P, *d_c), P, 'r-', s_f(P, *s_c), P, 'b-', D, P, 'ro', S, P, 'bo', e_q, e_p, 'go',
legend=['Demand(approx)', 'Supply(approx)', 'Demand', 'Supply', 'Equilibrium'], title='Equilibrium')
'Equilibrium is ({0:.2f}, {1:.2f})'.format(e_q, e_p)
"""
Explanation: Now let's approximate our supply function with
$$Q_s(P) = e^{aP} + b$$
$$Q_s(P) = a\log{P} + b$$
End of explanation
"""
d_c, s_c
"""
Explanation: Elasticity
By definition
$$E = \frac{dQ}{dP} \cdot \frac{P}{Q}$$
For demand we have
$$Q_d = \frac{a}{e^{bP}+c}$$
$$E_d = \frac{dQ_d}{dP} \cdot \frac{P}{Q_d} = -\frac{abe^{bP}}{(e^{bP}+c)^2} \cdot \frac{P(e^{bP}+c)}{a} =
-\frac{bPe^{bP}}{e^{bP}+c}$$
And for supply
$$Q_s = l\log{P}+m$$
$$E_s = \frac{dQ_s}{dP} \cdot \frac{P}{Q_s} = \frac{l}{P} \cdot \frac{P}{l\log{P}+m} =
\frac{l}{l\log{P}+m}$$
End of explanation
"""
def elasticity_d(p): return -d_c[1]*p*np.exp(d_c[1]*p) / (np.exp(d_c[1]*p) + d_c[2])
def elasticity_s(p): return s_c[0] / (s_c[0]*np.log(p) + s_c[1])
print('Demand elasticity: {0:.2f}'.format(elasticity_d(e_p)))
print('Supply elasticity: {0:.2f}'.format(elasticity_s(e_p)))
"""
Explanation: Provided that $a = 55.4, b = 0.97, c = -0.55, l = 39.02, m = 81.92$ we have
$$E_d = -\frac{0.97 \cdot Pe^{0.97 P}}{e^{0.97 P}-0.55}$$
$$E_s = \frac{39.02}{39.02\log{P}+81.92}$$
Let's calculate elasticity at the equilibrium
End of explanation
"""
def arc_elasticity(P, Q): return (Q[-1] - Q[0]) / (P[-1] - P[0]) * np.sum(P) / np.sum(Q)
print('Demand arc elasticity: {0:.2f}'.format(arc_elasticity(P, D)))
print('Supply arc elasticity: {0:.2f}'.format(arc_elasticity(P, S)))
"""
Explanation: Because $|E_d(e_p)| < |E_s(e_p)|$ we can say that equilibrium is not stable.
Arc elasticity
$$E_{arc} = \frac{Q_n - Q_1}{P_n - P_1} \frac{\frac{\sum_{i=1}^n P_i }{n}}{\frac{\sum_{i=1}^n Q_i }{n}} = \frac{Q_n - Q_1}{P_n - P_1} \frac{\sum_{i=1}^n P_i }{\sum_{i=1}^n Q_i }$$
End of explanation
"""
def taxed_d_f(x): return d_f(x + 0.5, *d_c)
def diff(x): return abs(taxed_d_f(x) - s_f(x, *s_c))
e_p = minimize(diff, 1, bounds=[bounds]).x[0]
e_q = s_f(e_p, *s_c)
plot(d_f(P, *d_c), P, 'r-', taxed_d_f(P), P, 'r--', s_f(P, *s_c), P, 'b-', e_q, e_p, 'go',
legend=['Demand', 'Demand(taxed)', 'Supply', 'Equilibrium'], title='Tax in demand')
'Equilibrium is ({0:.2f}, {1:.2f})'.format(e_q, e_p)
"""
Explanation: Tax in demand
Usually tax is defined as
$$Q_{taxed}(P) = Q(P + tax)$$
We just have to apply this transformation to our demand function
End of explanation
"""
|
BlancaCC/cultutrilla | python_aprendizaje/curiosidades_pitonicas.ipynb | gpl-3.0 | cuadrado = lambda a: a*a
cuadrado(2)
"""
Explanation: Lambda functions, list comprehensions, decorators and other Pythonic curiosities
Python is a beautiful language, or at least it seems so to me, so let me walk through some features, functions, and other bits that catch my attention.
Lambda functions
Lambda functions are anonymous functions, i.e., functions declared without a name. Their syntax is: lambda <arguments> : <expression>
A simple example could be:
End of explanation
"""
print( "Lista por compresión: " ,[ i for i in range(10)] )
l = list( map(lambda x : x*x ,range(10) ))
print( f"Esta preciosa lista { l } es resultado de la anterior")
print( f"Los números impares son: { list( filter( lambda x: x%2 , range(10)) )}")
[ (x,y) for x in range(3) for y in range(3,7,2)] #productos cartesianos
"""
Explanation: Because of their simplicity, lambdas are often used as arguments to higher-order functions such as map or filter. Before showing examples, and to better illustrate their usefulness, let me introduce the concept of list comprehensions.
List comprehensions
Building a list by comprehension mirrors the mathematical set-builder notation: the list is defined from a description, which leads to elegant and concise definitions:
End of explanation
"""
def sumandos(f):
def nueva_funcion(a,b):
print(f'Adding {a} and {b}')
return f(a, b)
return nueva_funcion
@sumandos
def suma(a,b):
return a+b
suma(1,2)
"""
Explanation: I hope you were paying close attention and noticed the use of Python's "new" f-string formatting in the print above :D
Decorators
Decorators are functions that modify the behavior of the functions they are applied to. This is easiest to see with an example:
End of explanation
"""
|
Dima806/udacity-mlnd-capstone | capstone-step1-sensitivity-check-run1.ipynb | apache-2.0 | # Select test_size and random_state for splitting a subset
test_size=0.1
random_state=0
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import time
import gzip
import shutil
import seaborn as sns
from collections import Counter
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth, AgglomerativeClustering
from sklearn.metrics import silhouette_score #, make_scorer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.neighbors import kneighbors_graph
from sklearn.model_selection import train_test_split
"""
Explanation: Udacity MLND Capstone Project
"Determination of students’ interaction patterns with an intelligent tutoring system and study of their correlation with successful learning"
Step 1 (sensitivity check, run 1)
End of explanation
"""
def hdf_fixed_write_compress(df):
df.to_hdf('data1-step1.hdf','test',mode='w',complib='blosc')
return
def hdf_fixed_read_compress():
df = pd.read_hdf('data.hdf','test')
return df
with gzip.open('data1.hdf.gz', 'rb') as f_in, open('data.hdf', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
!ls -lh data.hdf
data = hdf_fixed_read_compress()
data.head()
"""
Explanation: Do some preprocessing to group the data by 'Anon Student Id' and extract features for further analysis
End of explanation
"""
def prepare_stud_data_new(df):
start_time = time.time()
stud_list = df['Anon Student Id'].unique()
cols=['num_sess', \
'num_days', \
'num_probs', \
'num_atts', \
'num_hints', \
'frac_corr_atts', \
'frac_3s_atts', \
'frac_1s_hints', \
'time_atts', \
'time_hints', \
'max_probl_views', \
'max_atts']
numbers = []
#stud_data = pd.DataFrame(columns=cols)
stud_info_df = pd.DataFrame()
i = 0
for stud_name in stud_list:
stud_info_df = df[df['Anon Student Id'] == stud_name].copy()
# total number of days loading the system
num_days = len(set(stud_info_df['Day']))
# total number of sessions opened
num_sessions = len(set(stud_info_df['Session Id']))
# total number of problems entered
num_problems = len(set(stud_info_df['Problem Name']))
# total number of attempts made by the student
num_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0].shape[0]
# total number of hints made by the student
num_hints = stud_info_df[stud_info_df['Student Response Type'] == 1].shape[0]
# fraction of short attemps (with time <= 3 sec)
if (num_attempts > 0):
frac_3s_atts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Duration (sec)'] <= 3.0)].shape[0] / num_attempts
else:
frac_3s_atts = 0
# fraction of short hints (with time <= 1 sec)
if (num_hints > 0):
frac_1s_hints = stud_info_df[(stud_info_df['Student Response Type'] == 1) & (stud_info_df['Duration (sec)'] <= 1.0)].shape[0] / num_hints
else:
frac_1s_hints = 0
# fraction of correct attempts
if (num_attempts > 0):
fraction_correct_attempts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Outcome'] == 0)].shape[0] / num_attempts
else:
fraction_correct_attempts = 0
# total number of time spent for attempts (in seconds)
total_time_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0]['Duration (sec)'].sum()
# total number of time spent for hints (in seconds)
total_time_hints = stud_info_df[stud_info_df['Student Response Type'] == 1]['Duration (sec)'].sum()
# averaged maximal numbers of 'Problem View'
avg_max_problem_views = stud_info_df[['Problem Name', 'Problem View']].groupby(['Problem Name']).agg(np.max).mean()[0]
# averaged maximal number of attempts ('x')
avg_max_attempts = stud_info_df[['Problem Name', 'x']].groupby(['Problem Name']).agg(np.max).mean()[0]
stud_name = i # assign unique numerical ID to each student
if num_attempts != 0:
avg_time_att = total_time_attempts / num_attempts
else:
avg_time_att = 0
if num_hints != 0:
avg_time_hint = total_time_hints / num_hints
else:
avg_time_hint = 0
numbers.append([num_sessions, \
num_days, \
num_problems, \
num_attempts, \
num_hints, \
fraction_correct_attempts, \
frac_3s_atts, \
frac_1s_hints, \
total_time_attempts, \
total_time_hints, \
avg_max_problem_views, \
avg_max_attempts])
print("\r\t>>> Progress\t:{:.4%}".format((i + 1)/len(stud_list)), end='')
i += 1
stud_data = pd.DataFrame(data=numbers, columns=cols)
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
return stud_data
"""
Explanation: Note to reviewers: this algorithm is quite slow (~45 minutes), so you may consider processing only a subset of the data (e.g. processing 500,000 rows takes only ~1 minute).
End of explanation
"""
#stud_data = prepare_stud_data_new(data.head(500000).copy())
#stud_data = prepare_stud_data_new(data.copy())
stud_data = pd.read_hdf('stud_data.hdf','test')
"""
Explanation: Reading from scratch instead:
End of explanation
"""
#stud_data.to_hdf('stud_data.hdf','test',mode='w',complib='blosc')
stud_data.shape
stud_data.describe()
"""
Explanation: Making backup for stud_data in HDF5 format:
End of explanation
"""
print(test_size, random_state)
stud_data_1, stud_data_2 = train_test_split(stud_data, test_size=test_size, random_state=random_state)
stud_data_1.shape[0]/stud_data.shape[0]
stud_data = stud_data_1
"""
Explanation: Choosing a student subset for a sensitivity check
(note that this step updates stud_data):
End of explanation
"""
# old name: process_data
def transform_data(selected_columns, data):
'''
Apply log-transform and MinMaxScaler() to the selected data columns which are not fractions (frac_*)
Parameters
==========
selected_columns : list
list of columns to leave in processed data
data : pandas.DataFrame
data to process (note that data should contain all selected_columns)
Returns
=======
log_scaled_data : pandas.DataFrame
log-transformed and scaled data selected by selected_columns
'''
data.reset_index(drop=True, inplace=True)
log_data = data[selected_columns].copy()
skewed = log_data.columns.tolist()
skewed = [item for item in skewed if not item.startswith('frac_')]
log_data[skewed] = log_data[skewed].apply(lambda x: np.log10(x + 1))
scaler = MinMaxScaler().fit(log_data)
log_scaled_data = scaler.transform(log_data)
log_scaled_data = pd.DataFrame(log_scaled_data, columns=log_data.columns)
return log_scaled_data
def replace_group_numbers(best_preds):
'''
Replace group numbers in best_preds with sorting by group size
(so that the largest group is 0, the second largest is 1 etc.)
Parameters
==========
best_preds : numpy array
unsorted array of predictions
Returns
=======
best_preds_sorted : numpy array
sorted array of predictions
'''
pp = pd.DataFrame(best_preds, columns = ["old_group"])
dict_pp = {item[0]: i for i, item in enumerate(Counter(best_preds).most_common())}
pp['new_group'] = pp['old_group'].replace(dict_pp)
best_preds_sorted = np.array(pp['new_group'])
return best_preds_sorted
def kmeans(log_scaled_data):
'''
Apply KMeans clustering algorithm with 2 <= cluster_number <= 6 to log_scaled_data
(transformed and scaled by transform_data() function)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
for n_clusters in range(2,6):
clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
clusterer.fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
if best_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
best_score = score
best_clusters = n_clusters
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, best_score, best_preds_sorted
"""
Explanation: Clustering
Write a new clustering algorithm that:
- starts from stud_data or its subset (with monotonic index)
- finds the 2-column set with the largest score (using KMeans) and relabels the clusters so that 0 is the largest group, 1 is the second largest, etc.
- returns an index array (with indices 0, 1, ...) that can be used for further analysis
End of explanation
"""
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', 'num_hints', 'frac_corr_atts', \
'frac_3s_atts', 'frac_1s_hints', 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
def choose_pair_columns_kmeans(all_columns, log_scaled_all_data):
'''
Selects pair of columns in data that produces clusters with the largest score.
In this function, only KMeans clustering algorithm is used
Parameters
==========
all_columns : list
list of columns to look for the pair with the largest score
log_scaled_data : pandas DataFrame
properly scaled DataFrame with all columns
Returns
=======
best_columns : list
pair of data columns with the largest score
best_score : float
the largest value of the score
best_clusterer : sklearn Model
clustering algorithm with the largest score
best_preds : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
'''
best_score = 0
best_columns = []
j = 0
l = len(all_columns)
num_pairs = (l-1)*l/2
for column in all_columns:
selected_columns = [column]
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
for column1 in columns_to_add:
if all_columns.index(column) < all_columns.index(column1):
selected_columns = [column, column1]
print("\r\t>>> Progress\t:{:.4%}".format((j+1)/num_pairs), end='')
j += 1
#log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_all_data[selected_columns])
if score > best_score:
best_score = score
best_clusterer = clusterer
best_preds = preds
best_columns = selected_columns.copy()
return best_columns, best_score, best_clusterer, best_preds
start_time = time.time()
log_scaled_all_data = transform_data(all_columns, stud_data)
# consider skipping the step below because it takes some time (~5 minutes)
best_columns, best_kmeans_score, best_kmeans_clusterer, best_kmeans_preds = choose_pair_columns_kmeans(all_columns, log_scaled_all_data)
# Instead run it single time (6 seconds only)
#best_columns = ['frac_1s_hints', 'max_probl_views']
#best_kmeans_clusterer, best_kmeans_score, best_kmeans_preds = kmeans(log_scaled_all_data[best_columns])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_columns)
print("\t>>> Best score:", best_kmeans_score)
print("\t>>> Best clusterer:", best_kmeans_clusterer)
print("\t>>> Best preds:", best_kmeans_preds)
def preds_to_indices(preds): # gives array and returns array of indices with 1s
new_list = []
for i, val in enumerate(preds):
if val == 1:
new_list.append(i)
return np.array(new_list)
"""
Explanation: Choose the pair of columns with best score:
End of explanation
"""
log_scaled_all_data.describe()
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
log_scaled_all_data_kmeans_1 = log_scaled_all_data.copy()[log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
plt.scatter(log_scaled_all_data_kmeans_0['frac_1s_hints'], \
log_scaled_all_data_kmeans_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_1['frac_1s_hints'], \
log_scaled_all_data_kmeans_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
plt.plot((0.14, 0.14), (0.001, 0.399), 'k--', c='blue')
plt.show()
print(log_scaled_all_data_kmeans_0.shape, log_scaled_all_data_kmeans_1.shape)
"""
Explanation: Visualising the KMeans clusters:
End of explanation
"""
def cols_iterate_kmeans(selected_columns, best_score, best_clusterer, best_preds):
all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', \
'num_hints', 'frac_corr_atts', 'frac_3s_atts', 'frac_1s_hints', \
'time_atts', 'time_hints', 'max_probl_views', 'max_atts']
columns_to_add = [a for a in all_columns if (a not in selected_columns)]
#print(columns_to_add)
for column in columns_to_add:
print("*"*40)
print("*** Trying to add column", column)
print("*"*40)
selected_columns.append(column)
log_scaled_data = transform_data(selected_columns, stud_data)
clusterer, score, preds = kmeans(log_scaled_data)
if score > best_score:
print("!!! Success !!!")
best_score = score
best_clusterer = clusterer
best_preds = preds
print("!!! New score is", best_score)
print("!!! New best clusterer is", best_clusterer)
print("!!! New best selected_columns are", selected_columns)
columns_to_add.remove(column)
else:
print("!!! Last score is equal or worse then our best one")
print("!!! According to Occam's razor, remove the column", column)
selected_columns.remove(column)
print("!!! Still the best selected columns are", selected_columns)
return selected_columns, best_score, best_clusterer, best_preds
# Just skip this step, it does not give new results:
kmeans_clusterer = best_kmeans_clusterer
kmeans_score = best_kmeans_score
kmeans_preds = best_kmeans_preds
selected_columns = best_columns # ['frac_1s_hints', 'max_probl_views']
new_columns, new_kmeans_score, new_kmeans_clusterer, new_kmeans_preds = cols_iterate_kmeans(selected_columns, kmeans_score, kmeans_clusterer, kmeans_preds)
if new_kmeans_score > kmeans_score:
print("+++ SUCCESS")
selected_columns = new_columns
best_kmeans_score = new_kmeans_score
best_kmeans_clusterer = new_kmeans_clusterer
best_kmeans_preds = new_kmeans_preds
else:
print("--- GIVE UP")
"""
Explanation: Then, consider adding one more column to further increase the score:
End of explanation
"""
def largest_cluster_fraction(preds):
'''
calculates the fraction of students that are in the largest group
Parameters
==========
preds : list
list of predictions
Returns
=======
fraction : float
largest fraction of students
best_i : integer
number of the largest group
'''
fraction = 0
ll = len(preds)
for i in np.unique(preds):
frac = len(preds[preds == i])/ll
if frac > fraction:
fraction = frac
best_i = i
return fraction, best_i
# Rewrite similar to kmeans procedure !!!
def meanshift(log_scaled_data):
'''
Apply MeanShift clustering algorithm to log_scaled_data
(transformed and scaled by transform_data() function)
Number of clusters is selected according to estimate_badwidth procedure
with quantiles in np.linspace(0.01, 0.99, 99)
Parameters
==========
log_scaled_data : pandas.DataFrame
data log-transormed and MinMaxScaler()-ed for KMeans clustering
Returns
=======
best_clusterer : sklearn Model
clustering algorithm with the largest Silhouette Coefficient
best_score : float
the largest value of the Silhouette Coefficient
best_preds_sorted : numpy.array
array with clustering predictions for log_scaled_data
(0 is the largest cluster, 1 is the second largest etc.)
cluster_frac : float
fraction of students inside the largest group
'''
start_time = time.time()
best_score = 0
best_cluster_frac = 0
for alpha in np.linspace(0.01, 0.99, 99):
bandwidth = estimate_bandwidth(log_scaled_data, quantile=alpha, n_samples=None, random_state=0)
clusterer = MeanShift(bandwidth=bandwidth, bin_seeding=True)
clusterer.fit(log_scaled_data)
preds = clusterer.fit_predict(log_scaled_data)
cluster_frac = largest_cluster_fraction(preds)[0]
# Calculate the mean silhouette coefficient for the number of clusters chosen
try:
score = silhouette_score(log_scaled_data, preds)
except ValueError:
score = 0
print(alpha, clusterer.cluster_centers_.shape[0], score, cluster_frac)
# setting cluster_frac > 0.85, the value obtained in KMeans algorithm for ['frac_1s_hints', 'max_probl_views']
if (best_score < score) and (cluster_frac < 0.85):
best_clusterer = clusterer
best_preds = preds
best_score = score
best_clusters = clusterer.cluster_centers_.shape[0]
best_cluster_frac = cluster_frac
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", best_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
cluster_frac = best_cluster_frac
return best_clusterer, best_score, best_preds_sorted, cluster_frac
# Running MeanShift is too slow: it takes about 9 min for a single pair,
# and produces poor results (largest score = 0.56 for reasonable max_fractions < 0.85)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds, _ = meanshift(log_scaled_data)
print(best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
"""
Explanation: As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.
Now, trying with different clusterers.
MeanShift:
End of explanation
"""
def gaussmix(log_scaled_data): # GaussianMixture
start_time = time.time()
max_score = 0
for n_clusters in range(2,6):
clusterer = GaussianMixture(random_state=0, n_init=50, n_components=n_clusters).fit(log_scaled_data)
preds = clusterer.predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_components, "clusters, the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = best_clusterer.predict(log_scaled_data)
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_gaussmix(log_scaled_data):
best_score = 0
print(">>> GaussianMixture:")
clusterer, score, preds = gaussmix(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# ~0.6 min running time but very small score (~0.39)
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds = run_clustering_gaussmix(log_scaled_data)
print(gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
"""
Explanation: GaussianMixture:
End of explanation
"""
def agglom(log_scaled_data): # AgglomerativeClustering with 'ward' connectivity
start_time = time.time()
max_score = 0
for n_clusters in range(2,3): # use only 2 clusters
connectivity = kneighbors_graph(log_scaled_data, n_neighbors=100, include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
clusterer = AgglomerativeClustering(n_clusters=n_clusters, \
linkage='ward', \
connectivity=connectivity)
preds = clusterer.fit_predict(log_scaled_data)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(log_scaled_data, preds)
print("For our model with", clusterer.n_clusters, "clusters, and the sihlouette score is", score)
if max_score < score:
best_clusterer = clusterer
# Predict the cluster for each data point
best_preds = preds
max_score = score
best_clusters = n_clusters
print('*'*68)
print("Our best model has", best_clusters, "clusters and sihlouette is", max_score)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
print('>'*68)
best_preds_sorted = replace_group_numbers(best_preds)
return best_clusterer, max_score, best_preds_sorted
def run_clustering_agglom(log_scaled_data):
best_score = 0
print(">>> AgglomerativeClustering:")
clusterer, score, preds = agglom(log_scaled_data)
if score > best_score:
best_clusterer = clusterer
best_score = score
best_preds = preds
print("Best clusterer is", best_clusterer)
print("Max score is", best_score)
print("Best preds is", best_preds)
return best_clusterer, best_score, best_preds
# Gives results very similar to KMeans but takes about 4x as long to run
start_time = time.time()
log_scaled_data = transform_data(best_columns, stud_data)
best_agglom_clusterer, best_agglom_score, best_agglom_preds = run_clustering_agglom(log_scaled_data)
print(best_agglom_clusterer, best_agglom_score, best_agglom_preds)
end_time = time.time()
print("Running time is {}s".format(end_time-start_time))
"""
Explanation: AgglomerativeClustering:
End of explanation
"""
best_agglom_preds_mask = preds_to_indices(best_agglom_preds)
log_scaled_data_agglom_0 = log_scaled_data.copy()[~log_scaled_data.index.isin(best_agglom_preds_mask)]
log_scaled_data_agglom_1 = log_scaled_data.copy()[log_scaled_data.index.isin(best_agglom_preds_mask)]
plt.scatter(log_scaled_data_agglom_0['frac_1s_hints'], \
log_scaled_data_agglom_0['max_probl_views'], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_data_agglom_1['frac_1s_hints'], \
log_scaled_data_agglom_1['max_probl_views'], \
alpha=0.6, s=15, c='grey')
plt.xlim([0.0, 0.6])
plt.ylim([0.0, 0.4])
plt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')
plt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel('frac_1s_hints', size=14)
ax.set_ylabel('max_probl_views', size=14)
#plt.plot((0.145, 0.145), (0.001, 0.399), 'k--', c='blue')
plt.show()
"""
Explanation: Visualising the AgglomerativeClustering clusters:
End of explanation
"""
best_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)
log_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]
# In this particular splitting, take drop=False to save the initial index
# (simplifying students recovery for step 2)
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=False)
log_scaled_all_data_kmeans_0.index
start_time = time.time()
best_kmeans_columns_0, \
best_kmeans_score_0, \
best_kmeans_clusterer_0, \
best_kmeans_preds_0 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0)
# best_kmeans_columns_0 = ['frac_3s_atts', 'max_probl_views']
# best_kmeans_clusterer_0, best_kmeans_score_0, best_kmeans_preds_0 = kmeans(log_scaled_all_data_kmeans_0[best_kmeans_columns_0])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0)
print("\t>>> Best score:", best_kmeans_score_0)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0)
print("\t>>> Best preds:", best_kmeans_preds_0)
print(sum(best_kmeans_preds_0), len(best_kmeans_preds_0), len(best_kmeans_preds_0[best_kmeans_preds_0 == 0]))
log_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=True)
"""
Explanation: Further clustering of obtained KMeans groups:
I start from group 0 that contains 6934 students:
End of explanation
"""
best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0)
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_01 = log_scaled_all_data_kmeans_0.copy()[log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
plt.scatter(log_scaled_all_data_kmeans_00[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_00[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_01[best_kmeans_columns_0[0]], \
log_scaled_all_data_kmeans_01[best_kmeans_columns_0[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0[0], size=14)
ax.set_ylabel(best_kmeans_columns_0[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
"""
Explanation: Visualise obtained clusters:
End of explanation
"""
len(best_kmeans_preds_0)
#best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0) # already implemented during group0 visualisation
log_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]
log_scaled_all_data_kmeans_00.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_00.index
start_time = time.time()
best_kmeans_columns_00, \
best_kmeans_score_00, \
best_kmeans_clusterer_00, \
best_kmeans_preds_00 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_00)
# best_kmeans_columns_00 = ['frac_3s_atts', 'time_hints']
# best_kmeans_clusterer_00, \
# best_kmeans_score_00, \
# best_kmeans_preds_00 = kmeans(log_scaled_all_data_kmeans_00[best_kmeans_columns_00])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_00)
print("\t>>> Best score:", best_kmeans_score_00)
print("\t>>> Best clusterer:", best_kmeans_clusterer_00)
print("\t>>> Best preds:", best_kmeans_preds_00)
print(sum(best_kmeans_preds_00), len(best_kmeans_preds_00), len(best_kmeans_preds_00[best_kmeans_preds_00 == 0]))
best_kmeans_preds_mask_00 = preds_to_indices(best_kmeans_preds_00)
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_001 = log_scaled_all_data_kmeans_00.copy()[log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
plt.scatter(log_scaled_all_data_kmeans_000[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_000[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_001[best_kmeans_columns_00[0]], \
log_scaled_all_data_kmeans_001[best_kmeans_columns_00[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_00[0], size=14)
ax.set_ylabel(best_kmeans_columns_00[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
"""
Explanation: As we see, group 01 contains more students with "gaming" behaviour, so I proceed with group 00:
End of explanation
"""
log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]
log_scaled_all_data_kmeans_000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_000.index
start_time = time.time()
best_kmeans_columns_000, \
best_kmeans_score_000, \
best_kmeans_clusterer_000, \
best_kmeans_preds_000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_000)
# best_kmeans_columns_000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_000, \
# best_kmeans_score_000, \
# best_kmeans_preds_000 = kmeans(log_scaled_all_data_kmeans_000[best_kmeans_columns_000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_000)
print("\t>>> Best score:", best_kmeans_score_000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_000)
print("\t>>> Best preds:", best_kmeans_preds_000)
print(sum(best_kmeans_preds_000), len(best_kmeans_preds_000), len(best_kmeans_preds_000[best_kmeans_preds_000 == 0]))
best_kmeans_preds_mask_000 = preds_to_indices(best_kmeans_preds_000)
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0001 = log_scaled_all_data_kmeans_000.copy()[log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
plt.scatter(log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[0]], \
log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[1]], \
alpha=0.6, s=15, c='grey')
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_000[0], size=14)
ax.set_ylabel(best_kmeans_columns_000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
"""
Explanation: So, there is a subgroup 001 of 1001 students that do not use many hints. What about the rest (000, 5482 students)?
End of explanation
"""
log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]
log_scaled_all_data_kmeans_0000.reset_index(inplace=True, drop=True)
log_scaled_all_data_kmeans_0000.index
start_time = time.time()
best_kmeans_columns_0000, \
best_kmeans_score_0000, \
best_kmeans_clusterer_0000, \
best_kmeans_preds_0000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0000)
# best_kmeans_columns_0000 = ['num_sess', 'num_probs']
# best_kmeans_clusterer_0000, \
# best_kmeans_score_0000, \
# best_kmeans_preds_0000 = kmeans(log_scaled_all_data_kmeans_0000[best_kmeans_columns_0000])
end_time = time.time()
print("\n\t>>> Exec. time\t:{}s".format(end_time-start_time))
print("\t>>> Best pair of cols:", best_kmeans_columns_0000)
print("\t>>> Best score:", best_kmeans_score_0000)
print("\t>>> Best clusterer:", best_kmeans_clusterer_0000)
print("\t>>> Best preds:", best_kmeans_preds_0000)
print(sum(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000), \
len(best_kmeans_preds_0000[best_kmeans_preds_0000 == 0]))
best_kmeans_preds_mask_0000 = preds_to_indices(best_kmeans_preds_0000)
log_scaled_all_data_kmeans_00000 = log_scaled_all_data_kmeans_0000.copy()[~log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
log_scaled_all_data_kmeans_00001 = log_scaled_all_data_kmeans_0000.copy()[log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]
plt.scatter(log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='lightgreen')
plt.scatter(log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[0]], \
log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[1]], \
alpha=0.6, s=15, c='grey')
# plt.xlim([0.0, 0.6])
# plt.ylim([0.0, 0.4])
# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')
# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')
ax = plt.gca()
ax.set_xlabel(best_kmeans_columns_0000[0], size=14)
ax.set_ylabel(best_kmeans_columns_0000[1], size=14)
#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')
plt.show()
"""
Explanation: Splitting group 0000 (students with large 'num_sess' and 'num_probs')
End of explanation
"""
group1_index = np.array(log_scaled_all_data_kmeans_1.index)
len(group1_index)
group2_index = np.array(log_scaled_all_data_kmeans_01['index'])
len(group2_index)
group3_index = np.array(log_scaled_all_data_kmeans_001['index'])
len(group3_index)
group4_index = np.array(log_scaled_all_data_kmeans_0001['index'])
len(group4_index)
group5_index = np.array(log_scaled_all_data_kmeans_00000['index'])
len(group5_index)
group6_index = np.array(log_scaled_all_data_kmeans_00001['index'])
len(group6_index)
def create_joint_cluster_index():
'''
Builds the joint cluster index and saves it to cluster_index_run1.csv for further analysis
'''
cluster_index_lst = []
for i in range(len(stud_data)+1):
if i in group1_index:
cluster_index_lst.append(1)
elif i in group2_index:
cluster_index_lst.append(2)
elif i in group3_index:
cluster_index_lst.append(3)
elif i in group4_index:
cluster_index_lst.append(4)
elif i in group5_index:
cluster_index_lst.append(5)
elif i in group6_index:
cluster_index_lst.append(6)
print(Counter(cluster_index_lst))
cluster_index = pd.Series(cluster_index_lst, dtype=int)
cluster_index.to_csv('cluster_index_run1.csv')
return
create_joint_cluster_index()
! ls -lh cluster_index_run1.csv
"""
Explanation: As we see, these two groups represent students with "intermediate experience" (00000) and "largest experience" (00001).
During this sensitivity check, I split 8082 students (90% of ASSISTments students) into 6 different groups:
- group 1, 1148 students with large 'frac_1s_hints' ("gaming" behaviour);
- group 2, 451 students with small 'frac_1s_hints' and large 'frac_3s_atts' ("gaming" behaviour);
- group 3, 1001 students with small 'time_hints' ("non-gaming" behaviour, small usage of hints);
- group 4, 2151 students with small 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, small experience);
- group 5, 1734 students with medium 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, medium experience);
- group 6, 1597 students with large 'num_sess' and 'num_probs' ("non-gaming" behaviour, large usage of hints, large experience).
The final result of this step is the joint cluster index that contains numbers 1-6 for each student:
End of explanation
"""
|
cathalmccabe/PYNQ | boards/Pynq-Z2/logictools/notebooks/wavedrom_tutorial.ipynb | bsd-3-clause | from pynq.lib.logictools.waveform import draw_wavedrom
"""
Explanation: logictools WaveDrom Tutorial
WaveDrom is a tool for rendering digital timing waveforms. The waveforms are defined in a simple textual format.
This notebook will show how to render digital waveforms using the pynq library.
The logictools overlay uses the same format as WaveDrom to specify and generate real signals on the board.
A full tutorial of WaveDrom can be found here
Step 1: Import the draw_wavedrom() method from the pynq library
End of explanation
"""
clock = {'signal': [{'name': 'clock_0', 'wave': 'hlhlhlhlhlhlhlhl'}],
'foot': {'tock': 1},
'head': {'text': 'Clock Signal'}}
draw_wavedrom(clock)
"""
Explanation: A simple function to add wavedrom diagrams into an jupyter notebook. It utilises the wavedrom java script library.
<font color="DodgerBlue">Example usage:</font>
```python
from pynq.lib.logictools.waveform import draw_wavedrom
clock = {'signal': [{'name': 'clk', 'wave': 'h....l...'}]}
draw_wavedrom(clock)
<font color="DodgerBlue">**Method:**</font>python
def draw_wavedrom(data, width=None):
# Note the optional argument width forces the width in pixels
```
Step 2: Specify and render a waveform
End of explanation
"""
pattern = {'signal': [{'name': 'clk', 'wave': 'hl' * 8},
{'name': 'clkn', 'wave': 'lh' * 8},
{'name': 'data0', 'wave': 'l.......h.......'},
{'name': 'data1', 'wave': 'h.l...h...l.....'}],
'foot': {'tock': 1},
'head': {'text': 'Pattern'}}
draw_wavedrom(pattern)
"""
Explanation: Notes on waveform specification
Step 3: Adding more signals to the waveform
End of explanation
"""
pattern_group = {'signal': [['Group1',
{'name': 'clk', 'wave': 'hl' * 8},
{'name': 'clkn', 'wave': 'lh' * 8},
{'name': 'data0', 'wave': 'l.......h.......'},
{'name': 'data1', 'wave': 'h.l...h...l.....'}],
{},
['Group2',
{'name': 'data2', 'wave': 'l...h..l.h......'},
{'name': 'data3', 'wave': 'l.h.' * 4}]],
'foot': {'tock': 1},
'head': {'text': 'Pattern'}}
draw_wavedrom(pattern_group)
"""
Explanation: Notes on waveform specification
Adding multiple wave groups and spaces
End of explanation
"""
from pynq.lib.logictools import Waveform
from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
loopback_test = {'signal': [
['stimulus',
{'name': 'output0', 'pin': 'D0', 'wave': 'lh' * 8},
{'name': 'output1', 'pin': 'D1', 'wave': 'l.h.' * 4},
{'name': 'output2', 'pin': 'D2', 'wave': 'l...h...' * 2}],
{},
['analysis',
{'name': 'input0', 'pin': 'D0'},
{'name': 'input1', 'pin': 'D1'},
{'name': 'input2', 'pin': 'D2'}]],
'foot': {'tock': 1},
'head': {'text': 'loopback_test'}}
waveform = Waveform(loopback_test)
waveform.display()
"""
Explanation: Notes on waveform specification
WaveDrom for real-time pattern generation and trace analysis
The logictools overlay uses WaveJSON format to specify and generate real signals on the board.
As shown in the figure above, the Pattern Generator is an output-only block that specifies a sequence of logic values (patterns) which appear on the output pins of the ARDUINO interface. The logictools API for Pattern Generator accepts WaveDrom specification syntax with some enhancements.
The Trace Analyzer is an input-only block that captures and records all the IO signals. These signals may be outputs driven by the generators or inputs to the PL that are driven by external circuits. The Trace Analyzer allows us to verify that the output signals we have specified from the generators are being applied correctly. It also allows us to debug and analyze the operation of the external interface.
The signals generated or captured by both the blocks can be displayed in the notebook by populating the WaveJSON dictionary that we have seen in this notebook. Users can access this dictionary through the provided API to extend or modify the waveform with special annotations.
We use a subset of the wave tokens that are allowed by WaveDrom to specify the waveforms for the Pattern Generator. However, users can call the draw_wavedrom() method on the dictionary populated by the Trace Analyzer to extend and modify the dictionary with annotations.
In the example below, we are going to generate 3 signals on the Arduino interface pins D0, D1 and D2 using the Pattern Generator. Since all IOs are accessible to the Trace analyzer, we will capture the data on the pins as well. This operation will serve as an internal loopback.
Step 1: Download the logictools overlay and specify the pattern
The pattern to be generated is specified in the waveJSON format. The Waveform class is used to display the specified waveform.
End of explanation
"""
pattern_generator = logictools_olay.pattern_generator
pattern_generator.trace(num_analyzer_samples=16)
pattern_generator.setup(loopback_test,
stimulus_group_name='stimulus',
analysis_group_name='analysis')
pattern_generator.run()
pattern_generator.show_waveform()
"""
Explanation: Note: Since there are no captured samples at this moment, the analysis group will be empty.
Notes on the enhanced WaveJSON specification format
Step 2: Run the pattern generator and trace the loopback signals.
This step populates the WaveJSON dict with the captured trace analyzer samples. The dict can now serve as an ouput that we can further modify. It is shown in the next step.
End of explanation
"""
import pprint
output_wavejson = pattern_generator.waveform.waveform_dict
pprint.pprint(output_wavejson)
"""
Explanation: Step 3: View the output waveJSON dict.
End of explanation
"""
state_list = ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7',
'S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7']
color_dict = {'white': '2', 'yellow': '3', 'orange': '4', 'blue': '5'}
output_wavejson['signal'].extend([{}, ['Annotation',
{'name': 'state',
'wave': color_dict['yellow'] * 8 +
color_dict['blue'] * 8,
'data': state_list}]])
"""
Explanation: Step 4: Extending the output waveJSON dict with state annotation
End of explanation
"""
draw_wavedrom(output_wavejson)
"""
Explanation: Note: The color_dict is a color code map as defined by WaveDrom
End of explanation
"""
|
hongsups/insightfl_shin | project/clean_biz_pattern_census.ipynb | mit | import requests
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import csv
import seaborn as sns
font = {'family' : 'Arial',
'weight' : 'bold',
'size' : 25}
matplotlib.rc('font', **font)
from census import Census
from us import states
import csv
"""
Explanation: Data munging for business pattern data
End of explanation
"""
# load zip codes
import pickle
with open('zipcode_final.txt', 'rb') as f:
zip_codes = pickle.load(f)
# concatenate tables with different years and zip codes
df = pd.DataFrame()
years = range(2011,2014)
for year in years:
temp = pd.read_csv('zbp'+ str(year-2000) +'detail.txt',error_bad_lines=False)
temp['year']=year
temp = temp.loc[temp['zip'].isin(map(int,zip_codes))]
df = pd.concat([df,temp],ignore_index=True)
df.head()
"""
Explanation: 1. Download the table from the website
zip code, business type, # of establishments by # of employees
2. Concatenate tables for corresponding zip codes
End of explanation
"""
"""
11 Agriculture, Forestry, Fishing and Hunting
21 Mining, Quarrying, and Oil and Gas Extraction
22 Utilities
#23 Construction
31-33 Manufacturing
#42 Wholesale Trade
#44-45 Retail Trade
48-49 Transportation and Warehousing
#51 Information
#52 Finance and Insurance
#53 Real Estate and Rental and Leasing
#54 Professional, Scientific, and Technical Services
#55 Management of Companies and Enterprises
56 Administrative and Support and Waste Management and Remediation Services
61 Educational Services
62 Health Care and Social Assistance
71 Arts, Entertainment, and Recreation
#72 Accommodation and Food Services
81 Other Services (except Public Administration)
92 Public Administration
potential parameters: 23, 42, 45, 51, 52, 53, 54, 55, 72
"""
def convert_business_type(old_code):
"""
Convert an N-digit NAICS code to 2 digits
http://www.census.gov/cgi-bin/sssd/naics/naicsrch?chart=2012
"""
# get the first two digits
new_code = old_code[:2]
# simplify:
# 31-33 Manufacturing
# 44-45 Retail Trade
# 48-49 Transportation and Warehousing
if new_code != '--':
new_int = int(new_code)
if new_int >= 31 and new_int <= 33:
new_code = '30'
elif new_int >= 44 and new_int <= 45:
new_code = '45'
elif new_int >= 48 and new_int <= 49:
new_code = '49'
return new_code
# create new column for simplified business_code
df['business_code']=df['naics'].apply(lambda x: convert_business_type(x))
# rename for consistency
df=df.rename(columns = {'zip':'zip_code'})
df.head()
# Add column that show potentially good indicators
# 23 Construction
# 42 Wholesale Trade
# 45 Retail Trade
# 51 Information
# 52 Finance and Insurance
# 53 Real Estate and Rental and Leasing
# 54 Professional, Scientific, and Technical Services
# 55 Management of Companies and Enterprises
# 72 Accommodation and Food Services
# good_biz_code_list = [23,42,45,51,52,53,54,55,72]
# save
df.to_csv('biz_sum.csv',encoding='utf-8',index=False)
df = pd.read_csv('biz_sum.csv')
df.head()
"""
Explanation: 3. Simplify the business code
End of explanation
"""
# simplify the table
df2 = df[['year','zip_code','business_code','est']]
df2.head()
# sum and normalize
# 1. Sum: use pandas groupby
gb = df2.groupby(('zip_code','year','business_code'),as_index=False)
gb_sum = gb.agg(np.sum)
gb_sum.head()
# Compute growth rate: divide year_(N+1) by year_N (>1: growth)
gb2 = gb_sum.groupby(('zip_code','year'))
# Create empty cell if the business type does not exist.
business_codes = [11, 21, 22, 23, 30, 42, 45, 49, 51, 52, 53, 54, 55, 56, 61, 62, 71, 72, 81, 99]
# for every year and every zip code, create same list of business_codes
# if not exist, est value = 0
years = range(2011,2014)
df_biz = pd.DataFrame()
for year in years:
for zip_code in zip_codes:
try:
a = gb2.get_group((int(zip_code),year))
est_list = []
temp = pd.DataFrame()
for i in range(len(business_codes)):
business_code = business_codes[i]
if str(business_code) in a['business_code'].values:
est_list.append(int(a[a['business_code']==str(business_code)]['est']))
else:
est_list.append(0)
temp['business_code']=pd.Series(business_codes)
temp['est']=pd.DataFrame(est_list)
temp['zip_code']=zip_code
temp['year']=year
df_biz = pd.concat([df_biz,temp],ignore_index=True)
except KeyError:
continue
# save
df_biz.to_csv('biz_sum_final.csv',encoding='utf-8',index=False)
"""
Explanation: 4. Fill the gaps for non-existing data
End of explanation
"""
df_biz.head()
gb_biz = df_biz.groupby(('year','zip_code','business_code'))
ref_years = [2011, 2012]
df_biz_growth = pd.DataFrame()
for ref_year in ref_years:
for zip_code in zip_codes:
for business_code in business_codes:
try:
prev_year = int(gb_biz.get_group((ref_year,zip_code,business_code))['est'].values)
this_year = int(gb_biz.get_group((ref_year+1,zip_code,business_code))['est'].values)
# use 'and', not '&': '&' binds tighter than '==', so the original
# bitwise form never triggered the zero-baseline branches
if prev_year == 0 and this_year > 0:
growth = 2
elif prev_year == 0 and this_year == 0:
growth = 0
else:
growth = (this_year - prev_year + 0.0) / prev_year
data = [dict(year = ref_year,
zip_code = zip_code,
business_code = business_code,
growth = growth)]
temp = pd.DataFrame(data)
df_biz_growth = pd.concat([df_biz_growth,temp],ignore_index=True)
except KeyError:
continue
print(df_biz_growth.head())
len(df_biz_growth)
df_biz_growth.to_csv('biz_growth.csv',encoding='utf-8',index=False)
gb3 = df_biz_growth.groupby(('year','zip_code'))
gb3.get_group((2012,'21201'))
# original zip code
len(zip_codes)
# new zip codes based on available data: there are fewer of them
len(df_biz_growth.zip_code.unique())
# we're getting only these zip codes
new_zip = df_biz_growth.zip_code.unique()
np.save('new_zip_codes', new_zip)
"""
load: np.load('new_zip_codes.npy')
"""
"""
Explanation: 5. Predictor: growth rate per business category
End of explanation
"""
|
tensorflow/neural-structured-learning | neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 Google LLC
End of explanation
"""
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
"""
Explanation: Graph regularization for image classification using synthesized graphs
By Sayak Paul
<br>
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/graph_keras_cnn_flowers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
This notebook is the counterpart of Graph regularization for sentiment
classification using synthesized graphs for image classification. In this notebook, we will build a flower classification model that can categorize images of flowers into discrete classes such as sunflowers, roses,
tulips, etc.
We will demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:
Create embeddings for each image sample in the input. This can be done using
pre-trained models such as EfficientNet,
Inception,
BiT etc.
Build a graph based on these embeddings by using a similarity metric such as
the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to
samples and edges in the graph correspond to similarity between pairs of
samples.
Generate training data from the above synthesized graph and sample features.
The resulting training data will contain neighbor features in addition to
the original node features.
Create a neural network as a base model using the Keras sequential,
functional, or subclass API.
Wrap the base model with the GraphRegularization wrapper class, which is
provided by the NSL framework, to create a new graph Keras model. This new
model will include a graph regularization loss as the regularization term in
its training objective.
Train and evaluate the graph Keras model.
Note: We expect that it would take readers about 1 hour to go through this
tutorial.
Requirements
Install the Neural Structured Learning package.
Install tensorflow-hub.
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
import tensorflow_datasets as tfds
# Resets notebook state
tf.keras.backend.clear_session()
tfds.disable_progress_bar()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
"""
Explanation: Dependencies and imports
End of explanation
"""
train_ds, validation_ds = tfds.load(
"tf_flowers",
split=["train[:85%]", "train[85%:]"],
as_supervised=True
)
"""
Explanation: Flowers dataset
The flowers dataset contains a total of 3670 images of flowers categorized into 5 classes -
daisy
dandelion
roses
sunflowers
tulips
The dataset is balanced, i.e., each class contains roughly the same number of examples. In the cells below, we first load the dataset using TensorFlow Datasets and then visualize a couple of examples from it.
The dataset does not come pre-split into training, validation, and test splits. However, when downloading the dataset, we can specify a splitting ratio. Here we'll be using a split of 85:15 for the training and validation sets.
End of explanation
"""
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image)
plt.title(int(label))
plt.axis("off")
"""
Explanation: After downloading the dataset and splitting it, we can visualize a few samples from it.
End of explanation
"""
num_train_examples = tf.data.experimental.cardinality(train_ds)
num_val_examples = tf.data.experimental.cardinality(validation_ds)
print(f"Total training examples: {num_train_examples}")
print(f"Total validation examples: {num_val_examples}")
"""
Explanation: In the next cell, we determine the number of samples present in the splits.
End of explanation
"""
IMG_SIZE = 224 #@param ["128", "224"] {type:"raw"}
PROJECTED_DIM = 128 #@param {type:"slider", min:128, max:1024, step:128}
#@markdown `IMG_SIZE` of 224 denotes the 224 $\times$ 224 resolution.
def create_feature_extractor_model():
"""Creates a feature extractor model with DenseNet121."""
inputs = tf.keras.layers.Input((IMG_SIZE, IMG_SIZE, 3))
densenet_model = tf.keras.applications.DenseNet121(weights="imagenet",
input_shape=(IMG_SIZE, IMG_SIZE, 3),
pooling="avg", include_top=False
)
densenet_model.trainable = False
x = tf.keras.applications.densenet.preprocess_input(inputs)
outputs = densenet_model(x, training=False)
return tf.keras.Model(inputs, outputs, name="densenet_feature_extractor")
feature_extractor = create_feature_extractor_model()
feature_extractor.summary()
"""
Explanation: Graph construction
Graph construction involves creating embeddings for the images and then using
a similarity function to compare the embeddings.
Create sample embeddings
We will use a pre-trained DenseNet121 model to create embeddings in the
tf.train.Example format for each sample in the input. We will store the
resulting embeddings in the TFRecord format along with an additional feature
that represents the ID of each sample. This is important and will allow us to match
sample embeddings with corresponding nodes in the graph later. We'll start this section with a utility function to build a feature extraction model.
One important detail to note here is that we are using random projections in order to reduce the dimensionality of the final vector coming out of the pre-trained model. The final embedding vector is 1024-dimensional. When the size of the dataset is large, such high-dimensional vectors can consume a lot of memory. Hence depending on your use-case, it might be a good idea to further reduce the dimensionality.
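The random-projection trick can be sketched in plain NumPy. The 1024 → 128 shapes below mirror the dimensions used later in this notebook, but the matrix and embedding are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)

original_dim = 1024   # e.g. the pooled DenseNet121 feature size
projected_dim = 128   # target size after projection

# A random Gaussian matrix approximately preserves pairwise distances
# (the Johnson-Lindenstrauss idea), which is all graph building needs.
projection = rng.standard_normal((original_dim, projected_dim))

embedding = rng.standard_normal(original_dim)  # a fake 1024-d embedding
compressed = embedding @ projection            # now 128-d

print(compressed.shape)  # (128,)
```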
End of explanation
"""
def resize(image, label):
"""Resizes the images to (IMG_SIZE x IMG_SIZE) size."""
image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))
return image, label
# Resize all the images to uniform shape so that they can
# be batched.
train_ds = train_ds.map(resize)
validation_ds = validation_ds.map(resize)
def _int64_feature(value):
"""Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
"""Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
"""Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(feature_extractor, image,
projection_matrix, record_id):
"""Create tf.Example containing the sample's embedding and its ID."""
image_features = feature_extractor(image[None, ...])
image_features_numpy = image_features.numpy().squeeze()
compressed_image_features = image_features_numpy.dot(projection_matrix)
features = {
"id": _bytes_feature(str(record_id)),
"embedding": _float_feature(compressed_image_features)
}
return tf.train.Example(features=tf.train.Features(feature=features))
def generate_random_projection_weights(original_dim=1024,
projected_dim=PROJECTED_DIM):
"""Generates a random projection matrix."""
random_projection_matrix = np.random.randn(
projected_dim, original_dim).T
return random_projection_matrix
def create_embeddings(feature_extractor, dataset, output_path,
starting_record_id):
"""Creates TFRecords with embeddings of the images."""
projection_matrix = generate_random_projection_weights()
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for image, _ in dataset:
example = create_embedding_example(feature_extractor,
image,
projection_matrix,
record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(feature_extractor, train_ds, "flowers_embeddings.tfr", 0)
"""
Explanation: We encourage you to try out other pre-trained models available in the tf.keras.applications module and also on TensorFlow Hub. We'll now write a couple of utility functions to create the sample embeddings for graph construction.
End of explanation
"""
similarity_threshold = 0.7
graph_builder_config = nsl.configs.GraphBuilderConfig(
similarity_threshold=similarity_threshold,
lsh_splits=10, lsh_rounds=15, random_seed=12345)
nsl.tools.build_graph_from_config(["flowers_embeddings.tfr"],
"flowers_graph_70.tsv",
graph_builder_config)
"""
Explanation: Graph building
Now that we have the sample embeddings, we will use them to build a similarity
graph, i.e, nodes in this graph will correspond to samples and edges in this
graph will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph building library to build a graph
based on sample embeddings. It uses
cosine similarity as the
similarity measure to compare embeddings and build edges between them. It also
allows us to specify a similarity threshold, which can be used to discard
dissimilar edges from the final graph. In this example, using 0.7 as the
similarity threshold and 12345 as the random seed, we end up with a graph that
has 493,623 bi-directional edges. Here we're using the graph builder's support
for locality-sensitive hashing
(LSH) to speed up graph building. For details on using the graph builder's LSH
support, see the
build_graph_from_config
API documentation.
End of explanation
"""
!wc -l flowers_graph_70.tsv
"""
Explanation: Each bi-directional edge is represented by two directed edges in the output TSV
file, so that file contains 493,623 * 2 = 987,246 total lines:
End of explanation
"""
def _bytes_feature_image(value):
"""Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value]))
def create_example(image, label, record_id):
"""Create tf.Example containing the image, label, and ID."""
features = {
"id": _bytes_feature(str(record_id)),
"image": _bytes_feature_image(image.numpy()),
"label": _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(dataset, record_path, starting_record_id):
"""Generates TFRecords from a tf.data.Dataset object."""
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for image, label in dataset:
image = tf.cast(image, tf.uint8)
image = tf.image.encode_jpeg(image, optimize_size=True,
chroma_downsampling=False)
example = create_example(image, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (images and labels) for training and validation
# data in TFRecord format.
next_record_id = create_records(train_ds,
"train_data.tfr", 0)
create_records(validation_ds, "validation_data.tfr",
next_record_id)
"""
Explanation: Sample features
We create sample features for our problem using the tf.train.Example format
and persist them in the TFRecord format. Each sample will include the
following three features:
id: The node ID of the sample.
image: A byte list containing raw image vectors.
label: A singleton int64 identifying the target class of the image.
End of explanation
"""
nsl.tools.pack_nbrs(
"train_data.tfr",
"",
"flowers_graph_70.tsv",
"nsl_train_data.tfr",
add_undirected_edges=True,
max_nbrs=3
)
!ls -lh *_data.tfr
"""
Explanation: A note on create_records():
Images are serialized as byte-strings in TFRecords. The tf.image.encode_jpeg() function allows us to do that compactly (when optimize_size is set to True). Inputs to that function should be integers, which is why we first cast the image to tf.uint8 and then pass it to tf.image.encode_jpeg(). To learn more, refer to this tutorial.
Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the
augmented training data for Neural Structured Learning. The NSL framework
provides a library to combine the graph and the sample features to produce
the final training data for graph regularization. The resulting training data
will include original sample features as well as features of their corresponding
neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors
per sample to augment training data with graph neighbors.
End of explanation
"""
NBR_FEATURE_PREFIX = "NL_nbr_"
NBR_WEIGHT_SUFFIX = "_weight"
"""
Explanation: Base model
We'll first build a model without graph regularization. We'll use a simple convolutional neural network (CNN) for the purpose of this tutorial. However, this can be easily replaced with more sophisticated network architectures.
Global variables
End of explanation
"""
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 5
self.num_train_examples = num_train_examples
self.num_val_examples = num_val_examples
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.3
self.num_neighbors = 2
### network architecture parameters
self.num_channels = 32
self.kernel_size = 3
### training parameters
self.train_epochs = 30
self.batch_size = 64
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
"""
Explanation: Hyperparameters
We will use an instance of HParams to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below:
num_classes: There are 5 classes.
num_train_examples: The number of training examples.
num_val_examples: The number of validation examples.
distance_type: This is the distance metric used to regularize the sample
with its neighbors.
graph_regularization_multiplier: This controls the relative weight of
the graph regularization term in the overall loss function.
num_neighbors: The number of neighbors used for graph regularization.
This value has to be less than or equal to the max_nbrs argument used
above when invoking nsl.tools.pack_nbrs.
num_channels: The number of channels to be used in the convolutional layers.
kernel_size: Kernel size to be used in the convolutional layers.
train_epochs: The number of training epochs.
batch_size: Batch size used for training and evaluation.
eval_steps: The number of batches to process before deeming evaluation
is complete. If set to None, all instances in the test set are evaluated.
End of explanation
"""
default_jpeg_value = tf.ones((IMG_SIZE, IMG_SIZE, 3), dtype=tf.uint8)
default_jpeg_value *= 255
default_jpeg_value = tf.image.encode_jpeg(default_jpeg_value, optimize_size=True,
chroma_downsampling=False)
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
feature_spec = {
'image': tf.io.FixedLenFeature([], tf.string,
default_value=default_jpeg_value),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'image')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature([], tf.string,
default_value=default_jpeg_value)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
labels = features.pop('label')
# We need to convert the byte-strings back to images.
features['image'] = tf.image.decode_jpeg(features['image'], channels=3)
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'image')
features[nbr_feature_key] = tf.image.decode_jpeg(features[nbr_feature_key],
channels=3)
return features, labels
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(HPARAMS.batch_size * 10)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('nsl_train_data.tfr', True)
validation_dataset = make_dataset('validation_data.tfr')
"""
Explanation: Prepare the data
Now, we will prepare our dataset with TensorFlow's data module (tf.data). This will include parsing the TFRecords, structuring them, batching them, and shuffling them if necessary.
In the next two cells, we first define a default value for the image examples which will come in handy when parsing the neighbor examples. Then we write a utility function to create the tf.data.Dataset objects which will be fed to our model in a moment.
End of explanation
"""
sample = next(iter(train_dataset))
sample[0].keys()
plt.figure(figsize=(20, 20))
plt.subplot(1, 3, 1)
plt.imshow(sample[0]["NL_nbr_0_image"][0])
neighbor_one_weight = float(sample[0]["NL_nbr_0_weight"][0].numpy())
plt.title(f"Neighbor 1 with weight: {neighbor_one_weight:.3f}", fontsize=14)
plt.axis("off")
plt.subplot(1, 3, 2)
plt.imshow(sample[0]["NL_nbr_1_image"][0])
neighbor_two_weight = float(sample[0]["NL_nbr_1_weight"][0].numpy())
plt.title(f"Neighbor 2 with weight: {neighbor_two_weight:.3f}", fontsize=14)
plt.axis("off")
plt.subplot(1, 3, 3)
plt.imshow(sample[0]["image"][0])
plt.title(f"Original image with label: {int(sample[1][0])}", fontsize=14)
plt.axis("off")
plt.show()
"""
Explanation: Visualization
In this section, we visualize the neighbor images computed by Neural Structured Learning during graph construction.
End of explanation
"""
def make_cnn_model():
"""Creates a simple CNN model."""
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(
input_shape=(IMG_SIZE, IMG_SIZE, 3), name='image'),
tf.keras.layers.experimental.preprocessing.Rescaling(scale=1. / 255),
tf.keras.layers.Conv2D(
HPARAMS.num_channels, HPARAMS.kernel_size, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(
HPARAMS.num_channels, HPARAMS.kernel_size, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.GlobalAvgPool2D(),
tf.keras.layers.Dense(HPARAMS.num_classes)
])
return model
model = make_cnn_model()
model.summary()
"""
Explanation: In the figure above, weight denotes the similarity strength of the examples.
Model training (no graph regularization)
With our datasets prepared, we are now ready to train our shallow CNN model without graph regularization.
End of explanation
"""
model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
"""
Explanation: After building and initializing the model in Keras, we can compile it and finally train it.
End of explanation
"""
history_dict = history.history
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
"""
Explanation: Plot training metrics
End of explanation
"""
# Build a new base CNN model.
base_reg_model = make_cnn_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
"""
Explanation: Graph regularization
We are now ready to try graph regularization using the base model that we built
above. We will use the GraphRegularization wrapper class provided by the
Neural Structured Learning framework to wrap the base (CNN) model to include
graph regularization. The rest of the steps for training and evaluating the
graph-regularized model are similar to that of the base model.
Create graph-regularized model
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because the original model has already been trained
for a few epochs, and reusing it to create a graph-regularized model would not
make for a fair comparison.
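Conceptually, the wrapper adds a regularization term that penalizes the distance between a sample and its graph neighbors, weighted by edge weight and scaled by the multiplier. The sketch below is a simplified NumPy illustration of that idea, not NSL's actual implementation:

```python
import numpy as np

def graph_reg_term(sample, neighbors, weights, multiplier=0.3):
    # Weighted mean squared-L2 distance between a sample and its neighbors.
    dists = [w * np.sum((sample - nbr) ** 2) for nbr, w in zip(neighbors, weights)]
    return multiplier * np.mean(dists)

sample = np.array([0.2, 0.8])
neighbors = [np.array([0.2, 0.8]), np.array([0.6, 0.4])]
weights = [1.0, 0.5]  # a non-existent neighbor would get weight 0.0

loss = graph_reg_term(sample, neighbors, weights)
print(loss)  # approximately 0.024
```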
End of explanation
"""
graph_reg_history_dict = graph_reg_history.history
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
"""
Explanation: Plot training metrics of the graph-regularized model
End of explanation
"""
mne-tools/mne-tools.github.io | 0.17/_downloads/3ade035c928216ab770a554e9a0c0cef/plot_stats_cluster_spatio_temporal.ipynb | bsd-3-clause

# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
"""
Explanation: Permutation t-test on source data with spatio-temporal clustering
Tests if the evoked response is significantly different between
two conditions across subjects. Here just for demonstration purposes
we simulate data from multiple subjects using one subject's data.
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
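The core idea of the permutation test can be sketched with a toy one-sample sign-flip test in NumPy (ignoring the spatio-temporal clustering layered on top by the MNE function; the subject differences below are made-up numbers):

```python
import itertools
import numpy as np

# Made-up per-subject condition differences, all in the same direction.
diffs = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3])
observed = abs(diffs.mean())

# Under H0 the sign of each subject's difference is arbitrary, so enumerate
# every sign-flip pattern (2**7 = 128) and count those at least as extreme.
count = 0
for signs in itertools.product([-1, 1], repeat=len(diffs)):
    if abs((np.array(signs) * diffs).mean()) >= observed:
        count += 1

p_value = count / 2 ** len(diffs)
print(p_value)  # 0.015625: only the all-plus and all-minus patterns qualify
```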
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
"""
Explanation: Set parameters
End of explanation
"""
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
"""
Explanation: Read epochs for all channels, removing a bad one
End of explanation
"""
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50, npad='auto')
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50, npad='auto')
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
"""
Explanation: Transform to source space
End of explanation
"""
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
"""
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph
them to the same cortical space (e.g. fsaverage). For example purposes,
we will simulate this by just having each "subject" have the same
response (just noisy in source space) here.
<div class="alert alert-info"><h4>Note</h4><p>Note that for 7 subjects with a two-sided statistical test, the minimum
significance under a permutation test is only p = 1/(2 ** 6) = 0.015625,
which is large.</p></div>
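That minimum p-value is just permutation counting: with n subjects there are 2**n sign-flip patterns, and only the two all-same-sign patterns are as extreme as the observed data.

```python
n_subjects = 7
n_permutations = 2 ** n_subjects         # 128 sign-flip patterns
min_p_two_sided = 2 / n_permutations     # all-plus and all-minus
print(n_permutations, min_p_two_sided)   # 128 0.015625
```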
End of explanation
"""
# Read the source space we are morphing to
src = mne.read_source_spaces(src_fname)
fsave_vertices = [s['vertno'] for s in src]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir).morph_mat
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
"""
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
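To see why a single sparse product does the heavy lifting, here is a toy morph with SciPy (the shapes and weights below are made up; a real morph matrix maps thousands of vertices):

```python
import numpy as np
from scipy import sparse

# Toy "morph matrix": maps 3 source vertices onto 4 target vertices,
# with the second target vertex blending two source vertices.
morph = sparse.csr_matrix(np.array([
    [1.0, 0.0, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]))

data = np.array([2.0, 4.0, 6.0])  # one value per source vertex
morphed = morph.dot(data)
print(morphed)  # [2. 3. 4. 6.]
```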
End of explanation
"""
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
"""
Explanation: Finally, we want to compare the overall activity levels in each condition,
the diff is taken along the last axis (condition). The negative sign makes
it so condition1 > condition2 shows up as "red blobs" (instead of blue).
End of explanation
"""
print('Computing connectivity.')
connectivity = mne.spatial_src_connectivity(src)
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=1,
threshold=t_threshold, buffer_size=None,
verbose=True)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
"""
Explanation: Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal)
End of explanation
"""
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(
hemi='both', views='lateral', subjects_dir=subjects_dir,
time_label='Duration significant (ms)', size=(800, 800),
smoothing_steps=5, clim=dict(kind='value', pos_lims=[0, 1, 40]))
# brain.save_image('clusters.png')
"""
Explanation: Visualize the clusters
End of explanation
"""
chrismcginlay/crazy-koala | jupyter/09_print_formatting_decimal_places.ipynb | gpl-3.0

answer = 4/7
print(answer)
"""
Explanation: Pretty Printing - Decimal Places
Python will print real numbers to many decimal places, usually more than is necessary.
Run this code to see well over 10 decimal places
End of explanation
"""
answer = 4/7
answer_2dp = format(answer, '0.2f')
print(answer_2dp)
"""
Explanation: Now, suppose you would like to have just two decimal places. We can use Python's format() function to help.
Run this code, it produces just two decimal places, rounding up as appropriate.
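The same rounding can also be written with an f-string, which embeds the format specification directly in the string:

```python
answer = 4 / 7
answer_2dp = f"{answer:0.2f}"
print(answer_2dp)  # 0.57
```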
End of explanation
"""
average_price = 300/11 #just some random calculation with lots of decimal places
average_price_2dp = format(average_price, '0.2f')
print("Average price £"+average_price_2dp)
"""
Explanation: Next up, let's see how to print money! (without getting arrested):
End of explanation
"""
KMFleischer/PyEarthScience | Tutorial/02_Numpy_basics.ipynb | mit

import numpy as np
"""
Explanation: 2. Numpy basics
Numpy is a python module for scientific computing (linear algebra, Fourier transform, random number capabilities), and efficient handling of N-dimensional arrays. Have a closer look at https://numpy.org/.
2.1 Import numpy
First, we have to import the numpy module to our environment and, just to make it shorter, we want to use an abbreviation np instead of numpy.
End of explanation
"""
temp_list = [10, 12, 14, 29, 18, 21]
temp_array = np.array(temp_list)
print('--> temp_list: ',temp_list,type(temp_list))
print('--> temp_array: ',temp_array,type(temp_array))
"""
Explanation: 2.2 Numpy arrays
In addition to its capabilities for performing calculations, NumPy's great strength is the efficient handling and modification of arrays.
2.2.1 Create an array
In the previous chapter you learned how to create and use lists, which can serve as a starting point for creating a NumPy array with np.array.
End of explanation
"""
ndim = temp_array.ndim
shape = temp_array.shape
size = temp_array.size
dtype = temp_array.dtype
print('--> number of array dimensions: ', ndim)
print('--> shape of array dimensions: ', shape)
print('--> size of array dimensions: ', size)
print('--> dtype of array dimensions: ', dtype)
"""
Explanation: 2.2.2 Array attributes
Numpy provides attributes to retrieve an array's number of dimensions (ndim), shape, size, and data type (dtype).
End of explanation
"""
print('--> type(temp_array): ',type(temp_array))
print('--> temp_array.astype(float)): ',temp_array.astype(float))
"""
Explanation: You can convert the array from the given data type to another. Here, we want to convert it from integer to floating point values.
End of explanation
"""
print('--> temp_array[:]: ', temp_array[:])
print('--> temp_array[1]): ', temp_array[1])
print('--> temp_array[0:3]: ', temp_array[0:3])
print('--> temp_array[:3]: ', temp_array[:3])
print('--> temp_array[3:]: ', temp_array[3:])
print('--> temp_array[-1]: ', temp_array[-1])
print('--> temp_array[-3:-1]:', temp_array[-3:-1])
"""
Explanation: 2.2.3 Indexing and slicing
Indexing and slicing of arrays is equivalent to indexing and slicing of lists (see chapter 01).
End of explanation
"""
a_array = np.arange(5)
print(a_array)
"""
Explanation: 2.3 Create and reshape arrays
To generate an evenly spaced array, let's say (0, 1, 2, 3, 4), you can use the np.arange function. np.arange is similar to the built-in range function, but it creates an array instead of a list.
To create an array that starts at 0 and has 5 elements, you can use a single input value as parameter:
End of explanation
"""
a_array = np.arange(2, 10, 2)
print(a_array)
"""
Explanation: To create an array by giving the start, end (exclusive), and increment values:
start value = 2
end value = 10
increment = 2
End of explanation
"""
a_array = np.arange(2, 10, 1.5)
print(a_array)
"""
Explanation: To create an array with an increment of type float:
End of explanation
"""
b_array = np.linspace(1, 2, num=11)
print(b_array)
"""
Explanation: If you want to create an array of n elements without knowing the exact increment, the np.linspace function is what you are looking for. You have to give the start value, the end value, and the number of elements to be generated.
For example
start value = 1
end value = 2
number of elements = 11
End of explanation
"""
a = np.arange(0, 12, 1)
print('a:\n ', a)
a_2d = a.reshape(4, 3)
print('\na_2d:\n ', a_2d)
a_3d = a.reshape(2, 3, 2)
print('\na_3d:\n ', a_3d)
"""
Explanation: 2.3.1 Reshape
So far we've only worked with one-dimensional arrays in this section, and we'll see how easy it is to convert them into two- or three-dimensional arrays.
End of explanation
"""
a_1d = a_3d.flatten()
print('a_1d: ', a_1d)
"""
Explanation: Of course, you can convert an n-dimensional array to a one-dimensional array using the flatten method.
End of explanation
"""
a_1d = np.ravel(a_3d)
print('a_1d: ', a_1d)
"""
Explanation: There is also a numpy function that does this; it is called ravel.
End of explanation
"""
m = np.array([2.1, 3.0, 4.7, 5.3, 6.2])
n = np.arange(5)
print('m = ', m)
print('n = ', n)
mn_add = m + n
mn_mul = m * n
print('m + n = ', mn_add)
print('m * n = ', mn_mul)
"""
Explanation: Adding or multiplying arrays of the same size is simple.
We define two arrays, each has 5 elements, and compute the sum and the product of them.
End of explanation
"""
m_min = m.min()
m_max = m.max()
m_sum = m.sum()
m_mean = m.mean()
m_std = m.std()
m_round = m.round()
print('m_min: ',m_min)
print('m_max: ',m_max)
print('m_sum: ',m_sum)
print('m_mean: ',m_mean)
print('m_std: ',m.std())
print('m_round: ',m.round())
"""
Explanation: 2.4 Mathematical attributes and functions
Methods of a numpy array can be used to retrieve the minimum or maximum value, or to compute the sum, mean, or standard deviation of an array.
End of explanation
"""
m_sqrt = np.sqrt(m)
m_exp = np.exp(m)
mn_add = np.add(m,n)
print('m_sqrt: ', m_sqrt)
print('m_exp: ', m_exp)
print('mn_add: ', mn_add)
data_radians = np.linspace(0., 6., 5)
print('data_radians: ', data_radians)
data_sin = np.sin(data_radians)
print('data_sin: ', data_sin)
data_cos = np.cos(data_radians)
print('data_cos: ', data_cos)
data_degrees = np.degrees(data_sin)
print('data_degrees: ', data_degrees)
"""
Explanation: Numpy also provides many mathematical routines, see https://docs.scipy.org/doc/numpy/reference/routines.math.html.
End of explanation
"""
zeros = np.zeros((3,4))
print('zeros: ',zeros)
zeros = np.zeros((3,4),dtype=int)
print('zeros type integer: ',zeros)
ones = np.ones((3,4))
print('ones: ',ones)
ones = np.ones((3,4),dtype=int)
print('ones type integer: ',ones)
"""
Explanation: 2.5 Creating arrays for masking
Being able to create arrays containing only zeros or only ones can be very helpful if you want to mask your data. Numpy provides the functions np.zeros and np.ones.
End of explanation
"""
v = np.vstack((zeros,ones))
h = np.hstack((zeros,ones))
print('np.vstack(zeros,ones): \n', v)
print('np.hstack(zeros,ones): \n', h)
"""
Explanation: 2.5 Array stacking
Numpy gives you the possibility to stack arrays along different axes using the functions np.vstack and np.hstack.
End of explanation
"""
a_origin = np.arange(12).reshape(3,4)
print('a_origin: \n', a_origin)
b_copy_of_a = a_origin
b_copy_of_a[1,3] = 999
print('b_copy_of_a: ', b_copy_of_a)
print('a_origin: ', a_origin)
"""
Explanation: 2.6 Shallow and deep copy
Maybe you have noticed that copying an array this way doesn't give you a physical copy. If you change the copy, the original array is changed too, because the copy is not independent data; it is essentially another reference to the original array. This is called a shallow copy.
End of explanation
"""
a_origin = np.arange(12).reshape(3,4)
c_deep_copy = a_origin.copy()
c_deep_copy[1,3] = 222
print('a_origin: \n', a_origin)
print('c_deep_copy: \n', c_deep_copy)
"""
Explanation: To create a physical copy, a so-called deep copy, use the array's copy method (or the np.copy function).
End of explanation
"""
x = np.array([-1, 2, 0, 5, -3, -2])
x_ge0 = np.where(x >= 0, x, -9999)
print('x: ', x)
print('x_ge0: ', x_ge0)
"""
Explanation: 2.7 Most useful functions
Programs that work with one- or multi-dimensional arrays can become slow when you have to find values in given ranges or replace invalid data with missing values. Usually, you would think of a for or while loop to go through all elements of an array, but numpy has efficient functions to do it in just one line.
Useful functions
np.where
np.argwhere
np.all
np.any
The np.where function evaluates a logical expression on the array. Where it is True, the value is left untouched; where it is False, the value is replaced by the given value, for example a kind of missing value. Note that this is NOT the same as a netCDF missing value (see masked arrays below).
End of explanation
"""
x_ind = np.argwhere(x < 0)
print('--> indices x < 0: \n', x_ind)
y = x
y[x_ind] = -9999
print('y[x_ind] where x < 0: \n', y)
"""
Explanation: In the example above, the values of the array were located and directly changed where the condition is False. But sometimes you want to retrieve the indices of the values instead of the values themselves, because you need the same indices again later. In that case, use the np.argwhere function with a logical condition.
End of explanation
"""
if((x < 0).any()):
print('some elements are less 0')
else:
print('no values are less 0')
if((x > 0).all()):
print('all elements are greater than 0')
else:
print('not all values are greater than 0')
"""
Explanation: To check, for instance, whether the values of an array are less than 0, you might try it like below. Ah, no, that would be too easy.
```
if(x < 0):
print('some elements are less 0')
else:
print('no values are less 0')
if(x > 0):
print('all elements are greater than 0')
else:
print('not all values are greater than 0')
```
The result would be the following error:
```
ValueError Traceback (most recent call last)
<ipython-input-23-6f0311fb54a3> in <module>
----> 1 if(x < 0):
2 print('some elements are less 0')
3 else:
4 print('no values are less 0')
5
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
The last line of the error message gives you the right hint to use the array functions any or all.
End of explanation
"""
if(np.any(x < 0)):
print('some elements are less 0')
else:
print('no values are less 0')
if(np.all(x > 0)):
print('all elements are greater than 0')
else:
print('not all values are greater than 0')
"""
Explanation: Or you can use the numpy functions np.any and np.all.
End of explanation
"""
lines = np.loadtxt('/Users/k204045/data/Station_data/pw.dat', dtype='str', skiprows=1)
ID = lines[:,0]
lat = lines[:,1]
lon = lines[:,2]
pw = lines[:,3]
print('ID: \n', ID)
print('lat: \n', lat)
print('lon: \n', lon)
print('pw: \n', pw)
"""
Explanation: 2.8 Read data from CSV file
Numpy also provides functions to read data from files. Our next example shows how to read data from a CSV file. The input CSV file pw.dat looks like
ID LAT LON PW
BLAC 36.75 -97.25 48.00
BREC 36.41 -97.69 46.30
BURB 36.63 -96.81 49.80
...
We want to read all values, and because of the mixed data types in the file, we read everything as strings. We don't need the header line, so we skip it.
End of explanation
"""
data = np.loadtxt('/Users/k204045/data/Station_data/pw.dat', usecols=(1,2,3), skiprows=1)
print()
print('--> data: \n', data)
print()
print('--> lat: \n', data[:,0])
print('--> lon: \n', data[:,1])
print('--> pw: \n', data[:,2])
"""
Explanation: If you don't need the IDs then you can directly read the lat, lon, and pw values using the usecols parameter. There is no need to use the dtype parameter anymore because the default data type is float, and that's what we want.
End of explanation
"""
IDs = np.loadtxt('/Users/k204045/data/Station_data/pw.dat', dtype='str', usecols=(0), skiprows=1)
print('IDs: \n', IDs)
"""
Explanation: However, we want to have the IDs, too. It's never too late...
But again, we have to tell numpy to use the data type string.
End of explanation
"""
field = np.arange(1,9,1).reshape((4,2))
mask = np.array([[0,0],[1,0],[1,1],[1,0]])
mask_field = np.ma.MaskedArray(field,mask)
print('field: \n', field)
print('mask: \n', mask)
print('mask_field: \n', mask_field)
"""
Explanation: 2.9 Masking
In our daily work we are confronted with data which we want to mask to see only the values in sections we need. Masking is sometimes tricky and you have to take care.
In the following example we try to demonstrate how to mask a 2-dimensional array by a given mask array containing zeros and ones, where 0 means 'don't mask', and 1 means 'mask'.
End of explanation
"""
A = np.random.randint(-3, high=5, size=10)
B = np.random.randint(-4, high=4, size=10)
print('--> A: \n', A)
print('--> B: \n', B)
"""
Explanation: For the next example we want to get data from one array depending on data of a second array.
To create two arrays with random data of type integer we use numpy's random generator.
End of explanation
"""
ind_ge = list(np.greater_equal(A,B))
ind_lt = list(np.less(A,B))
print('--> ind_ge: \n', ind_ge)
print('--> ind_lt: \n', ind_lt)
"""
Explanation: Now, we want only the values of array A which are
1. greater than or equal to the values of array B
2. less than the values of array B
First, we have to find the indices of those values. Numpy has routines to do that for us, provided that both arrays have the same shape.
End of explanation
"""
ind_ge = list(A>=B)
ind_lt = list(A<B)
print('--> ind_ge: \n', ind_ge)
print('--> ind_lt: \n', ind_lt)
"""
Explanation: This is the same as
End of explanation
"""
A_ge = A[ind_ge]
A_lt = A[ind_lt]
print('--> A_ge: \n', A_ge)
print('--> A_lt: \n', A_lt)
"""
Explanation: Use these indices to get the data we want.
End of explanation
"""
A_ge2 = np.ma.MaskedArray(A, ind_ge)
A_lt2 = np.ma.MaskedArray(A, ind_lt)
print('--> A_ge2: \n', A_ge2)
print('--> A_lt2: \n', A_lt2)
print(type(A_ge))
print(type(A_ge2))
"""
Explanation: In this case we get only the values of array A that conform to the condition, not the complete masked array. To keep the full array with a mask, use numpy.ma.MaskedArray.
End of explanation
"""
C = np.random.randint(-2, high=1, size=10)
C_count_nonzero = np.count_nonzero(C)
C_nonzero_ind = np.nonzero(C)
C2 = C[C_nonzero_ind]
C3 = np.ma.MaskedArray(C, C==0)
print('--> C: ', C)
print('--> C_count_nonzero: ', C_count_nonzero)
print('--> C_nonzero_ind: ', C_nonzero_ind)
print('--> C2: ', C2)
print('--> C3: ', C3)
"""
Explanation: Here is just a brief example in order to locate all the values of an array that are not equal to zero. Of course, it also shows how to locate the values equal to zero.
Count and locate the non zero values of an array, select them, and mask the array to save the shape of the array.
End of explanation
"""
Z_count_zero = np.count_nonzero(C==0)
Z_zero_ind = np.argwhere(C==0)
Z = C[Z_zero_ind]
Z2 = np.ma.MaskedArray(C, C!=0)
print('--> Z_count_zero: ', Z_count_zero)
print('--> Z_zero_ind: ', Z_zero_ind.flatten())
print('--> Z: ', Z.flatten())
print('--> Z2: ', Z2)
"""
Explanation: Now, we look at the zeros.
Count and locate the zero values of an array, select them, and mask the array to save the shape of the array.
End of explanation
"""
empty_array = np.full(100,1.0e20)
print(empty_array)
"""
Explanation: 2.10 Some hints
You've learned to generate random arrays and arrays of zeros or ones, but one important kind of array is still missing: the array filled with missing values.
To generate an array that contains only missing values, use the numpy routine np.full.
End of explanation
"""
|
drpngx/tensorflow | tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb | apache-2.0 | from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
"""
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table align="left"><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on Github</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
End of explanation
"""
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g., "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language.
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
# Spanish sentences
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
"""
Explanation: Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
Pad each sentence to a maximum length.
End of explanation
"""
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
"""
Explanation: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
End of explanation
"""
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
"""
Explanation: Create a tf.data dataset
End of explanation
"""
def gru(units):
# If you have a GPU, we recommend using CuDNNGRU (provides a 3x speedup over GRU);
# the code below automatically does that.
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.enc_units)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.dec_units)
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.W1 = tf.keras.layers.Dense(self.dec_units)
self.W2 = tf.keras.layers.Dense(self.dec_units)
self.V = tf.keras.layers.Dense(1)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we are doing this to perform addition to calculate the score
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, max_length, hidden_size)
score = tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * max_length, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc(output)
return x, state, attention_weights
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.dec_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
"""
Explanation: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using Bahdanau attention. Lets decide on notation before writing the simplified form:
FC = Fully connected (dense) layer
EO = Encoder output
H = hidden state
X = input to the decoder
And the pseudo-code:
score = FC(tanh(FC(EO) + FC(H)))
attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, hidden_size). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1.
embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector)
This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
End of explanation
"""
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
"""
Explanation: Define the optimizer and the loss function
End of explanation
"""
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
total_loss += (loss / int(targ.shape[1]))
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step())
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
loss.numpy() / int(targ.shape[1])))
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss/len(input_tensor)))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
"""
Explanation: Training
Pass the input through the encoder which return encoder output and the encoder hidden state.
The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply them to the optimizer to backpropagate.
End of explanation
"""
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
plt.show()
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
translate('hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('¿todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate('trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
"""
Explanation: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note: The encoder output is calculated only once for one input.
End of explanation
"""
|
texib/spark_tutorial | 4.AnalysisArticle_Content.ipynb | gpl-2.0 | def parseRaw(json_map):
url = json_map['url']
content = json_map['html']
return (url,content)
"""
Explanation: Parse Json
End of explanation
"""
import json
import pprint
pp = pprint.PrettyPrinter(indent=2)
path = "./pixnet.txt"
all_content = sc.textFile(path).map(json.loads).map(parseRaw)
"""
Explanation: Load the raw data
End of explanation
"""
def getContent(x):
from bs4 import BeautifulSoup
soup = BeautifulSoup(x)
text = soup.getText().replace('\n','').replace('\r','').replace(' ','').replace('\t','')
import jieba
r = list()
for term in jieba.cut(text):
if len(term) > 1 and checkword(term): r.append(term)
return r
def checkword(x):
return all(u'\u4e00' <= c <= u'\u9fff' for c in x)
"""
Explanation: Use BeautifulSoup and Jieba to process the article content
BeautifulSoup
Official Doc
BeautifulSoup is a powerful tool commonly used to parse HTML or XML documents. Just pass in well-formed markup as a string, and you can operate on every block of the document through object attributes (e.g. soup.title.name, soup.p, soup.b.attrs, soup.p['class'], ...).
Useful functions
soup.find_all('a'): get a list
soup.{element}.get_text(): get a string
soup.{element}.get('href'): get a string
soup.{element}.contents: get the element's children as a list
soup.{element}.contents[0].name
soup.descendants: lets you iterate over all of a tag's children
soup.prettify(): show the document in a viewer-friendly format
Jieba
Official Doc
FUKUBALL Blog
Jieba is a well-known open-source Chinese word segmentation system. It defaults to Simplified Chinese, but it supports configuring a Traditional Chinese dictionary or a custom one (e.g. a self-built Taiwanese dictionary). Supported features include basic segmentation, part-of-speech tagging, and TF-IDF keyword extraction.
Useful functions
jieba.cut(sentence, cut_all=False): basic segmentation
False (accurate mode): segments the text as precisely as possible
True (full mode): lists every possible segmentation
jieba.cut_for_search(sentence): search-engine mode
Based on accurate mode, long words are cut again to improve recall; suitable for search-related applications.
jieba.set_dictionary('dict.txt'): specify a dictionary
jieba.posseg.cut(): returns segments with part-of-speech tags (each result has word and flag attributes, where flag is the POS tag)
jieba.analyse.extract_tags(sentence, tag_num): extract keywords from an article
End of explanation
"""
parsed = all_content.mapValues(lambda x : getContent(x))
print 'url:',parsed.first()[0]
print 'term:',
for term in parsed.first()[1][:10] :
print term ,
"""
Explanation: Print the first record
End of explanation
"""
for i in parsed.map(lambda x: x[1]).flatMap(lambda x : x).take(10):
print i
"""
Explanation: Complete the following code to compute global term frequencies and extract the 10 most frequent terms
End of explanation
"""
from operator import add
top_term = parsed.map(
lambda x: x[1]).flatMap(
lambda x : x).map(
lambda x: (x,1)).reduceByKey(
add).sortBy(
lambda x: x[1],ascending=False)
for term in top_term.take(10):
print term[0] , term[1]
"""
Explanation: Take the top 10 terms, sorted by frequency from highest to lowest
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
no_urls_no_tags = " ".join(top_term.map(lambda x : x[0]).take(30))
wordcloud = WordCloud(
font_path='./cwTeXQFangsong-Medium.ttf',
background_color='white',
width=600,
height=600
).generate(no_urls_no_tags)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
"""
Explanation: Generate a WordCloud
End of explanation
"""
top_term2 = parsed.map(
lambda x: x[1]).flatMap(
lambda x : x).map(
lambda x: (x,1)).reduceByKey(
add).sortBy(
lambda x: x[1],ascending=True)
top_term2.first()
no_urls_no_tags = " ".join(top_term2.map(lambda x : x[0]).take(40))
wordcloud = WordCloud(
font_path='./cwTeXQFangsong-Medium.ttf',
background_color='white',
width=600,
height=600
).generate(no_urls_no_tags)
plt.imshow(wordcloud)
plt.axis('off')
plt.show()
"""
Explanation: Exercise: reverse the sort order (ascending), take the 40 least frequent terms, and draw the WordCloud
End of explanation
"""
|
daniestevez/jupyter_notebooks | Linrad resampler.ipynb | gpl-3.0 | samp_rate = 48000
pulse_on = samp_rate//100
pulse_off = samp_rate//10 - pulse_on
duration = 600 # seconds
n_pulses = duration * 10
amplitude = 0.01
pulse = np.array([1]*pulse_on + [0]*pulse_off, dtype='float32')
i = amplitude*np.tile(pulse, n_pulses)
iq = np.zeros((len(i),2), dtype='float32')
iq[:,0] = i
wavfile.write('/home/daniel/pulses.wav', rate=samp_rate, data=iq)
del iq
del i
"""
Explanation: Generate a WAV IQ file at 48ksps containing pulses with 10Hz repetition rate. The duration of each pulse is 10ms.
End of explanation
"""
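The pulse-train geometry defined above can be sanity-checked with a little arithmetic: 10 ms on, 90 ms off, 10 Hz repetition, 600 s total.

```python
# Sanity check of the pulse-train dimensions used in the generator cell above.
samp_rate = 48000
pulse_on = samp_rate // 100              # 480 samples = 10 ms
pulse_off = samp_rate // 10 - pulse_on   # 4320 samples = 90 ms
n_pulses = 600 * 10                      # 10 Hz repetition for 600 s
total = (pulse_on + pulse_off) * n_pulses
print(total, total / samp_rate)  # 28800000 samples, 600.0 seconds
```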
import matplotlib.pyplot as plt
# from scipy.signal import hilbert  # needed only if the envelope line below is enabled

def plot_audio(file):
    # Skip the 40-byte header of the WAV file and map the samples directly
    audio = np.memmap(file, offset=0x28, dtype=np.int16)
    #audio = np.abs(hilbert(audio)) # improves display, but takes computation time
    pulse_len = 4800
    lines = len(audio)//pulse_len
    plt.imshow(np.abs(audio)[:lines*pulse_len].reshape(lines, pulse_len), cmap='viridis', vmin=0, vmax=1e4)
    del audio
"""
Explanation: Function to stack up the pulses and plot.
End of explanation
"""
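The reshape trick used in plot_audio, in miniature: a 1-D stream of three 4-sample pulse periods is stacked into a (3, 4) image, so pulses in successive periods line up vertically.

```python
import numpy as np

# Toy version of the stacking done by plot_audio above.
stream = np.array([9, 0, 0, 0, 9, 0, 0, 0, 9, 0, 0, 0])
pulse_len = 4
lines = len(stream) // pulse_len
image = stream[:lines * pulse_len].reshape(lines, pulse_len)
print(image[:, 0])  # the pulse column: [9 9 9]
```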
plot_audio('/tmp/output-file-input.wav')
"""
Explanation: We open the WAV IQ file in Linrad using adwav. The output soundcard is a snd-aloop. The pulses are tuned in SSB mode and the audio output is recorded at 48ksps in Linrad using 'W'.
The pulses are stacked and plotted. They are slanted, which means that the resampling rate is constant but not correct (not 1:1 from input to output). The results are the same regardless of whether ntpd is running.
End of explanation
"""
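The slant gives a direct way to estimate the resampling-rate error: if the pulse column drifts by some number of samples per 0.1 s row, the fractional rate error is drift/row_length. The drift value below is made up for illustration, not measured from the actual recording.

```python
# Sketch: read the rate error off the slanted plot.
samples_per_row = 4800   # one 0.1 s pulse period at 48 ksps
drift_per_row = 2.0      # assumed drift in samples per row (hypothetical)
rate_error = drift_per_row / samples_per_row
ppm = rate_error * 1e6
print(ppm)  # about 417 ppm
```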
plot_audio('/tmp/output-alsa-ntpd-off.wav')
"""
Explanation: Now we set up the audio input in Linrad to use a snd-aloop device. The WAV IQ file is played into Linrad through snd-aloop using aplay -D plughw:1,0 ~/pulses.wav. The results seem to depend on whether ntpd is running, but now we always see a line with changing slope, meaning that the resampling rate is being adjusted.
This is with ntpd not running.
End of explanation
"""
plot_audio('/tmp/output-alsa-ntpd-on.wav')
"""
Explanation: This is with ntpd running. The resampling rate variations sometimes seem much greater.
End of explanation
"""
|
rainyear/pytips | Tips/2016-04-07-Thread-vs-Coroutine-ii.ipynb | mit | def jump_range(upper):
index = 0
while index < upper:
jump = yield index
if jump is None:
jump = 1
index += jump
jump = jump_range(5)
print(jump)
print(jump.send(None))
print(jump.send(3))
print(jump.send(None))
"""
Explanation: Python Threads and Coroutines (2)
After translating the article on how Python 3.5 coroutines work, I tried coroutine-based asynchronous development with the Tornado + Motor stack and really did feel the benefits coroutines bring (at least syntactically :D). As for how the async/await syntax rose, step by step, from the original yield generators up to Python's async/await compound statements, the translated article covers that in great detail. We know that the essence of a coroutine is:
allowing multiple entry points for suspending and resuming execution at certain locations.
Since a coroutine allows multiple entry points for suspending and resuming a program, the first thing that comes to mind is naturally the generator:
End of explanation
"""
def wait_index(i):
# processing i...
return (yield i)
def jump_range(upper):
index = 0
while index < upper:
jump = yield from wait_index(index)
if jump is None:
jump = 1
index += jump
jump = jump_range(5)
print(jump)
print(jump.send(None))
print(jump.send(3))
print(jump.send(None))
"""
Explanation: Later, the yield from syntax was added, which lets generators be chained together:
End of explanation
"""
class Wait(object):
"""
    Because the coroutine protocol only allows `await` to be followed by an
    awaitable object, and an awaitable must either implement an __await__
    method that returns an iterator or itself be a coroutine object,
    we implement a makeshift awaitable object here.
"""
def __init__(self, index):
self.index = index
def __await__(self):
return (yield self.index)
async def jump_range(upper):
index = 0
while index < upper:
jump = await Wait(index)
if jump is None:
jump = 1
index += jump
jump = jump_range(5)
print(jump)
print(jump.send(None))
print(jump.send(3))
print(jump.send(None))
"""
Explanation: yield from together with send already seems to satisfy the requirements that define a coroutine. Initially, the @types.coroutine decorator was indeed used to turn generators into coroutines; since Python 3.5, the dedicated async/await syntax has replaced @types.coroutine/yield from:
End of explanation
"""
import asyncio
import time
import types
@types.coroutine
def _sum(x, y):
print("Compute {} + {}...".format(x, y))
yield time.sleep(2.0)
return x+y
@types.coroutine
def compute_sum(x, y):
result = yield from _sum(x, y)
print("{} + {} = {}".format(x, y, result))
loop = asyncio.get_event_loop()
loop.run_until_complete(compute_sum(0,0))
"""
Explanation: Compared with threads
The execution flow of a coroutine looks like this:
End of explanation
"""
import asyncio
import time
# The example above eased the transition from generators; below everything switches to async/await syntax
async def _sum(x, y):
print("Compute {} + {}...".format(x, y))
await asyncio.sleep(2.0)
return x+y
async def compute_sum(x, y):
result = await _sum(x, y)
print("{} + {} = {}".format(x, y, result))
start = time.time()
loop = asyncio.get_event_loop()
tasks = [
asyncio.ensure_future(compute_sum(0, 0)),
asyncio.ensure_future(compute_sum(1, 1)),
asyncio.ensure_future(compute_sum(2, 2)),
]
loop.run_until_complete(asyncio.wait(tasks))
loop.close()
print("Total elapsed time {}".format(time.time() - start))
"""
Explanation: This diagram (from: PyDocs: 18.5.3. Tasks and coroutines) clearly depicts the execution of a coroutine scheduled by the event loop. In the example above, the event loop's queue held only a single coroutine; to compare against the thread-based concurrency example from the previous part, we simply add more coroutines to the event loop's task queue:
End of explanation
"""
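The same fan-out can also be written with the newer asyncio.run and asyncio.gather helpers (Python 3.7+, an assumption beyond the loop-management style used above); this sketch shortens the sleep to 0.1 s:

```python
import asyncio
import time

# Modern sketch of the concurrent fan-out above: three coroutines whose
# sleeps overlap, so total wall time is roughly one sleep, not three.
async def compute_sum(x, y):
    await asyncio.sleep(0.1)   # stands in for the 2 s sleep above
    return x + y

async def main():
    return await asyncio.gather(compute_sum(0, 0), compute_sum(1, 1), compute_sum(2, 2))

start = time.time()
results = asyncio.run(main())
elapsed = time.time() - start
print(results)  # [0, 2, 4]
```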
|
belavenir/Udacity_P1_Lane_Detetion | find_lane.ipynb | gpl-3.0 | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) #call as plt.imshow(gray, cmap='gray') to show a grayscaled image
"""
Explanation: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
End of explanation
"""
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, λ)
# Step 1: Detect segments on a single image
def detect_seg(frame):
#grayscale
gray = grayscale(frame)
x_end = gray.shape[1] #960
y_end = gray.shape[0] #540
# Gaussian blur
kernel_size = 5
blur_gray = gaussian_blur(gray,kernel_size)
# Edge contours
low_threshold = 50
high_threshold =150
edges = canny(blur_gray, low_threshold, high_threshold)
# Marked region
horizon_line = 320
vertices = np.array([[(55,y_end),(450,horizon_line),(490,horizon_line),(x_end,y_end)]], dtype = np.int32)
marked_region = region_of_interest(edges,vertices)
# Lines
line_edges = hough_lines(marked_region, rho = 2, theta = np.pi/180,\
threshold = 10, min_line_len =50, \
max_line_gap = 20)
frame = weighted_img(line_edges,frame)
return frame
# Result of Step 2
plt.imshow(detect_seg(image))
"""
Explanation: Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
"""
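Of the OpenCV functions listed, cv2.inRange() is the one the helpers do not exercise; it keeps pixels whose channels fall within per-channel bounds. A numpy-only sketch of the same masking (the pixel values are made up):

```python
import numpy as np

# numpy sketch of cv2.inRange-style color selection: keep near-white pixels.
img = np.array([[[250, 250, 250], [30, 30, 30]],
                [[255, 255, 200], [255, 255, 255]]], dtype=np.uint8)
lower = np.array([200, 200, 200])
upper = np.array([255, 255, 255])
mask = np.all((img >= lower) & (img <= upper), axis=-1)
print(mask)
```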
# Test segment-based lane detection on video
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
result = detect_seg(image)
return result
yellow_output_seg = 'yellow_seg.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output_seg, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output_seg))
# Step2: Extrapolate lanes from the horizon to the base of frame
def extrapo(x1, y1, x2, y2, horizon_line, frame_height):
    if x1 == x2:  # vertical segment: slope is undefined, just stretch the endpoints
        if y1 < y2:
            y1 = horizon_line
            y2 = frame_height
        else:
            y2 = horizon_line
            y1 = frame_height
        return x1, y1, x2, y2  # return early to avoid dividing by x2 - x1 == 0
if y1 < y2:
slope = (y2-y1)/(x2-x1)
x1 = ((horizon_line-y1)/slope)+ x1
y1 = horizon_line
x2 = ((frame_height-y2)/slope) + x2
y2 = frame_height
else:
slope = (y2-y1)/(x2-x1)
x2 = ((horizon_line-y2)/slope) + x2
y2 = horizon_line
x1 = ((frame_height - y1)/slope) + x1
y1 = frame_height
return x1, y1, x2, y2
# Step 3: Initial extended lane detection function
def detect(frame):
#grayscale
gray = grayscale(frame)
x_end = gray.shape[1] #960
y_end = gray.shape[0] #540
# Gaussian blur
kernel_size = 3
blur_gray = gaussian_blur(gray,kernel_size)
# Edge contours
low_threshold = 50
high_threshold =150
edges = canny(blur_gray, low_threshold, high_threshold)
# Marked region
horizon_line = 0.6*y_end
vertices = np.array([[(0,y_end),(x_end/2-20,horizon_line),(x_end/2+20,horizon_line),(x_end,y_end)]], dtype = np.int32)
marked_region = region_of_interest(edges,vertices)
# Lines
line_edges = cv2.HoughLinesP(marked_region, rho = 2, theta = np.pi/180,\
threshold = 10, minLineLength =45, \
maxLineGap = 20)
# Set up a slope threshold to filter useless lines
#min_slope = abs((y_end - horizon_line)*2/x_end)
min_theta = 0.45
#plt.imshow(line_edges)
if line_edges is not None:
left_bound = None #left-most of right line
right_bound = None #right_most of left line
        dist1 = x_end / 2 # frame centre line is x = x_end/2
dist2 = x_end / 2
for line in line_edges:
for x1, y1, x2, y2 in line:
slope = (y2-y1)/(x2-x1)
theta = np.abs(np.arctan2((y2-y1),(x2-x1)))
if theta > min_theta:
if slope > 0: # right lane
dist = x1+(y_end-y1)/(y2-y1)*(x2-x1)-x_end/2 #baseline y= y_end, dist between centra frame & right lane
if dist < dist1:
right_bound = (x1, y1, x2, y2)
dist1 = dist
if slope < 0:
dist = x_end/2 - x1 - (y_end-y1)/(y2-y1)*(x2-x1)
if dist < dist2:
left_bound = (x1, y1, x2, y2)
dist2 = dist
line_img = np.zeros((*gray.shape,3), dtype=np.uint8)
if left_bound is not None:
left_bound = extrapo(left_bound[0], left_bound[1], left_bound[2], left_bound[3], horizon_line, y_end)
left_bound = list(map(int,left_bound))
cv2.line(line_img, (left_bound[0],left_bound[1]), (left_bound[2], left_bound[3]), [255,0,0], 8)
if right_bound is not None:
right_bound = extrapo(right_bound[0], right_bound[1], right_bound[2], right_bound[3], horizon_line, y_end)
right_bound = list(map(int, right_bound))
cv2.line(line_img, (right_bound[0],right_bound[1]), (right_bound[2], right_bound[3]), [255,0,0], 12)
frame = weighted_img(line_img,frame)
return frame
# Step 4: Test on single image & result
plt.imshow(detect(image))
import os
files = os.listdir("test_images/")
newpath = "./test_images_copy/"
if not os.path.exists(newpath):
os.makedirs(newpath)
print(newpath)
# Step 5: Run on all test_image and save results
for pic in files:
frame = mpimg.imread("test_images/"+pic)
lanes = detect(frame)
filename = pic[:-4]+'_copy'+'.jpg'
cv2.imwrite(newpath+filename,lanes)
"""
Explanation: <p style="color:red;"> NOTE: First result "yellow_seg.mp4" (w.r.t. Criteria II), which is equivalent to the given video sample "raw-lines-example.mp4"</p>
End of explanation
"""
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image with lines are drawn on lanes)
result = detect(image)
return result
"""
Explanation: Run your solution on all test_images and make copies into the test_images directory.
Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
End of explanation
"""
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
"""
Explanation: Let's try the one with the solid white lane on the right first ...
<p style="color:red;"> NOTE: Second result w.r.t. Criteria III - connect/extrapolate lane segments together;</p>
End of explanation
"""
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
"""
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
"""
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
"""
Explanation: At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
"""
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
"""
Explanation: Reflections
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
### <p style="color:red;">REFLECTIONS</p>
1. Pipeline
<p>Step 1: Grayscale</p>
<p>Step 2: Gaussian filter</p>
<p>Step 3: Canny edge detection</p>
<p>Step 4: Set the region of interest using a horizon line and a minimum slope w.r.t. that line</p>
<p>Step 5: Divide segments into left/right line groups according to the sign of the slope</p>
<p>Step 6: Take the right-most bound from the left-line group and the left-most bound from the right-line group</p>
<p>Step 7: Extrapolate each bound across the fixed region</p>
2. How to improve
How could the robustness be improved?
Derive the desired slope from the candidate line edges, in particular for curved lanes
In my case the detected lane is not very continuous, because its bound changes a lot from frame to frame. A better way would be to take a mean slope over each left/right line group to keep the result smooth across frames. My bound-based approach is still useful, though, because in real driving we mostly care about the lane's inner bound.
Reduce the perspective effect of the frame
The perspective effect means that in the video the lanes appear to converge toward the horizon line; in practice, that horizon line is tricky to select.
Filter out illumination disturbances (e.g. the shadows in the challenge video)
In the challenge video it is obvious that the algorithm does not work well under shadows. It seems difficult to remove this disturbance merely by changing the smoothing filter (Gaussian, median). Perhaps a tracking method, or a different kind of filter? I'm not sure.
3. Conclusion
Region-of-interest-based lane detection works for straight lanes but is not well suited to curved lanes. The region of interest is tricky to define, and the parameters of cv2.HoughLinesP require trial and error: mainly setting minLineLength to about 20~30% of the total line length and maxLineGap to between 5~10% of the expected total line length.
Submission
If you're satisfied with your video outputs it's time to submit! Submit this ipython notebook for review.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation
"""
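The mean-slope improvement discussed in the reflections can be sketched as follows; the segment coordinates are made up, and each Hough segment is (x1, y1, x2, y2):

```python
import numpy as np

# Average slope/intercept over the left-lane segments, then extrapolate the
# single averaged line between the horizon and the frame bottom.
segments = np.array([[10.0, 540.0, 200.0, 350.0],
                     [20.0, 530.0, 210.0, 345.0]])   # hypothetical segments
slopes = (segments[:, 3] - segments[:, 1]) / (segments[:, 2] - segments[:, 0])
intercepts = segments[:, 1] - slopes * segments[:, 0]
m, b = slopes.mean(), intercepts.mean()

y_top, y_bot = 324.0, 540.0                          # horizon and frame bottom
x_top, x_bot = (y_top - b) / m, (y_bot - b) / m
print(m, b, (x_top, y_top), (x_bot, y_bot))
```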
|
dsquareindia/gensim | docs/notebooks/doc2vec-IMDB.ipynb | lgpl-2.1 | import locale
import glob
import os.path
import requests
import tarfile
dirname = 'aclImdb'
filename = 'aclImdb_v1.tar.gz'
locale.setlocale(locale.LC_ALL, 'C')
# Convert text to lower-case and strip punctuation/symbols from words
def normalize_text(text):
norm_text = text.lower()
# Replace breaks with spaces
norm_text = norm_text.replace('<br />', ' ')
# Pad punctuation with spaces on both sides
for char in ['.', '"', ',', '(', ')', '!', '?', ';', ':']:
norm_text = norm_text.replace(char, ' ' + char + ' ')
return norm_text
if not os.path.isfile('aclImdb/alldata-id.txt'):
if not os.path.isdir(dirname):
if not os.path.isfile(filename):
# Download IMDB archive
url = 'http://ai.stanford.edu/~amaas/data/sentiment/' + filename
r = requests.get(url)
with open(filename, 'wb') as f:
f.write(r.content)
tar = tarfile.open(filename, mode='r')
tar.extractall()
tar.close()
# Concat and normalize test/train data
folders = ['train/pos', 'train/neg', 'test/pos', 'test/neg', 'train/unsup']
alldata = u''
for fol in folders:
temp = u''
output = fol.replace('/', '-') + '.txt'
# Is there a better pattern to use?
txt_files = glob.glob('/'.join([dirname, fol, '*.txt']))
for txt in txt_files:
with open(txt, 'r', encoding='utf-8') as t:
control_chars = [chr(0x85)]
t_clean = t.read()
for c in control_chars:
t_clean = t_clean.replace(c, ' ')
temp += t_clean
temp += "\n"
temp_norm = normalize_text(temp)
with open('/'.join([dirname, output]), 'w', encoding='utf-8') as n:
n.write(temp_norm)
alldata += temp_norm
with open('/'.join([dirname, 'alldata-id.txt']), 'w', encoding='utf-8') as f:
for idx, line in enumerate(alldata.splitlines()):
num_line = "_*{0} {1}\n".format(idx, line)
f.write(num_line)
import os.path
assert os.path.isfile("aclImdb/alldata-id.txt"), "alldata-id.txt unavailable"
"""
Explanation: gensim doc2vec & IMDB sentiment dataset
TODO: section on introduction & motivation
TODO: prerequisites + dependencies (statsmodels, patsy, ?)
Requirements
Following are the dependencies for this tutorial:
- testfixtures
- statsmodels
Load corpus
Fetch and prep exactly as in Mikolov's go.sh shell script. (Note this cell tests for existence of required files, so steps won't repeat once the final summary file (aclImdb/alldata-id.txt) is available alongside this notebook.)
End of explanation
"""
import gensim
from gensim.models.doc2vec import TaggedDocument
from collections import namedtuple
SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')
alldocs = [] # will hold all docs in original order
with open('aclImdb/alldata-id.txt', encoding='utf-8') as alldata:
for line_no, line in enumerate(alldata):
tokens = gensim.utils.to_unicode(line).split()
words = tokens[1:]
tags = [line_no] # `tags = [tokens[0]]` would also work at extra memory cost
split = ['train','test','extra','extra'][line_no//25000] # 25k train, 25k test, 25k extra
sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//12500] # [12.5K pos, 12.5K neg]*2 then unknown
alldocs.append(SentimentDocument(words, tags, split, sentiment))
train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:] # for reshuffling per pass
print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
"""
Explanation: The data is small enough to be read into memory.
End of explanation
"""
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing
cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "this will be painfully slow otherwise"
simple_models = [
# PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size
Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
# PV-DBOW
Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
# PV-DM w/average
Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]
# speed setup by sharing results of 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs) # PV-DM/concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
model.reset_from(simple_models[0])
print(model)
models_by_name = OrderedDict((str(model), model) for model in simple_models)
"""
Explanation: Set-up Doc2Vec Training & Evaluation Models
Approximating experiment of Le & Mikolov "Distributed Representations of Sentences and Documents", also with guidance from Mikolov's example go.sh:
./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1
Parameter choices below vary:
100-dimensional vectors, as the 400d vectors of the paper don't seem to offer much benefit on this task
similarly, frequent word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
cbow=0 means skip-gram which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
a min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each doc vectors themselves)
End of explanation
"""
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec
models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
"""
Explanation: Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)
End of explanation
"""
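What the concatenating wrapper produces for one document is simply the per-model document vectors stacked end to end; a sketch with toy 3-d vectors instead of the 100-d ones trained above:

```python
import numpy as np

# Stand-ins for one document's vector from each of two models.
vec_dbow = np.array([0.0, 1.0, 2.0])
vec_dmm = np.array([3.0, 4.0, 5.0])
combined = np.concatenate([vec_dbow, vec_dmm])  # one longer feature vector
print(combined.shape)  # (6,)
```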
import numpy as np
import statsmodels.api as sm
from random import sample
# for timing
from contextlib import contextmanager
from timeit import default_timer
import time
@contextmanager
def elapsed_timer():
start = default_timer()
elapser = lambda: default_timer() - start
yield lambda: elapser()
end = default_timer()
elapser = lambda: end-start
def logistic_predictor_from_data(train_targets, train_regressors):
logit = sm.Logit(train_targets, train_regressors)
predictor = logit.fit(disp=0)
#print(predictor.summary())
return predictor
def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
"""Report error rate on test_doc sentiments, using supplied model and train_docs"""
train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
train_regressors = sm.add_constant(train_regressors)
predictor = logistic_predictor_from_data(train_targets, train_regressors)
test_data = test_set
if infer:
if infer_subsample < 1.0:
test_data = sample(test_data, int(infer_subsample * len(test_data)))
test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
else:
test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_docs]
test_regressors = sm.add_constant(test_regressors)
# predict & evaluate
test_predictions = predictor.predict(test_regressors)
corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
errors = len(test_predictions) - corrects
error_rate = float(errors) / len(test_predictions)
return (error_rate, errors, len(test_predictions), predictor)
"""
Explanation: Predictive Evaluation Methods
Helper methods for evaluating error rate.
End of explanation
"""
from collections import defaultdict
best_error = defaultdict(lambda :1.0) # to selectively-print only best errors achieved
from random import shuffle
import datetime
alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes
print("START %s" % datetime.datetime.now())
for epoch in range(passes):
shuffle(doc_list) # shuffling gets best results
for name, train_model in models_by_name.items():
# train
duration = 'na'
train_model.alpha, train_model.min_alpha = alpha, alpha
with elapsed_timer() as elapsed:
train_model.train(doc_list)
duration = '%.1f' % elapsed()
# evaluate
eval_duration = ''
with elapsed_timer() as eval_elapsed:
err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if err <= best_error[name]:
best_error[name] = err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))
if ((epoch + 1) % 5) == 0 or epoch == 0:
eval_duration = ''
with elapsed_timer() as eval_elapsed:
infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
eval_duration = '%.1f' % eval_elapsed()
best_indicator = ' '
if infer_err < best_error[name + '_inferred']:
best_error[name + '_inferred'] = infer_err
best_indicator = '*'
print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))
print('completed pass %i at alpha %f' % (epoch + 1, alpha))
alpha -= alpha_delta
print("END %s" % str(datetime.datetime.now()))
"""
Explanation: Bulk Training
Using explicit multiple-pass, alpha-reduction approach as sketched in gensim doc2vec blog post – with added shuffling of corpus on each pass.
Note that vector training is occurring on all documents of the dataset, which includes all TRAIN/TEST/DEV docs.
Evaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates-of-relative-improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly-inferred TEST vectors.
(On a 4-core 2.6Ghz Intel Core i7, these 20 passes training and evaluating 3 main models takes about an hour.)
End of explanation
"""
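The linear alpha-reduction schedule driven by the loop above can be tabulated directly: start at 0.025 and step down by a fixed delta each pass.

```python
# The learning-rate schedule implied by the training loop's parameters.
alpha, min_alpha, passes = 0.025, 0.001, 20
alpha_delta = (alpha - min_alpha) / passes
schedule = [alpha - i * alpha_delta for i in range(passes)]
print(schedule[0], schedule[-1])  # 0.025 on the first pass, 0.0022 on the last
```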
# print best error rates achieved
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
print("%f %s" % (rate, name))
"""
Explanation: Achieved Sentiment-Prediction Accuracy
End of explanation
"""
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
inferred_docvec = model.infer_vector(alldocs[doc_id].words)
print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
"""
Explanation: In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under 10% error rate, still a ways from the paper's 7.42%.
Examining Results
Are inferred vectors close to the precalculated ones?
End of explanation
"""
import random
doc_id = np.random.randint(simple_models[0].docvecs.count) # pick random doc, re-run cell for more examples
model = random.choice(simple_models) # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count) # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
"""
Explanation: (Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.)
Do close documents seem more related than distant ones?
End of explanation
"""
word_models = simple_models[:]
import random
from IPython.display import HTML
# pick a random word with a suitable number of occurrences
while True:
word = random.choice(word_models[0].wv.index2word)
if word_models[0].wv.vocab[word].count > 10:
break
# or uncomment below line, to just pick a word from the relevant domain:
#word = 'comedy/drama'
similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ','),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" +
"</th><th>".join([str(model) for model in word_models]) +
"</th></tr><tr><td>" +
"</td><td>".join(similars_per_model) +
"</td></tr></table>")
print("most similar words for '%s' (%d occurrences)" % (word, simple_models[0].wv.vocab[word].count))
HTML(similar_table)
"""
Explanation: (Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.)
Do the word vectors show useful similarities?
End of explanation
"""
# assuming something like
# https://word2vec.googlecode.com/svn/trunk/questions-words.txt
# is in local directory
# note: this takes many minutes
for model in word_models:
sections = model.accuracy('questions-words.txt')
correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
"""
Explanation: Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their randomly initialized values – unless you ask for it with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task.
Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.)
Are the word vectors from this dataset any good at analogies?
End of explanation
"""
This cell left intentionally erroneous.
"""
Explanation: Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)
Slop
End of explanation
"""
from gensim.models import KeyedVectors
w2v_g100b = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
"""
Explanation: To mix the Google dataset (if locally available) into the word tests...
End of explanation
"""
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
"""
Explanation: To get copious logging output from above steps...
End of explanation
"""
%load_ext autoreload
%autoreload 2
"""
Explanation: To auto-reload python code while developing...
End of explanation
"""
|
mediagit2016/workcamp-maschinelles-lernen-grundlagen | 18-05-14-ml-workcamp/sensor-daten-10/Projekt-Sensordaten-Daten Skalieren-Workcamp-ML.ipynb | gpl-3.0 | # Load the required modules (this can take a while!)
# We import the modules explicitly so that you can see everything that is needed
# But take care: the modules are then referenced differently than in the standard setup,
# for example pyplot instead of plt
from matplotlib import pyplot
pyplot.rcParams["figure.figsize"] = (15,12)
%matplotlib inline
import numpy as np  # not actually needed here
from pandas import read_csv
from pandas import set_option
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
"""
Explanation: <h1>WorkCamp Maschinelles Lernen - Grundlagen - 2018</h1>
<h2>Hands-on Exercise</h2>
<h3>Example xx - Working with Sensor Data - Loading and Scaling Data</h3>
Problem statement:<br>
In this Jupyter notebook case study you will learn how to prepare data through standardization, normalization, rescaling and binarization. This is necessary for some machine learning algorithms.
After completing this notebook you should know:
<ul>
<li>How to work through a predictive modeling problem for a classification question from end to end.
<li>How to load previously unseen data into pandas DataFrames: (csv, xlsx, xls, xml, json, hdf5, etc.).
<li>How to analyze unknown data with descriptive statistics in Python.
<li>How to visualize unknown data with Python libraries.
<li>How to save and document the generated plots.
<li>How to use data transformations to improve model performance, for example normalization or standardization.
<li>How to use algorithm or hyperparameter tuning to improve model performance.
<li>How to use ensemble methods and parameter tuning to improve model performance.
<li>How to use cross-validation to assess the performance of ML algorithms.
<li>On what basis the classification algorithms used are evaluated (classification report, confusion matrix).
</ul>
The modules and libraries are all directly available in the <b>Anaconda scikit-learn</b> machine learning environment.<br>
<b>Working with time series:</b><br>
In particular when working with time series, statsmodels and its classes, libraries and modules are loaded as needed.<br>
<b>Tip:</b><br>
<b>If statsmodels is not available in your installation, install it from a Jupyter cell with: !pip install statsmodels</b><br>
Information about statsmodels can be found here: http://www.statsmodels.org/<br>
## TODO: possibly add a structure diagram
## TODO: possibly show the workflow as a process model
End of explanation
"""
# Load the data [12100 rows with 10 sensors and one Label column: a (12100 x 11) matrix]
url = 'sensordaten-10.csv'
datensatz = read_csv(url, sep=';', header=0)
"""
Explanation: Problem description:<br>
The focus of this project is the dataset "sensordaten-10.csv". The problem is to predict good and bad workpieces from the 10 sensor readings. Each sample is a set of 10 numbers. The sensors cover different value ranges. The label assigned to each data row contains 0 or 1: if the workpiece was rated good, the Label column contains a 1, otherwise a 0.<br>
<b>Task:</b><br>
Load the data and get a first overview<br>
End of explanation
"""
# Print df.shape
print(datensatz.shape)
# Print df.dtypes
# (the Label column contains the classification 0 or 1)
set_option('display.max_rows', 50)
print(datensatz.dtypes)
# Print df.head() with increased display width
set_option('display.width', 100)
print(datensatz.head(20))
# Print df.describe() with 4 decimal places
set_option('precision', 4)
print(datensatz.describe())
# Print the class distribution of the 'Label' column
print(datensatz.groupby('Label').size())
"""
Explanation: <h3>Descriptive Statistics</h3>
End of explanation
"""
# Plot histograms
pyplot.rcParams["figure.figsize"] = (15,12)
datensatz.hist()
pyplot.show()
"""
Explanation: <h3>Visualizing the Data</h3>
End of explanation
"""
# Standardize the data (mean = 0, standard deviation = 1)
# the StandardScaler class was already imported in cell 1 via from sklearn.preprocessing import StandardScaler
# Load the set_printoptions function
from numpy import set_printoptions
# Hand the values in datensatz over to an array
array = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array[:,0:10]
Y = array[:,10]
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
# Print a short excerpt of the data, rows 0:10
set_printoptions(precision=3)
print(rescaledX[0:10,:])
"""
Explanation: <h3>Standardizing the Data</h3>
Standardization is a useful technique to transform attributes with an arbitrary Gaussian distribution N(μ, σ²) into a standard Gaussian distribution N(0, 1) with a mean of 0 and a standard deviation of 1. Standardization is best suited for algorithms that assume a Gaussian distribution in the input variables and work better with rescaled data, such as linear regression, logistic regression and linear discriminant analysis. We can standardize data with scikit-learn using the StandardScaler class.
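A quick numerical check of what StandardScaler computes: subtracting the column mean and dividing by the column standard deviation. The synthetic array `X` below is an assumption for illustration, not the notebook's `datensatz`:

```python
import numpy as np

# Synthetic sensor-like data (illustrative stand-in for the real dataset)
rng = np.random.default_rng(0)
X = rng.normal(loc=50.0, scale=12.0, size=(1000, 3))

# Standardization by hand: subtract the column mean, divide by the column std.
# This is what sklearn's StandardScaler computes internally.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_std = (X - mu) / sigma

col_means = X_std.mean(axis=0)   # should all be (numerically) 0
col_stds = X_std.std(axis=0)     # should all be (numerically) 1
```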
End of explanation
"""
# Normalize the data (vector length 1)
# Load the Normalizer class
from sklearn.preprocessing import Normalizer
# Hand the values in datensatz over to an array2
array2 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array2[:,0:10]
Y = array2[:,10]
scaler = Normalizer().fit(X)
normalizedX = scaler.transform(X)
# Print a short excerpt of the data, rows 0:10
set_printoptions(precision=3)
print(normalizedX[0:10,:])
"""
Explanation: <h3>Normalizing the Data</h3>
Normalization in scikit-learn refers to rescaling each observation (row) to a length of 1 (a unit norm, or a vector of length 1 in linear algebra). This preprocessing method can be useful for sparse datasets (many zeros) with attributes of varying magnitudes when using algorithms that weight input values, such as neural networks, or algorithms that use distance measures, such as k-Nearest Neighbors. We can normalize data in Python with scikit-learn using the Normalizer class.
End of explanation
"""
# Binarize the data
# Load the Binarizer class from sklearn.preprocessing
from sklearn.preprocessing import Binarizer
# Hand the values in datensatz over to an array3
array3 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array3[:,0:10]
Y = array3[:,10]
# Set the threshold for the Binarizer; we choose 50
binarizer = Binarizer(threshold=50.0).fit(X)
binaryX = binarizer.transform(X)
# Ausgabe einer Kurzfassung der Daten 0:10
set_printoptions(precision=3)
print(binaryX[0:10,:])
"""
Explanation: <h3>Binarizing the Data</h3>
We can transform the data with a binary threshold: all values above the threshold are marked with 1 and all values equal to or below it with 0. This is called binarizing the data. It can be useful when we want sharp separations. It is also useful in feature engineering when new features of high importance are to be added. We can create new binary attributes in Python with scikit-learn using the Binarizer class.
End of explanation
"""
# Rescale the data
# Load the MinMaxScaler class from sklearn.preprocessing
from sklearn.preprocessing import MinMaxScaler
# Hand the values in datensatz over to an array4
array4 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array4[:,0:10]
Y = array4[:,10]
# Set the feature_range for the MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
# Print a short excerpt of the data, rows 0:10, with precision=2
set_printoptions(precision=2)
print(rescaledX[0:10,:])
"""
Explanation: <h3>Rescaling the Data</h3>
When the data consists of attributes with different magnitudes, as in this sensor dataset, many machine learning algorithms can improve their results through rescaling of the attributes. This is often called normalization, with the attributes rescaled into the range between 0 and 1. This is useful, for example, for the optimization algorithms used at the core of machine learning algorithms, such as gradient descent. It is also useful for algorithms that weight inputs, such as regression and neural networks, and for algorithms that use distance measures, such as k-Nearest Neighbors. We can rescale the data in scikit-learn with the MinMaxScaler class.
End of explanation
"""
# Standardize the data (mean = 0, standard deviation = 1)
from sklearn.preprocessing import StandardScaler
from pandas import read_csv
from numpy import set_printoptions
# Assign the file name to the variable dateiname
dateiname = 'pima-indians-diabetes.data.csv'
# Define the column names for the DataFrame
namen = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
# Read the data into a pandas DataFrame with read_csv()
df = read_csv(dateiname, names=namen)
# Hand the values in df over to an array5
array5 = df.values
# Split the array into the dependent variable Y and the independent variables X - here the class is the 9th column (index 8)
X = array5[:,0:8]
Y = array5[:,8]
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
# summarize transformed data
set_printoptions(precision=3)
print(rescaledX[0:10,:])
"""
Explanation: <h3>A Further Example</h3>
End of explanation
"""
|
atcemgil/notes | Sampling1.ipynb | mit | import numpy as np
x_1 = np.random.rand()
print(x_1)
"""
Explanation: Basic Distributions
A. Taylan Cemgil
Boğaziçi University, Dept. of Computer Engineering
Notebook Summary
We review the notation and parametrization of densities of some basic distributions that are often encountered
We show how random numbers are generated using python libraries
We show some basic visualization methods such as displaying histograms
Sampling From Basic Distributions
Sampling from basic distributions is easy using the numpy library.
Formally we will write
$x \sim p(X|\theta)$
where $\theta$ is the parameter vector, $p(X| \theta)$ denotes the density of the random variable $X$ and $x$ is a realization, a particular draw from the density $p$.
The following distributions are building blocks from which more complicated processes may be constructed. It is important to have a basic understanding of these distributions.
Continuous Univariate
Uniform $\mathcal{U}$
Univariate Gaussian $\mathcal{N}$
Gamma $\mathcal{G}$
Inverse Gamma $\mathcal{IG}$
Beta $\mathcal{B}$
Discrete
Poisson $\mathcal{P}$
Bernoulli $\mathcal{BE}$
Binomial $\mathcal{BI}$
Categorical $\mathcal{M}$
Multinomial $\mathcal{M}$
Continuous Multivariate (todo)
Multivariate Gaussian $\mathcal{N}$
Dirichlet $\mathcal{D}$
Continuous Matrix-variate (todo)
Wishart $\mathcal{W}$
Inverse Wishart $\mathcal{IW}$
Matrix Gaussian $\mathcal{N}$
Sampling from the standard uniform $\mathcal{U}(0,1)$
For generating a single random number in the interval $[0, 1)$ we use the notation
$$
x_1 \sim \mathcal{U}(x; 0,1)
$$
In python, this is implemented as
End of explanation
"""
import numpy as np
N = 5
x = np.random.rand(N)
print(x)
"""
Explanation: We can also generate an array of realizations $x_i$ for $i=1 \dots N$,
$$
x_i \sim \mathcal{U}(x; 0,1)
$$
End of explanation
"""
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Number of realizations
N = 50000
x = np.random.rand(N)
plt.hist(x, bins=20)
plt.xlabel('x')
plt.ylabel('Count')
plt.show()
"""
Explanation: For large $N$, it is more informative to display a histogram of the generated data:
End of explanation
"""
N = 1000
# Bin width
Delta = 0.02
# Bin edges
b = np.arange(0 ,1+Delta, Delta)
# Evaluate the density
g = np.ones(b.size)
# Draw the samples
u = np.random.rand(N)
counts,edges = np.histogram(u, bins=b)
plt.bar(b[:-1], counts/N/Delta, width=Delta)
#plt.hold(True)
plt.plot(b, g, linewidth=3, color='y')
#plt.hold(False)
plt.show()
"""
Explanation: $\newcommand{\indi}[1]{\left[{#1}\right]}$
$\newcommand{\E}[1]{\left\langle{#1}\right\rangle}$
We know that the density of the uniform distribution $\mathcal{U}(0,1)$ is
$$
\mathcal{U}(x; 0,1) = \left\{ \begin{array}{cc} 1 & 0 \leq x < 1 \\ 0 & \text{otherwise} \end{array} \right.
$$
or using the indicator notation
$$
\mathcal{U}(x; 0,1) = \left[ x \in [0,1) \right]
$$
Indicator function
To write and manipulate discrete probability distributions in algebraic expression, the indicator function is useful:
$$ \left[x\right] = \left\{ \begin{array}{cc}
1 & x\;\;\text{is true} \\
0 & x\;\;\text{is false}
\end{array}
\right.$$
This notation is also known as the Iverson's convention.
Aside: How to plot the density and the histogram onto the same plot?
In one dimension, the histogram is simply the count of the data points that fall to a given interval. Mathematically, we have
$j = 1\dots J$ intervals where $B_j = [b_{j-1}, b_j]$ and $b_j$ are bin boundries such that $b_0 < b_1 < \dots < b_J$.
$$
h(x) = \sum_{j=1}^J \sum_{i=1}^N \indi{x \in B_j} \indi{x_i \in B_j}
$$
This expression at first sight looks somewhat more complicated than it really is. The indicator product just encodes the logical condition $x \in B_j$ and $x_i \in B_j$. The sum over $j$ is just a convenient way of writing the result instead of specifying the histogram on a case-by-case basis for each bin. It is important to get used to such nested sums.
When the density $p(x)$ is given, the probability that a single realization is in bin $B_j$ is given by
$$
\Pr\left\{x \in B_j\right\} = \int_{B_j} dx p(x) = \int_{-\infty}^{\infty} dx \indi{x\in B_j} p(x) = \E{\indi{x\in B_j}}
$$
In other words, the probability is just the expectation of the indicator.
The histogram can be written as follows
$$
h(x) = \sum_{j=1}^J \indi{x \in B_j} \sum_{i=1}^N \indi{x_i \in B_j}
$$
We define the counts at each bin as
$$
c_j \equiv \sum_{i=1}^N \indi{x_i \in B_j}
$$
If all bins have the same width, i.e., $b_j - b_{j-1} = \Delta$ for $\forall j$, and if $\Delta$ is sufficiently small we have
$$
\E{\indi{x\in B_j}} \approx p(b_{j-1}+\Delta/2) \Delta
$$
i.e., the probability is roughly the interval width times the density evaluated at the middle point of the bin. The expected value of the counts is
$$
\E{c_j} = \sum_{i=1}^N \E{\indi{x_i \in B_j}} \approx N \Delta p(b_{j-1}+\Delta/2)
$$
Hence, the density should be roughly
$$
p(b_{j-1}+\Delta/2) \approx \frac{\E{c_j} }{N \Delta}
$$
The $N$ term is intuitive but the $\Delta$ term is easily forgotten. When plotting the histograms on top of the corresponding densities, we should scale the normalized histogram ${ c_j }/{N}$ by dividing by $\Delta$.
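This scaling can be checked numerically: for $\mathcal{U}(0,1)$ the density is $1$, so every scaled bin count $c_j/(N\Delta)$ should be close to $1$ (a minimal self-contained check):

```python
import numpy as np

# Scaled bin counts c_j / (N * Delta) should approximate the density,
# which for U(0,1) is identically 1.
N = 100_000
Delta = 0.02
edges = np.linspace(0.0, 1.0, 51)   # 50 bins of width Delta

rng = np.random.default_rng(0)
u = rng.random(N)
counts, _ = np.histogram(u, bins=edges)

density_estimate = counts / N / Delta
max_err = np.abs(density_estimate - 1.0).max()
```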
End of explanation
"""
N = 1000
Delta = 0.05
b = np.arange(0 ,1+Delta, Delta)
g = np.ones(b.size)
u = np.random.rand(N)
#plt.hold(True)
plt.plot(b, g, linewidth=3, color='y')
plt.hist(u, bins=b, density=True)
#plt.hold(False)
plt.show()
"""
Explanation: The plt.hist function (calling np.histogram) can do this calculation automatically with the option density=True (called normed=True in older matplotlib versions). However, when the grid is not uniform, it is better to write your own code to be sure what is going on.
End of explanation
"""
from IPython.display import display, Math, Latex, HTML
import notes_utilities as nut
print('Gaussian')
L = nut.pdf2latex_gauss(x=r'Z_{i,j}', m=r'\mu_{i,j}',v=r'l_{i,j}')
display(HTML(nut.eqs2html_table(L)))
print('Gamma')
L = nut.pdf2latex_gamma(x=r'u', a=r'a',b=r'b')
display(HTML(nut.eqs2html_table(L)))
print('Inverse Gamma')
L = nut.pdf2latex_invgamma(x=r'z', a=r'a',b=r'b')
display(HTML(nut.eqs2html_table(L)))
print('Beta')
L = nut.pdf2latex_beta(x=r'\pi', a=r'\alpha',b=r'\beta')
display(HTML(nut.eqs2html_table(L)))
"""
Explanation: Continuous Univariate Distributions
Uniform $\mathcal{U}$
Univariate Gaussian $\mathcal{N}$
$${\cal N}(x;\mu, v) = \frac{1}{\sqrt{2\pi v}} \exp\left(-\frac12 \frac{(x - \mu)^2}{v}\right) $$
Gamma $\mathcal{G}$
$${\cal G}(\lambda; a, b) = \frac{b^a \lambda^{a-1}}{\Gamma(a)} \exp( - b \lambda)$$
Inverse Gamma $\mathcal{IG}$
$${\cal IG}(v; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha) v^{\alpha+1}} \exp(- \frac{\beta}{v}) $$
Beta $\mathcal{B}$
$${\cal B}(r; \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta) } r^{\alpha-1} (1-r)^{\beta-1}$$
In derivations, the distributions are often needed as building blocks. The following code segment prints the latex strings to be copied and pasted.
$\DeclareMathOperator{\trace}{Tr}$
End of explanation
"""
import numpy as np
import scipy.special as sps
import matplotlib.pyplot as plt
x = np.arange(0.1,5,0.01)
f = sps.gammaln(x)
df = sps.psi(x)
# First derivative of the digamma function
ddf = sps.polygamma(1,x)
# sps.psi(x) == sps.polygamma(0,x)
plt.figure(figsize=(8,10))
plt.subplot(3,1,1)
plt.plot(x, f, 'r')
plt.grid(True)
plt.xlabel('x')
plt.ylabel('log Gamma(x)')
plt.subplot(3,1,2)
plt.grid(True)
plt.plot(x, df, 'b')
plt.xlabel('x')
plt.ylabel('Psi(x)')
plt.subplot(3,1,3)
plt.plot(x, ddf, 'k')
plt.grid(True)
plt.xlabel('x')
plt.ylabel('Psi\'(x)')
plt.show()
"""
Explanation: We will illustrate two alternative ways for sampling from continuous distributions.
The first method has minimal dependence on the numpy and scipy libraries. This is initially the preferred method. Only random variable generators and the $\log \Gamma(x)$ (gammaln) function is used and nothing more.
The second method uses scipy. This is a lot more practical but requires knowing more about the internals of the library.
Aside: The Gamma function $\Gamma(x)$
The gamma function $\Gamma(x)$ is the (generalized) factorial.
- Defined by
$$\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt$$
- For integer $x$, $\Gamma(x) = (x-1)!$. Remember that for positive integers $x$, the factorial function can be defined recursively $x! = (x-1)! x $ for $x\geq 1$.
- For real $x>1$, the gamma function satisfies
$$
\Gamma(x+1) = \Gamma(x) x
$$
- Interestingly, we have
$$\Gamma(1/2) = \sqrt{\pi}$$
- Hence
$$\Gamma(3/2) = \Gamma(1/2 + 1) = \Gamma(1/2) (1/2) = \sqrt{\pi}/2$$
- It is available in many numerical computation packages, in python it is available as scipy.special.gamma.
- To compute $\log \Gamma(x)$, you should always use the dedicated implementation scipy.special.gammaln. The gamma function blows up super-exponentially, so numerically you should never evaluate $\log \Gamma(x)$ as
python
import numpy as np
import scipy.special as sps
np.log(sps.gamma(x)) # Don't
sps.gammaln(x) # Do
- A related function is the Beta function
$$B(x,y) = \int_0^{1} t^{x-1} (1-t)^{y-1}\, dt$$
- We have
$$B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$
- Both $\Gamma(x)$ and $B(x)$ pop up as normalizing constant of the gamma and beta distributions.
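These identities are easy to verify numerically; a quick spot-check using scipy.special (the test values are arbitrary):

```python
import numpy as np
from scipy.special import gamma, beta, gammaln

x, y = 3.7, 1.9
recurrence_ok = np.isclose(gamma(x + 1), x * gamma(x))   # Gamma(x+1) = x Gamma(x)
half_ok = np.isclose(gamma(0.5), np.sqrt(np.pi))         # Gamma(1/2) = sqrt(pi)
beta_ok = np.isclose(beta(x, y), gamma(x) * gamma(y) / gamma(x + y))

# gammaln stays finite where gamma itself overflows to inf
large_ok = np.isfinite(gammaln(1000.0)) and np.isinf(gamma(1000.0))
```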
Derivatives of $\Gamma(x)$
The derivatives of $\log \Gamma(x)$ pop up quite often when fitting densities. The first derivative has a specific name, often called the digamma function or the psi function.
$$
\Psi(x) \equiv \frac{d}{d x} \log \Gamma(x)
$$
It is available as scipy.special.digamma or scipy.special.psi
Higher order derivatives of the $\log \Gamma(x)$ function (including digamma itself) are available as scipy.special.polygamma
End of explanation
"""
import matplotlib.pylab as plt
import numpy as np
from scipy.special import polygamma
from scipy.special import gammaln as loggamma
from scipy.special import psi
x = np.arange(0.001,6,0.001)
ylim = [-1,8]
xlim = [-1,6]
plt.plot(x, loggamma(x), 'b')
stir = x*np.log(x)-x +0.5*np.log(2*np.pi*x)
plt.plot(x+1, stir,'r')
plt.hlines(0,0,8)
plt.vlines([0,1,2],ylim[0],ylim[1],linestyles=':')
plt.hlines(range(ylim[0],ylim[1]),xlim[0],xlim[1],linestyles=':',colors='g')
plt.ylim(ylim)
plt.xlim(xlim)
plt.legend([r'$\log\Gamma(x)$', 'Stirling approximation'], loc=1)
plt.xlabel('x')
plt.show()
"""
Explanation: Stirling's approximation
An important approximation to the factorial is the famous Stirling's approximation
\begin{align}
n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n
\end{align}
and, equivalently, for the log-gamma function
\begin{align}
\log \Gamma(x+1) \approx x \log(x) - x + \frac{1}{2}\log(2 \pi x)
\end{align}
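A quick numerical comparison of the approximation against scipy.special.gammaln (the evaluation points are chosen for illustration) shows the absolute error shrinking as $x$ grows:

```python
import numpy as np
from scipy.special import gammaln

x = np.array([1.0, 5.0, 20.0, 100.0])
stirling = x * np.log(x) - x + 0.5 * np.log(2 * np.pi * x)
abs_err = np.abs(gammaln(x + 1) - stirling)   # roughly 1/(12 x)
```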
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import gammaln
def plot_histogram_and_density(N, c, edges, dx, g, title='Put a title'):
'''
N : Number of Datapoints
c : Counts, as obtained from np.histogram function
edges : bin edges, as obtained from np.histogram
dx : The bin width
g : Density evaluated at the points given in edges
title : for the plot
'''
plt.bar(edges[:-1], c/N/dx, width=dx)
# plt.hold(True)
plt.plot(edges, g, linewidth=3, color='y')
# plt.hold(False)
plt.title(title)
def log_gaussian_pdf(x, mu, V):
return -0.5*np.log(2*np.pi*V) -0.5*(x-mu)**2/V
def log_gamma_pdf(x, a, b):
return (a-1)*np.log(x) - b*x - gammaln(a) + a*np.log(b)
def log_invgamma_pdf(x, a, b):
return -(a+1)*np.log(x) - b/x - gammaln(a) + a*np.log(b)
def log_beta_pdf(x, a, b):
return - gammaln(a) - gammaln(b) + gammaln(a+b) + np.log(x)*(a-1) + np.log(1-x)*(b-1)
N = 1000
# Univariate Gaussian
mu = 2 # mean
V = 1.2 # Variance
x_normal = np.random.normal(mu, np.sqrt(V), N)
dx = 10*np.sqrt(V)/50
x = np.arange(mu-5*np.sqrt(V) ,mu+5*np.sqrt(V),dx)
g = np.exp(log_gaussian_pdf(x, mu, V))
#g = scs.norm.pdf(x, loc=mu, scale=np.sqrt(V))
c,edges = np.histogram(x_normal, bins=x)
plt.figure(num=None, figsize=(16, 5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(2,2,1)
plot_histogram_and_density(N, c, x, dx, g, 'Gaussian')
## Gamma
# Shape
a = 1.2
# inverse scale
b = 30
# Generate unit scale first than scale with inverse scale parameter b
x_gamma = np.random.gamma(a, 1, N)/b
dx = np.max(x_gamma)/500
x = np.arange(dx, 250*dx, dx)
g = np.exp(log_gamma_pdf(x, a, b))
c,edges = np.histogram(x_gamma, bins=x)
plt.subplot(2,2,2)
plot_histogram_and_density(N, c, x, dx, g, 'Gamma')
## Inverse Gamma
a = 3.5
b = 0.2
x_invgamma = b/np.random.gamma(a, 1, N)
dx = np.max(x_invgamma)/500
x = np.arange(dx, 150*dx, dx)
g = np.exp(log_invgamma_pdf(x,a,b))
c,edges = np.histogram(x_invgamma, bins=x)
plt.subplot(2,2,3)
plot_histogram_and_density(N, c, x, dx, g, 'Inverse Gamma')
## Beta
a = 0.5
b = 1
x_beta = np.random.beta(a, b, N)
dx = 0.01
x = np.arange(dx, 1, dx)
g = np.exp(log_beta_pdf(x, a, b))
c,edges = np.histogram(x_beta, bins=x)
plt.subplot(2,2,4)
plot_histogram_and_density(N, c, x, dx, g, 'Beta')
plt.show()
"""
Explanation: Sampling from Continuous Univariate Distributions
Sampling using numpy.random
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as scs
N = 2000
# Univariate Gaussian
mu = 2 # mean
V = 1.2 # Variance
rv_normal = scs.norm(loc=mu, scale=np.sqrt(V))
x_normal = rv_normal.rvs(size=N)
dx = 10*np.sqrt(V)/50
x = np.arange(mu-5*np.sqrt(V) ,mu+5*np.sqrt(V),dx)
g = rv_normal.pdf(x)
c,edges = np.histogram(x_normal, bins=x)
plt.figure(num=None, figsize=(16, 5), dpi=80, facecolor='w', edgecolor='k')
plt.subplot(2,2,1)
plot_histogram_and_density(N, c, x, dx, g, 'Gaussian')
## Gamma
a = 3.2
b = 30
# The following is equivalent to our parametrization of gamma, note the 1/b term
rv_gamma = scs.gamma(a, scale=1/b)
x_gamma = rv_gamma.rvs(N)
dx = np.max(x_gamma)/500
x = np.arange(0, 250*dx, dx)
g = rv_gamma.pdf(x)
c,edges = np.histogram(x_gamma, bins=x)
plt.subplot(2,2,2)
plot_histogram_and_density(N, c, x, dx, g, 'Gamma')
## Inverse Gamma
a = 3.5
b = 0.2
# Note the b term
rv_invgamma = scs.invgamma(a, scale=b)
x_invgamma = rv_invgamma.rvs(N)
dx = np.max(x_invgamma)/500
x = np.arange(dx, 150*dx, dx)
g = rv_invgamma.pdf(x)
c,edges = np.histogram(x_invgamma, bins=x)
plt.subplot(2,2,3)
plot_histogram_and_density(N, c, x, dx, g, 'Inverse Gamma')
## Beta
a = 0.7
b = 0.8
rv_beta = scs.beta(a, b)
x_beta = rv_beta.rvs(N)
dx = 0.02
x = np.arange(0, 1+dx, dx)
g = rv_beta.pdf(x)
c,edges = np.histogram(x_beta, bins=x)
plt.subplot(2,2,4)
plot_histogram_and_density(N, c, x, dx, g, 'Beta')
plt.show()
"""
Explanation: Sampling using scipy.stats
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
def plot_histogram_and_pmf(N, c, domain, dx, g, title='Put a title'):
'''
N : Number of Datapoints
c : Counts, as obtained from np.bincount function
domain : integers for each c, same size as c
dx : The bin width
g : Density evaluated at the points given in edges
title : for the plot
'''
plt.bar(domain-dx/2, c/N, width=dx)
# plt.hold(True)
plt.plot(domain, g, 'ro:', linewidth=3, color='y')
# plt.hold(False)
plt.title(title)
def log_poisson_pdf(x, lam):
return -lam + x*np.log(lam) - gammaln(x+1)
def log_bernoulli_pdf(r, pr):
return r*np.log(pr) + (1-r)*np.log(1 - pr)
def log_binomial_pdf(r, pr, L):
return gammaln(L+1) - gammaln(r+1) - gammaln(L-r+1) + r*np.log(pr) + (L-r)*np.log(1 - pr)
N = 100
pr = 0.8
# For plots
bin_width = 0.3
# Bernoulli
L = 1
x_bern = np.random.binomial(n=L, p=pr, size=N)
c = np.bincount(x_bern, minlength=L+1)
g = np.exp(log_bernoulli_pdf(np.arange(L+1), pr))
plt.figure(figsize=(20,4))
plt.subplot(1,3,1)
plot_histogram_and_pmf(N, c, np.arange(L+1), bin_width, g, 'Bernoulli')
plt.xticks([0,1])
# Binomial
L = 10
pr = 0.7
x_binom = np.random.binomial(n=L, p=pr, size=N)
c = np.bincount(x_binom, minlength=L+1)
g = np.exp(log_binomial_pdf(np.arange(L+1), pr, L))
plt.subplot(1,3,2)
plot_histogram_and_pmf(N, c, np.arange(L+1), bin_width, g, 'Binomial')
plt.xticks(np.arange(L+1))
# Poisson
intensity = 10.5
x_poiss = np.random.poisson(intensity, size =N )
c = np.bincount(x_poiss)
x = np.arange(len(c))
g = np.exp(log_poisson_pdf(x, intensity))
plt.subplot(1,3,3)
plot_histogram_and_pmf(N, c, x, bin_width, g, 'Poisson')
"""
Explanation: Sampling from Discrete Densities
Bernoulli $\mathcal{BE}$
$$
{\cal BE}(r; w) = w^r (1-w)^{1-r} \;\; \text{if} \; r \in \{0, 1\}
$$
Binomial $\mathcal{BI}$
$${\cal BI}(r; L, w) = \binom{L}{r, (L-r)} w^r (1-w)^{L-r} \;\; \text{if} \; r \in \{0, 1, \dots, L\} $$
Here, the binomial coefficient is defined as
$$
\binom{L}{r, (L-r)} = \frac{L!}{r!(L-r)!}
$$
Note that
$$
{\cal BE}(r; w) = {\cal BI}(r; L=1, w)
$$
Poisson $\mathcal{PO}$, with intensity $\lambda$
$${\cal PO}(x;\lambda) = \frac{e^{-\lambda} \lambda^x}{x!} = \exp(x \log \lambda - \lambda - \log\Gamma(x+1)) $$
Given samples on nonnegative integers, we can obtain histograms easily using np.bincount.
python
c = np.bincount(samples)
The functionality is equivalent to the following sniplet, while implementation is possibly different and more efficient.
python
upper_bound = np.max(samples)
c = np.zeros(upper_bound+1)
for i in samples:
c[i] += 1
End of explanation
"""
# The probability parameter
pr = 0.3
fig = plt.figure(figsize=(16,50), edgecolor=None)
maxL = 12
plt.subplot(maxL-1,2,1)
plt.grid(False)
# Set up the scalar binomial density as a bivariate density
for L in range(1,maxL):
r = np.arange(L+1)
p = np.exp(log_binomial_pdf(r, pr=pr, L=L))
A = np.zeros(shape=(13,13))
for s in range(L+1):
s0 = s
s1 = L-s
A[s0, s1] = p[s]
#plt.subplot(maxL-1,2,2*L-1)
# plt.bar(r-0.25, p, width=0.5)
# ax.set_xlim(-1,maxL)
# ax.set_xticks(range(0,maxL))
if True:
plt.subplot(maxL-1,2,2*L-1)
plt.barh(y=r-0.25, width=p, height=0.5)
ax2 = fig.gca()
pos = ax2.get_position()
pos2 = [pos.x0, pos.y0, 0.04, pos.height]
ax2.set_position(pos2)
ax2.set_ylim(-1,maxL)
ax2.set_yticks(range(0,maxL))
ax2.set_xlim([0,1])
ax2.set_xticks([0,1])
plt.ylabel('s1')
ax2.invert_xaxis()
plt.subplot(maxL-1,2,2*L)
plt.imshow(A, interpolation='nearest', origin='lower',cmap='gray_r',vmin=0,vmax=0.7)
plt.xlabel('s0')
ax1 = fig.gca()
pos = ax1.get_position()
pos2 = [pos.x0-0.45, pos.y0, pos.width, pos.height]
ax1.set_position(pos2)
ax1.set_ylim(-1,maxL)
ax1.set_yticks(range(0,maxL))
ax1.set_xlim(-1,maxL)
ax1.set_xticks(range(0,maxL))
plt.show()
"""
Explanation: Bernoulli, Binomial, Categorical and Multinomial Distributions
The Bernoulli and Binomial distributions are quite simple and well known distributions on small integers, so it may come as a surprise that they have another, less obvious but arguably more useful representation as discrete multivariate densities. This representation makes the link to categorical distributions, where there are more than two possible outcomes. Finally, the Bernoulli, Binomial and Categorical distributions are all special cases of the Multinomial distribution.
Bernoulli
Recall the Bernoulli distribution, where $r \in \{0, 1\}$:
$$
{\cal BE}(r; w) = w^r (1-w)^{1-r}
$$
We will define $\pi_0 = 1-w$ and $\pi_1 = w$, such that $\pi_0 + \pi_1 = 1$. The parameter vector is $\pi = (\pi_0, \pi_1)$
We will also introduce a positional encoding such that
\begin{eqnarray}
r = 0 & \Rightarrow & s = (1, 0) \\
r = 1 & \Rightarrow & s = (0, 1)
\end{eqnarray}
In other words $s = (s_0, s_1)$ is a 2-dimensional vector where
$$s_0, s_1 \in \{0,1\}\;\text{and}\; s_0 + s_1 = 1$$
We can now write the Bernoulli density
$$
p(s | \pi) = \pi_0^{s_0} \pi_1^{s_1}
$$
Binomial
Similarly, recall the Binomial density, where $r \in \{0, 1, \dots, L\}$:
$${\cal BI}(r; L, w) = \binom{L}{r, (L-r)} w^r (1-w)^{L-r} $$
We will again define $\pi_0 = 1-w$ and $\pi_1 = w$, such that $\pi_0 + \pi_1 = 1$. The parameter vector is $\pi = (\pi_0, \pi_1)$
\begin{eqnarray}
r = 0 & \Rightarrow & s = (L, 0) \\
r = 1 & \Rightarrow & s = (L-1, 1)\\
r = 2 & \Rightarrow & s = (L-2, 2)\\
\dots \\
r = L & \Rightarrow & s = (0, L)
\end{eqnarray}
where $s = (s_0, s_1)$ is a 2-dimensional vector where $$s_0, s_1 \in \{0,\dots,L\} \;\text{and}\; s_0 + s_1 = L$$
We can now write the Binomial density as
$$
p(s | \pi) = \binom{L}{s_0, s_1} \pi_0^{s_0} \pi_1^{s_1}
$$
Categorical (Multinouilli)
One of the advantages of this new notation is that we can write the density even if the outcomes are not numerical. For example, the result of a single coin flip experiment when $r \in \{$ 'Tail', 'Head' $\}$, where the probability of 'Tail' is $w$, can be written as
$$
p(r | w) = w^{\indi{r=\text{'Tail'}}} (1-w)^{\indi{r=\text{'Head'}}}
$$
We define $s_0 = \indi{r=\text{'Head'}}$ and $s_1 = \indi{r=\text{'Tail'}}$, then the density can be written in the same form as
$$
p(s | \pi) = \pi_0^{s_0} \pi_1^{s_1}
$$
where $\pi_0 = 1-w$ and $\pi_1 = w$.
More generally, when $r$ is from a set with $K$ elements, i.e., $r \in R = \{ v_0, v_1, \dots, v_{K-1} \}$ with probability of the event $r = v_k$ given as $\pi_k$, we define $s = (s_0, s_1, \dots, s_{K-1})$ for $k=0,\dots, K-1$
$$
s_k = \indi{r=v_k}
$$
Note that by construction, we have $\sum_k s_k = 1$.
The resulting density, known as the Categorical density, can be written as
$$
p(s|\pi) = \pi_0^{s_0} \pi_1^{s_1} \dots \pi_{K-1}^{s_{K-1}}
$$
Multinomial
When drawing from a categorical distribution, one chooses a single category from $K$ options with given probabilities. A standard model for this is placing a single ball into one of $K$ different bins. The vector $s = (s_0, s_1, \dots,s_k, \dots, s_{K-1})$ represents how many balls each bin $k$ contains.
Now, place $L$ balls instead of one into the $K$ bins, placing each ball independently into bin $k \in \{0,\dots,K-1\}$ with probability $\pi_k$. The multinomial is the joint distribution of $s$ where $s_k$ is the number of balls placed into bin $k$.
The density will be denoted as
$${\cal M}(s; L, \pi) = \binom{L}{s_0, s_1, \dots, s_{K-1}}\prod_{k=0}^{K-1} \pi_k^{s_k} $$
Here $\pi \equiv [\pi_0, \pi_1, \dots, \pi_{K-1} ]$ is the probability vector and $L$ is referred to as the index parameter.
Clearly, we have the normalization constraint $ \sum_k \pi_k = 1$ and realization of the counts $s$ satisfy
$ \sum_k s_k = L $.
Here, the multinomial coefficient is defined as
$$\binom{L}{s_0, s_1, \dots, s_{K-1}} = \frac{L!}{s_0! s_1! \dots s_{K-1}!}$$
Binomial, Bernoulli and Categorical distributions are all special cases of the Multinomial distribution, with a suitable representation.
The picture is as follows:
| Balls/Bins | $2$ Bins | $K$ Bins |
| -------- | -------- | ---------|
| $1$ Ball | Bernoulli ${\cal BE}$ | Categorical ${\cal C}$ |
| $L$ Balls | Binomial ${\cal BI}$ | Multinomial ${\cal M}$ |
Murphy calls the categorical distribution ($1$ Ball, $K$ Bins) the Multinoulli. This is non-standard but logical (and somewhat cute).
It is common to consider Bernoulli and Binomial as scalar random variables. However, when we think of them as special cases of the Multinomial, it is better to think of them as bivariate, albeit degenerate, random variables, as
illustrated in the following cell along with an alternative visualization.
End of explanation
"""
# Number of samples
sz = 3
# Multinomial
p = np.array([0.3, 0.1, 0.1, 0.5])
K = len(p) # number of Bins
L = 20 # number of Balls
print('Multinomial with number of bins K = {K} and Number of balls L = {L}'.format(K=K,L=L))
print(np.random.multinomial(L, p, size=sz))
# Categorical
L = 1 # number of Balls
print('Categorical with number of bins K = {K} and a single ball L=1'.format(K=K))
print(np.random.multinomial(L, p, size=sz))
# Binomial
p = np.array([0.3, 0.7])
K = len(p) # number of Bins = 2
L = 20 # number of Balls
print('Binomial with two bins K=2 and L={L} balls'.format(L=L))
print(np.random.multinomial(L, p, size=sz))
# Bernoulli
L = 1 # number of Balls
p = np.array([0.3, 0.7])
K = len(p) # number of Bins = 2
print('Bernoulli, two bins and a single ball')
print(np.random.multinomial(L, p, size=sz))
"""
Explanation: The following cell illustrates sampling from the Multinomial density.
End of explanation
"""
|
tensorflow/tfx-addons | tfx_addons/feature_selection/example/Pima_Indians_Diabetes_example_colab.ipynb | apache-2.0 | !pip install -U tfx
# getting the code directly from the repo
x = !pwd
if 'feature_selection' not in str(x):
!git clone -b main https://github.com/deutranium/tfx-addons.git
%cd tfx-addons/tfx_addons/feature_selection
"""
Explanation: <a href="https://colab.research.google.com/github/deutranium/tfx-addons/blob/main/tfx_addons/feature_selection/example/Pima_Indians_Diabetes_example_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
TFX Feature Selection Component
You may find the source code for this component here
This example demonstrates the use of the feature selection component. This project allows the user to select different algorithms for performing feature selection on dataset artifacts in TFX pipelines
Base code taken from: https://github.com/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb
Setup
Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
"""
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
import importlib
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
import importlib
from tfx.components import CsvExampleGen
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
# importing the feature selection component
from component import FeatureSelection
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
"""
Explanation: Import packages
Importing the necessary packages, including the standard TFX component classes
End of explanation
"""
# getting the dataset
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/npradaschnor/Pima-Indians-Diabetes-Dataset/master/diabetes.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
"""
Explanation: Pima Indians Diabetes example pipeline
Download Example Data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Pima Indians Diabetes dataset
There are eight features in this dataset:
Pregnancies
Glucose
BloodPressure
SkinThickness
Insulin
BMI
DiabetesPedigreeFunction
Age
The dataset corresponds to a classification task in which you need to predict whether a person has diabetes based on the 8 features above
End of explanation
"""
context = InteractiveContext()
#create and run exampleGen component
example_gen = CsvExampleGen(input_base=_data_root )
context.run(example_gen)
#create and run statisticsGen component
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
# using the feature selection component
#feature selection component
feature_selector = FeatureSelection(orig_examples = example_gen.outputs['examples'],
module_file='example.modules.pima_indians_module_file')
context.run(feature_selector)
# Display Selected Features
context.show(feature_selector.outputs['feature_selection']._artifacts[0])
"""
Explanation: Run TFX Components
In the cells that follow, we create TFX components one by one and generate examples using the ExampleGen component.
End of explanation
"""
context.show(feature_selector.outputs['updated_data']._artifacts[0])
"""
Explanation: As seen above, .selected_features contains the features selected after running the component with the specified parameters.
To get info about the updated Example artifact, one can view it as follows:
End of explanation
"""
|
martinjrobins/hobo | examples/toy/model-goodwin-oscillator.ipynb | bsd-3-clause | import pints
import pints.plot
import pints.toy
import matplotlib.pyplot as plt
import numpy as np
model = pints.toy.GoodwinOscillatorModel()
"""
Explanation: Goodwin's oscillator toy model
This example shows how the Goodwin's Oscillator toy model can be used.
Our version of this model has five parameters and three oscillating states as described in [1].
[1] Estimating Bayes factors via thermodynamic integration and population MCMC. Ben Calderhead and Mark Girolami, 2009, Computational Statistics and Data Analysis.
We start by creating a toy model:
End of explanation
"""
real_parameters = model.suggested_parameters()
times = model.suggested_times()
values = model.simulate(real_parameters, times)
"""
Explanation: The model also provides suggested parameters and sampling times, allowing us to run a simulation:
End of explanation
"""
plt.figure()
plt.subplot(3, 1, 1)
plt.plot(times, values[:, 0], 'b')
plt.subplot(3, 1, 2)
plt.plot(times, values[:, 1], 'g')
plt.subplot(3, 1, 3)
plt.plot(times, values[:, 2], 'r')
plt.show()
"""
Explanation: This gives us all we need to create a plot of the three model states versus time:
End of explanation
"""
noise1 = 0.001
noise2 = 0.01
noise3 = 0.1
noisy_values = np.array(values, copy=True)
noisy_values[:, 0] += np.random.normal(0, noise1, len(times))
noisy_values[:, 1] += np.random.normal(0, noise2, len(times))
noisy_values[:, 2] += np.random.normal(0, noise3, len(times))
plt.figure()
plt.subplot(3, 1, 1)
plt.plot(times, noisy_values[:, 0], 'b')
plt.subplot(3, 1, 2)
plt.plot(times, noisy_values[:, 1], 'g')
plt.subplot(3, 1, 3)
plt.plot(times, noisy_values[:, 2], 'r')
plt.show()
"""
Explanation: Now we will add some noise to generate some fake "experimental" data and try to recover the original parameters.
End of explanation
"""
# Create an object with links to the model and time series
problem = pints.MultiOutputProblem(model, times, noisy_values)
# Create a log posterior
log_prior = pints.UniformLogPrior([1, 1, 0.01, 0.01, 0.01], [10, 10, 1, 1, 1])
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, [noise1, noise2, noise3])
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Run MCMC on the noisy data
x0 = [[5, 5, 0.5, 0.5, 0.5]]*3
mcmc = pints.MCMCController(log_posterior, 3, x0)
mcmc.set_max_iterations(5000)
mcmc.set_log_to_screen(False)
print('Running')
chains = mcmc.run()
print('Done!')
"""
Explanation: Now we can try and infer the original parameters:
End of explanation
"""
results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=['k2', 'k3', 'm1', 'm2', 'm3']
)
print(results)
"""
Explanation: We can use an MCMCSummary to display the results:
End of explanation
"""
pints.plot.trace(chains, ref_parameters=real_parameters)
plt.show()
"""
Explanation: Now we can inspect the resulting chains:
End of explanation
"""
# Fit to the noisy data
parameters = []
opt = pints.OptimisationController(log_posterior, x0[0], method=pints.XNES)
opt.set_log_to_screen(False)
parameters, fbest = opt.run()
print('')
print(' p1 p2 p3 p4 p5')
print('real ' + ' '.join(['{: 8.4g}'.format(float(x)) for x in real_parameters]))
print('found ' + ' '.join(['{: 8.4g}'.format(x) for x in parameters]))
"""
Explanation: This is a pretty hard problem!
And what about optimisation?
End of explanation
"""
problem = pints.MultiOutputProblem(model, times, noisy_values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0, 0, 0, 0, 0, 0, 0, 0],
[10, 10, 1, 1, 1, 1, 1, 1]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(real_parameters.tolist() + [noise1, noise2, noise3])
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
real_parameters1 * 1.2,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.RelativisticMCMC)
# Add stopping criterion
mcmc.set_max_iterations(200)
# Run in parallel
mcmc.set_parallel(True)
mcmc.set_log_interval(1)
# Tune the samplers' hyper-parameters
for sampler in mcmc.samplers():
sampler.set_leapfrog_step_size([0.1, 0.5, 0.002, 0.002, 0.002, 0.0005, 0.001, 0.01])
sampler.set_leapfrog_steps(10)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
"""
Explanation: Sampling using relativistic HMC
The Goodwin-oscillator model has sensitivities calculated by the forward sensitivities approach, so we can use samplers that use gradients, like Relativistic HMC (these will be slower per iteration, although perhaps not in ESS per second!).
End of explanation
"""
results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=['k2', 'k3', 'm1', 'm2', 'm3', 'sigma_x', 'sigma_y', 'sigma_z'])
print(results)
pints.plot.trace(chains, ref_parameters=real_parameters1)
plt.show()
"""
Explanation: Display the results:
End of explanation
"""
pints.plot.series(np.vstack(chains), problem)
plt.show()
"""
Explanation: Plot posterior predictive distribution.
End of explanation
"""
|
RaoUmer/lightning-example-notebooks | plots/graph.ipynb | mit | import os
from lightning import Lightning
from numpy import random, asarray, argmin
from colorsys import hsv_to_rgb
import networkx as nx
"""
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Graph plots in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
"""
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
"""
Explanation: Connect to server
End of explanation
"""
G = nx.random_geometric_graph(100, 0.2)
pos = asarray(list(nx.get_node_attributes(G, 'pos').values()))
mat = nx.adjacency_matrix(G).todense()
lgn.graph(pos[:,0], pos[:,1], mat)
"""
Explanation: <hr> Random spatial graphs
Spatial graphs have nodes with fixed spatial positions, and links between them.
<br>
They are useful for spatial network data, where position has meaning (unlike force-directed graphs, where position is used dynamically during rendering).
<br>
First we'll generate a random geometric graph using networkx and plot with basic styling.
<br>
The random geometric graph places a link between any two nodes whose spatial distance is below a threshold radius.
End of explanation
"""
dists = [(x - 0.5)**2 + (y - 0.5)**2 for x, y in pos]
lgn.graph(pos[:,0], pos[:,1], mat, values=dists, colormap='Greens')
"""
Explanation: We can add a color to each node. Here we color the same graph based on distance from the origin.
End of explanation
"""
center = argmin(dists)
p = nx.single_source_shortest_path_length(G, center)
xy = asarray([pos[i,:] for i in p.keys()])
g = list(p.values())
lgn.graph(xy[:,0], xy[:,1], mat, group=g)
"""
Explanation: As with other plots, we can also color using group labels.
<br>
Here we assign a label to each point based on the shortest path from it to the center.
End of explanation
"""
G = nx.random_geometric_graph(50, 0.5)
pos = asarray(list(nx.get_node_attributes(G, 'pos').values()))
dists = [(x - 0.5)**2 + (y - 0.5)**2 for x, y in pos]
mat = nx.adjacency_matrix(G).todense()
lgn.graph(pos[:,0], pos[:,1], mat)
"""
Explanation: <hr> Edge bundling
Graphs with many edges can become hard to visualize (due to hairballs).
<br>
Lightning helps with this by letting you click on points and see only the links to that node. Try it!
End of explanation
"""
lgn.graphbundled(pos[:,0], pos[:,1], mat)
"""
Explanation: Another option is to bundle edges together using an algorithm by Holton and Van Wijk, emphasizing large tracts.
See this link for the implementaiton.
<br>
As with the unbundled version, you can click on points to highlight links.
End of explanation
"""
|
probml/pyprobml | notebooks/book1/14/resnet_jax.ipynb | mit | import jax
import jax.numpy as jnp # JAX NumPy
from jax import lax
import matplotlib.pyplot as plt
import math
from IPython import display
try:
from flax import linen as nn # The Linen API
except ModuleNotFoundError:
%pip install -qq flax
from flax import linen as nn # The Linen API
from flax.training import train_state # Useful dataclass to keep train state
import numpy as np # Ordinary NumPy
try:
import optax # Optimizers
except ModuleNotFoundError:
%pip install -qq optax
import optax # Optimizers
try:
import torchvision
except ModuleNotFoundError:
%pip install -qq torchvision
import torchvision
try:
from torch.utils import data
except ModuleNotFoundError:
%pip install -qq torch
from torch.utils import data
from torchvision import transforms
try:
import tensorflow_datasets as tfds # TFDS for MNIST
except ModuleNotFoundError:
%pip install -qq tensorflow tensorflow_datasets
import tensorflow_datasets as tfds # TFDS for MNIST
import random
import os
import time
from typing import Any, Callable, Sequence, Tuple
from functools import partial
rng = jax.random.PRNGKey(0)
!mkdir figures # for saving plots
ModuleDef = Any
"""
Explanation: Please find the torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/resnet_torch.ipynb
<a href="https://colab.research.google.com/github/codeboy5/probml-notebooks/blob/add-resnet-jax/notebooks-d2l/resnet_jax.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
class Residual(nn.Module):
"""The Residual block of ResNet."""
filters: int
conv: ModuleDef
norm: ModuleDef
strides: Tuple[int, int] = (1, 1)
use_1x1conv: bool = False
@nn.compact
def __call__(self, x):
residual = x
x = self.conv(self.filters, (3, 3), self.strides)(x)
x = self.norm()(x)
x = nn.relu(x)
x = self.conv(self.filters, (3, 3))(x)
x = self.norm(scale_init=nn.initializers.zeros)(x)
if self.use_1x1conv:
residual = self.conv(self.filters, (1, 1), self.strides, name="conv_proj")(residual)
residual = self.norm(name="norm_proj")(residual)
return nn.relu(x + residual)
"""
Explanation: Residual block
End of explanation
"""
train = False
conv = partial(nn.Conv, use_bias=False, dtype=jnp.float32)
norm = partial(nn.BatchNorm, use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=jnp.float32)
model = Residual(3, conv, norm)
batch = jnp.ones((4, 6, 6, 3)) # (N, H, W, C) format
variables = model.init(jax.random.PRNGKey(0), batch)
output = model.apply(variables, batch)
output.shape
"""
Explanation: Example where the number of input and output channels is the same.
End of explanation
"""
model = Residual(6, conv, norm, use_1x1conv=True)
variables = model.init(jax.random.PRNGKey(0), batch)
output = model.apply(variables, batch)
output.shape
"""
Explanation: Example where we change the number of channels.
End of explanation
"""
model = Residual(6, conv, norm, (2, 2), True)
variables = model.init(jax.random.PRNGKey(0), batch)
output = model.apply(variables, batch)
output.shape
"""
Explanation: Example where we change the number of channels and the spatial size.
End of explanation
"""
def resnet_block(num_channels, conv, norm, num_residuals, first_block=False):
blk = []
for i in range(num_residuals):
if i == 0 and not first_block:
blk.append(Residual(num_channels, conv, norm, (2, 2), True))
else:
blk.append(Residual(num_channels, conv, norm))
return blk
"""
Explanation: Resnet block
We define a resnet block to be a sequence of residual blocks, where the first element in the sequence has a 1x1 convolution. However, the first such resnet block does not have 1x1 conv.
End of explanation
"""
class ResNet(nn.Module):
@nn.compact
def __call__(self, x, train: bool = True):
conv = partial(nn.Conv, use_bias=False, dtype=jnp.float32)
norm = partial(nn.BatchNorm, use_running_average=not train, momentum=0.9, epsilon=1e-5, dtype=jnp.float32)
x = nn.Conv(64, (7, 7), (2, 2), padding=[(3, 3), (3, 3)], name="conv_init")(x)
x = norm(name="bn_init")(x)
x = nn.relu(x)
x = nn.max_pool(x, (3, 3), strides=(2, 2), padding="SAME")
b2 = resnet_block(64, conv, norm, 2, True)
b3 = resnet_block(128, conv, norm, 2)
b4 = resnet_block(256, conv, norm, 2)
b5 = resnet_block(512, conv, norm, 2)
stages = [b2, b3, b4, b5]
for stage in stages:
for layer in stage:
x = layer(x)
x = jnp.mean(x, axis=(1, 2)) # Works as adaptive avg pooling
x = nn.Dense(10, dtype=jnp.float32)(x)
x = jnp.asarray(x, np.float32)
return x
model = ResNet()
batch = jnp.ones((1, 224, 224, 1)) # (N, H, W, C) format
variables = model.init(jax.random.PRNGKey(0), batch)
output = model.apply(variables, batch, False)
output.shape
"""
Explanation: The Full Resnet18 Model
End of explanation
"""
def load_data_fashion_mnist(batch_size, resize=None):
"""Download the Fashion-MNIST dataset and then load it into memory."""
trans = [transforms.ToTensor()]
if resize:
trans.insert(0, transforms.Resize(resize))
trans = transforms.Compose(trans)
mnist_train = torchvision.datasets.FashionMNIST(root="../data", train=True, transform=trans, download=True)
mnist_test = torchvision.datasets.FashionMNIST(root="../data", train=False, transform=trans, download=True)
return (
data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=2),
data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=2),
)
"""
Explanation: Train on Fashion-MNIST
We upscale images from 28x28 to 96x96, so that the input to the global average pooling layer has size 3x3 (since the network downscales by a factor of 32).
End of explanation
"""
class TrainState(train_state.TrainState):
batch_stats: Any
def create_train_state(rng, learning_rate, momentum):
cnn = ResNet()
variables = cnn.init(rng, jnp.ones([1, 96, 96, 1], jnp.float32))
params, batch_stats = variables["params"], variables["batch_stats"]
tx = optax.sgd(learning_rate, momentum)
state = TrainState.create(apply_fn=cnn.apply, params=params, tx=tx, batch_stats=batch_stats)
return state
"""
Explanation: Create train state
End of explanation
"""
def compute_metrics(*, logits, labels):
one_hot = jax.nn.one_hot(labels, num_classes=10)
loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot))
accuracy = jnp.mean(jnp.argmax(logits, -1) == labels)
metrics = {
"loss": loss,
"accuracy": accuracy,
}
return metrics
"""
Explanation: Metric computation
End of explanation
"""
@jax.jit
def train_step(state, batch):
"""Train for a single step."""
def loss_fn(params):
logits, new_model_state = state.apply_fn(
{"params": params, "batch_stats": state.batch_stats}, batch["image"], mutable=["batch_stats"]
)
one_hot = jax.nn.one_hot(batch["label"], num_classes=10)
loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot))
return loss, (new_model_state, logits)
grad_fn = jax.value_and_grad(loss_fn, has_aux=True)
aux, grads = grad_fn(state.params)
# grads = lax.pmean(grads, axis_name='batch')
new_model_state, logits = aux[1]
metrics = compute_metrics(logits=logits, labels=batch["label"])
new_state = state.apply_gradients(grads=grads, batch_stats=new_model_state["batch_stats"])
return new_state, metrics
def eval_step(state, batch):
variables = {"params": state.params, "batch_stats": state.batch_stats}
logits = state.apply_fn(variables, batch["image"], train=False, mutable=False)
return compute_metrics(logits=logits, labels=batch["label"])
def eval_model(state, test_iter):
batch_metrics = []
for i, (X, y) in enumerate(test_iter):
batch = {}
batch["image"] = jnp.reshape(jnp.float32(X), (-1, 96, 96, 1))
batch["label"] = jnp.float32(y)
metrics = eval_step(state, batch)
batch_metrics.append(metrics)
# compute mean of metrics across each batch in epoch.
batch_metrics_np = jax.device_get(batch_metrics)
epoch_metrics_np = {k: np.mean([metrics[k] for metrics in batch_metrics_np]) for k in batch_metrics_np[0]}
return epoch_metrics_np["accuracy"]
"""
Explanation: Training step
End of explanation
"""
class Animator:
"""For plotting data in animation."""
def __init__(
self,
xlabel=None,
ylabel=None,
legend=None,
xlim=None,
ylim=None,
xscale="linear",
yscale="linear",
fmts=("-", "m--", "g-.", "r:"),
nrows=1,
ncols=1,
figsize=(3.5, 2.5),
):
# Incrementally plot multiple lines
if legend is None:
legend = []
display.set_matplotlib_formats("svg")
self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [
self.axes,
]
# Use a lambda function to capture arguments
self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
"""Set the axes for matplotlib."""
axes.set_xlabel(xlabel)
axes.set_ylabel(ylabel)
axes.set_xscale(xscale)
axes.set_yscale(yscale)
axes.set_xlim(xlim)
axes.set_ylim(ylim)
if legend:
axes.legend(legend)
axes.grid()
"""
Explanation: Plotting
End of explanation
"""
train_iter, test_iter = load_data_fashion_mnist(256, resize=96)
rng, init_rng = jax.random.split(rng)
learning_rate = 0.1
momentum = 0.9
state = create_train_state(init_rng, learning_rate, momentum)
del init_rng # Must not be used anymore.
num_epochs = 10
animator = Animator(xlabel="epoch", xlim=[1, num_epochs], legend=["train loss", "train acc", "test acc"])
for epoch in range(num_epochs):
batch_metrics = []
for i, (X, y) in enumerate(train_iter):
batch = {}
batch["image"] = jnp.reshape(jnp.float32(X), (-1, 96, 96, 1))
batch["label"] = jnp.float32(y)
state, metrics = train_step(state, batch)
batch_metrics.append(metrics)
# compute mean of metrics across each batch in epoch.
batch_metrics_np = jax.device_get(batch_metrics)
epoch_metrics_np = {k: np.mean([metrics[k] for metrics in batch_metrics_np]) for k in batch_metrics_np[0]}
animator.add(epoch + 1, (epoch_metrics_np["loss"], epoch_metrics_np["accuracy"], None))
acc = eval_model(state, test_iter)
animator.add(epoch + 1, (None, None, acc))
print(
"epoch: %d, loss: %.4f, train_accuracy: %.2f, test_accuracy: %.2f"
% (epoch, epoch_metrics_np["loss"], epoch_metrics_np["accuracy"] * 100, acc * 100)
)
"""
Explanation: Train function
End of explanation
"""
|
UCSBarchlab/PyRTL | ipynb-examples/example7-synth-timing.ipynb | bsd-3-clause | import pyrtl
"""
Explanation: Example 7: Reduction and Speed Analysis
After building a circuit, one might want to reduce the hardware into
simpler nets as well as analyze various metrics of the design. This
functionality is provided in the Passes part of PyRTL and is demonstrated here.
End of explanation
"""
# Creating a sample hardware block
pyrtl.reset_working_block()
const_wire = pyrtl.Const(6, bitwidth=4)
in_wire2 = pyrtl.Input(bitwidth=4, name="input2")
out_wire = pyrtl.Output(bitwidth=5, name="output")
out_wire <<= const_wire + in_wire2
"""
Explanation: Part 1: Timing Analysis
Timing and area usage are key considerations of any hardware block that one
makes.
PyRTL provides functions to do these opertions
End of explanation
"""
# Generating timing analysis information
print("Pre Synthesis:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
"""
Explanation: Now we will do the timing analysis as well as print out the critical path
End of explanation
"""
critical_path_info = timing.critical_path()
"""
Explanation: We are also able to print out the critical paths as well as get them
back as an array.
End of explanation
"""
logic_area, mem_area = pyrtl.area_estimation(tech_in_nm=65)
est_area = logic_area + mem_area
print("Estimated Area of block", est_area, "sq mm")
print()
"""
Explanation: Part 2: Area Analysis
PyRTL also provides estimates for the area that would be used up if the
circuit was printed as an ASIC
End of explanation
"""
pyrtl.synthesize()
print("Pre Optimization:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
for net in pyrtl.working_block().logic:
print(str(net))
print()
"""
Explanation: Part 3: Synthesis
Synthesis is the operation of reducing the circuit into simpler components
The base synthesis function breaks down the more complex logic operations
into logic gates (keeps registers and memories intact) as well as reduces
all combinatorial logic into ops that only use one bitwidth wires
This synthesis allows for PyRTL to make optimizations to the net structure
as well as prepares it for further transformations on the PyRTL Toolchain
End of explanation
"""
pyrtl.optimize()
"""
Explanation: Part 4: Optimization
PyRTL has functions built in to eliminate unnecessary logic from the
circuit.
These optimizations are all invoked with a single call:
End of explanation
"""
print("Post Optimization:")
timing = pyrtl.TimingAnalysis()
timing.print_max_length()
for net in pyrtl.working_block().logic:
print(str(net))
"""
Explanation: Now to see the difference
End of explanation
"""
|
swirlingsand/deep-learning-foundations | reinforcement/.ipynb_checkpoints/Q-learning-cart-checkpoint.ipynb | mit | import gym
import tensorflow as tf
import numpy as np
"""
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
"""
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
"""
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull the contents into the gym repo.
End of explanation
"""
env.reset()  # reset the environment to its initial state
rewards = []  # track rewards so we can print them later
for _ in range(100):
    # env.render()  # uncomment for a human-friendly rendering of each frame
    state, reward, done, info = env.step(env.action_space.sample())  # take a random action
# for tracking later
rewards.append(reward)
if done:
rewards = []
env.reset()
"""
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
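To make the (state, reward, done, info) contract concrete without needing a display, here is a minimal sketch against a hypothetical FakeEnv stand-in (not part of Gym) that mimics the same interface:

```python
import random

class FakeEnv:
    """Hypothetical stand-in with the same step/reset interface as a Gym env."""
    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self.t = 0
    def reset(self):
        self.t = 0
        return 0.0  # initial state
    def step(self, action):
        self.t += 1
        done = self.t >= self.max_steps  # episode ends after max_steps frames
        return 0.0, 1.0, done, {}  # state, reward, done, info

env_sketch = FakeEnv()
state = env_sketch.reset()
total_reward = 0.0
done = False
while not done:
    state, reward, done, info = env_sketch.step(random.choice([0, 1]))
    total_reward += reward
# one unit of reward per frame survived, so total_reward is 5.0 here
```

The real Gym environment follows the same loop shape, just with real dynamics behind `step`.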
Run the code below to watch the simulation run.
End of explanation
"""
env.close()
"""
Explanation: To shut the window showing the simulation, use env.close().
End of explanation
"""
print(rewards[-20:])
"""
Explanation: If you ran the simulation above, we can look at the rewards:
End of explanation
"""
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
"""
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before, we used this equation to learn values for a Q-table. However, for this game there is a huge number of states available. The state has four values: the position and velocity of the cart, and the angle and angular velocity of the pole. These are all real-valued numbers, so ignoring floating point precision, you practically have infinitely many states. Instead of using a table, then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
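As a worked example of the target, with hypothetical numbers (not from the trained network):

```python
import numpy as np

# Hypothetical values for one transition, to make the target concrete
gamma = 0.99                      # future reward discount
reward = 1.0                      # reward observed after taking action a in state s
next_Qs = np.array([0.42, 0.58])  # network's Q(s', a') for the two actions

target = reward + gamma * np.max(next_Qs)  # Q-hat(s, a)
# target == 1.0 + 0.99 * 0.58 == 1.5742
```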
End of explanation
"""
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
"""
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
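The eviction behavior is easy to see with a tiny buffer:

```python
from collections import deque

buffer = deque(maxlen=3)  # a tiny buffer so the eviction is easy to see
for experience in range(5):
    buffer.append(experience)

# the two oldest items were pushed out the other side
# list(buffer) == [2, 3, 4]
```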
End of explanation
"""
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
"""
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
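The exploration schedule used later in the training loop (an exponential decay from `explore_start` down to `explore_stop`) can be pulled out as a small function:

```python
import numpy as np

explore_start = 1.0   # exploration probability at start
explore_stop = 0.01   # minimum exploration probability
decay_rate = 0.0001   # exponential decay rate

def explore_probability(step):
    return explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)

# starts fully exploratory and decays toward the floor:
# explore_probability(0) == 1.0
```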
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once that goal is met. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
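The target step in the listing (zeroing out terminal next-state values before forming $\hat{Q}$) can be sketched with hypothetical Q-values for a mini-batch of three transitions:

```python
import numpy as np

gamma = 0.99
# hypothetical mini-batch of three transitions
rewards = np.array([1.0, 1.0, 1.0])
target_Qs = np.array([[0.5, 0.7],
                      [0.2, 0.1],
                      [0.3, 0.9]])
episode_ends = np.array([False, False, True])

# terminal transitions contribute only their reward
target_Qs[episode_ends] = 0.0
targets = rewards + gamma * np.max(target_Qs, axis=1)
# targets == [1.693, 1.198, 1.0]
```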
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're also tuning the simulation.
End of explanation
"""
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
"""
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
"""
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
    step = 0
    loss = 0.0  # initialized so the first episode's summary print works before any training step
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
            # Uncomment this next line to watch the training
            # env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
"""
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
"""
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
"""
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
"""
Explanation: Testing
Let's checkout how our trained agent plays the game.
End of explanation
"""
|
tensorflow/docs | site/en/guide/migrate/evaluator.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
"""
import tensorflow.compat.v1 as tf1
import tensorflow as tf
import numpy as np
import tempfile
import time
import os
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
"""
Explanation: Migrate evaluation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/evaluator">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/evaluator.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/evaluator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Evaluation is a critical part of measuring and benchmarking models.
This guide demonstrates how to migrate evaluator tasks from TensorFlow 1 to TensorFlow 2. In TensorFlow 1 this functionality is implemented by tf.estimator.train_and_evaluate when the API is run in a distributed setting. In TensorFlow 2, you can use the built-in tf.keras.utils.SidecarEvaluator, or a custom evaluation loop, on the evaluator task.
There are simple serial evaluation options in both TensorFlow 1 (tf.estimator.Estimator.evaluate) and TensorFlow 2 (Model.fit(..., validation_data=(...)) or Model.evaluate). The evaluator task is preferable when you want to avoid your workers switching between training and evaluation, while built-in evaluation in Model.fit is preferable when you want your evaluation to be distributed.
Setup
End of explanation
"""
feature_columns = [tf1.feature_column.numeric_column("x", shape=[28, 28])]
classifier = tf1.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[256, 32],
optimizer=tf1.train.AdamOptimizer(0.001),
n_classes=10,
dropout=0.2
)
train_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_train},
y=y_train.astype(np.int32),
num_epochs=10,
batch_size=50,
shuffle=True,
)
test_input_fn = tf1.estimator.inputs.numpy_input_fn(
x={"x": x_test},
y=y_test.astype(np.int32),
num_epochs=10,
shuffle=False
)
train_spec = tf1.estimator.TrainSpec(input_fn=train_input_fn, max_steps=10)
eval_spec = tf1.estimator.EvalSpec(input_fn=test_input_fn,
steps=10,
throttle_secs=0)
"""
Explanation: TensorFlow 1: Evaluating using tf.estimator.train_and_evaluate
In TensorFlow 1, you can configure a tf.estimator to run training and evaluation using tf.estimator.train_and_evaluate.
In this example, start by defining the tf.estimator.Estimator and specifying the training and evaluation specifications:
End of explanation
"""
tf1.estimator.train_and_evaluate(estimator=classifier,
train_spec=train_spec,
eval_spec=eval_spec)
"""
Explanation: Then, train and evaluate the model. Because the estimator runs locally in this notebook, evaluation runs synchronously with training, alternating between the two. However, if the estimator is used in a distributed setting, the evaluator runs as a dedicated evaluator task. For more information, check the migration guide on distributed training.
End of explanation
"""
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = create_model()
model.compile(optimizer='adam',
loss=loss,
metrics=['accuracy'],
steps_per_execution=10,
run_eagerly=True)
log_dir = tempfile.mkdtemp()
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath=os.path.join(log_dir, 'ckpt-{epoch}'),
save_weights_only=True)
model.fit(x=x_train,
y=y_train,
epochs=1,
callbacks=[model_checkpoint])
"""
Explanation: TensorFlow 2: Evaluating a Keras model
In TensorFlow 2, if you use the Keras Model.fit API for training, you can evaluate the model with tf.keras.utils.SidecarEvaluator. You can also visualize the evaluation metrics in TensorBoard, which is not shown in this guide.
To help demonstrate this, let's first start by defining and training the model:
End of explanation
"""
data = tf.data.Dataset.from_tensor_slices((x_test, y_test))
data = data.batch(64)
tf.keras.utils.SidecarEvaluator(
model=model,
data=data,
checkpoint_dir=log_dir,
max_evaluations=1
).start()
"""
Explanation: Then, evaluate the model using tf.keras.utils.SidecarEvaluator. In real training, it's recommended to use a separate job to conduct the evaluation to free up worker resources for training.
End of explanation
"""
|
erdewit/ib_insync | notebooks/market_depth.ipynb | bsd-2-clause | from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=16)
"""
Explanation: Market depth (order book)
End of explanation
"""
l = ib.reqMktDepthExchanges()
l[:5]
"""
Explanation: To get a list of all exchanges that support market depth data and display the first five:
End of explanation
"""
contract = Forex('EURUSD')
ib.qualifyContracts(contract)
ticker = ib.reqMktDepth(contract)
"""
Explanation: Let's subscribe to market depth data for EURUSD:
End of explanation
"""
from IPython.display import display, clear_output
import pandas as pd
df = pd.DataFrame(index=range(5),
columns='bidSize bidPrice askPrice askSize'.split())
def onTickerUpdate(ticker):
bids = ticker.domBids
for i in range(5):
df.iloc[i, 0] = bids[i].size if i < len(bids) else 0
df.iloc[i, 1] = bids[i].price if i < len(bids) else 0
asks = ticker.domAsks
for i in range(5):
df.iloc[i, 2] = asks[i].price if i < len(asks) else 0
df.iloc[i, 3] = asks[i].size if i < len(asks) else 0
clear_output(wait=True)
display(df)
ticker.updateEvent += onTickerUpdate
IB.sleep(15);
"""
Explanation: To see a live order book, we create an event handler for ticker updates that displays a dynamically updated dataframe:
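The `ticker.updateEvent += onTickerUpdate` line uses ib_insync's event system; the subscription pattern can be sketched with a hypothetical minimal Event class (a stand-in, not the real implementation):

```python
class Event:
    """Minimal stand-in for an ib_insync event: handlers subscribe with +=."""
    def __init__(self):
        self._handlers = []
    def __iadd__(self, handler):
        self._handlers.append(handler)
        return self
    def emit(self, *args):
        for handler in self._handlers:
            handler(*args)

updates = []
update_event = Event()
update_event += updates.append   # same shape as ticker.updateEvent += onTickerUpdate
update_event.emit('ticker-update')
# updates == ['ticker-update']
```

Each time the real ticker emits, every subscribed handler is called with the updated ticker.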
End of explanation
"""
ib.cancelMktDepth(contract)
ib.disconnect()
"""
Explanation: Stop the market depth subscription:
End of explanation
"""
|
deo1/deo1 | Legacy/Udacity/Intro to Data Analysis/L1_Starter_Code.ipynb | mit | import unicodecsv
## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
# enrollments.append(row)
# f.close()
def csv_to_list_of_dict(filename):
with open(filename, 'rb') as f:
reader = unicodecsv.DictReader(f)
return list(reader)
#####################################
# 1 #
#####################################
## Read in the data from daily_engagement.csv and project_submissions.csv
## and store the results in the below variables.
## Then look at the first row of each table.
root_path = 'c:/users/deo1/documents/github/Udacity/Intro to Data Analysis'
enrollments_filename = root_path + '/datasets/ud170/enrollments.csv'
engagement_filename = root_path + '/datasets/ud170/daily_engagement.csv'
submissions_filename = root_path + '/datasets/ud170/project_submissions.csv'
enrollments = csv_to_list_of_dict(enrollments_filename)
daily_engagement = csv_to_list_of_dict(engagement_filename)
project_submissions = csv_to_list_of_dict(submissions_filename)
# print first row of each
print(enrollments[0])
print(daily_engagement[0])
print(project_submissions[0])
"""
Explanation: Before we get started, a couple of reminders to keep in mind when using iPython notebooks:
Remember that you can see from the left side of a code cell when it was last run if there is a number within the brackets.
When you start a new notebook session, make sure you run all of the cells up to the point where you last left off. Even if the output is still visible from when you ran the cells in your previous session, the kernel starts in a fresh state so you'll need to reload the data, etc. on a new session.
The previous point is useful to keep in mind if your answers do not match what is expected in the lesson's quizzes. Try reloading the data and run all of the processing steps one by one in order to make sure that you are working with the same variables and data that are at each quiz stage.
Load Data from CSVs
End of explanation
"""
from datetime import datetime as dt
# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None
def parse_date(date):
if date == '':
return None
else:
return dt.strptime(date, '%Y-%m-%d')
# Takes a string which is either an empty string or represents an integer,
# and returns an int or None.
def parse_maybe_int(i):
if i == '':
return None
else:
return int(i)
# I'm not familiar with the dataset -- there may be empty values that I don't
# want to treat as equivalent to False
def parse_maybe_bool(i):
if i == '':
return None
else:
return i == 'True'
# Clean up the data types in the enrollments table
for enrollment in enrollments:
enrollment['cancel_date'] = parse_date(enrollment['cancel_date'])
enrollment['days_to_cancel'] = parse_maybe_int(enrollment['days_to_cancel'])
enrollment['is_canceled'] = parse_maybe_bool(enrollment['is_canceled'])
enrollment['is_udacity'] = parse_maybe_bool(enrollment['is_udacity'])
enrollment['join_date'] = parse_date(enrollment['join_date'])
enrollments[0]
# Clean up the data types in the engagement table
for engagement_record in daily_engagement:
engagement_record['lessons_completed'] = int(float(engagement_record['lessons_completed']))
engagement_record['num_courses_visited'] = int(float(engagement_record['num_courses_visited']))
engagement_record['projects_completed'] = int(float(engagement_record['projects_completed']))
engagement_record['total_minutes_visited'] = float(engagement_record['total_minutes_visited'])
engagement_record['utc_date'] = parse_date(engagement_record['utc_date'])
daily_engagement[0]
# Clean up the data types in the submissions table
for submission in project_submissions:
submission['completion_date'] = parse_date(submission['completion_date'])
submission['creation_date'] = parse_date(submission['creation_date'])
project_submissions[0]
"""
Explanation: Fixing Data Types
End of explanation
"""
#####################################
# 2 #
#####################################
## Find the total number of rows and the number of unique students (account keys)
## in each table.
# takes a table as a list of dict and returns a set of unique values under key: col_name
def unique_values(table, col_name):
unique = set()
for row in table:
unique.add(row[col_name])
return unique
print('enrollments: row count: '
+ str(len(enrollments)))
print('enrollments: unique acct: '
+ str(len(unique_values(enrollments, 'account_key'))))
print('daily_engagement: row count: '
+ str(len(daily_engagement)))
print('daily_engagement: unique acct: '
+ str(len(unique_values(daily_engagement, 'acct'))))
print('project_submissions: row count: '
+ str(len(project_submissions)))
print('project_submissions: unique acct: '
+ str(len(unique_values(project_submissions, 'account_key'))))
"""
Explanation: Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.
Investigating the Data
End of explanation
"""
#####################################
# 3 #
#####################################
## Rename the "acct" column in the daily_engagement table to "account_key".
"""
Explanation: Problems in the Data
End of explanation
"""
#####################################
# 4 #
#####################################
## Find any one student enrollments where the student is missing from the daily engagement table.
## Output that enrollment.
"""
Explanation: Missing Engagement Records
End of explanation
"""
#####################################
# 5 #
#####################################
## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.
"""
Explanation: Checking for More Problem Records
End of explanation
"""
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
if enrollment['is_udacity']:
udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)
# Given some data with an account_key field, removes any records corresponding to Udacity test accounts
def remove_udacity_accounts(data):
non_udacity_data = []
for data_point in data:
if data_point['account_key'] not in udacity_test_accounts:
non_udacity_data.append(data_point)
return non_udacity_data
# Remove Udacity test accounts from all three tables
non_udacity_enrollments = remove_udacity_accounts(enrollments)
non_udacity_engagement = remove_udacity_accounts(daily_engagement)
non_udacity_submissions = remove_udacity_accounts(project_submissions)
print(len(non_udacity_enrollments))
print(len(non_udacity_engagement))
print(len(non_udacity_submissions))
"""
Explanation: Tracking Down the Remaining Problems
End of explanation
"""
#####################################
# 6 #
#####################################
## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values should be the date the student enrolled.
paid_students =
"""
Explanation: Refining the Question
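One possible sketch of the dictionary described above (shown with a tiny hypothetical stand-in for the cleaned enrollments table so the snippet runs on its own; keeping the most recent join date handles students who enrolled more than once):

```python
from datetime import datetime as dt

# hypothetical stand-in for the cleaned enrollments table
enrollments = [
    {'account_key': '1', 'join_date': dt(2015, 1, 1), 'days_to_cancel': 3},
    {'account_key': '2', 'join_date': dt(2015, 1, 5), 'days_to_cancel': None},
    {'account_key': '3', 'join_date': dt(2015, 1, 2), 'days_to_cancel': 30},
]

paid_students = {}
for enrollment in enrollments:
    # haven't canceled yet, or remained enrolled for more than 7 days
    if enrollment['days_to_cancel'] is None or enrollment['days_to_cancel'] > 7:
        account_key = enrollment['account_key']
        join_date = enrollment['join_date']
        # keep the most recent join date for students who enrolled more than once
        if account_key not in paid_students or join_date > paid_students[account_key]:
            paid_students[account_key] = join_date
```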
End of explanation
"""
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
time_delta = engagement_date - join_date
return time_delta.days < 7
#####################################
# 7 #
#####################################
## Create a list of rows from the engagement table including only rows where
## the student is one of the paid students you just found, and the date is within
## one week of the student's join date.
paid_engagement_in_first_week =
"""
Explanation: Getting Data from First Week
End of explanation
"""
from collections import defaultdict
# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
engagement_by_account = defaultdict(list)
for engagement_record in paid_engagement_in_first_week:
account_key = engagement_record['account_key']
engagement_by_account[account_key].append(engagement_record)
# Create a dictionary with the total minutes each student spent in the classroom during the first week.
# The keys are account keys, and the values are numbers (total minutes)
total_minutes_by_account = {}
for account_key, engagement_for_student in engagement_by_account.items():
total_minutes = 0
for engagement_record in engagement_for_student:
total_minutes += engagement_record['total_minutes_visited']
total_minutes_by_account[account_key] = total_minutes
import numpy as np
# Summarize the data about minutes spent in the classroom
total_minutes = list(total_minutes_by_account.values())
print('Mean:', np.mean(total_minutes))
print('Standard deviation:', np.std(total_minutes))
print('Minimum:', np.min(total_minutes))
print('Maximum:', np.max(total_minutes))
"""
Explanation: Exploring Student Engagement
End of explanation
"""
#####################################
# 8 #
#####################################
## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
"""
Explanation: Debugging Data Analysis Code
End of explanation
"""
#####################################
# 9 #
#####################################
## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-use the code above.
"""
Explanation: Lessons Completed in First Week
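One way to factor the repeated grouping-and-summing logic into reusable helpers, sketched with hypothetical records:

```python
import numpy as np
from collections import defaultdict

def group_data(data, key_name):
    """Group a list of dicts by the given key."""
    grouped = defaultdict(list)
    for row in data:
        grouped[row[key_name]].append(row)
    return grouped

def sum_grouped_items(grouped_data, field_name):
    """Total a numeric field for each group."""
    return {key: sum(row[field_name] for row in rows)
            for key, rows in grouped_data.items()}

# tiny hypothetical records to show the shape of the result
records = [
    {'account_key': '1', 'lessons_completed': 2},
    {'account_key': '1', 'lessons_completed': 3},
    {'account_key': '2', 'lessons_completed': 1},
]
totals = sum_grouped_items(group_data(records, 'account_key'), 'lessons_completed')
# totals == {'1': 5, '2': 1}
values = list(totals.values())
# np.mean(values) == 3.0
```

The same two helpers then work for total minutes, lessons completed, and days visited by swapping the field name.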
End of explanation
"""
######################################
# 10 #
######################################
## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.
"""
Explanation: Number of Visits in First Week
End of explanation
"""
######################################
# 11 #
######################################
## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should contain data for students
## who do not.
subway_project_lesson_keys = ['746169184', '3176718735']
passing_engagement = []      # TODO: fill with engagement records for students who passed
non_passing_engagement = []  # TODO: fill with engagement records for students who did not
"""
Explanation: Splitting out Passing Students
End of explanation
"""
######################################
# 12 #
######################################
## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlier (minutes spent
## in the classroom, lessons completed, and days visited).
"""
Explanation: Comparing the Two Student Groups
End of explanation
"""
######################################
# 13 #
######################################
## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics you examined.
"""
Explanation: Making Histograms
End of explanation
"""
######################################
# 14 #
######################################
## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and changing one or more
## arguments to the hist() function.
"""
Explanation: Improving Plots and Sharing Findings
End of explanation
"""
fastai/fastai | nbs/30_text.core.ipynb | apache-2.0 | #|export
import html
"""
Explanation: Text core
Basic function to preprocess text before assembling it in a DataLoaders.
End of explanation
"""
#|export
#special tokens
UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ = "xxunk xxpad xxbos xxeos xxfld xxrep xxwrep xxup xxmaj".split()
#|export
_all_ = ["UNK", "PAD", "BOS", "EOS", "FLD", "TK_REP", "TK_WREP", "TK_UP", "TK_MAJ"]
#|export
_re_spec = re.compile(r'([/#\\])')
def spec_add_spaces(t):
"Add spaces around / and #"
return _re_spec.sub(r' \1 ', t)
test_eq(spec_add_spaces('#fastai'), ' # fastai')
test_eq(spec_add_spaces('/fastai'), ' / fastai')
test_eq(spec_add_spaces('\\fastai'), ' \\ fastai')
#|export
_re_space = re.compile(' {2,}')
def rm_useless_spaces(t):
"Remove multiple spaces"
return _re_space.sub(' ', t)
test_eq(rm_useless_spaces('a  b   c'), 'a b c')
#|export
_re_rep = re.compile(r'(\S)(\1{2,})')
def replace_rep(t):
"Replace repetitions at the character level: cccc -- TK_REP 4 c"
def _replace_rep(m):
c,cc = m.groups()
return f' {TK_REP} {len(cc)+1} {c} '
return _re_rep.sub(_replace_rep, t)
"""
Explanation: Preprocessing rules
The following are rules applied to texts before or after it's tokenized.
End of explanation
"""
test_eq(replace_rep('aa'), 'aa')
test_eq(replace_rep('aaaa'), f' {TK_REP} 4 a ')
#|export
_re_wrep = re.compile(r'(?:\s|^)(\w+)\s+((?:\1\s+)+)\1(\s|\W|$)')
#|hide
"""
Matches any word repeated at least three times with spaces between them
(?:\s|^) Non-Capture either a whitespace character or the beginning of text
(\w+) Capture any alphanumeric character
\s+ One or more whitespace
((?:\1\s+)+) Capture a repetition of one or more times \1 followed by one or more whitespace
\1 Occurrence of \1
(\s|\W|$) Capture last whitespace, non alphanumeric character or end of text
""";
#|export
def replace_wrep(t):
"Replace word repetitions: word word word word -- TK_WREP 4 word"
def _replace_wrep(m):
c,cc,e = m.groups()
return f' {TK_WREP} {len(cc.split())+2} {c} {e}'
return _re_wrep.sub(_replace_wrep, t)
"""
Explanation: It starts replacing at 3 repetitions of the same character or more.
End of explanation
"""
test_eq(replace_wrep('ah ah'), 'ah ah')
test_eq(replace_wrep('ah ah ah'), f' {TK_WREP} 3 ah ')
test_eq(replace_wrep('ah ah ah ah'), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah '), f' {TK_WREP} 4 ah ')
test_eq(replace_wrep('ah ah ah ah.'), f' {TK_WREP} 4 ah .')
test_eq(replace_wrep('ah ah ahi'), f'ah ah ahi')
#|export
def fix_html(x):
"Various messy things we've seen in documents"
x = x.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace('nbsp;', ' ').replace(
'#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace('<br />', "\n").replace(
'\\"', '"').replace('<unk>',UNK).replace(' @.@ ','.').replace(' @-@ ','-').replace('...',' …')
return html.unescape(x)
test_eq(fix_html('#39;bli#146;'), "'bli'")
test_eq(fix_html('Sarah amp; Duck...'), 'Sarah & Duck …')
test_eq(fix_html('a nbsp; #36;'), 'a $')
test_eq(fix_html('\\" <unk>'), f'" {UNK}')
test_eq(fix_html('quot;  @.@  @-@ '), "' .-")
test_eq(fix_html('<br />text\\n'), '\ntext\n')
#|export
_re_all_caps = re.compile(r'(\s|^)([A-Z]+[^a-z\s]*)(?=(\s|$))')
#|hide
"""
Catches any word in all caps, even with ' or - inside
(\s|^) Capture either a whitespace or the beginning of text
([A-Z]+ Capture one capitalized letter or more...
[^a-z\s]*) ...followed by anything that's non lowercase or whitespace
(?=(\s|$)) Look ahead for a space or end of text
""";
#|export
def replace_all_caps(t):
"Replace tokens in ALL CAPS by their lower version and add `TK_UP` before."
def _replace_all_caps(m):
tok = f'{TK_UP} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_all_caps.sub(_replace_all_caps, t)
test_eq(replace_all_caps("I'M SHOUTING"), f"{TK_UP} i'm {TK_UP} shouting")
test_eq(replace_all_caps("I'm speaking normally"), "I'm speaking normally")
test_eq(replace_all_caps("I am speaking normally"), "i am speaking normally")
#|export
_re_maj = re.compile(r'(\s|^)([A-Z][^A-Z\s]*)(?=(\s|$))')
#|hide
"""
Catches any capitalized word
(\s|^) Capture either a whitespace or the beginning of text
([A-Z] Capture exactly one capitalized letter...
[^A-Z\s]*) ...followed by anything that's not uppercase or whitespace
(?=(\s|$)) Look ahead for a space or end of text
""";
#|export
def replace_maj(t):
"Replace tokens in Sentence Case by their lower version and add `TK_MAJ` before."
def _replace_maj(m):
tok = f'{TK_MAJ} ' if len(m.groups()[1]) > 1 else ''
return f"{m.groups()[0]}{tok}{m.groups()[1].lower()}"
return _re_maj.sub(_replace_maj, t)
test_eq(replace_maj("Jeremy Howard"), f'{TK_MAJ} jeremy {TK_MAJ} howard')
test_eq(replace_maj("I don't think there is any maj here"), ("i don't think there is any maj here"),)
#|export
def lowercase(t, add_bos=True, add_eos=False):
"Converts `t` to lowercase"
return (f'{BOS} ' if add_bos else '') + t.lower().strip() + (f' {EOS}' if add_eos else '')
#|export
def replace_space(t):
"Replace embedded spaces in a token with unicode line char to allow for split/join"
return t.replace(' ', '▁')
#|export
defaults.text_spec_tok = [UNK, PAD, BOS, EOS, FLD, TK_REP, TK_WREP, TK_UP, TK_MAJ]
defaults.text_proc_rules = [fix_html, replace_rep, replace_wrep, spec_add_spaces, rm_useless_spaces,
replace_all_caps, replace_maj, lowercase]
defaults.text_postproc_rules = [replace_space]
"""
Explanation: It starts replacing at 3 repetitions of the same word or more.
End of explanation
"""
#|export
class BaseTokenizer():
"Basic tokenizer that just splits on spaces"
def __init__(self, split_char=' ', **kwargs): self.split_char=split_char
def __call__(self, items): return (t.split(self.split_char) for t in items)
tok = BaseTokenizer()
test_eq(tok(["This is a text"]), [["This", "is", "a", "text"]])
tok = BaseTokenizer('x')
test_eq(tok(["This is a text"]), [["This is a te", "t"]])
#|export
class SpacyTokenizer():
"Spacy tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, buf_sz=5000):
import spacy
from spacy.symbols import ORTH
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
nlp = spacy.blank(lang)
for w in self.special_toks: nlp.tokenizer.add_special_case(w, [{ORTH: w}])
self.pipe,self.buf_sz = nlp.pipe,buf_sz
def __call__(self, items):
return (L(doc).attrgot('text') for doc in self.pipe(map(str,items), batch_size=self.buf_sz))
#|export
WordTokenizer = SpacyTokenizer
tok = SpacyTokenizer()
inp,exp = "This isn't the easiest text.",["This", "is", "n't", "the", "easiest", "text", "."]
test_eq(L(tok([inp,inp])), [exp,exp])
#|export
class TokenizeWithRules:
"A wrapper around `tok` which applies `rules`, then tokenizes, then applies `post_rules`"
def __init__(self, tok, rules=None, post_rules=None):
self.rules = L(ifnone(rules, defaults.text_proc_rules))
self.post_f = compose(*L(ifnone(post_rules, defaults.text_postproc_rules)))
self.tok = tok
def __call__(self, batch):
return (L(o).map(self.post_f) for o in self.tok(maps(*self.rules, batch)))
f = TokenizeWithRules(BaseTokenizer(),rules=[replace_all_caps])
test_eq(f(["THIS isn't a problem"]), [[TK_UP, 'this', "isn't", 'a', 'problem']])
f = TokenizeWithRules(SpacyTokenizer())
test_eq(f(["This isn't a problem"]), [[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem']])
f = TokenizeWithRules(BaseTokenizer(split_char="'"), rules=[])
test_eq(f(["This isn't a problem"]), [['This▁isn', 't▁a▁problem']])
"""
Explanation: Tokenizing
A tokenizer is a class that must implement __call__. This method receives an iterator of texts and must return a generator with their tokenized versions. Here is the most basic example:
End of explanation
"""
texts = ["this is a text", "this is another text"]
tok = TokenizeWithRules(BaseTokenizer(), texts.__getitem__)
test_eq(tok([0,1]), [['this', 'is', 'a', 'text'],['this', 'is', 'another', 'text']])
#|export
@delegates(TokenizeWithRules)
def tokenize1(text, tok, **kwargs):
"Call `TokenizeWithRules` with a single text"
return first(TokenizeWithRules(tok=tok, **kwargs)([text]))
test_eq(tokenize1("This isn't a problem", SpacyTokenizer()),
[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem'])
test_eq(tokenize1("This isn't a problem", tok=BaseTokenizer(), rules=[]),
['This',"isn't",'a','problem'])
#|export
def parallel_tokenize(items, tok=None, rules=None, n_workers=defaults.cpus, **kwargs):
"Calls optional `setup` on `tok` before launching `TokenizeWithRules` using `parallel_gen"
if tok is None: tok = WordTokenizer()
if hasattr(tok, 'setup'): tok.setup(items, rules)
return parallel_gen(TokenizeWithRules, items, tok=tok, rules=rules, n_workers=n_workers, **kwargs)
"""
Explanation: The main function that will be called during one of the processes handling tokenization. It iterates through the batch of texts, applies the rules to them and tokenizes them.
End of explanation
"""
res = parallel_tokenize(['0 1', '1 2'], rules=[], n_workers=2)
idxs,toks = zip(*L(res).sorted(itemgetter(0)))
test_eq(toks, [['0','1'],['1','2']])
#|hide
res1 = parallel_tokenize(['0 1', '1 2'], tok=BaseTokenizer(), rules=[], n_workers=0)
idxs1,toks1 = zip(*L(res1).sorted(itemgetter(0)))
test_eq(toks, toks1)
"""
Explanation: Note that since this uses parallel_gen behind the scenes, the generator returned contains tuples of indices and results. There is no guarantee that the results are returned in order, so you should sort by the first item of the tuples (the indices) if you need them ordered.
End of explanation
"""
#|export
fn_counter_pkl = 'counter.pkl'
fn_lengths_pkl = 'lengths.pkl'
#|export
def _tokenize_files(func, files, path, output_dir=None, output_names=None, n_workers=defaults.cpus, rules=None, tok=None,
encoding='utf8', skip_if_exists=False):
"Tokenize text `files` in parallel using `n_workers`"
if tok is None: tok = WordTokenizer()
output_dir = Path(ifnone(output_dir, path.parent/f'{path.name}_tok'))
if skip_if_exists and output_dir.exists(): return output_dir
output_dir.mkdir(exist_ok=True)
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
rules = partial(Path.read_text, encoding=encoding) + L(ifnone(rules, defaults.text_proc_rules.copy()))
lengths,counter = {},Counter()
for i,tok in parallel_tokenize(files, tok, rules, n_workers=n_workers):
out = func(i,output_dir)
out.mk_write(' '.join(tok), encoding=encoding)
lengths[str(files[i].relative_to(path))] = len(tok)
counter.update(tok)
save_pickle(output_dir/fn_lengths_pkl, lengths)
save_pickle(output_dir/fn_counter_pkl, counter)
return output_dir
#|export
@delegates(_tokenize_files)
def tokenize_folder(path, extensions=None, folders=None, output_dir=None, skip_if_exists=True, **kwargs):
"Tokenize text files in `path` in parallel using `n_workers`"
path,extensions = Path(path),ifnone(extensions, ['.txt'])
files = get_files(path, extensions=extensions, recurse=True, folders=folders)
def _f(i,output_dir): return output_dir/files[i].relative_to(path)
return _tokenize_files(_f, files, path, skip_if_exists=skip_if_exists, **kwargs)
"""
Explanation: Tokenize texts in files
Preprocessing function for texts in filenames. Tokenized texts will be saved in a similar fashion in a directory suffixed with _tok in the parent folder of path (override with output_dir). This directory is the return value.
End of explanation
"""
#|export
@delegates(_tokenize_files)
def tokenize_files(files, path, output_dir, output_names=None, **kwargs):
"Tokenize text `files` in parallel using `n_workers`"
if output_names is None: output_names = L(output_dir/f.relative_to(path) for f in files)
def _f(i,output_dir): return output_dir/output_names[i]
return _tokenize_files(_f, files, path, output_dir=output_dir, **kwargs)
"""
Explanation: The result will be in output_dir (defaults to a folder in the same parent directory as path, with _tok added to path.name) with the same structure as in path. Tokenized texts for a given file will be in the file having the same name in output_dir. Additionally, the number of tokens per file is stored in output_dir/lengths.pkl and the count of all words in output_dir/counter.pkl.
extensions will default to ['.txt'] and all text files in path are treated unless you specify a list of folders in folders. rules (that default to defaults.text_proc_rules) are applied to each text before going in the tokenizer.
End of explanation
"""
#|export
def _join_texts(df, mark_fields=False):
"Join texts in row `idx` of `df`, marking each field with `FLD` if `mark_fields=True`"
text_col = (f'{FLD} {1} ' if mark_fields else '' ) + df.iloc[:,0].astype(str)
for i in range(1,len(df.columns)):
text_col += (f' {FLD} {i+1} ' if mark_fields else ' ') + df.iloc[:,i].astype(str)
return text_col.values
#|hide
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'text1': texts}, columns=['text', 'text1'])
col = _join_texts(df, mark_fields=True)
for i in range(len(df)):
test_eq(col[i], f'{FLD} 1 This is an example of text {i} {FLD} 2 This is an example of text {i}')
#|export
def tokenize_texts(texts, n_workers=defaults.cpus, rules=None, tok=None):
"Tokenize `texts` in parallel using `n_workers`"
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
outputs = L(parallel_tokenize(texts, tok=tok, rules=rules, n_workers=n_workers)
).sorted().itemgot(1)
return outputs
#|export
def tokenize_df(df, text_cols, n_workers=defaults.cpus, rules=None, mark_fields=None,
tok=None, tok_text_col="text"):
"Tokenize texts in `df[text_cols]` in parallel using `n_workers` and stores them in `df[tok_text_col]`"
text_cols = [df.columns[c] if isinstance(c, int) else c for c in L(text_cols)]
#mark_fields defaults to False if there is one column of texts, True if there are multiple
if mark_fields is None: mark_fields = len(text_cols)>1
rules = L(ifnone(rules, defaults.text_proc_rules.copy()))
texts = _join_texts(df[text_cols], mark_fields=mark_fields)
outputs = L(parallel_tokenize(texts, tok, rules, n_workers=n_workers)
).sorted().itemgot(1)
other_cols = df.columns[~df.columns.isin(text_cols)]
res = df[other_cols].copy()
res[tok_text_col] = outputs
res[f'{tok_text_col}_length'] = [len(o) for o in outputs]
return res,Counter(outputs.concat())
"""
Explanation: Tokenize texts in a dataframe
End of explanation
"""
#|export
def tokenize_csv(fname, text_cols, outname=None, n_workers=4, rules=None, mark_fields=None,
tok=None, header='infer', chunksize=50000):
"Tokenize texts in the `text_cols` of the csv `fname` in parallel using `n_workers`"
df = pd.read_csv(fname, header=header, chunksize=chunksize)
outname = Path(ifnone(outname, fname.parent/f'{fname.stem}_tok.csv'))
cnt = Counter()
for i,dfp in enumerate(df):
out,c = tokenize_df(dfp, text_cols, n_workers=n_workers, rules=rules,
mark_fields=mark_fields, tok=tok)
out.text = out.text.str.join(' ')
out.to_csv(outname, header=(None,header)[i==0], index=False, mode=('a','w')[i==0])
cnt.update(c)
save_pickle(outname.with_suffix('.pkl'), cnt)
#|export
def load_tokenized_csv(fname):
"Utility function to quickly load a tokenized csv ans the corresponding counter"
fname = Path(fname)
out = pd.read_csv(fname)
for txt_col in out.columns[1:-1]:
out[txt_col] = tuple(out[txt_col].str.split(' '))
return out,load_pickle(fname.with_suffix('.pkl'))
"""
Explanation: This function returns a new dataframe with the same non-text columns, a column named text that contains the tokenized texts and a column named text_length that contains their respective lengths. It also returns a counter of all seen words to quickly build a vocabulary afterward.
rules (that defaults to defaults.text_proc_rules) are applied to each text before going in the tokenizer. If mark_fields isn't specified, it defaults to False when there is a single text column, True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.
End of explanation
"""
def _prepare_texts(tmp_d):
"Prepare texts in a folder struct in tmp_d, a csv file and returns a dataframe"
path = Path(tmp_d)/'tmp'
path.mkdir()
for d in ['a', 'b', 'c']:
(path/d).mkdir()
for i in range(5):
with open(path/d/f'text{i}.txt', 'w') as f: f.write(f"This is an example of text {d} {i}")
texts = [f"This is an example of text {d} {i}" for i in range(5) for d in ['a', 'b', 'c']]
df = pd.DataFrame({'text': texts, 'label': list(range(15))}, columns=['text', 'label'])
csv_fname = tmp_d/'input.csv'
df.to_csv(csv_fname, index=False)
return path,df,csv_fname
#|hide
# integration test
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
#Tokenize as folders
tokenize_folder(path)
outp = Path(tmp_d)/'tmp_tok'
for d in ['a', 'b', 'c']:
p = outp/d
for i in range(5):
test_eq((p/f'text{i}.txt').read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', d, str(i) ]))
cnt_a = load_pickle(outp/fn_counter_pkl)
test_eq(cnt_a['this'], 15)
test_eq(cnt_a['a'], 5)
test_eq(cnt_a['0'], 3)
#Tokenize as files
files = get_text_files(path)
tokenize_files(files, path, output_dir=path/'d')
for f in files:
test_eq((path/'d'/f.relative_to(path)).read_text(), ' '.join([
BOS, TK_MAJ, 'this', 'is', 'an', 'example', 'of', 'text', f.parent.name, f.name[4]]))
#Tokenize as individual texts
out = tokenize_texts(df['text'].values)
test_eq(out, [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
#Tokenize as a dataframe
out,cnt_b = tokenize_df(df, text_cols='text')
test_eq(list(out.columns), ['label', 'text', 'text_length'])
test_eq(out['label'].values, df['label'].values)
test_eq(list(out['text']), [(outp/d/f'text{i}.txt').read_text().split(' ') for i in range(5) for d in ['a', 'b', 'c']])
test_eq(cnt_a, cnt_b)
#Tokenize as a csv
out_fname = Path(tmp_d)/'output.csv'
tokenize_csv(csv_fname, text_cols='text', outname=out_fname)
a,b = load_tokenized_csv(out_fname)
test_eq((out,cnt_b), load_tokenized_csv(out_fname))
"""
Explanation: The result will be written in a new csv file in outname (defaults to the same as fname with the suffix _tok.csv) and will have the same header as the original file, the same non-text columns, and a text and a text_length column as described in tokenize_df.
rules (that defaults to defaults.text_proc_rules) are applied to each text before going in the tokenizer. If mark_fields isn't specified, it defaults to False when there is a single text column, True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.
The csv file is opened with header and optionally with blocks of chunksize at a time. If this argument is passed, each chunk is processed independently and saved in the output file to save memory usage.
End of explanation
"""
#|export
class Tokenizer(Transform):
"Provides a consistent `Transform` interface to tokenizers operating on `DataFrame`s and folders"
input_types = (str, list, L, tuple, Path)
def __init__(self, tok, rules=None, counter=None, lengths=None, mode=None, sep=' '):
if isinstance(tok,type): tok=tok()
store_attr('tok,counter,lengths,mode,sep')
self.rules = defaults.text_proc_rules if rules is None else rules
@classmethod
@delegates(tokenize_df, keep=True)
def from_df(cls, text_cols, tok=None, rules=None, sep=' ', **kwargs):
if tok is None: tok = WordTokenizer()
res = cls(tok, rules=rules, mode='df')
res.kwargs,res.train_setup = merge({'tok': tok}, kwargs),False
res.text_cols,res.sep = text_cols,sep
default_val = inspect.signature(tokenize_df).parameters['tok_text_col'].default
res.tok_text_col = kwargs.get('tok_text_col', default_val)
return res
@classmethod
@delegates(tokenize_folder, keep=True)
def from_folder(cls, path, tok=None, rules=None, **kwargs):
path = Path(path)
if tok is None: tok = WordTokenizer()
output_dir = tokenize_folder(path, tok=tok, rules=rules, **kwargs)
res = cls(tok, counter=load_pickle(output_dir/fn_counter_pkl),
lengths=load_pickle(output_dir/fn_lengths_pkl), rules=rules, mode='folder')
res.path,res.output_dir = path,output_dir
return res
def setups(self, dsets):
if not self.mode == 'df' or not isinstance(dsets.items, pd.DataFrame): return
dsets.items,count = tokenize_df(dsets.items, self.text_cols, rules=self.rules, **self.kwargs)
if self.counter is None: self.counter = count
if self.lengths is None: self.lengths = dsets.items[f'{self.tok_text_col}_length'].values
return dsets
def encodes(self, o:Path):
if self.mode=='folder' and str(o).startswith(str(self.path)):
tok = self.output_dir/o.relative_to(self.path)
return L(tok.read_text(encoding='UTF-8').split(' '))
else: return self._tokenize1(o.read_text())
def encodes(self, o:str): return self._tokenize1(o)
def _tokenize1(self, o): return first(self.tok([compose(*self.rules)(o)]))
def get_lengths(self, items):
if self.lengths is None: return None
if self.mode == 'df':
if isinstance(items, pd.DataFrame) and f'{self.tok_text_col}_length' in items.columns:
return items[f'{self.tok_text_col}_length'].values
if self.mode == 'folder':
try:
res = [self.lengths[str(Path(i).relative_to(self.path))] for i in items]
if len(res) == len(items): return res
except: return None
def decodes(self, o): return TitledStr(self.sep.join(o))
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
dsets = Datasets(items, [Tokenizer.from_folder(path)], splits=splits)
print(dsets.train[0])
dsets = Datasets(df, [Tokenizer.from_df('text')], splits=splits)
print(dsets.train[0][0].text)
tst = test_set(dsets, ['This is a test', 'this is another test'])
test_eq(tst, [(['xxbos', 'xxmaj', 'this','is','a','test'],),
(['xxbos','this','is','another','test'],)])
"""
Explanation: Tokenizer-
End of explanation
"""
#|export
eu_langs = ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu",
"it","lt","lv","mt","nl","pl","pt","ro","sk","sl","sv"] # all European langs
#|export
class SentencePieceTokenizer():#TODO: pass the special tokens symbol to sp
"SentencePiece tokenizer for `lang`"
def __init__(self, lang='en', special_toks=None, sp_model=None, vocab_sz=None, max_vocab_sz=30000,
model_type='unigram', char_coverage=None, cache_dir='tmp'):
try: from sentencepiece import SentencePieceTrainer,SentencePieceProcessor
except ImportError:
raise Exception('sentencepiece module is missing: run `pip install sentencepiece!=0.1.90,!=0.1.91`')
self.sp_model,self.cache_dir = sp_model,Path(cache_dir)
self.vocab_sz,self.max_vocab_sz,self.model_type = vocab_sz,max_vocab_sz,model_type
self.char_coverage = ifnone(char_coverage, 0.99999 if lang in eu_langs else 0.9998)
self.special_toks = ifnone(special_toks, defaults.text_spec_tok)
if sp_model is None: self.tok = None
else:
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
os.makedirs(self.cache_dir, exist_ok=True)
def _get_vocab_sz(self, raw_text_path):
cnt = Counter()
with open(raw_text_path, 'r') as f:
for line in f.readlines():
cnt.update(line.split())
if len(cnt)//4 > self.max_vocab_sz: return self.max_vocab_sz
res = len(cnt)//4
while res%8 != 0: res+=1
return max(res,29)
def train(self, raw_text_path):
"Train a sentencepiece tokenizer on `texts` and save it in `path/tmp_dir`"
from sentencepiece import SentencePieceTrainer
vocab_sz = self._get_vocab_sz(raw_text_path) if self.vocab_sz is None else self.vocab_sz
spec_tokens = ['\u2581'+s for s in self.special_toks]
SentencePieceTrainer.Train(" ".join([
f"--input={raw_text_path} --vocab_size={vocab_sz} --model_prefix={self.cache_dir/'spm'}",
f"--character_coverage={self.char_coverage} --model_type={self.model_type}",
f"--unk_id={len(spec_tokens)} --pad_id=-1 --bos_id=-1 --eos_id=-1 --minloglevel=2",
f"--user_defined_symbols={','.join(spec_tokens)} --hard_vocab_limit=false"]))
raw_text_path.unlink()
return self.cache_dir/'spm.model'
def setup(self, items, rules=None):
from sentencepiece import SentencePieceProcessor
if rules is None: rules = []
if self.tok is not None: return {'sp_model': self.sp_model}
raw_text_path = self.cache_dir/'texts.out'
with open(raw_text_path, 'w') as f:
for t in progress_bar(maps(*rules, items), total=len(items), leave=False):
f.write(f'{t}\n')
sp_model = self.train(raw_text_path)
self.tok = SentencePieceProcessor()
self.tok.Load(str(sp_model))
return {'sp_model': sp_model}
def __call__(self, items):
if self.tok is None: self.setup(items)
for t in items: yield self.tok.EncodeAsPieces(t)
#|export
SubwordTokenizer = SentencePieceTokenizer
texts = [f"This is an example of text {i}" for i in range(10)]
df = pd.DataFrame({'text': texts, 'label': list(range(10))}, columns=['text', 'label'])
out,cnt = tokenize_df(df, text_cols='text', tok=SentencePieceTokenizer(vocab_sz=34), n_workers=1)
with tempfile.TemporaryDirectory() as tmp_d:
path,df,csv_fname = _prepare_texts(Path(tmp_d))
items = get_text_files(path)
splits = RandomSplitter()(items)
tok = SentencePieceTokenizer(special_toks=[])
dsets = Datasets(items, [Tokenizer.from_folder(path, tok=tok)], splits=splits)
print(dsets.train[0][0])
with warnings.catch_warnings():
dsets = Datasets(df, [Tokenizer.from_df('text', tok=tok)], splits=splits)
print(dsets.train[0][0].text)
"""
Explanation: Sentencepiece
End of explanation
"""
#|hide
from nbdev.export import notebook2script
notebook2script()
"""
Explanation: Export -
End of explanation
"""
SnShine/aima-python | csp.ipynb | mit | from csp import *
from notebook import psource, pseudocode
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: CONSTRAINT SATISFACTION PROBLEMS
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the csp.py module. Even though this notebook includes a brief summary of the main topics, familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.
End of explanation
"""
psource(CSP)
"""
Explanation: CONTENTS
Overview
Graph Coloring
N-Queens
Backtracking Search
Tree CSP Solver
Graph Coloring Visualization
N-Queens Visualization
OVERVIEW
CSPs are a special kind of search problem. Here we don't treat the search space as a black box: the state has a particular form, and we use that to our advantage to tweak our algorithms to be better suited to the problems. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class, which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
End of explanation
"""
s = UniversalDict(['R','G','B'])
s[5]
"""
Explanation: The __init__ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys specify the variables and whose values specify the domains. If variables is passed as an empty list, the variables are extracted from the keys of the domains dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value the list of variables it is constrained along with. The constraints parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We also have additional attributes like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class docstring. We will talk more about them as we encounter their use. Let us jump to an example.
GRAPH COLORING
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map coloring problem is that adjacent nodes (those connected by edges) should not have the same color anywhere in the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to it. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter which it returns as the value for all keys of the dict. It is very similar to defaultdict in Python, except that it does not support item assignment.
End of explanation
"""
psource(different_values_constraint)
"""
Explanation: For our CSP we also need to define a constraint function f(A, a, B, b). What we need here is that neighboring variables must not have the same color. This is defined in the function different_values_constraint of the module.
End of explanation
"""
%pdoc parse_neighbors
"""
Explanation: The CSP class takes neighbors in the form of a Dict. The module provides a simple helper function named parse_neighbors which takes input in the form of a string and returns a Dict of the form compatible with the CSP class.
End of explanation
"""
psource(MapColoringCSP)
australia, usa, france
"""
Explanation: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
End of explanation
"""
psource(queen_constraint)
"""
Explanation: N-QUEENS
The N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications to its methods to suit this particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
End of explanation
"""
psource(NQueensCSP)
"""
Explanation: The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill-climbs on the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs and ups, which help in recording conflicts, are given in the docstring.
End of explanation
"""
eight_queens = NQueensCSP(8)
"""
Explanation: The __init__ method takes only one parameter, n, the size of the problem. To create an instance we just pass the required n into the constructor.
End of explanation
"""
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assignment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assignment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assignment_history.append(copy.deepcopy(assignment))
"""
Explanation: Helper Functions
We will now implement a few helper functions that will help us visualize the coloring problem. We will make some modifications to the existing classes and functions for additional bookkeeping. To begin, we modify the assign and unassign methods in the CSP to append a copy of the assignment to the assignment_history. We call this new class InstruCSP. This will allow us to see how the assignment evolves over time.
End of explanation
"""
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors, csp.constraints)
"""
Explanation: Next, we define make_instru which takes an instance of CSP and returns an InstruCSP instance.
End of explanation
"""
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
"""
Explanation: We will now use a graph defined as a dictionary for plotting purposes in our graph coloring problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
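Note that each adjacency list above happens to list every neighbor twice. That is harmless for the constraint checks, but here is a quick way to deduplicate the lists and verify the graph is symmetric (illustrated on a tiny graph with the same quirk):

```python
# A tiny graph with the same duplicated-edges quirk as the one above.
graph = {0: [6, 11, 6, 11], 6: [0, 11, 0, 11], 11: [0, 6, 0, 6]}

# Deduplicate each adjacency list.
deduped = {node: sorted(set(adj)) for node, adj in graph.items()}

# Symmetry check: if node lists other, then other must list node.
assert all(node in deduped[other]
           for node, adj in deduped.items() for other in adj)
deduped  # -> {0: [6, 11], 6: [0, 11], 11: [0, 6]}
```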
End of explanation
"""
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
"""
Explanation: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance returned by MapColoringCSP, which is a CSP, so our make_instru function will work perfectly for it.
End of explanation
"""
result = backtracking_search(coloring_problem1)
result # A dictionary of assignments.
"""
Explanation: BACKTRACKING SEARCH
For solving a CSP, the main issue with naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go; backtracking is just this idea combined with the fact that we are dealing with one variable at a time. Backtracking search is implemented in the repository as the function backtracking_search, which is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters, discussed later, which can be used to speed it up further. The function returns the correct assignment if it satisfies the goal. Let us solve our coloring_problem1 with backtracking_search.
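The core backtracking idea fits in a few lines (a bare-bones sketch with illustrative names, without the ordering and inference hooks of the module's backtracking_search):

```python
# Assign one variable at a time, checking constraints as we go;
# on failure, back up and try the next value.
def backtrack_sketch(variables, domains, neighbors, constraint, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Only keep values consistent with already-assigned neighbors.
        if all(constraint(var, value, n, assignment[n])
               for n in neighbors[var] if n in assignment):
            result = backtrack_sketch(variables, domains, neighbors,
                                      constraint, {**assignment, var: value})
            if result is not None:
                return result
    return None

# Two-color the path A - B - C with "neighbors must differ":
backtrack_sketch(list('ABC'), {v: 'RG' for v in 'ABC'},
                 {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']},
                 lambda A, a, B, b: a != b)
# -> {'A': 'R', 'B': 'G', 'C': 'R'}
```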
End of explanation
"""
coloring_problem1.nassigns
"""
Explanation: Let us also check the number of assignments made.
End of explanation
"""
len(coloring_problem1.assignment_history)
"""
Explanation: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
End of explanation
"""
psource(mrv)
psource(num_legal_values)
psource(CSP.nconflicts)
"""
Explanation: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the search further. Along with these, we will also point out methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that decides the order in which variables will be selected for assignment. We use a heuristic called Minimum Remaining Values (also known as the most constrained variable), implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the most constrained variable is that it allows us to encounter failure quickly, before going too deep into the tree after a wrong step. The mrv implementation makes use of another function, num_legal_values, to sort the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to compute those values.
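In miniature, the heuristic looks like this (names are ours; the module's mrv differs in details such as tie-breaking):

```python
# Count the values in var's domain that cause zero conflicts.
def num_legal_values_sketch(var, domains, nconflicts):
    return sum(nconflicts(var, val) == 0 for val in domains[var])

# Minimum Remaining Values: pick the most constrained variable.
def mrv_sketch(unassigned, domains, nconflicts):
    return min(unassigned,
               key=lambda var: num_legal_values_sketch(var, domains, nconflicts))

# 'B' has only one value left in its domain, so it is chosen first:
mrv_sketch(['A', 'B'], {'A': 'RGB', 'B': 'R'}, lambda var, val: 0)  # -> 'B'
```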
End of explanation
"""
psource(lcv)
"""
Explanation: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind lcv is that it leaves us the most freedom to assign values later. Combining mrv and lcv makes sense: every variable must eventually be assigned, so we face the hard variables first, whereas for values we only need one to work, so we try the most promising ones first.
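The value ordering can be sketched as a simple sort (names are ours; the module's lcv works on a CSP instance):

```python
# Least Constraining Value: order values by how many conflicts they
# would cause for the neighbors (fewest first).
def lcv_sketch(var, domain, nconflicts):
    return sorted(domain, key=lambda val: nconflicts(var, val))

# 'R' would rule out two neighbor options, 'G' none, so 'G' is tried first:
lcv_sketch('A', ['R', 'G'], lambda var, val: {'R': 2, 'G': 0}[val])
# -> ['G', 'R']
```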
End of explanation
"""
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac)
solve_simple.nassigns
solve_parameters.nassigns
"""
Explanation: Finally, the third parameter inference can make use of one of two techniques: Arc Consistency or Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect a possible failure before it occurs and to look ahead so as not to make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can learn more about them by looking at the source code.
Now let us compare the performance with these parameters enabled versus the default parameters. We will use the graph coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters, solve them using backtracking, and compare the number of assignments.
End of explanation
"""
psource(tree_csp_solver)
"""
Explanation: TREE CSP SOLVER
The tree_csp_solver function (Figure 6.11 in the book) can be used to solve problems whose constraint graph is a tree. Given a CSP, with neighbors forming a tree, it returns an assignement that satisfies the given constraints. The algorithm works as follows:
First it finds the topological sort of the tree. This is an ordering of the tree where each variable/node comes after its parent in the tree. The function that accomplishes this is topological_sort, which builds the topological sort using the recursive function build_topological. That function is an augmented DFS, where each newly visited node of the tree is pushed on a stack. The stack in the end holds the variables topologically sorted.
Then the algorithm makes arcs between each parent and child consistent. Arc-consistency between two variables, a and b, occurs when for every possible value of a there is an assignment in b that satisfies the problem's constraints. If such an assignment cannot be found, then the problematic value is removed from a's possible values. This is done with the use of the function make_arc_consistent which takes as arguments a variable Xj and its parent, and makes the arc between them consistent by removing any values from the parent which do not allow for a consistent assignment in Xj.
If an arc cannot be made consistent, the solver fails. If every arc is made consistent, we move to assigning values.
First we assign a random value to the root from its domain and then we start assigning values to the rest of the variables. Since the graph is now arc-consistent, we can simply move from variable to variable picking any remaining consistent values. At the end we are left with a valid assignment. If at any point though we find a variable where no consistent value is left in its domain, the solver fails.
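The topological-sort step can be sketched as a DFS that records each node after its parent (a simplified stand-in for the module's topological_sort, with illustrative names):

```python
# Order a tree so every node comes after its parent, recording parents.
def topological_sort_sketch(neighbors, root):
    order, parents, stack = [], {root: None}, [root]
    while stack:
        node = stack.pop()
        order.append(node)
        for child in neighbors[node]:
            if child not in parents:  # skip the edge back to the parent
                parents[child] = node
                stack.append(child)
    return order, parents

# The pruned Australia tree: WA - NT - Q - NSW - V
tree = {'WA': ['NT'], 'NT': ['WA', 'Q'], 'Q': ['NT', 'NSW'],
        'NSW': ['Q', 'V'], 'V': ['NSW']}
topological_sort_sketch(tree, 'WA')
# -> (['WA', 'NT', 'Q', 'NSW', 'V'], {'WA': None, 'NT': 'WA', ...})
```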
The implementation of the algorithm:
End of explanation
"""
australia_small = MapColoringCSP(list('RB'),
'NT: WA Q; NSW: Q V')
"""
Explanation: We will now use the above function to solve a problem. More specifically, we will solve the problem of coloring the map of Australia. At our disposal we have two colors: Red and Blue. As a reminder, this is the graph of Australia:
"SA: WA NT Q NSW V; NT: WA Q; NSW: Q V; T: "
Unfortunately, as you can see, the above is not a tree. If, though, we remove SA, which has arcs to WA, NT, Q, NSW and V, we are left with a tree (we also remove T, since it has no arcs in or out). We can now solve this using our algorithm. Let's define the map coloring problem at hand:
End of explanation
"""
assignment = tree_csp_solver(australia_small)
print(assignment)
"""
Explanation: We will input australia_small to the tree_csp_solver and we will print the given assignment.
End of explanation
"""
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
from collections import defaultdict  # used below to default unassigned nodes to black
"""
Explanation: WA, Q and V got painted with the same color and NT and NSW got painted with the other.
GRAPH COLORING VISUALIZATION
Next, we define some functions to create the visualisation from the assignment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
End of explanation
"""
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assignment_history we want to visualize.
current = instru_csp.assignment_history[iteration]
# We convert the particular assignment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
        colors = [current[node] for node in G.nodes()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
        labels = {label: label for label in G.nodes()}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
"""
Explanation: The ipython widgets we will be using require the plots in the form of a step function, such that there is a graph corresponding to each value. We define make_update_step_function, which returns such a function. It takes as inputs the neighbors/graph along with an instance of the InstruCSP. This will be clearer with the example below. If this sounds confusing, do not worry: this is not part of the core material, and our only goal is to help you visualize how the process works.
End of explanation
"""
step_func = make_update_step_function(neighbors, coloring_problem1)
"""
Explanation: Finally let us plot our problem. We first use the function above to obtain a step function.
End of explanation
"""
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
"""
Explanation: Next we set the canvas size.
End of explanation
"""
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assignment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: Finally, our plot using the ipywidgets slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys, or to jump to a value by directly editing the number after a double click. The Visualize button will automatically animate the slider for you. The Extra Delay box allows you to set a time delay, of up to one second, for each time step.
End of explanation
"""
def label_queen_conflicts(assignment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assignment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row+temp_col == row+col and temp_col != col}
down_conflicts = {temp_col:temp_row for temp_col,temp_row in assignment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive function supports
single parameter as input. This function
creates and return such a function by taking
in input other parameters.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assignment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
        bounds = [0, 1, 2, 3]  # 0 -> white, 1 -> lightsteelblue, conflicts (3) -> red.
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
            fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='DejaVu Sans', fontsize=32)
plt.show()
return plot_board_step
"""
Explanation: N-QUEENS VISUALIZATION
Just like the graph coloring problem, we start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This grid is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. That function also calls label_queen_conflicts, which modifies the grid by placing a 3 in every position where there is a conflict.
End of explanation
"""
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
"""
Explanation: Now let us visualize a solution obtained via backtracking. We make use of the previously defined make_instru function for keeping a history of steps.
End of explanation
"""
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'DejaVu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assignment_history)-1, step=1, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: Now finally we set some matplotlib parameters to adjust how our plot will look. The font setting is necessary because the Black Queen Unicode character is not part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys, or to jump to a value by directly editing the number after a double click. The Visualize button will automatically animate the slider for you. The Extra Delay box allows you to set a time delay, of up to one second, for each time step.
End of explanation
"""
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
"""
Explanation: Now let us finally repeat the above steps for min_conflicts solution.
End of explanation
"""
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assignment_history)-1, step=1, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description="Visualize", value=False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
"""
Explanation: The visualization has the same features as the one above, but here it also highlights conflicts by labeling the conflicted queens with a red background.
End of explanation
"""
# The Dataset comes from:
# https://archive.ics.uci.edu/ml/datasets/Optical+Recognition+of+Handwritten+Digits
# Load up the data.
with open('../Datasets/optdigits.tes', 'r') as f: testing = pd.read_csv(f)
with open('../Datasets/optdigits.tra', 'r') as f: training = pd.read_csv(f)
# The number of samples between training and testing can vary
# But the number of features better remain the same!
n_features = testing.shape[1]
X_test = testing.iloc[:,:n_features-1]
X_train = training.iloc[:,:n_features-1]
y_test = testing.iloc[:,n_features-1:].values.ravel()
y_train = training.iloc[:,n_features-1:].values.ravel()
print (n_features)
"""
Explanation: Linear Support Vector Classifier
Support vector machines are a set of supervised learning algorithms that you can use for classification, regression and outlier detection purposes. SciKit-Learn has many classes for SVM usage, depending on your purpose. The one we'll be focusing on is Support Vector Classifier, SVC.
An OCR example
In 1982, the first computer-driven OCR machine was installed by the United States Postal Service (USPS) in Los Angeles, and by the end of 1984 over 250 OCR machines had been installed in 118 major mail processing centers across the country.
Let's see if it's possible to train a support vector classifier in a few seconds using machine learning, and if the classification accuracy is similar or better than the advertised USPS stats.
We start by reading the dataset, which comes from the UCI Machine Learning Repository and is composed of bitmaps of handwritten digits from a preprinted form.
End of explanation
"""
import matplotlib.pyplot as plt
# The 'targets' or labels are stored in y. The 'samples' or data is stored in X
print ("Peeking the data...")
fig = plt.figure()
cnt = 0
for col in range(5):
for row in range(10):
plt.subplot(5, 10, cnt + 1)
plt.imshow(X_train.iloc[cnt,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
plt.axis('off')
cnt += 1
fig.set_tight_layout(True)
plt.show()
"""
Explanation: Let's have a look at these bitmaps of handwritten digits
End of explanation
"""
from sklearn import svm # Library for Support Vector Machines
#
# Create and train an SVM classifier.
print ("Training SV Classifier...")
svc = svm.SVC(kernel='linear')
svc.fit(X_train, y_train)
"""
Explanation: Train the SVM Classifier
Now we are ready to train the Support Vector Classifier, using the SciKitLearn library.
We leave all parameters at their defaults, setting only the kernel to be Linear.
More on the kernels later in this notebook.
End of explanation
"""
#
# Print out the TRUE value of the 1000th digit in the test set
# By TRUE value, we mean, the actual provided label for that sample
#
true_1000th_test_value = y_test[999]
print ("1000th test label is: ", true_1000th_test_value)
#
# Predict the value of the 1000th digit in the test set.
# Was the model's prediction correct?
#
guess_1000th_test_value = svc.predict(X_test[999:1000])
print ("1000th test prediction is: ", guess_1000th_test_value)
"""
Explanation: Checkpoint
Print the predicted digit and the actual label for a random example.
We take the thousandth digit.
End of explanation
"""
#
# Use IMSHOW to display the 1000th test image
#
#
plt.imshow(X_test.iloc[999,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest');
"""
Explanation: The model's prediction was correct.
We can display that image, so we can visually check if it was a hard image or an easy image:
End of explanation
"""
# Visual Confirmation of accuracy
fig = plt.figure()
# Make some guesses
y_guess = svc.predict(X_test)
num_rows = 10
num_cols = 5
index = 0
for col in range(num_cols):
for row in range(num_rows):
plt.subplot(num_cols, num_rows, index + 1)
# 8x8 is the size of the image, 64 pixels
plt.imshow(X_test.iloc[index,:].values.reshape(8,8), cmap=plt.cm.gray_r, interpolation='nearest')
# Green = Guessed right
# Red = Fail!
fontcolor = 'g' if y_test[index] == y_guess[index] else 'r'
plt.title('Label: %i' % y_guess[index], fontsize=7, color=fontcolor)
plt.axis('off')
index += 1
fig.set_tight_layout(True)
plt.show()
"""
Explanation: Visual confirmation of accuracy
Here we can print more digits with indication of what was the predicted label (in red if it was wrong):
End of explanation
"""
# Calculate the score of the SVC against the testing data
print ("Scoring SVM Classifier...")
#
score = svc.score(X_test, y_test)
print ("Score: ", score)
"""
Explanation: Score
We can see that - on this sample of 50 handwritten digits - 4 of them are wrong, that's 8%.
And here we calculate the score on the entire testing dataset:
End of explanation
"""
#
# We start with the POLY kernel
svc = svm.SVC(kernel='poly', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of the SVC against the testing data
print ("Scoring SV poly Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
"""
Explanation: Not bad, the model was correct more than 96% of the time!
Non-linear Kernels for the SVC
We experiment now with different kernels, starting with the polynomial kernel.
When training an SVM with a kernel, two additional parameters must be considered: C and gamma.
The parameter C, common to all SVM kernels, trades off misclassification of training examples against simplicity of the decision surface.
A low C makes the decision surface smooth, while a high C aims at classifying all training examples correctly.
We keep C at the default value = 1.
Gamma defines how much influence a single training example has.
The larger gamma is, the closer other examples must be to be affected.
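For reference, gamma appears directly in the standard definition of the RBF kernel (this is the textbook formula, not anything specific to scikit-learn):

```latex
K(x, x') = \exp\left(-\gamma \, \lVert x - x' \rVert^{2}\right)
```

A larger gamma makes the kernel value fall off faster with distance, so only training examples close to a point influence its prediction.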
USPS has an advertised accuracy score of 98%, which is higher than our SVC with a linear kernel.
We can beat it with a non-linear kernel!
End of explanation
"""
#
# change SVC's kernel to 'rbf'
svc = svm.SVC(kernel='rbf', C=1.0, gamma=0.001)
svc.fit(X_train, y_train)
# Calculate the score of SVC against the testing data
print ("Scoring SVM rbf Classifier...")
score = svc.score(X_test, y_test)
print ("Score: ", score)
"""
Explanation: That is slightly better, but we can try a different, more performant kernel: the Radial Basis Function (RBF) kernel.
End of explanation
"""
X = pd.read_csv("../Datasets/parkinsons.data")
X.drop(['name'], axis=1, inplace=True) # drop name column
y = X.status.copy() # copy “y” values out from status
X.drop(['status'], axis=1, inplace=True) # drop status column
# Perform a train/test split. 30% test group size, with a random_state equal to 7.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
random_state=7)
"""
Explanation: Now it's better than the USPS score!
Hyper-parameters tuning for SVC Kernels
Proper choice of C and gamma is critical to the SVM’s performance.
One is advised to use sklearn.model_selection.GridSearchCV with C and gamma spaced exponentially far apart to choose good values.
We will tune them - and the pre-processor - using a different example.
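Such a grid search can be sketched as follows (standard scikit-learn usage; the grid bounds here are illustrative, not tuned for any particular dataset):

```python
import numpy as np
from sklearn import svm
from sklearn.model_selection import GridSearchCV

# Exponentially spaced grids for C and gamma, as advised above.
param_grid = {'C': np.logspace(-2, 2, 5),        # 0.01, 0.1, 1, 10, 100
              'gamma': np.logspace(-4, 0, 5)}    # 0.0001 ... 1
search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=5)
# search.fit(X_train, y_train)
# search.best_params_, search.best_score_
```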
A Parkinson's classifier
Apply SVC to the Parkinson's Data Set, provided courtesy of UCI's Machine Learning Repository. The dataset was created at the University of Oxford, in collaboration with 10 medical centers around the US, along with Intel who developed the device used to record the primary features of the dataset: speech signals.
Goals: first, to see if it's possible to differentiate between people who have Parkinson's and people who don't using SciKit-Learn's support vector classifier; and then to take a first stab at a naive way of fine-tuning our parameters in an attempt to maximize the accuracy on the testing set.
Read and pre-process the data
End of explanation
"""
from sklearn import preprocessing
# tried with different scaler, standard is the best
scaler = preprocessing.StandardScaler() # best score was 0.932203389831
#scaler = preprocessing.MinMaxScaler() # best score was 0.881355932203
#scaler = preprocessing.MaxAbsScaler() # best score was 0.881355932203
#scaler = preprocessing.Normalizer() # best score was 0.796610169492
#scaler = preprocessing.KernelCenterer() # best score was 0.915254237288
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
"""
Explanation: We can apply different scalers for the pre-processing.
The standard scaler seems the best; feel free to experiment with the others by uncommenting one below:
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn import manifold
usePCA = False # change this to use PCA as dimensionality reducer
if usePCA:
reducer = PCA(n_components=7).fit(X_train)
else:
reducer = manifold.Isomap(n_neighbors=3, n_components=6).fit(X_train)
X_train = reducer.transform(X_train)
X_test = reducer.transform(X_test)
"""
Explanation: The same goes for dimensionality reduction: feel free to experiment with either PCA or Isomap
End of explanation
"""
import numpy as np
# a naive, best-parameter search using nested for-loops.
best_score = 0
for c in np.arange(0.05,2,0.05):
for gamma in np.arange(0.001, 0.1, 0.001):
svc = svm.SVC(kernel='rbf', C=c, gamma=gamma)
svc.fit(X_train, y_train)
score = svc.score(X_test, y_test)
if score > best_score:
best_score = score
#print ("New best score:", score, "using C= ", c, "and gamma = ", gamma)
print(f"New best score: {score:.3f} using C= {c:.2f} and gamma = {gamma:.3f}")
"""
Explanation: Train the SVM classifier.
Create and fit an SVC based on the RBF kernel against the training data and then finally score the testing data.
To search for the best hyper-parameters we just use simple nested loops, sweeping C from 0.05 to 2 and gamma from 0.001 to 0.1.
End of explanation
"""
#
# INFO: Parameters can be adjusted here
C = 1
kernel = 'linear'
iterations = 100
#
# INFO: You can set this to false if you want to
# draw the full square matrix
FAST_DRAW = True
"""
Explanation: Best score was with C=0.85 and gamma=0.088
Comparing KNN vs. SVC
How does SVC compare with other classifiers, such as the KNN?
We classify UCI's wheat-seeds dataset - the one we used previously with the KNN algorithm - using the SVC and compare the results.
First, we benchmark how long it takes SVC to train and predict relative to how long K-Neighbors took to train and test, and then we compare the decision boundary plots produced by the two.
Defining some parameters
End of explanation
"""
#
# Load up the wheat dataset into dataframe 'X'
#
df = pd.read_csv("../Datasets/wheat.data", index_col='id')
"""
Explanation: Read the data
End of explanation
"""
# INFO: An easy way to show which rows have nans in them
print (df[pd.isnull(df).any(axis=1)])
#
# Go ahead and drop any row with a nan
#
df.dropna(axis=0, inplace=True)
#
# INFO: you might try setting the nan values to the
# mean value of that column, the mean should only be calculated for
# the specific class rather than across all classes, now that you
# have the labels
#
# Copy the labels out of the dset into variable 'y' then Remove
# them from X. Encode the labels -- canadian:0, kama:1, and rosa:2
#
labels = df.wheat_type.copy() # copy “y” values out
df.drop(['wheat_type'], axis=1, inplace=True) # drop output column
labels = labels.map({'canadian':0, 'kama':1, 'rosa':2})
"""
Explanation: Data pre-processing
This is all standard. You can refer to the previous examples for more details, especially the KNN example.
End of explanation
"""
#
# Split data into test / train sets
#
X_train, X_test, y_train, y_test = train_test_split(df, labels, test_size=0.3,
random_state=7)
"""
Explanation: Split into training and testing data sets
End of explanation
"""
import matplotlib as mpl
import matplotlib.pyplot as plt
def drawPlots(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
# If this line throws an error, use plt.style.use('ggplot') instead
mpl.style.use('ggplot') # Look Pretty
padding = 3
resolution = 0.5
max_2d_score = 0
score = 0
y_colors = ['#ff0000', '#00ff00', '#0000ff']
my_cmap = mpl.colors.ListedColormap(['#ffaaaa', '#aaffaa', '#aaaaff'])
colors = [y_colors[i] for i in y_train]
num_columns = len(X_train.columns)
fig = plt.figure()
    fig.canvas.manager.set_window_title(wintitle)
cnt = 0
for col in range(num_columns):
for row in range(num_columns):
# Easy out
if FAST_DRAW and col > row:
cnt += 1
continue
ax = plt.subplot(num_columns, num_columns, cnt + 1)
plt.xticks(())
plt.yticks(())
# Intersection:
if col == row:
plt.text(0.5, 0.5, X_train.columns[row], verticalalignment='center', horizontalalignment='center', fontsize=12)
cnt += 1
continue
# Only select two features to display, then train the model
X_train_bag = X_train.iloc[:, [row,col]]
X_test_bag = X_test.iloc[:, [row,col]]
model.fit(X_train_bag, y_train)
# Create a mesh to plot in
x_min, x_max = X_train_bag.iloc[:, 0].min() - padding, X_train_bag.iloc[:, 0].max() + padding
y_min, y_max = X_train_bag.iloc[:, 1].min() - padding, X_train_bag.iloc[:, 1].max() + padding
xx, yy = np.meshgrid(np.arange(x_min, x_max, resolution),
np.arange(y_min, y_max, resolution))
# Plot Boundaries
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Prepare the contour
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=my_cmap, alpha=0.8)
plt.scatter(X_train_bag.iloc[:, 0], X_train_bag.iloc[:, 1], c=colors, alpha=0.5)
score = round(model.score(X_test_bag, y_test) * 100, 3)
#plt.text(0.5, 0, "Score: {0}".format(score), transform = ax.transAxes, horizontalalignment='center', fontsize=8)
plt.text(0.5, 0, f"Score: {score}", transform = ax.transAxes, horizontalalignment='center', fontsize=8)
max_2d_score = score if score > max_2d_score else max_2d_score
cnt += 1
print ("Max 2D Score: ", max_2d_score)
fig.set_tight_layout(True)
"""
Explanation: Utility function: draw the plots
This is a convenience function to break any higher-dimensional space down and view cross sections of it.
End of explanation
"""
import time
iterations = 5000  # set elsewhere in the original notebook; defined here so the cell runs standalone
def benchmark(model, X_train, X_test, y_train, y_test, wintitle='Figure 1'):
print ('\n\n' + wintitle + ' Results')
# the only purpose of doing many iterations was to get a more accurate
# count of the time it took for each classifier
s = time.time()
for i in range(iterations):
#
# train the classifier on the training data / labels:
#
model.fit(X_train, y_train)
#print ("{0} Iterations Training Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Training Time: {time.time() - s:.3f}")
scoreBch = 0
s = time.time()
for i in range(iterations):
#
# score the classifier on the testing data / labels:
#
scoreBch = model.score(X_test, y_test)
#print ("{0} Iterations Scoring Time: ".format(iterations), time.time() - s)
print(f"{iterations} Iterations Scoring Time: {time.time() - s:.3f}")
print ("High-Dimensionality Score: ", round((scoreBch*100), 3))
"""
Explanation: Utility function: benchmark times
End of explanation
"""
#
# Create a KNeighbors classifier
#
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
"""
Explanation: Train the KNN classifier
End of explanation
"""
benchmark(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
drawPlots(knn, X_train, X_test, y_train, y_test, 'KNeighbors')
"""
Explanation: And get its benchmark:
End of explanation
"""
#
# Create an SVM classifier
# Use a linear kernel, and set the C value to C (see initial parameters)
#
from sklearn.svm import SVC
C = 1  # the C value is set elsewhere in the original notebook; 1 is sklearn's default
svc = SVC(kernel='linear', C=C)
benchmark(svc, X_train, X_test, y_train, y_test, 'SVC')
drawPlots(svc, X_train, X_test, y_train, y_test, 'SVC')
"""
Explanation: Train the SVM Classifier
End of explanation
"""
|
khalido/deep-learning | tensorboard/Anna KaRNNa.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based on Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
"""
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Number of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
"""
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to batch_size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers -- construct a separate cell per layer so the
# layers don't share weights (passing [drop] * num_layers reuses one cell
# object, which breaks in newer TensorFlow 1.x releases)
cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(lstm_size),
                                   output_keep_prob=keep_prob)
     for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
"""
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
"""
Explanation: Write out the graph for TensorBoard
End of explanation
"""
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
"""
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can then feed that prediction back in to get the one after it, and keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
|
cdawei/digbeta | dchen/music/aotm2011_MLC_genre.ipynb | gpl-3.0 | %matplotlib inline
%load_ext autoreload
%autoreload 2
import os, sys, time
import pickle as pkl
import numpy as np
import pandas as pd
import sklearn as sk
import sklearn.model_selection  # so sk.model_selection is available below
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import classification_report, f1_score, make_scorer, label_ranking_loss
from scipy.sparse import lil_matrix, issparse
import matplotlib.pyplot as plt
import seaborn as sns
sys.path.append('src')
from TopPushMLC import TopPushMLC
from evaluate import evaluatePrecision, evalPred
data_dir = 'data'
faotm = os.path.join(data_dir, 'aotm-2011/aotm-2011-subset.pkl')
fmap = os.path.join(data_dir, 'aotm-2011/songID2TrackID.pkl')
ftag = os.path.join(data_dir, 'msd/msd_tagtraum_cd2c.cls')
fxtrain = os.path.join(data_dir, 'aotm-2011/XTrain_tag.pkl')
fytrain = os.path.join(data_dir, 'aotm-2011/YTrain.pkl')
fxtest = os.path.join(data_dir, 'aotm-2011/XTest_tag.pkl')
fytest = os.path.join(data_dir, 'aotm-2011/YTest.pkl')
"""
Explanation: A simple example of generating playlists by multilabel learning (TopPush)
End of explanation
"""
playlists = pkl.load(open(faotm, 'rb'))
print('#Playlists: %d' % len(playlists))
playlists[0]
#print('#Songs: %d' % len({songID for p in playlists for songID in p['filtered_lists'][0]}))
#lengths = [len(p['filtered_lists'][0]) for p in playlists]
lengths = [len(sl) for sl in playlists]
plt.hist(lengths, bins=20)
print('Average playlist length: %.1f' % np.mean(lengths))
"""
Explanation: Data loading
Load playlists.
End of explanation
"""
song2TrackID = pkl.load(open(fmap, 'rb'))
{ k : song2TrackID[k] for k in list(song2TrackID.keys())[:10] }
"""
Explanation: Load song_id --> track_id mapping: a song may correspond to multiple tracks.
End of explanation
"""
track2Tags = dict()
with open(ftag) as f:
for line in f:
if line[0] == '#': continue
tid, tag = line.strip().split('\t')
#print(tid, tag)
track2Tags[tid] = tag
print('#(Track, Tag): %d' % len(track2Tags))
{ k : track2Tags[k] for k in list(track2Tags.keys())[:10] }
"""
Explanation: Load song tags, build track_id --> tag mapping.
End of explanation
"""
subset_ix = []
seedSong2Tag = { }
for ix in range(len(playlists)):
# the list of song IDs in the playlist
#songIDs = playlists[ix]['filtered_lists'][0]
songIDs = playlists[ix]
# seed song
seedSongID = songIDs[0]
seedTrackIDs = song2TrackID[seedSongID]
# a song can have multiple tracks, make sure that at least one track for a song has a tag
flag = [ (trackID in track2Tags) for trackID in seedTrackIDs]
if not np.any(flag):
continue
#seedSong2Tag[playlists[ix]['mix_id']] = [track2Tags[seedTrackIDs[i]] for i in range(len(flag)) if flag[i] is True]
seedSong2Tag[playlists[ix][0]] = [track2Tags[seedTrackIDs[i]] for i in range(len(flag)) if flag[i]]
subset_ix.append(ix)
#seedSong2Tag
playlists_subset = [playlists[ix] for ix in subset_ix]
print('#Playlists used: %d' % len(subset_ix))
playlists_subset[0]
"""
Explanation: Data cleaning
Use the subset of playlist such that the first song (i.e. the seed song) in each playlist has tag(s).
End of explanation
"""
song_set = sorted({songID for p in playlists_subset for songID in p})
print('#Songs used: %d' % len(song_set))
print(song_set[:10])
"""
Explanation: The set of unique songs, in multilabel learning, we have a label for each song in this set.
End of explanation
"""
playlist_lengths = [len(p) for p in playlists_subset]
plt.hist(playlist_lengths, bins=20)
print('Average playlist length: %.1f' % np.mean(playlist_lengths))
"""
Explanation: Data analysis
For the most part, playlists contain fewer than 10 songs. The most common playlist length is 2 songs.
End of explanation
"""
# the set of unique tags
tag_set = sorted(set(track2Tags.values()))
print('#Tags: %d' % len(tag_set))
tag_indicator = { tag: ix for ix, tag in enumerate(tag_set) }
tag_indicator
"""
Explanation: One-hot tag encoding
Indicator of tags: tag --> index mapping.
End of explanation
"""
def gen_features(song_id, song2TrackID = song2TrackID, tag_indicator = tag_indicator):
"""
Generate one-hot feature vector for a given song ID
"""
features = np.zeros(len(tag_set), dtype = np.float)
trackIDs = song2TrackID[song_id]
cnt = 0
for trackID in trackIDs:
if trackID in track2Tags:
cnt += 1
tag = track2Tags[trackID]
tag_ix = tag_indicator[tag]
features[tag_ix] = 1
# must have at least one tag for the song, else useless
assert(cnt >= 1)
return features
def gen_feature_map(song_id, seed):
"""
Generate feature mapping for a given (label, query) pair
"""
#return gen_features(song_id) - gen_features(seed) # feature map
return gen_features(seed) # a trivial feature map
def gen_training_set(playlists = playlists_subset, song_set = song_set):
"""
Create the labelled dataset for a given song index
Input:
- playlists: which playlists to create features for
Output:
- (Feature, Label) pair (X, Y), with one row per playlist
X comprises the features for each seed song and the given song
Y comprises the indicators of whether the given song is present in the respective playlist
"""
N = len(playlists)
D = len(tag_set)
K = len(song_set)
X = np.zeros((N, D), dtype = np.float)
#Y = np.zeros((N, K), dtype = np.int)
#Y = coo_matrix(([0], ([0],[0])), shape=(N, K), dtype=np.int8).tolil()
Y = lil_matrix((N, K), dtype=np.int8)
for i in range(len(playlists)):
playlist = playlists[i]
seed = playlist[0]
X[i, :] = gen_feature_map(None, seed)
Y[i, :] = [int(sid in playlist) for sid in song_set]
return X, Y.tocsr()
gen_feature_map(song_set[100], playlists_subset[0][0])
"""
Explanation: Feature extraction
Build features (1-hot encoding of tag) for a song given its song_id.
End of explanation
"""
if np.all([os.path.exists(fname) for fname in [fxtrain, fytrain, fxtest, fytest]]):
X_train = pkl.load(open(fxtrain, 'rb'))
Y_train = pkl.load(open(fytrain, 'rb'))
X_test = pkl.load(open(fxtest, 'rb'))
Y_test = pkl.load(open(fytest, 'rb'))
else:
X, Y = gen_training_set()
# by fixing random seed, the same playlists will be in the test set each time
X_train, X_test, Y_train, Y_test = sk.model_selection.train_test_split(X, Y, test_size=0.33, random_state=31)
pkl.dump(X_train, open(fxtrain, 'wb'))
pkl.dump(Y_train, open(fytrain, 'wb'))
pkl.dump(X_test, open(fxtest, 'wb'))
pkl.dump(Y_test, open(fytest, 'wb'))
X_train.shape
Y_train.shape
#clf = OneVsRestClassifier(LogisticRegression(verbose=1))
#clf.fit(X_train, Y_train)
#pkl.dump(clf, open(os.path.join(data_dir, 'aotm-2011/br-base.pkl'), 'wb'))
def print_results(predictor, X_train, Y_train, X_test, Y_test):
"""
Compute and save performance results
"""
p3_train = []
p5_train = []
pk_train = []
p10_train = []
p3_test = []
p5_test = []
pk_test = []
p10_test = []
rankloss_train = []
rankloss_test = []
N_train = X_train.shape[0]
batch_size = 500
N_batch_train = int((N_train-1) / batch_size) + 1
for i in range(N_batch_train):
ix0 = i * batch_size
ix1 = min((i+1) * batch_size, N_train)
preds = predictor.decision_function(X_train[ix0:ix1])
evaldict = evaluatePrecision(Y_train[ix0:ix1].toarray(), preds, verbose=-1)
size = ix1 - ix0
p3_train.append(evaldict['Precision@3'][0] * size)
p5_train.append(evaldict['Precision@5'][0] * size)
pk_train.append(evaldict['Precision@K'][0] * size)
p10_train.append(evaldict['Precision@10'][0] * size)
#rankloss_train.append(evalPred1(Y_train[i].toarray()[0], pred, metricType='Ranking'))
sys.stdout.write('\r%d / %d' % (i+1, N_batch_train)); sys.stdout.flush()
print()
N_test = X_test.shape[0]
N_batch_test = int((N_test-1) / batch_size) + 1
for i in range(N_batch_test):
ix0 = i * batch_size
ix1 = min((i+1) * batch_size, N_test)
preds = predictor.decision_function(X_test[ix0:ix1])
evaldict = evaluatePrecision(Y_test[ix0:ix1].toarray(), preds, verbose=-1)
size = ix1 - ix0
p3_test.append(evaldict['Precision@3'][0] * size)
p5_test.append(evaldict['Precision@5'][0] * size)
pk_test.append(evaldict['Precision@K'][0] * size)
p10_test.append(evaldict['Precision@10'][0] * size)
#rankloss_test.append(evalPred1(Y_test[i].toarray()[0], pred, metricType='Ranking'))
sys.stdout.write('\r%d / %d' % (i+1, N_batch_test)); sys.stdout.flush()
print()
print('Training set:')
print('Precision@3:', (np.sum(p3_train) / N_train))
print('Precision@5:', (np.sum(p5_train) / N_train))
print('Precision@k:', (np.sum(pk_train) / N_train))
print('Precision@10:', (np.sum(p10_train) / N_train))
print()
print('Test set:')
print('Precision@3:', (np.sum(p3_test) / N_test))
print('Precision@5:', (np.sum(p5_test) / N_test))
print('Precision@k:', (np.sum(pk_test) / N_test))
print('Precision@10:', (np.sum(p10_test) / N_test))
#print()
#print('Training set:')
#print('RankingLoss: %.1f, %.1f' % (np.mean(rankloss_train), np.std(rankloss_train) / N_train))
#print()
#print('Test set:')
#print('RankingLoss: %.1f, %.1f' % (np.mean(rankloss_test), np.std(rankloss_test) / N_test))
def print_dataset_info(X_train, Y_train, X_test, Y_test):
N_train, D = X_train.shape
K = Y_train.shape[1]
N_test = X_test.shape[0]
print('%-45s %s' % ('Number of training examples:', '{:,}'.format(N_train)))
print('%-45s %s' % ('Number of test examples:', '{:,}'.format(N_test)))
print('%-45s %s' % ('Number of features:', '{:,}'.format(D)))
print('%-45s %s' % ('Number of labels:', '{:,}'.format(K)))
avgK_train = np.mean(np.sum(Y_train, axis=1))
avgK_test = np.mean(np.sum(Y_test, axis=1))
print('%-45s %.3f (%.2f%%)' % ('Average number of positive labels (train):', avgK_train, 100*avgK_train / K))
print('%-45s %.3f (%.2f%%)' % ('Average number of positive labels (test):', avgK_test, 100*avgK_test / K))
#print('%-45s %.4f%%' % ('Average label occurrence (train):', np.mean(np.sum(Y_train, axis=0)) / N_train))
#print('%-45s %.4f%%' % ('Average label occurrence (test):', np.mean(np.sum(Y_test, axis=0)) / N_test))
print('%-45s %.3f%%' % ('Sparsity (percent) (train):', 100 * np.sum(Y_train) / np.prod(Y_train.shape)))
print('%-45s %.3f%%' % ('Sparsity (percent) (test):', 100 * np.sum(Y_test) / np.prod(Y_test.shape)))
print_dataset_info(X_train, Y_train, X_test, Y_test)
clf = pkl.load(open(os.path.join(data_dir, 'aotm-2011/br-base.pkl'), 'rb'))
print_results(clf, X_train, Y_train, X_test, Y_test)
y1 = clf.decision_function(X_train[:1])[0]
Y_train[0].toarray().nonzero()
clf2 = TopPushMLC(C=1, r=1)
clf2.fit_SGD(X_train, Y_train, batch_size=500, n_epochs=10, learning_rate=0.01)
print_results(clf2, X_train, Y_train, X_test, Y_test)
"""
Explanation: Training & Test
Train a logistic regression model for each label.
End of explanation
"""
|
uber/pyro | tutorial/source/tensor_shapes.ipynb | apache-2.0 | import os
import torch
import pyro
from torch.distributions import constraints
from pyro.distributions import Bernoulli, Categorical, MultivariateNormal, Normal
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, TraceEnum_ELBO, config_enumerate
import pyro.poutine as poutine
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
# We'll use this helper to check our models are correct.
def test_model(model, guide, loss):
pyro.clear_param_store()
loss.loss(model, guide)
"""
Explanation: Tensor shapes in Pyro
This tutorial introduces Pyro's organization of tensor dimensions.
Before starting, you should familiarize yourself with PyTorch broadcasting semantics.
After this tutorial, you may want to also read about enumeration.
You may also find it useful to read Eric J. Ma's post Reasoning about Shapes and Probability Distributions.
While this post is specifically about TensorFlow Probability, many of the same concepts apply.
Summary:
Tensors broadcast by aligning on the right: torch.ones(3,4,5) + torch.ones(5).
Distribution .sample().shape == batch_shape + event_shape.
Distribution .log_prob(x).shape == batch_shape (but not event_shape!).
Use .expand() to draw a batch of samples, or rely on plate to expand automatically.
Use my_dist.to_event(1) to declare a dimension as dependent.
Use with pyro.plate('name', size): to declare a dimension as conditionally independent.
All dimensions must be declared either dependent or conditionally independent.
Try to support batching on the left. This lets Pyro auto-parallelize.
use negative indices like x.sum(-1) rather than x.sum(2)
use ellipsis notation like pixel = image[..., i, j]
use Vindex if i,j are enumerated, pixel = Vindex(image)[..., i, j]
When using pyro.plate's automatic subsampling, be sure to subsample your data:
Either manually subample by capturing the index with pyro.plate(...) as i: ...
or automatically subsample via batch = pyro.subsample(data, event_dim=...).
When debugging, examine all shapes in a trace using Trace.format_shapes().
Table of Contents
Distribution shapes
Examples
Reshaping distributions
It is always safe to assume dependence
Declaring independence with plate
Subsampling inside plate
Broadcasting to allow Parallel Enumeration
Writing parallelizable code
Automatic broadcasting inside pyro.plate
End of explanation
"""
d = Bernoulli(0.5)
assert d.batch_shape == ()
assert d.event_shape == ()
x = d.sample()
assert x.shape == ()
assert d.log_prob(x).shape == ()
"""
Explanation: Distributions shapes: batch_shape and event_shape <a class="anchor" id="Distributions-shapes:-batch_shape-and-event_shape"></a>
PyTorch Tensors have a single .shape attribute, but Distributions have two shape attributes with special meaning: .batch_shape and .event_shape. These two combine to define the total shape of a sample
py
x = d.sample()
assert x.shape == d.batch_shape + d.event_shape
Indices over .batch_shape denote conditionally independent random variables, whereas indices over .event_shape denote dependent random variables (i.e. one draw from a distribution). Because the dependent random variables define probability together, the .log_prob() method only produces a single number for each event of shape .event_shape. Thus the total shape of .log_prob() is .batch_shape:
py
assert d.log_prob(x).shape == d.batch_shape
Note that the Distribution.sample() method also takes a sample_shape parameter that indexes over independent identically distributed (iid) random variables, so that
py
x2 = d.sample(sample_shape)
assert x2.shape == sample_shape + batch_shape + event_shape
In summary
      |      iid     | independent | dependent
------+--------------+-------------+------------
shape = sample_shape + batch_shape + event_shape
For example univariate distributions have empty event shape (because each number is an independent event). Distributions over vectors like MultivariateNormal have len(event_shape) == 1. Distributions over matrices like InverseWishart have len(event_shape) == 2.
Examples <a class="anchor" id="Examples"></a>
The simplest distribution shape is a single univariate distribution.
End of explanation
"""
d = Bernoulli(0.5 * torch.ones(3,4))
assert d.batch_shape == (3, 4)
assert d.event_shape == ()
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3, 4)
"""
Explanation: Distributions can be batched by passing in batched parameters.
End of explanation
"""
d = Bernoulli(torch.tensor([0.1, 0.2, 0.3, 0.4])).expand([3, 4])
assert d.batch_shape == (3, 4)
assert d.event_shape == ()
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3, 4)
"""
Explanation: Another way to batch distributions is via the .expand() method. This only works if
parameters are identical along the leftmost dimensions.
End of explanation
"""
d = MultivariateNormal(torch.zeros(3), torch.eye(3, 3))
assert d.batch_shape == ()
assert d.event_shape == (3,)
x = d.sample()
assert x.shape == (3,) # == batch_shape + event_shape
assert d.log_prob(x).shape == () # == batch_shape
"""
Explanation: Multivariate distributions have nonempty .event_shape. For these distributions, the shapes of .sample() and .log_prob(x) differ:
End of explanation
"""
d = Bernoulli(0.5 * torch.ones(3,4)).to_event(1)
assert d.batch_shape == (3,)
assert d.event_shape == (4,)
x = d.sample()
assert x.shape == (3, 4)
assert d.log_prob(x).shape == (3,)
"""
Explanation: Reshaping distributions <a class="anchor" id="Reshaping-distributions"></a>
In Pyro you can treat a univariate distribution as multivariate by calling the .to_event(n) property where n is the number of batch dimensions (from the right) to declare as dependent.
End of explanation
"""
def model1():
a = pyro.sample("a", Normal(0, 1))
b = pyro.sample("b", Normal(torch.zeros(2), 1).to_event(1))
with pyro.plate("c_plate", 2):
c = pyro.sample("c", Normal(torch.zeros(2), 1))
with pyro.plate("d_plate", 3):
d = pyro.sample("d", Normal(torch.zeros(3,4,5), 1).to_event(2))
assert a.shape == () # batch_shape == () event_shape == ()
assert b.shape == (2,) # batch_shape == () event_shape == (2,)
assert c.shape == (2,) # batch_shape == (2,) event_shape == ()
assert d.shape == (3,4,5) # batch_shape == (3,) event_shape == (4,5)
x_axis = pyro.plate("x_axis", 3, dim=-2)
y_axis = pyro.plate("y_axis", 2, dim=-3)
with x_axis:
x = pyro.sample("x", Normal(0, 1))
with y_axis:
y = pyro.sample("y", Normal(0, 1))
with x_axis, y_axis:
xy = pyro.sample("xy", Normal(0, 1))
z = pyro.sample("z", Normal(0, 1).expand([5]).to_event(1))
assert x.shape == (3, 1) # batch_shape == (3,1) event_shape == ()
assert y.shape == (2, 1, 1) # batch_shape == (2,1,1) event_shape == ()
assert xy.shape == (2, 3, 1) # batch_shape == (2,3,1) event_shape == ()
assert z.shape == (2, 3, 1, 5) # batch_shape == (2,3,1) event_shape == (5,)
test_model(model1, model1, Trace_ELBO())
"""
Explanation: While you work with Pyro programs, keep in mind that samples have shape batch_shape + event_shape, whereas .log_prob(x) values have shape batch_shape. You'll need to ensure that batch_shape is carefully controlled by either trimming it down with .to_event(n) or by declaring dimensions as independent via pyro.plate.
It is always safe to assume dependence <a class="anchor" id="It-is-always-safe-to-assume-dependence"></a>
Often in Pyro we'll declare some dimensions as dependent even though they are in fact independent, e.g.
py
x = pyro.sample("x", Normal(0, 1).expand([10]).to_event(1))
assert x.shape == (10,)
This is useful for two reasons: First it allows us to easily swap in a MultivariateNormal distribution later. Second it simplifies the code a bit since we don't need a plate (see below) as in
py
with pyro.plate("x_plate", 10):
x = pyro.sample("x", Normal(0, 1)) # .expand([10]) is automatic
assert x.shape == (10,)
The difference between these two versions is that the second version with plate informs Pyro that it can make use of conditional independence information when estimating gradients, whereas in the first version Pyro must assume they are dependent (even though the normals are in fact conditionally independent). This is analogous to d-separation in graphical models: it is always safe to add edges and assume variables may be dependent (i.e. to widen the model class), but it is unsafe to assume independence when variables are actually dependent (i.e. narrowing the model class so the true model lies outside of the class, as in mean field). In practice Pyro's SVI inference algorithm uses reparameterized gradient estimators for Normal distributions so both gradient estimators have the same performance.
Declaring independent dims with plate <a class="anchor" id="Declaring-independent-dims-with-plate"></a>
Pyro models can use the context manager pyro.plate to declare that certain batch dimensions are independent. Inference algorithms can then take advantage of this independence to e.g. construct lower variance gradient estimators or to enumerate in linear space rather than exponential space. An example of an independent dimension is the index over data in a minibatch: each datum should be independent of all others.
The simplest way to declare a dimension as independent is to declare the rightmost batch dimension as independent via a simple
py
with pyro.plate("my_plate"):
# within this context, batch dimension -1 is independent
We recommend always providing an optional size argument to aid in debugging shapes
py
with pyro.plate("my_plate", len(my_data)):
# within this context, batch dimension -1 is independent
Starting with Pyro 0.2 you can additionally nest plates, e.g. if you have per-pixel independence:
py
with pyro.plate("x_axis", 320):
# within this context, batch dimension -1 is independent
with pyro.plate("y_axis", 200):
# within this context, batch dimensions -2 and -1 are independent
Note that we always count from the right by using negative indices like -2, -1.
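Negative dims resolve exactly the way negative list indices do in Python. A minimal sketch (plain Python, no Pyro) of mapping a dim like -2 to a positional axis:

```python
# Sketch: resolving a negative dim against a shape, the same way Python
# resolves negative list indices. Not Pyro code.
def resolve_dim(dim, ndim):
    """Map a negative dim (counted from the right) to a positional axis."""
    assert dim < 0, "plate dims are always counted from the right"
    return ndim + dim

shape = (320, 200)                  # batch dims from the nested plates above
print(resolve_dim(-1, len(shape)))  # 1  -> the y_axis (size 200) dim
print(resolve_dim(-2, len(shape)))  # 0  -> the x_axis (size 320) dim
```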
Finally if you want to mix and match plates for e.g. noise that depends only on x, some noise that depends only on y, and some noise that depends on both, you can declare multiple plates and use them as reusable context managers. In this case Pyro cannot automatically allocate a dimension, so you need to provide a dim argument (again counting from the right):
py
x_axis = pyro.plate("x_axis", 3, dim=-2)
y_axis = pyro.plate("y_axis", 2, dim=-3)
with x_axis:
# within this context, batch dimension -2 is independent
with y_axis:
# within this context, batch dimension -3 is independent
with x_axis, y_axis:
# within this context, batch dimensions -3 and -2 are independent
Let's take a closer look at batch sizes within plates.
End of explanation
"""
trace = poutine.trace(model1).get_trace()
trace.compute_log_prob() # optional, but allows printing of log_prob shapes
print(trace.format_shapes())
"""
Explanation: It is helpful to visualize the .shapes of each sample site by aligning them at the boundary between batch_shape and event_shape: dimensions to the right will be summed out in .log_prob() and dimensions to the left will remain.
batch dims | event dims
-----------+-----------
| a = sample("a", Normal(0, 1))
|2 b = sample("b", Normal(zeros(2), 1)
| .to_event(1))
| with plate("c", 2):
2| c = sample("c", Normal(zeros(2), 1))
| with plate("d", 3):
3|4 5 d = sample("d", Normal(zeros(3,4,5), 1)
| .to_event(2))
|
| x_axis = plate("x", 3, dim=-2)
| y_axis = plate("y", 2, dim=-3)
| with x_axis:
3 1| x = sample("x", Normal(0, 1))
| with y_axis:
2 1 1| y = sample("y", Normal(0, 1))
| with x_axis, y_axis:
2 3 1| xy = sample("xy", Normal(0, 1))
2 3 1|5 z = sample("z", Normal(0, 1).expand([5])
| .to_event(1))
To examine the shapes of sample sites in a program automatically, you can trace the program and use the Trace.format_shapes() method, which prints three shapes for each sample site: the distribution shape (both site["fn"].batch_shape and site["fn"].event_shape), the value shape (site["value"].shape), and if log probability has been computed also the log_prob shape (site["log_prob"].shape):
End of explanation
"""
data = torch.arange(100.)
def model2():
mean = pyro.param("mean", torch.zeros(len(data)))
with pyro.plate("data", len(data), subsample_size=10) as ind:
assert len(ind) == 10 # ind is a LongTensor that indexes the subsample.
batch = data[ind] # Select a minibatch of data.
mean_batch = mean[ind] # Take care to select the relevant per-datum parameters.
# Do stuff with batch:
x = pyro.sample("x", Normal(mean_batch, 1), obs=batch)
assert len(x) == 10
test_model(model2, guide=lambda: None, loss=Trace_ELBO())
"""
Explanation: Subsampling tensors inside a plate <a class="anchor" id="Subsampling-tensors-inside-a-plate"></a>
One of the main uses of plate is to subsample data. This is possible within a plate because data are conditionally independent, so the expected value of the loss on, say, half the data should be half the expected loss on the full data.
To subsample data, you need to inform Pyro of both the original data size and the subsample size; Pyro will then choose a random subset of data and yield the set of indices.
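The claim that the rescaled subsample loss matches the full-data loss in expectation can be checked exhaustively on toy data. A minimal sketch using only the standard library (the per-datum "losses" here are invented for illustration):

```python
# Sketch: the rescaled subsample loss is an unbiased estimator of the
# full-data loss. We check this exhaustively on toy data (stdlib only).
from itertools import combinations

data = [1.0, 2.0, 3.0, 4.0]         # toy per-datum "losses"
full_loss = sum(data)
subsample_size = 2
scale = len(data) / subsample_size  # rescale by N / subsample_size

# Average the rescaled loss over every possible subsample:
estimates = [scale * sum(batch) for batch in combinations(data, subsample_size)]
expected = sum(estimates) / len(estimates)
print(expected, full_loss)          # 10.0 10.0
```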
End of explanation
"""
@config_enumerate
def model3():
p = pyro.param("p", torch.arange(6.) / 6)
locs = pyro.param("locs", torch.tensor([-1., 1.]))
a = pyro.sample("a", Categorical(torch.ones(6) / 6))
b = pyro.sample("b", Bernoulli(p[a])) # Note this depends on a.
with pyro.plate("c_plate", 4):
c = pyro.sample("c", Bernoulli(0.3))
with pyro.plate("d_plate", 5):
d = pyro.sample("d", Bernoulli(0.4))
e_loc = locs[d.long()].unsqueeze(-1)
e_scale = torch.arange(1., 8.)
e = pyro.sample("e", Normal(e_loc, e_scale)
.to_event(1)) # Note this depends on d.
# enumerated|batch|event dims
assert a.shape == ( 6, 1, 1 ) # Six enumerated values of the Categorical.
assert b.shape == ( 2, 1, 1, 1 ) # Two enumerated Bernoullis, unexpanded.
assert c.shape == ( 2, 1, 1, 1, 1 ) # Only two Bernoullis, unexpanded.
assert d.shape == (2, 1, 1, 1, 1, 1 ) # Only two Bernoullis, unexpanded.
assert e.shape == (2, 1, 1, 1, 5, 4, 7) # This is sampled and depends on d.
assert e_loc.shape == (2, 1, 1, 1, 1, 1, 1,)
assert e_scale.shape == ( 7,)
test_model(model3, model3, TraceEnum_ELBO(max_plate_nesting=2))
"""
Explanation: Broadcasting to allow parallel enumeration <a class="anchor" id="Broadcasting-to-allow-parallel-enumeration"></a>
Pyro 0.2 introduces the ability to enumerate discrete latent variables in parallel. This can significantly reduce the variance of gradient estimators when learning a posterior via SVI.
To use parallel enumeration, Pyro needs to allocate a tensor dimension that it can use for enumeration. To avoid conflicting with other dimensions that we want to use for plates, we need to declare a budget of the maximum number of tensor dimensions we'll use. This budget is called max_plate_nesting and is an argument to SVI (the argument is simply passed through to TraceEnum_ELBO). Usually Pyro can determine this budget on its own (it runs the (model, guide) pair once and records what happens), but in the case of dynamic model structure you may need to declare max_plate_nesting manually.
To understand max_plate_nesting and how Pyro allocates dimensions for enumeration, let's revisit model1() from above. This time we'll map out three types of dimensions:
enumeration dimensions on the left (Pyro takes control of these), batch dimensions in the middle, and event dimensions on the right.
max_plate_nesting = 3
|<--->|
enumeration|batch|event
-----------+-----+-----
|. . .| a = sample("a", Normal(0, 1))
|. . .|2 b = sample("b", Normal(zeros(2), 1)
| | .to_event(1))
| | with plate("c", 2):
|. . 2| c = sample("c", Normal(zeros(2), 1))
| | with plate("d", 3):
|. . 3|4 5 d = sample("d", Normal(zeros(3,4,5), 1)
| | .to_event(2))
| |
| | x_axis = plate("x", 3, dim=-2)
| | y_axis = plate("y", 2, dim=-3)
| | with x_axis:
|. 3 1| x = sample("x", Normal(0, 1))
| | with y_axis:
|2 1 1| y = sample("y", Normal(0, 1))
| | with x_axis, y_axis:
|2 3 1| xy = sample("xy", Normal(0, 1))
|2 3 1|5 z = sample("z", Normal(0, 1).expand([5]))
| | .to_event(1))
Note that it is safe to overprovision max_plate_nesting=4 but we cannot underprovision max_plate_nesting=2 (or Pyro will error). Let's see how this works in practice.
End of explanation
"""
trace = poutine.trace(poutine.enum(model3, first_available_dim=-3)).get_trace()
trace.compute_log_prob() # optional, but allows printing of log_prob shapes
print(trace.format_shapes())
"""
Explanation: Let's take a closer look at those dimensions. First note that Pyro allocates enumeration dims starting from the right at max_plate_nesting: Pyro allocates dim -3 to enumerate a, then dim -4 to enumerate b, then dim -5 to enumerate c, and finally dim -6 to enumerate d. Next note that samples only have extent (size > 1) in the new enumeration dimension. This helps keep tensors small and computation cheap. (Note that the log_prob shape will be broadcast up to contain both enumeration shape and batch shape, so e.g. trace.nodes['d']['log_prob'].shape == (2, 1, 1, 1, 5, 4).)
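That broadcast-up of the log_prob shape follows NumPy-style right-aligned broadcasting. A plain-Python sketch of the rule (the real helper lives in Pyro/PyTorch; this toy version only illustrates the shape arithmetic):

```python
# Sketch (stdlib only): how the log_prob shape (2, 1, 1, 1, 5, 4) for "d"
# arises from broadcasting the enumerated sample shape against the
# plate/batch shape, using NumPy-style right-aligned broadcasting rules.
def broadcast_shape(a, b):
    a, b = list(a), list(b)
    # Left-pad the shorter shape with 1s, then take the elementwise max.
    while len(a) < len(b): a.insert(0, 1)
    while len(b) < len(a): b.insert(0, 1)
    out = []
    for x, y in zip(a, b):
        assert x == y or x == 1 or y == 1, "shapes are not broadcastable"
        out.append(max(x, y))
    return tuple(out)

enum_sample_shape = (2, 1, 1, 1, 1, 1)  # d's enumerated sample, from above
batch_shape = (5, 4)                    # d_plate (5) inside c_plate (4)
print(broadcast_shape(enum_sample_shape, batch_shape))  # (2, 1, 1, 1, 5, 4)
```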
We can draw a similar map of the tensor dimensions:
max_plate_nesting = 2
|<->|
enumeration batch event
------------|---|-----
6|1 1| a = pyro.sample("a", Categorical(torch.ones(6) / 6))
2 1|1 1| b = pyro.sample("b", Bernoulli(p[a]))
| | with pyro.plate("c_plate", 4):
2 1 1|1 1| c = pyro.sample("c", Bernoulli(0.3))
| | with pyro.plate("d_plate", 5):
2 1 1 1|1 1| d = pyro.sample("d", Bernoulli(0.4))
2 1 1 1|1 1|1 e_loc = locs[d.long()].unsqueeze(-1)
| |7 e_scale = torch.arange(1., 8.)
2 1 1 1|5 4|7 e = pyro.sample("e", Normal(e_loc, e_scale)
| | .to_event(1))
To automatically examine this model with enumeration semantics, we can create an enumerated trace and then use Trace.format_shapes():
End of explanation
"""
width = 8
height = 10
sparse_pixels = torch.LongTensor([[3, 2], [3, 5], [3, 9], [7, 1]])
enumerated = None # set to either True or False below
def fun(observe):
p_x = pyro.param("p_x", torch.tensor(0.1), constraint=constraints.unit_interval)
p_y = pyro.param("p_y", torch.tensor(0.1), constraint=constraints.unit_interval)
x_axis = pyro.plate('x_axis', width, dim=-2)
y_axis = pyro.plate('y_axis', height, dim=-1)
# Note that the shapes of these sites depend on whether Pyro is enumerating.
with x_axis:
x_active = pyro.sample("x_active", Bernoulli(p_x))
with y_axis:
y_active = pyro.sample("y_active", Bernoulli(p_y))
if enumerated:
assert x_active.shape == (2, 1, 1)
assert y_active.shape == (2, 1, 1, 1)
else:
assert x_active.shape == (width, 1)
assert y_active.shape == (height,)
# The first trick is to broadcast. This works with or without enumeration.
p = 0.1 + 0.5 * x_active * y_active
if enumerated:
assert p.shape == (2, 2, 1, 1)
else:
assert p.shape == (width, height)
dense_pixels = p.new_zeros(broadcast_shape(p.shape, (width, height)))
# The second trick is to index using ellipsis slicing.
# This allows Pyro to add arbitrary dimensions on the left.
for x, y in sparse_pixels:
dense_pixels[..., x, y] = 1
if enumerated:
assert dense_pixels.shape == (2, 2, width, height)
else:
assert dense_pixels.shape == (width, height)
with x_axis, y_axis:
if observe:
pyro.sample("pixels", Bernoulli(p), obs=dense_pixels)
def model4():
fun(observe=True)
def guide4():
fun(observe=False)
# Test without enumeration.
enumerated = False
test_model(model4, guide4, Trace_ELBO())
# Test with enumeration.
enumerated = True
test_model(model4, config_enumerate(guide4, "parallel"),
TraceEnum_ELBO(max_plate_nesting=2))
"""
Explanation: Writing parallelizable code <a class="anchor" id="Writing-parallelizable-code"></a>
It can be tricky to write Pyro models that correctly handle parallelized sample sites. Two tricks help: broadcasting and ellipsis slicing. Let's look at a contrived model to see how these work in practice. Our aim is to write a model that works both with and without enumeration.
End of explanation
"""
num_particles = 100 # Number of samples for the ELBO estimator
width = 8
height = 10
sparse_pixels = torch.LongTensor([[3, 2], [3, 5], [3, 9], [7, 1]])
def sample_pixel_locations_no_broadcasting(p_x, p_y, x_axis, y_axis):
with x_axis:
x_active = pyro.sample("x_active", Bernoulli(p_x).expand([num_particles, width, 1]))
with y_axis:
y_active = pyro.sample("y_active", Bernoulli(p_y).expand([num_particles, 1, height]))
return x_active, y_active
def sample_pixel_locations_full_broadcasting(p_x, p_y, x_axis, y_axis):
with x_axis:
x_active = pyro.sample("x_active", Bernoulli(p_x))
with y_axis:
y_active = pyro.sample("y_active", Bernoulli(p_y))
return x_active, y_active
def sample_pixel_locations_partial_broadcasting(p_x, p_y, x_axis, y_axis):
with x_axis:
x_active = pyro.sample("x_active", Bernoulli(p_x).expand([width, 1]))
with y_axis:
y_active = pyro.sample("y_active", Bernoulli(p_y).expand([height]))
return x_active, y_active
def fun(observe, sample_fn):
p_x = pyro.param("p_x", torch.tensor(0.1), constraint=constraints.unit_interval)
p_y = pyro.param("p_y", torch.tensor(0.1), constraint=constraints.unit_interval)
x_axis = pyro.plate('x_axis', width, dim=-2)
y_axis = pyro.plate('y_axis', height, dim=-1)
    with pyro.plate("num_particles", num_particles, dim=-3):
x_active, y_active = sample_fn(p_x, p_y, x_axis, y_axis)
# Indices corresponding to "parallel" enumeration are appended
# to the left of the "num_particles" plate dim.
assert x_active.shape == (2, 1, 1, 1)
assert y_active.shape == (2, 1, 1, 1, 1)
p = 0.1 + 0.5 * x_active * y_active
assert p.shape == (2, 2, 1, 1, 1)
dense_pixels = p.new_zeros(broadcast_shape(p.shape, (width, height)))
for x, y in sparse_pixels:
dense_pixels[..., x, y] = 1
assert dense_pixels.shape == (2, 2, 1, width, height)
with x_axis, y_axis:
if observe:
pyro.sample("pixels", Bernoulli(p), obs=dense_pixels)
def test_model_with_sample_fn(sample_fn):
def model():
fun(observe=True, sample_fn=sample_fn)
@config_enumerate
def guide():
fun(observe=False, sample_fn=sample_fn)
test_model(model, guide, TraceEnum_ELBO(max_plate_nesting=3))
test_model_with_sample_fn(sample_pixel_locations_no_broadcasting)
test_model_with_sample_fn(sample_pixel_locations_full_broadcasting)
test_model_with_sample_fn(sample_pixel_locations_partial_broadcasting)
"""
Explanation: Automatic broadcasting inside pyro.plate<a class="anchor" id="Automatic-broadcasting-inside-pyro-plate"></a>
Note that in all our model/guide specifications, we have relied on pyro.plate to automatically expand sample shapes to satisfy the constraints on batch shape enforced by pyro.sample statements. However this broadcasting is equivalent to hand-annotated .expand() statements.
We will demonstrate this using model4 from the previous section. Note the following changes to the code from earlier:
For the purpose of this example, we will only consider "parallel" enumeration, but broadcasting should work as expected without enumeration or with "sequential" enumeration.
We have separated out the sampling function which returns the tensors corresponding to the active pixels. Modularizing the model code into components is a common practice, and helps with maintainability of large models.
We would also like to use the pyro.plate construct to parallelize the ELBO estimator over num_particles. This is done by wrapping the contents of model/guide inside an outermost pyro.plate context.
End of explanation
"""
|
google/prog-edu-assistant | exercises/dataframe-pre3-master.ipynb | apache-2.0 | # データをCVSファイルから読み込みます。 Read the data from CSV file.
df = pd.read_csv('data/15-July-2019-Tokyo-hourly.csv')
print("データフレームの行数は %d" % len(df))
print(df.dtypes)
df.head()
"""
Explanation: Data frames 3: 簡単なデータの変換 (Simple data manipulation)
```
ASSIGNMENT METADATA
assignment_id: "DataFrame3"
```
lang:en
In this unit, we will get acquainted with a couple of simple techniques to change the data:
Filter rows based on a condition
Create new columns as a transformation of other columns
Drop columns that are no longer needed
Let's start with reading the data.
lang:ja
この講義では、簡単なデータの変換を紹介します。
行を条件によりフィルター(抽出)します
データ変換によって新しい列を作ります
必要な列だけを抽出します
まずはデータを読み込みましょう。
End of explanation
"""
# This is an example of filtering rows by a condition
# that is computed over variables in the dataframe.
# 条件によってデータフレームをフィルターします。
df2 = df[df['Precipitation_mm'] > 0]
len(df2)
"""
Explanation: lang:en Let's consider the question of how one should hold an umbrella when it rains.
Depending on the wind direction, it's better to slant the umbrella towards the direction
the rain is coming from. Therefore, one needs to know the wind direction when it rains.
First step is to limit the data to the hours when there was rain. To accomplish that,
we filter the data set by using a condition. The condition is placed in square brackets
after the dataframe.
Technical details:
* The inner df['Precipitation_mm'] extracts a single column as a pandas Series object.
* The comparison df['Precipitation_mm'] > 0' is evaluated as a vector expression, that computes
the condition element-wise, resulting in a Series object of the same length with boolean elements
(true or false).
* Finally, the indexing of a data frame by the boolean series performs the filtering of the rows
in the dataframe only to rows which had the corresponding element as True. Note that the original
data frame is left unmodified. Instead, a new copy of the data frame is created, which we store in a new variable.
lang:ja雨の中の傘の持ち方について考えましょう。風の向きによって、適切な持ち方が変わります。風が来ている方向に傾けると傘の効率がよくなります。
したがって、雨のときの風の向きを調べなければいけません。
まずは雨のなかったデータを除きましょう。そのために条件をつけてデータをフィルターします。
条件はデータフレームの参照の後に角括弧に入ります。
詳しく述べると:
角括弧に入っているdf['Precipitation_mm']は一つの列を抽出します。それはpandasのSeriesオブジェクトになります。
比較表現 df['Precipitation_mm'] > 0' は各行ごとに評価されます、真理値のベクターになります。それもSeriesです。長さはデータフレームの行数です。
データフレームの後に角括弧に真理値ベクターを入れるとFalseの行が除かれます。
結果のデータフレームは新しいデータフレームです。既存のデータフレームは変わらないままで、フィルターされたデータフレームを新しい変数に保存します。
End of explanation
"""
px.histogram(df2, x='WindDirection_16compasspoints')
"""
Explanation: lang:en So the rain was falling during 11 of the 24 hours of the day. Let's see what the distribution of wind directions was.
lang:ja 一日の24時間の中に雨が降っていたは11時間がありました。 風の向きを可視化しましょう。 px.histogramはxの値を数えて、個数を棒グラフとして可視化します。
End of explanation
"""
px.histogram(df, x='WindDirection_16compasspoints')
"""
Explanation: lang:en Now we can clearly see that NE was the prevailing wind direction while it rained.
Note that the result may have been different if we did not filter for the hours with rain:
lang:ja雨が降ったときに風はNEの方向に吹いたことがわかります。雨だけのデータにフィルターしなければ、グラフは異なる結果がえられます。
以下はdfは元のデータフレームで、フィルターされたデータフレームはdf2です。
End of explanation
"""
# This creates a new column named "rained" that is a boolean variable
# indicating whether it was raining in that hour.
# 新しい真理値の列'rained'を追加します。
df['rained'] = df['Precipitation_mm'] > 0
px.histogram(df, x='WindDirection_16compasspoints', color='rained')
"""
Explanation: lang:en We can plot all of the data and use the color dimension to distinguish the hours when it rained from those when it did not, using a different technique: instead of filtering rows by a condition, we can introduce the condition
as a new boolean variable. This is done by assigning to a new column in the data frame:
lang:jaフィルターに変わりに、可視化によって同じデータを確認ができます。たとえば、雨が降ったかどうかを色で表現します。
そのために新しい真理値の列を作らなければなりません。以下の例はdfのデータフレームに新しい列を追加します。
End of explanation
"""
# そのままだとデータが多すぎて混乱しやすい。
# その表を見せてなにがいいたいのか分かりづらい。
df
# 列の名前の一覧を見ましょう。
df.dtypes
# Indexing by list of column names returns a copy of the data frame just with the named
# columns.
# 列の名前を二重角括弧に入れると、列の抽出ができます。 列の名前は以上の`dtypes`の一覧によって確認できます。
df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
"""
Explanation: lang:en Now let's consider how we could present the same data in tabular form. If we do nothing,
all existing columns in the data frame will be shown, which may make it hard for the reader
to see the author's point. To make reading the data easier, we can limit the output
to just the columns we are interested in.
lang:ja 今まで解析してきたデータを表の形に表示について考えましょう。 dfのデータフレームをそのまま表示するとたくさんの列が出て、
どのデータを見せたかったのはとてもわかりにくくなります。 それを解決するために、見せたい列だけを抽出しましょう。
End of explanation
"""
%%solution
""" # BEGIN PROMPT
# Note: you can do multiple steps to get the data frame you need.
# 複数の段階に分けてデータ処理してもよい。
df['rained'] = df[...]
sunny_df = df[...]
sunny_df = sunny_df[...]
""" # END PROMPT
# BEGIN SOLUTION
df['rained'] = df['Precipitation_mm'] > 0
sunny_df = df[df['SunshineDuration_h'] > 0]
sunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
# END SOLUTION
"""
Explanation: 予習課題: データの変換 (Data manipulation)
```
EXERCISE METADATA
exercise_id: "DataManipulation"
```
lang:en
Starting with the weather data frame df defined above, produce a data set consisting only of the daytime hours when the sun was shining (i.e. variable SunshineDuration_h > 0), and containing only the following columns:
* Time_Hour -- extracted from the original data frame.
* WindDirection_16compasspoints -- extracted from the original data frame.
* rained -- the boolean indicator of whether it was raining or not (Precipitation_mm > 0). This is a new column that is not present in the original data, so it should be added.
lang:ja
以上に定義したdfのデータフレームを使って、以下のデータの表を抽出しましょう。
* 日が出ていた時間帯のみ (すなわち、SunshineDuration_h > 0)
以下の列だけを抽出しましょう。
* Time_Hour -- 元のデータフレームから抽出しましょう。
* WindDirection_16compasspoints -- 元のデータフレームから抽出しましょう。
* rained -- 雨があったかどうかの真理値列 (すなわち、Precipitation_mm > 0)。こちらの列は元のデータに入ってないため、追加しなければなりません。
End of explanation
"""
# Inspect the data frame
sunny_df
%%studenttest StudentTest
# Test your solution
assert len(sunny_df) == 2, "The result data frame should only have 2 rows, yours has %d" % len(sunny_df)
assert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], "Sunshine was during 13h,14h, but you got %s" % sunny_df['Time_Hour']
assert np.all(sunny_df['rained'] == False), "It was not raining during sunshine hours!"
%%inlinetest AutograderTest
# This cell will not be present in the students notebook.
assert 'sunny_df' in globals(), "Did you define the data frame named 'sunny_df' in the solution cell?"
assert sunny_df.__class__ == pd.core.frame.DataFrame, "Did you define a data frame named 'sunny_df'? 'sunny_df' was a %s instead" % sunny_df.__class__
assert len(sunny_df) == 2, "The data frame should have 2 rows, but you have %d" % len(sunny_df)
assert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], "Sunshine was during 13h,14h, but you got %s" % sunny_df['Time_Hour']
assert np.all(sunny_df['rained'] == False), "It was not raining during sunshine hours!"
assert np.all(np.sort(np.unique(sunny_df.columns)) == ['Time_Hour', 'WindDirection_16compasspoints', 'rained']), ("Expected to see 3 columns: rained, Time_Hour, WindDirection_16compasspoints, but got %d: %s" % (len(np.unique(sunny_df.columns)), np.sort(np.unique(sunny_df.columns))) )
%%submission
df['rained'] = df['Precipitation_mm'] > 0
sunny_df = df[df['SunshineDuration_h'] > 0]
#sunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
import re
result, logs = %autotest AutograderTest
assert re.match(r'Expected to see 3 columns.*', str(result.results['error']))
report(AutograderTest, results=result.results, source=submission_source.source)
"""
Explanation: lang:en Note: if you see a warning SettingWithCopyWarning, it means that you are trying to apply a transformation
to a data frame that is a copy or a slice of a different data frame. This is an optimization that the Pandas
library may do on filtering steps to reduce memory use. To avoid this warning, you can either move the new column computation before the filtering step, or add a .copy() call to the filtered data frame to force
the creation of a full data frame object.
lang:jaもしSettingWithCopyWarningのエラーが出たら、データフレームのコピーに変更を行うという意味なのです。pandasは、データ抽出のときに
自動的にコピーしないような最適化の副作用です。解決のために、データ変更は先にするか、抽出の後に.copy()を呼び出すことができます。
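A minimal sketch of the second fix (calling .copy() after filtering) on toy data; the values below are invented for illustration:

```python
# Sketch: filtering first and then calling .copy() forces a real new
# DataFrame, so later column assignments do not touch a view of `toy`.
import pandas as pd

toy = pd.DataFrame({'Precipitation_mm': [0.0, 1.5, 0.0], 'Time_Hour': [1, 2, 3]})
rainy = toy[toy['Precipitation_mm'] > 0].copy()  # full copy, not a view
rainy['rained'] = True                           # safe: no SettingWithCopyWarning
print(list(rainy.columns))  # ['Precipitation_mm', 'Time_Hour', 'rained']
```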
End of explanation
"""
|
BDannowitz/polymath-progression-blog | distribution-fitting/Distribution-Fitting.ipynb | gpl-2.0 | import pandas as pd
data_df = pd.read_csv('raw-data.csv', index_col='eventID')
data_df.head()
"""
Explanation: Distribution Fitting
Goals:
Load up raw data from previous post
Inspect distribution for a feature
Postulate a fit function
Use scipy.stats to fit function to the distribution
1. Load the Raw Data
End of explanation
"""
import matplotlib.pyplot as plt
import seaborn as sns
# Show plots in notebook
%matplotlib inline
# Set some styling options
sns.set_style("darkgrid")
sns.set_context("paper", font_scale=1.4)
feature = "AfterInhMATRIX5"
sns.distplot(data_df.query("label == 1")[feature])
"""
Explanation: 2. Inspect the Distribution for a Feature
End of explanation
"""
from sklearn.neighbors import KernelDensity
import numpy as np
import matplotlib.pyplot as plt
# Put the data you want to apply the KDE to into an array
data = data_df.query("label == 1")[feature].values[:, np.newaxis]
# Create a KDE object and then fit it to the data
kde = KernelDensity(kernel='gaussian', bandwidth=1400).fit(data)
"""
Explanation: Extract the values of the KDE curve for fitting purposes
End of explanation
"""
X_plot = np.linspace(10000, 100000, 1000)[:, np.newaxis]
# Get log density values for each point on the x-axis
log_dens = kde.score_samples(X_plot)
Y_plot = np.exp(log_dens)
# Plot the two against each other
sns.distplot(data_df.query("label == 1")[feature],
hist=False, color='black',
label='Seaborn KDE')
plt.plot(X_plot, Y_plot, '--', color='red', lw=2,
label='scikit-learn KDE')
plt.legend(loc='best')
"""
Explanation: Let's plot it to make sure it looks like what we've seen above
End of explanation
"""
def gauss_dist(xdata, amp, mean, stddev):
return (amp * np.exp( np.divide(-1 * np.square(xdata-mean),
(2 * stddev**2))))
# Take four amplitudes, means, and standard deviations
# Compute sum of four Gaussians
def my_fit(xdata,
a1, a2, a3, a4,
m1, m2, m3, m4,
s1, s2, s3, s4):
exp1 = gauss_dist(xdata, a1, m1, s1)
exp2 = gauss_dist(xdata, a2, m2, s2)
exp3 = gauss_dist(xdata, a3, m3, s3)
exp4 = gauss_dist(xdata, a4, m4, s4)
return exp1 + exp2 + exp3 + exp4
"""
Explanation: Good! It looks like it matches exactly.
3. Postulate a Fit Function
I'm guessing that this is four Gaussian functions added together.
Define a Gaussian function
Define our custom function made up of Gaussians
Make a guess at our parameters
Use scipy.optimize.curve_fit to optimize the parameters
$$ f_{G}(x) = a \cdot exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) $$
End of explanation
"""
p0 = [0.00001, 0.00002, 0.000061, 0.000005,
31000, 51000, 66000, 83000,
1000, 1500, 2000, 3000]
"""
Explanation: Here I make my ballpark guesses for the amplitudes, means, and deviations
End of explanation
"""
my_guess = my_fit(X_plot[:, 0], *p0)
plt.plot(X_plot, my_guess, '-')
"""
Explanation: Take a look at what that gives us
End of explanation
"""
from scipy.optimize import curve_fit
popt, pcov = curve_fit(my_fit, X_plot[:, 0], Y_plot, p0)
print(popt)
"""
Explanation: ... This doesn't look good yet, but it's a nice little nucleation point for an optimization routine.
4. Use scipy.stats to fit the function to the distribution
End of explanation
"""
optim_fit = my_fit(X_plot[:, 0], *popt)
# Plot the whole fit
plt.plot(X_plot, optim_fit, '-.', lw=3, color='black')
# Along with the consituent gaussians
for i in range(0,4):
plt.plot(X_plot,
gauss_dist(X_plot[:, 0], popt[i], popt[i+4], popt[i+8]),
'-', lw=2)
"""
Explanation: Put in these optimized parameters and see what we get
End of explanation
"""
plt.plot(X_plot, Y_plot, '-', lw=3, color='black',
label='scikit-learn KDE')
plt.plot(X_plot, optim_fit, '-', color='red',
label=r'$\sum_{i=1}^4\ f_G(a_i, \mu_i, \sigma_i)$')
plt.legend(loc='best')
"""
Explanation: Compare to the actual KDE distribution
End of explanation
"""
def calc_chisq(obs, exp):
chisq = 0.0
for i in range(0,len(obs)):
chisq += (obs[i] - exp[i])**2 / exp[i]
return chisq
print(calc_chisq(Y_plot, optim_fit))
"""
Explanation: I'd say that's pretty good for a first go
Calculate a $\chi^2$ value for this fit
With the hypothesis that these two distributions are the same, we calculate:
$$ \chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} $$
End of explanation
"""
from scipy.stats import chisquare
chisq, pvalue = chisquare(Y_plot, optim_fit, ddof=12)
print("Chi-squared: %.2f\np-value: %.2f" % (chisq, pvalue))
"""
Explanation: For 12 degrees of freedom (12 fit parameters), we can look at a $\chi^2$ table to find that we have a $p>0.995$ that this is a good fit to the distribution.
Or use scipy.stats to get the $\chi^2$ and p-value
End of explanation
"""
sns.distplot(data_df.query("label == 1")[feature],
kde=False, hist=True, norm_hist=True)
for i in range(0,4):
plt.plot(X_plot,
gauss_dist(X_plot[:, 0], popt[i], popt[i+4], popt[i+8]),
'-', lw=2)
"""
Explanation: Overlay onto original data
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/computer_vision_fun/labs/classifying_images_with_pre-built_tf_container_on_vertex_ai.ipynb | apache-2.0 | from datetime import datetime
import os
REGION = 'us-central1'
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
"""
Explanation: Classifying Images with pre-built TF Container on Vertex AI
Introduction
In this notebook, you learn how to implement different image models on MNIST using the tf.keras API.
Learning objectives
Understand how to build a Dense Neural Network (DNN) for image classification.
Understand how to use dropout (DNN) for image classification.
Understand how to use Convolutional Neural Networks (CNN).
Know how to deploy and use an image classification model using Google Cloud's Vertex AI.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete this notebook first and then review the solution notebook
Configuring the parameters
First, configure the parameters below to match your own Google Cloud project details.
If you don't want to change the region, leave all settings as they are. Otherwise, update the region, leave the other settings as they are, and run the cell.
End of explanation
"""
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, output_path)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
"""
Explanation: Building a dynamic model
This part of the notebook demonstrates how to implement DNN and CNN models on MNIST using the tf.keras API.
In the previous notebook, "classifying_images_with_a_nn_and_dnn_model.ipynb", you ran the code directly from the notebook. In this notebook, you see that you can also package your code as a Python module to run on Vertex AI.
The boilerplate structure for this module has already been set up in the folder mnist_models. The module lives in the sub-folder, trainer, and is designated as a Python package by the empty __init__.py file (mnist_models/trainer/__init__.py). It still needs the model and a trainer to run it, so let's make them.
Start with the trainer file first. This file parses command line arguments to feed into the model.
End of explanation
"""
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
"""
Explanation: Next, group non-model functions into a util file to keep the model file simple. Use the scale and load_dataset functions to scale images from a 0-255 int range to a 0-1 float range and load MNIST dataset into a tf.data.Dataset.
End of explanation
"""
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO 1: Your code here
],
'dnn_dropout': [
# TODO 2: Your code here
],
'cnn': [
# TODO 3: Your code here
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
"""
Explanation: Now you can code the models. The tf.keras API accepts an array of layers into a model object, so you can create a dictionary of layers based on the different model types you want to use. The mnist_models/trainer/model.py file has three functions: get_layers, build_model and train_and_evaluate.
In the get_layers function, you build the structure of the model for four different model types:
First, you define a linear model.
Second, you define the Keras layers for a DNN model.
Third, you define the Keras layers for a DNN model with dropout.
Lastly, you define the Keras layers for a CNN model.
In the build_model function: You compile the model, specifying an optimizer to use, the loss to minimize, and metrics to report.
Finally, in the train_and_evaluate function, you load data into your compiled Keras model and train it.
Note that these models progressively build on each other. Look at the imported tensorflow.keras.layers modules and the default values for the variables defined in get_layers for guidance.
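As one possible way to fill in the TODOs — a sketch guided by the default hyperparameters in get_layers, not the official solution notebook — the layer lists could be:

```python
import tensorflow as tf
from tensorflow.keras.layers import (
    Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)

nclasses = 10
# 'dnn': two hidden layers sized by the default arguments.
dnn_layers = [
    Flatten(),
    Dense(400, activation='relu'),    # hidden_layer_1_neurons
    Dense(100, activation='relu'),    # hidden_layer_2_neurons
    Dense(nclasses),
    Softmax()
]
# 'dnn_dropout': the same DNN with dropout before the output layer.
dnn_dropout_layers = [
    Flatten(),
    Dense(400, activation='relu'),
    Dense(100, activation='relu'),
    Dropout(0.25),                    # dropout_rate
    Dense(nclasses),
    Softmax()
]
# 'cnn': two conv/pool stages feeding the same dense head.
cnn_layers = [
    Conv2D(64, kernel_size=3, activation='relu',
           input_shape=(28, 28, 1)),  # num_filters_1, kernel_size_1
    MaxPooling2D(pool_size=2),        # pooling_size_1
    Conv2D(32, kernel_size=3, activation='relu'),
    MaxPooling2D(pool_size=2),        # pooling_size_2
    Flatten(),
    Dense(400, activation='relu'),
    Dense(100, activation='relu'),
    Dropout(0.25),
    Dense(nclasses),
    Softmax()
]
```

Passing any of these lists to tf.keras.Sequential yields a model whose output is a (batch, 10) softmax over the digit classes.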
End of explanation
"""
!python3 -m mnist_models.trainer.test
"""
Explanation: Local Training
After completing the setup, you can run the code locally to test it. Some of the previous tests have been copied over into a testing script, mnist_models/trainer/test.py, to make sure the model still passes our previous checks. On line 13, you can specify which model types you would like to check; lines 14 and 15 set the number of epochs and steps per epoch, respectively.
Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
End of explanation
"""
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
"""
Explanation: Now you know that your models are working as expected, so you can run them on Google Cloud with Vertex AI. First, run the code as a Python module locally using the command line.
The cell below transfers some of your variables to the command line and creates a job directory that includes a timestamp.
End of explanation
"""
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
"""
Explanation: The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in your mnist_models/trainer/task.py file.
End of explanation
"""
%%writefile mnist_models/setup.py
from setuptools import find_packages
from setuptools import setup
setup(
name='mnist_trainer',
version='0.1',
packages=find_packages(),
include_package_data=True,
description='MNIST model training application.'
)
%%bash
cd mnist_models
python ./setup.py sdist --formats=gztar
cd ..
gsutil cp mnist_models/dist/mnist_trainer-0.1.tar.gz gs://${BUCKET}/mnist/
"""
Explanation: Training on the cloud
For this model, you use a TensorFlow pre-built container on Vertex AI, as you do not have any particular additional prerequisites. You use setuptools to package the trainer, and copy the created source distribution to Cloud Storage using gsutil cp.
End of explanation
"""
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
"""
Explanation: Then, you can start the Vertex AI Custom Job using the pre-built container. You can pass your source distribution URI using the --python-package-uris flag.
End of explanation
"""
%%bash
echo $JOB_DIR $REGION $JOB_NAME
PYTHON_PACKAGE_URIS=gs://${BUCKET}/mnist/mnist_trainer-0.1.tar.gz
MACHINE_TYPE=n1-standard-4
REPLICA_COUNT=1
PYTHON_PACKAGE_EXECUTOR_IMAGE_URI="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-3:latest"
PYTHON_MODULE=trainer.task
WORKER_POOL_SPEC="machine-type=$MACHINE_TYPE,\
replica-count=$REPLICA_COUNT,\
executor-image-uri=$PYTHON_PACKAGE_EXECUTOR_IMAGE_URI,\
python-module=$PYTHON_MODULE"
gcloud ai custom-jobs create \
--region=${REGION} \
--display-name=$JOB_NAME \
--python-package-uris=$PYTHON_PACKAGE_URIS \
--worker-pool-spec=$WORKER_POOL_SPEC \
--args="--job-dir=$JOB_DIR,--model_type=$MODEL_TYPE"
%%bash
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
gsutil ls $SAVEDMODEL_DIR
"""
Explanation: After submitting the following job, view the status in Vertex AI > Training and select Custom Jobs tab. Wait for the job to finish.
End of explanation
"""
%%bash
TIMESTAMP=$(date -u +%Y%m%d_%H%M%S)
MODEL_DISPLAYNAME=mnist_$TIMESTAMP
ENDPOINT_DISPLAYNAME=mnist_endpoint_$TIMESTAMP
IMAGE_URI="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
SAVEDMODEL_DIR=${JOB_DIR}keras_export
echo $SAVEDMODEL_DIR
# Model
MODEL_RESOURCENAME=$(gcloud ai models upload \
--region=$REGION \
--display-name=$MODEL_DISPLAYNAME \
--container-image-uri=$IMAGE_URI \
--artifact-uri=$SAVEDMODEL_DIR \
--format="value(model)")
echo "MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}"
echo "MODEL_RESOURCENAME=${MODEL_RESOURCENAME}"
# Endpoint
ENDPOINT_RESOURCENAME=$(gcloud ai endpoints create \
--region=$REGION \
--display-name=$ENDPOINT_DISPLAYNAME \
--format="value(name)")
echo "ENDPOINT_DISPLAYNAME=${ENDPOINT_DISPLAYNAME}"
echo "ENDPOINT_RESOURCENAME=${ENDPOINT_RESOURCENAME}"
# Deployment
DEPLOYED_MODEL_DISPLAYNAME=${MODEL_DISPLAYNAME}_deployment
MACHINE_TYPE=n1-standard-2
gcloud ai endpoints deploy-model $ENDPOINT_RESOURCENAME \
--region=$REGION \
--model=$MODEL_RESOURCENAME \
--display-name=$DEPLOYED_MODEL_DISPLAYNAME \
--machine-type=$MACHINE_TYPE \
--min-replica-count=1 \
--max-replica-count=1 \
--traffic-split=0=100
"""
Explanation: Deploying and predicting with model
Once you have a model you're proud of, let's deploy it! All you need to do is upload the created model artifact from Cloud Storage to Vertex AI as a model, create a new endpoint, and deploy the model to the endpoint. This can take 15-20 minutes to complete. Make a note of ENDPOINT_RESOURCENAME for later use.
End of explanation
"""
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = # TODO 4: Your code here
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
!cat test.json
"""
Explanation: To predict with the model, take one of the example images and write a .json file with its image data to send to the Vertex AI deployed model.
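A sketch of how the TODO could be filled in (the exact instance shape is an assumption — it must match what the exported model expects, here a scaled 28x28x1 array): the gcloud predict command reads a JSON object with an instances list.

```python
import json
import numpy as np

HEIGHT, WIDTH = 28, 28
# Stand-in for x_test[IMGNO]; in the notebook this is a real MNIST digit.
test_image = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)

# Scale to [0, 1] floats and add the trailing channel axis, mirroring
# the preprocessing in util.scale, then wrap it in an "instances" list.
instance = (test_image.reshape(HEIGHT, WIDTH, 1) / 255.0).tolist()
jsondata = {"instances": [instance]}
payload = json.dumps(jsondata)
```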
End of explanation
"""
%%bash
ENDPOINT_RESOURCENAME=#Insert ENDPOINT_RESOURCENAME from above
gcloud ai endpoints predict $ENDPOINT_RESOURCENAME \
--region=$REGION \
--json-request=test.json
"""
Explanation: Finally, you can send it to the prediction service. The output will have a 1 at the index of the digit it is predicting. Congrats! You've completed the lab!
End of explanation
"""
|
dorairajsanjay/w209finalproject | Getting simple demo data.ipynb | apache-2.0 | import pandas as pd
data_dir = "/Users/seddont/Dropbox/Tom/MIDS/W209_work/Tom_project/"
"""
Explanation: Processing the open food database to extract a small number of representative items to use as demonstration data for the visualization.
End of explanation
"""
# Get sample of the full database to understand what columns we want
smp = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t", nrows = 100)
for c in smp.columns:
print(c)
# Specify what columns we need for the demonstration visualizations
demo_cols = ['code', 'creator', 'product_name', 'generic_name', 'quantity',
'brands', 'brands_tags', 'categories', 'categories_tags', 'serving_size',
'energy_100g', 'energyfromfat_100g', 'fat_100g',
'saturatedfat_100g', 'monounsaturatedfat_100g',
'polyunsaturatedfat_100g', 'omega3fat_100g', 'omega6fat_100g',
'omega9fat_100g', 'oleicacid_100g', 'transfat_100g', 'cholesterol_100g',
'carbohydrates_100g', 'sugars_100g', 'sucrose_100g', 'glucose_100g',
'fructose_100g', 'lactose_100g', 'maltose_100g', 'starch_100g',
'fiber_100g', 'proteins_100g', 'salt_100g', 'sodium_100g',
'alcohol_100g', 'vitamina_100g', 'betacarotene_100g', 'vitamind_100g',
'vitamine_100g', 'vitamink_100g', 'vitaminc_100g', 'vitaminb1_100g',
'vitaminb2_100g', 'vitaminpp_100g', 'vitaminb6_100g', 'vitaminb9_100g',
'folates_100g', 'vitaminb12_100g', 'bicarbonate_100g', 'potassium_100g',
'chloride_100g', 'calcium_100g', 'iron_100g', 'fluoride_100g',
'iodine_100g', 'caffeine_100g', 'cocoa_100g',
'ingredients_list']
# Create a list of columns to drop
drop_cols = [c for c in smp.columns if c not in demo_cols]
print(drop_cols)
# Pull in full dataset
df = pd.read_csv(data_dir+"en.openfoodfacts.org.products.csv", sep = "\t")
# Drop unwanted columns
df.drop(drop_cols, axis = 1, inplace = True)
# Take a quick look
df
# Drop all rows that are not from the usda ndb import
df = df[df.creator == "usda-ndb-import"]
df
"""
Explanation: Working from the full database, because the usda_imports_filtered.csv file in the shared drive does not have brand information, which will be useful for displaying.
End of explanation
"""
df[df["product_name"].str.lower().str.contains("baked donut", na = False)]
df[df["product_name"].str.lower().str.contains("cracker", na = False)]
df[df["product_name"].str.lower().str.contains("cereal", na = False)]
"""
Explanation: Now down to a manageable number of rows and columns. Going to explore for a few typical items to use as demo data. Let's take a look at donuts, crackers and cereal -- the three categories used in the paper prototype.
End of explanation
"""
# reminder on column names remaining
df.columns
"""
Explanation: Looks like there are plenty of options for these. For demo purposes I want to pick 12 of each with a reasonable range of variation on the key factors of sugar, fat, sodium, protein, so that I can have one plus up to 11 comparison products.
End of explanation
"""
# Words we want to find that indicate product type
cat_words = ["donut", "cracker", "cereal"]
# Some of these generate confusion, so also have an 'exclude' dictionary
# This is pretty crude, but seems ok for generating demo
exclude_dict = {"donut": "coffee",
"cracker": "Nut",
"cereal": "Bar"}
# What we want to get variation on
pick_factors = ['fat_100g', 'sugars_100g', 'proteins_100g', 'sodium_100g']
# Points we want to pick (percentiles). Can tune this to get more or fewer picks.
pick_percentiles = [0.1, 0.5, 0.9]
# pick_percentiles = [0, 0.25, 0.5, 0.75, 1.0]
demo_picks = []
for cat in cat_words:
# first get all the items containing the cat word
catf = df[df["product_name"].str.lower().str.contains(cat, na = False)]
# then exclude any of these that contain the relevant exclude word
catf = catf[~catf["product_name"].str.lower().str.contains(exclude_dict[cat], na = False)]
# Identify what rank each product is in that category, for each main factor
for p in pick_factors:
catf[p + "_rank"] = catf[p].rank(method = "first")
# Select products at the chosen percentiles on each factor
high = catf[p + "_rank"].max()
pick_index = [max(1, round(n * high)) for n in pick_percentiles]
demo_picks.extend(catf[catf[p+"_rank"].isin(pick_index)].code)
demo_df = df[df.code.isin(demo_picks)]
# Add in category identifier
demo_df["demo_cat"] = "None"
for w in cat_words:
is_cat = demo_df.product_name.str.lower().str.contains(w)
demo_df.loc[is_cat, "demo_cat"] = w  # use .loc to avoid chained assignment
# Take a look at what we built
demo_df
# Now write it out to disk
outfile = "demo_food_data.csv"
demo_df.to_csv(data_dir+outfile)
"""
Explanation: Now going to go through and find items that have certain category words in the product name, then filter these to exclude the most common confounding word (e.g. donut-flavor coffee gets picked up under donut).
Then going to rank the items in each category on key factors like sugar, and for each factor pick items at specified percentiles, so we get a wide range on those factors.
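The rank-and-pick idea described above can be sketched on a toy series (an illustration with made-up numbers, not the notebook's data):

```python
import pandas as pd

# Toy "sugar per 100g" values for ten hypothetical products.
sugars = pd.Series([2, 35, 12, 50, 8, 27, 44, 19, 5, 31])

# Rank the products, then keep the items whose rank sits at the
# 10th, 50th and 90th percentile of that ranking.
ranks = sugars.rank(method="first")
high = ranks.max()
pick_percentiles = [0.1, 0.5, 0.9]
pick_index = [max(1, round(p * high)) for p in pick_percentiles]
picks = sugars[ranks.isin(pick_index)]
print(sorted(picks.tolist()))  # → [2, 19, 44]
```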
End of explanation
"""
|
kubeflow/community | scripts/company_stats.ipynb | apache-2.0 | # NOTE: The RuntimeWarnings (if any) are harmless. See ContinuumIO/anaconda-issues#6678.
from pandas.io import gbq
import pandas as pd
import getpass
import subprocess
# Configuration Variables. Modify as desired.
PROJECT = subprocess.check_output(["gcloud", "config", "get-value", "project"]).strip().decode()
"""
Explanation: Company Contributions
This notebook attempts to compute the number of contributions (currently just PRs) made by various companies.
It relies on github_users.json to map users to affiliations.
Note that github_users.json is outdated and needs to be updated.
End of explanation
"""
import datetime
month = datetime.datetime.now().month
year = datetime.datetime.now().year
num_months = 12
months = []
for i in range(num_months):
months.append("\"{0}{1:02}\"".format(year, month))
month -= 1
if month == 0:
month = 12
year -=1
"""
Explanation: Setup Authorization
If you are using a service account, run:
%%bash
# Activate the service account provided by Kubeflow.
gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
If you are running with user credentials, run:
gcloud auth application-default login
End of explanation
"""
import json
import os
import requests
if not os.path.exists(".cache"):
os.makedirs(".cache")
users_file = os.path.join(".cache", "github_users.json")
if not os.path.exists(users_file):
url = "https://github.com/kubeflow/community/blob/master/devstats/data/github_users.json?raw=true"
r = requests.get(url, allow_redirects=True)
with open(users_file, "wb") as hf:
hf.write(r.content)
with open(users_file) as hf:
data = json.load(hf)
users=pd.DataFrame(data)
users = users[["login", "company"]]
# Dedupe companies
c = ["cisco", "datawire", "google", "ibm", "intel", "teradata", "red hat"]
known_companies = dict(zip(c,c))
known_companies["redhat"] = "red hat"
def normalize_company(name):
if name is None:
return "None"
name = name.strip().lower().strip("!").strip("@")
for k, v in known_companies.items():
if k in name:
return v
return name
users["company"] = users["company"].apply(normalize_company)
"""
Explanation: Read in user affiliations
github_users.json is produced using CNCF scripts
There can be multiple entries for a user, showing their company affiliation during different time periods.
End of explanation
"""
def combine_company(names):
for i in names:
if i != "None":
return i
return None
user_map= users.groupby("login")["company"].apply(combine_company)
# You can now look up users as user_map[actor]
user_map["jlewi"]
"""
Explanation: Users can have multiple entries
We pick the first non-None entry.
TODO(jlewi): We should find a better way to combine multiple entries.
End of explanation
"""
query = """
SELECT
DATE(created_at) AS pr_date,
actor.id,
actor.login
FROM `githubarchive.month.*`
WHERE
_TABLE_SUFFIX IN ({0})
AND type = 'PullRequestEvent'
AND org.login = 'kubeflow'
AND JSON_EXTRACT(payload, '$.action') IN ('"opened"')
""".format(",".join(months))
prs=gbq.read_gbq(str(query), dialect='standard', project_id=PROJECT)
p=pd.Series(data=prs["id"].values,index=prs["pr_date"])
p=p.sort_index()
prs
prs["company"] = user_map[prs["login"]].values
d=prs[["pr_date", "company"]]
d["count"]=1
pr_counts = d.pivot_table("count", columns="company", index="pr_date", aggfunc="sum", fill_value=0)
# Some solutions here: https://stackoverflow.com/questions/46470743/how-to-efficiently-compute-a-rolling-unique-count-in-a-pandas-time-series
# Need to figure out how to do a time based window
counts = pr_counts.rolling('28d').sum()
"""
Explanation: Attribute PRs to Companies
We use BigQuery to get a list of PRs
We then map each PR to the company of its author
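The '28d' time-based rolling sum applied above can be illustrated on a toy frame (a minimal sketch using a '2d' window):

```python
import pandas as pd

# Daily event counts indexed by date.
idx = pd.date_range("2019-01-01", periods=5, freq="D")
daily = pd.DataFrame({"count": [1, 2, 0, 3, 1]}, index=idx)

# A '2d' window sums each day together with the previous day,
# just as '28d' above sums a trailing four-week window.
rolling = daily.rolling("2d").sum()
print(rolling["count"].tolist())  # → [1.0, 3.0, 2.0, 3.0, 4.0]
```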
End of explanation
"""
import altair as alt

names = ["google", "cisco", "microsoft", "arrikto", "ibm", "seldon"]
companies_df = counts[names]
companies_df["day"] = companies_df.index
companies = pd.melt(companies_df, value_vars=names, id_vars=["day"])
chart = alt.Chart(companies, title= "PRs")
point = chart.mark_point().encode(
x= alt.X('day', title = "Day"),
y=alt.Y("value", title="# PRs"),
color="company",
)
point.interactive()
"""
Explanation: Make a plot
Plot a subset of companies
End of explanation
"""
|
aqreed/PyVLM | issues/issue_CD_convergence.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import MaxNLocator
from pyvlm.vlm import PyVLM
"""
Explanation: As the mesh density increases, CD does not converge (unlike CL).
Import of the needed libraries:
End of explanation
"""
C = np.array([0, 1.03])
D = np.array([0.414, 8.14])
leading_edges_coord = [C, D]
chord_lengths = [2.15, 1.24]
"""
Explanation: Definition of the geometry of the wing:
Y ^ D +--+
| / \
| / \
| / \
|/ \
C +------------+
+-------------------->
X
End of explanation
"""
example_wing = PyVLM()
N = 9
X, Y = np.zeros(N-1), np.zeros(N-1) # to store the meshgrid
Zl, Zd = np.zeros((N-1, N-1)), np.zeros((N-1, N-1)) # to store the values of CL and CD
for n in range(1, N): # number of panels (chordwise)
X[n-1] = n
for m in range(1, N): # number of panels (spanwise)
Y[m-1] = m
example_wing.add_wing(leading_edges_coord, chord_lengths, n, m) # a wing is defined for each mesh density
# configuration (n, m)
CL, CD = example_wing.vlm(alpha=0) # the VLM method is called for an alpha=0
Zl[n-1][m-1], Zd[n-1][m-1] = CL, CD
# print('n =%2s m =%2s CL = %6.3f CD = %6.3f ' % (n, m, CL, CD))
example_wing.reset() # the VLM attributes are deleted
"""
Explanation: The method is applied to increasingly dense mesh grids:
End of explanation
"""
fig = plt.figure()
ax1 = fig.add_subplot(2, 1, 1, projection='3d')
ax2 = fig.add_subplot(2, 1, 2, projection='3d')
X, Y = np.meshgrid(X, Y)
ax1.plot_surface(X, Y, Zl)
ax1.set_title("CL")
ax2.plot_surface(X, Y, Zd)
ax2.set_title("CD")
ax1.set_xlabel('m - spanwise')
ax1.set_ylabel('n - chordwise')
ax2.set_xlabel('m - spanwise')
ax2.set_ylabel('n - chordwise')
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
ax1.yaxis.set_major_locator(MaxNLocator(integer=True))
ax2.xaxis.set_major_locator(MaxNLocator(integer=True))
ax2.yaxis.set_major_locator(MaxNLocator(integer=True))
ax1.set_zlabel('cl')
ax2.set_zlabel('cd')
plt.show()
"""
Explanation: The results are plotted:
End of explanation
"""
|
csaladenes/aviation | code/airport_parser-Copy1.ipynb | mit | L=json.loads(file('../json/L.json','r').read())
M=json.loads(file('../json/M.json','r').read())
N=json.loads(file('../json/N.json','r').read())
import requests
AP={}
for c in M:
if c not in AP:AP[c]={}
for i in range(len(L[c])):
AP[c][N[c][i]]=L[c][i]
sch={}
"""
Explanation: Load airports of each country
End of explanation
"""
baseurl='https://www.airportia.com/'
import requests, urllib2
SC={}
"""
Explanation: Record schedules over several weeks, then augment counts with weekly flight numbers.
Seasonal and seasonal-charter flights count as once per week for 3 months, i.e. 12/52 per week. TGM is handled separately, since its history is in the past.
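The weekly weighting mentioned above is simple arithmetic; as a sketch:

```python
# A seasonal (or seasonal charter) flight runs weekly for roughly
# 3 months (~12 weeks) out of 52, so on an average week it counts
# as 12/52 of a year-round weekly flight.
weeks_active = 12.0
weeks_per_year = 52.0
seasonal_weight = weeks_active / weeks_per_year
print(round(seasonal_weight, 3))  # → 0.231
```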
End of explanation
"""
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'arrivals/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SC[c]=sch
"""
Explanation: parse Arrivals
End of explanation
"""
SD={}
for c in AP:
print c
airportialinks=AP[c]
sch={}
for i in airportialinks:
print i,
if i not in sch:sch[i]={}
#march 4-31 = 4 weeks
for d in range (4,32):
if d not in sch[i]:
try:
#capture token
url=baseurl+airportialinks[i]+'departures/201703'+str(d)
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
m=s.get(url).content
sch[i][url]=pd.read_html(m)[0]
except: pass #print 'no tables',i,d
print
SD[c]=sch
SC
"""
Explanation: parse Departures
End of explanation
"""
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['To']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf=mdf.replace('Hahn','Frankfurt')
mdf=mdf.replace('Hahn HHN','Frankfurt HHN')
mdf['City']=[i[:i.rfind(' ')] for i in mdf['From']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['From']]
file("mdf_ae_arrv.json",'w').write(json.dumps(mdf.reset_index().to_json()))
len(mdf)
airlines=set(mdf['Airline'])
cities=set(mdf['City'])
file("cities_ae_arrv.json",'w').write(json.dumps(list(cities)))
file("airlines_ae_arrv.json",'w').write(json.dumps(list(airlines)))
citycoords={}
for i in cities:
if i not in citycoords:
if i==u'Birmingham': z='Birmingham, UK'
elif i==u'Valencia': z='Valencia, Spain'
elif i==u'Naples': z='Naples, Italy'
elif i==u'St. Petersburg': z='St. Petersburg, Russia'
elif i==u'Bristol': z='Bristol, UK'
elif i==u'Victoria': z='Victoria, Seychelles'
elif i==u'Washington': z='Washington, DC'
elif i==u'Odessa': z='Odessa, Ukraine'
else: z=i
citycoords[i]=Geocoder(apik).geocode(z)
print i
citysave={}
for i in citycoords:
citysave[i]={"coords":citycoords[i][0].coordinates,
"country":citycoords[i][0].country}
file("citysave_ae_arrv.json",'w').write(json.dumps(citysave))
"""
Explanation:
End of explanation
"""
|
oemof/feedinlib | examples/load_era5_weather_data.ipynb | mit | from feedinlib import era5
"""
Explanation: Example for ERA5 weather data download
This example shows you how to download ERA5 weather data from the Climate Data Store (CDS) and store it locally. Furthermore, it shows how to convert the weather data to the format needed by the pvlib and windpowerlib.
In order to download ERA5 weather data you need an account at the CDS.
Furthermore, you need to install the cdsapi package. See here for installation details.
When downloading the data using the API, your request gets queued and may take a while to complete. All actual calls of the data download are therefore commented out to avoid unintended downloads. Instead, an example netcdf file is provided.
Download data for single coordinate
Download data for a region
Convert data into pvlib and windpowerlib format
End of explanation
"""
latitude = 52.47
longitude = 13.30
"""
Explanation: Download data for single coordinate <a class="anchor" id="single_loc"></a>
To download data for a single location you have to specify latitude and longitude of the desired location. Data will be retrieved for the nearest weather data point to that location.
End of explanation
"""
# set start and end date (end date will be included
# in the time period for which data is downloaded)
start_date, end_date = '2017-01-01', '2017-12-31'
# set variable set to download
variable = "pvlib"
"""
Explanation: Besides a location you have to specify a time period for which you would like to download the data as well as the weather variables you need. The feedinlib provides predefined sets of variables that are needed to use the pvlib and windpowerlib. These can be applied by setting the variable parameter to "pvlib" or "windpowerlib", as shown below. If you want to download data for both libraries you can set variable to "feedinlib".
Concerning the start and end date, keep in mind that all timestamps in the feedinlib are in UTC. So if you later on want to convert the data to a different time zone, the data may not cover the whole period you intended to download. To avoid this set your start date to one day before the start of your required period if you are East of the zero meridian or your end date to one day after your required period ends if you are West of the zero meridian.
End of explanation
"""
target_file = 'ERA5_pvlib_2017.nc'
"""
Explanation: If you want to store the downloaded data, which is recommended as download may take a while, you may provide a filename (including path) to save data to.
End of explanation
"""
latitude = [52.3, 52.7] # [latitude south, latitude north]
longitude = [13.1, 13.6] # [longitude west, longitude east]
target_file = 'ERA5_example_data.nc'
"""
Explanation: Now we can retrieve the data:
```python
get windpowerlib data for specified location
ds = era5.get_era5_data_from_datespan_and_position(
variable=variable,
start_date=start_date, end_date=end_date,
latitude=latitude, longitude=longitude,
target_file=target_file)
```
bash
2020-01-12 20:53:56,465 INFO Welcome to the CDS
2020-01-12 20:53:56,469 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-single-levels
2020-01-12 20:53:57,023 INFO Request is queued
2020-01-12 20:53:58,085 INFO Request is running
2020-01-12 21:48:24,341 INFO Request is completed
2020-01-12 21:48:24,344 INFO Downloading request for 5 variables to ERA5_pvlib_2017.nc
2020-01-12 21:48:24,346 INFO Downloading http://136.156.132.153/cache-compute-0002/cache/data7/adaptor.mars.internal-1578858837.3774962-24514-9-8081d664-0a1e-48b9-951c-bc9b8e2caa44.nc to ERA5_pvlib_2017.nc (121.9K)
2020-01-12 21:48:24,653 INFO Download rate 400.6K/s
Download data for a region<a class="anchor" id="region"></a>
To download weather data for a region, instead of providing a single value for latitude and longitude you have to provide each as a list in the following form:
End of explanation
"""
era5_netcdf_filename = 'ERA5_example_data.nc'
# reimport downloaded data
import xarray as xr
ds = xr.open_dataset(era5_netcdf_filename)
ds
"""
Explanation: ```python
get pvlib data for specified area
ds_berlin = era5.get_era5_data_from_datespan_and_position(
variable=variable,
start_date=start_date, end_date=end_date,
latitude=latitude, longitude=longitude,
target_file=target_file)
```
bash
2020-01-12 22:03:08,085 INFO Welcome to the CDS
2020-01-12 22:03:08,086 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-single-levels
2020-01-12 22:03:08,756 INFO Request is queued
2020-01-12 22:03:09,809 INFO Request is running
2020-01-12 22:55:34,863 INFO Request is completed
2020-01-12 22:55:34,864 INFO Downloading request for 5 variables to ERA5_example_data.nc
2020-01-12 22:55:34,864 INFO Downloading http://136.156.132.235/cache-compute-0006/cache/data5/adaptor.mars.internal-1578862989.052999-21409-23-831562a8-e0b2-4b19-8463-e14931a3f630.nc to ERA5_example_data.nc (720.7K)
2020-01-12 22:55:35,301 INFO Download rate 1.6M/s
If you want weather data for the whole world, you may leave latitude and longitude unspecified.
```python
get feedinlib data (includes pvlib and windpowerlib data)
for the whole world
ds = era5.get_era5_data_from_datespan_and_position(
variable="feedinlib",
start_date=start_date, end_date=end_date,
target_file=target_file)
```
Convert data into pvlib and windpowerlib format<a class="anchor" id="convert"></a>
In order to use the weather data for your feed-in calculations using the pvlib and windpowerlib it has to be converted into the required format. This section shows you how this is done.
End of explanation
"""
# get all weather data points in dataset
from shapely.geometry import Point
import geopandas as gpd
points = []
for x in ds.longitude:
for y in ds.latitude:
points.append(Point(x, y))
points_df = gpd.GeoDataFrame({'geometry': points})
# read provided shape file
region_shape = gpd.read_file('berlin_shape.geojson')
# plot weather data points on map
base = region_shape.plot(color='white', edgecolor='black')
points_df.plot(ax=base, marker='o', color='red', markersize=5);
"""
Explanation: Let's first plot the downloaded weather data points on a map.
End of explanation
"""
# for single location (as list of longitude and latitude)
single_location = [13.2, 52.4]
pvlib_df = era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=single_location)
pvlib_df.head()
# for whole region
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib').head()
# specify rectangular area
area = [(13.2, 13.7), (52.4, 52.8)]
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=area).head()
# specify area giving a Polygon
from shapely.geometry import Polygon
lat_point_list = [52.3, 52.3, 52.65]
lon_point_list = [13.0, 13.4, 13.4]
area = Polygon(zip(lon_point_list, lat_point_list))
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=area).head()
# export to csv
pvlib_df.to_csv('pvlib_df_ERA5.csv')
"""
Explanation: Now let's convert the weather data to the pvlib format.
With the area parameter you can specify whether you want to retrieve weather dataframes for a single location, a region within the downloaded region, or the whole downloaded region.
In case area is not a single location, the index of the resulting dataframe will be a multi-index with the levels (time, latitude, longitude). Be aware that in order to use it for pvlib or windpowerlib calculations you need to select a single location.
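Selecting one location from such a multi-indexed dataframe is plain pandas; here is a minimal sketch on a toy frame with the same (time, latitude, longitude) index structure (the column name and coordinate values are made up):

```python
import pandas as pd

# toy frame with a (time, latitude, longitude) MultiIndex
index = pd.MultiIndex.from_product(
    [pd.to_datetime(['2017-01-01 00:00', '2017-01-01 01:00']),
     [52.4, 52.8],   # latitude
     [13.2, 13.7]],  # longitude
    names=['time', 'latitude', 'longitude'])
df = pd.DataFrame({'ghi': range(8)}, index=index)

# cross-section: pick one location, keeping only the time level
single = df.xs((52.4, 13.2), level=('latitude', 'longitude'))
print(single)
```

The result is indexed by time only and can then be fed into a per-location calculation.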
End of explanation
"""
# for single location
windpowerlib_df = era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='windpowerlib',
area=single_location)
windpowerlib_df.head()
"""
Explanation: The conversion to the windpowerlib format is analogous to the pvlib conversion.
End of explanation
"""
# get weather data in pvlib format for July
start = '2017-07-01'
end = '2017-07-31'
era5.weather_df_from_era5(
era5_netcdf_filename=era5_netcdf_filename,
lib='pvlib',
area=single_location,
start=start,
end=end).head()
"""
Explanation: Furthermore, it is possible to specify a start and end date to retrieve data for. They must be provided as something that can be converted to a timestamp, e.g. '2013-07-02'.
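Under the hood this is ordinary pandas timestamp handling; a small, feedinlib-independent sketch of slicing a time-indexed series with date strings:

```python
import pandas as pd

idx = pd.to_datetime(['2017-06-30', '2017-07-01', '2017-07-15', '2017-08-01'])
s = pd.Series([0, 1, 2, 3], index=idx)

# label-based slicing with strings that convert to timestamps;
# both endpoints are inclusive
july = s.loc['2017-07-01':'2017-07-31']
print(july)
```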
End of explanation
"""
from feedinlib import Photovoltaic
system_data = {
'module_name': 'Advent_Solar_Ventura_210___2008_',
'inverter_name': 'ABB__MICRO_0_25_I_OUTD_US_208__208V_',
'azimuth': 180,
'tilt': 30,
'albedo': 0.2}
pv_system = Photovoltaic(**system_data)
feedin = pv_system.feedin(
weather=pvlib_df,
location=(52.4, 13.2))
feedin.plot()
from feedinlib import WindPowerPlant
turbine_data = {
'turbine_type': 'E-101/3050',
'hub_height': 135
}
wind_turbine = WindPowerPlant(**turbine_data)
feedin = wind_turbine.feedin(
weather=windpowerlib_df)
feedin.plot()
"""
Explanation: The following shows in short how to use the weather data for feed-in calculations and mainly serves as a test of whether the conversion works correctly. More detailed explanations of feed-in calculations using the feedinlib can be found in the notebooks run_pvlib_model.ipynb and run_windpowerlib_turbine_model.ipynb.
End of explanation
"""
|
sbussmann/sensor-fusion | Code/Resample Sensor Data to 10 Hz Sampling Rate.ipynb | mit | import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# load the raw data
df = pd.read_csv('../Data/shaneiphone_exp2.csv')
"""
Explanation: Goal: resample XYZ signals to 10 Hz sampling rate
Experiment: I drove my car from home to Censio and back. My phone rested on my seat facing forwards for the trip to Censio. Nick was in the passenger seat with his phone in his pocket. For the return trip, we swapped phones. The total time for the trip was about 15 minutes.
End of explanation
"""
sampling_rate = 1 / np.diff(df['motionTimestamp_sinceReboot'])
plt.hist(sampling_rate)
"""
Explanation: For SensorLog data, sampling rate can be obtained directly from 'motionTimestamp_sinceReboot'
End of explanation
"""
df['DateTime'] = pd.DatetimeIndex(df['loggingTime'])
df = df.set_index('DateTime')
"""
Explanation: The sampling rate was 15 Hz most of the time. Sometimes it was 30 Hz. Very occasionally it was close to 10 Hz. This is somewhat annoying, so let's resample to 10 Hz.
First, we have to make a DateTime column from the loggingTime columns and set it as the index
End of explanation
"""
# 100L corresponds to 100 milliseconds
rdf = df.resample('100L')
# check for nans
rdf.isnull().sum()
"""
Explanation: Then we just use pandas built-in resampling function
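One caveat: in recent pandas versions resample returns a lazy Resampler object rather than a dataframe, so an aggregation such as .mean() has to be chained on ('100ms' is the non-deprecated spelling of '100L'); a minimal sketch:

```python
import pandas as pd

idx = pd.to_datetime(['2015-01-01 00:00:00.000',
                      '2015-01-01 00:00:00.066',
                      '2015-01-01 00:00:00.133',
                      '2015-01-01 00:00:00.200'])
df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]}, index=idx)

# 100 ms bins; mean() averages the samples that fall into each bin
rdf = df.resample('100ms').mean()
print(rdf)
```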
End of explanation
"""
rdf = rdf.dropna()
resampled_rate = 1 / np.diff(rdf['motionTimestamp_sinceReboot'])
plt.hist(resampled_rate, bins=20)
"""
Explanation: 3 rows in the resampled dataframe are NaNs. Just drop them.
End of explanation
"""
|
feststelltaste/software-analytics | prototypes/Reading a Git repo's commit history with Pandas efficiently (Word counts edition).ipynb | gpl-3.0 | import git
import pandas as pd
from io import StringIO
GIT_REPO_PATH = r'../../spring-petclinic/'
repo = git.Repo(GIT_REPO_PATH)
git_bin = repo.git
git_log = git_bin.execute('git log --pretty=format:"%h\t%at\t%aN\t%s"')
commits = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['sha', 'timestamp', 'author', 'message']
)
commits['word_count'] = commits['message'].str.count(" ")+1
commits.head()
"""
Explanation: Introduction
There are multiple reasons for analyzing a version control system like your Git repository. See for example Adam Tornhill's book "Your Code as a Crime Scene" or his upcoming book "Software Design X-Rays" for plenty of inspirations:
You can
- analyze knowledge islands
- distinguish often changing code from stable code parts
- identify code that is temporal coupled to other code
Having the necessary data for those analyses in a Pandas <tt>DataFrame</tt> gives you many possibilities to quickly gain insights into the evolution of your software system in various ways.
The idea
In a preceding blog post, I showed you a way to read a Git log file with Pandas' DataFrame and GitPython. Looking back, this was really complicated and tedious. So, with a few tricks we can do it much better this time:
We use GitPython's feature to directly access an underlying Git installation. This is much faster than using GitPython's object representation of the commits and makes it possible to have everything we need in one notebook.
We use in-memory reading by using StringIO to avoid unnecessary file access. This avoids storing the Git output on disk and reading it from disk again. This method is faster, too.
We also exploit Pandas' <tt>read_csv</tt> method even more. This makes the transformation of the Git log into a <tt>DataFrame</tt> as easy as pie.
Exporting the Git repo's history
The first step is to connect GitPython with the Git repo. If we have an instance of the repo, we can gain access to the underlying Git installation of the operating system via <tt>repo.git</tt>.
In our case, we tap the Spring PetClinic repo, a small sample application for the Spring framework (I also analyzed the huge Linux repo, works as well).
With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
We use a little trick to make sure, that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions and deletions for the affected file name in one line – separated by the tabulator character <tt>\t</tt>:
<p>
<tt>1<b>\t</b>1<b>\t</b>some/file/name.ext</tt>
</p>
We use the same tabular separator <tt>\t</tt> for the format string:
<p>
<tt>%h<b>\t</b>%at<b>\t</b>%aN</tt>
</p>
And here is the trick: Additionally, we add the number of tabulators of the file's statistics plus an additional tabulator in front of the format string to pretend that there is an empty file statistics' information in front of each commit meta data string.
The results looks like this:
<p>
<tt>\t\t\t%h\t%at\t%aN</tt>
</p>
Note: If you want to export the Git log on the command line into a file, you need to use the horizontal tab <tt>%x09</tt> as separator instead of <tt>\t</tt> in the format string. Otherwise, the trick doesn't work (I'll show the corresponding format string at the end of this article).
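To see the padding trick in action, here is a minimal sketch on a tiny synthetic two-commit log (the SHAs, authors and file names are made up): the three leading tabs make each metadata line parse into the last three columns, leaving the file-statistics columns empty.

```python
import pandas as pd
from io import StringIO

log = ("\t\t\tabc1234\t1500000000\tAlice\n"
       "1\t2\tsrc/a.java\n"
       "\t\t\tdef5678\t1500000100\tBob\n"
       "3\t0\tsrc/b.java\n")

df = pd.read_csv(StringIO(log), sep="\t", header=None,
                 names=['additions', 'deletions', 'filename',
                        'sha', 'timestamp', 'author'])
print(df)
```

Metadata rows end up with NaN file statistics and file rows with NaN metadata — exactly the layout that the forward fill exploits later on.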
OK, let's execute the Git log export:
End of explanation
"""
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author', "message"]
)
commits_raw.head()
"""
Explanation: Reading the Git log
We now have the complete file history in the <tt>git_log</tt> variable. Don't let all the <tt>\t</tt> characters confuse you.
Let's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. Because we can't provide a file path to the CSV data, we have to use StringIO to read our in-memory buffered content.
Pandas reads the first line of the tab-separated "file", sees the many tab-separated columns and parses all other lines in the same column layout. Additionally, we set <tt>header</tt> to <tt>None</tt> because we don't have one, and provide nice names for all the columns that we read in.
End of explanation
"""
commits = commits_raw[['additions', 'deletions', 'filename']]\
.join(commits_raw[['sha', 'timestamp', 'author', 'message']].fillna(method='ffill'))
commits.head()
"""
Explanation: Now we have two different kinds of content for the rows:
- The commit meta data without file statistics (see rows with the indexes 0, 2 and 4 above)
- The file statistics without the commit meta data (see rows with the indexes 1 and 3 above)
But we are interested in the commit meta data for each file's statistic. For this, we forward fill (<tt>ffill</tt>) the empty commit meta data entries of the file statistics rows with the preceding commit's meta data via the <tt>DataFrame</tt>'s <tt>fillna</tt> method and <tt>join</tt> this data with the existing columns of the file statistics.
End of explanation
"""
commits = commits.dropna()
commits.head()
"""
Explanation: This gives use the commit meta data for each file change!
Because we aren't interested in the pure commit meta data anymore, we drop all those rows that don't contain file statistics aka contain null values via <tt>dropna</tt>.
End of explanation
"""
# reading
git_log = pd.read_csv(
"../../spring-petclinic/git.log",
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author'])
# converting in "one line"
git_log[['additions', 'deletions', 'filename']]\
.join(git_log[['sha', 'timestamp', 'author']]\
.fillna(method='ffill'))\
.dropna().head()
"""
Explanation: And that's it! We are finished!
In summary, you just need a "one-liner" for converting the Git log file output that was exported with
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git.log
and read into a <tt>DataFrame</tt>:
End of explanation
"""
|
andreyf/machine-learning-examples | linear_models/peer_review_linreg_height_weight.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Linear regression and the main Python libraries for data analysis and scientific computing
This assignment is devoted to linear regression. Using the example of predicting a person's height from their weight, you will see the math behind it and, along the way, get acquainted with the main Python libraries needed for the rest of this course.
Materials
Lectures of this course on linear models and gradient descent
Documentation for the NumPy and SciPy libraries
Documentation for the Matplotlib library
Documentation for the Pandas library
Pandas Cheat Sheet
Documentation for the Seaborn library
Task 1. Exploratory data analysis with Pandas
In this task we will use the SOCR data on the height and weight of 25,000 teenagers.
[1]. If you do not have the Seaborn library installed, run the command conda install seaborn in a terminal. (Seaborn is not included in the Anaconda distribution, but it provides convenient high-level functionality for data visualization.)
End of explanation
"""
data = pd.read_csv('weights_heights.csv', index_col='Index')
"""
Explanation: Read the height and weight data (weights_heights.csv, attached to the assignment) into a Pandas DataFrame object:
End of explanation
"""
data.plot(y='Height', kind='hist',
color='red', title='Height (inch.) distribution')
"""
Explanation: Most often, the first thing to do after reading in data is to look at the first few records. This way you can catch data-reading errors (for example, when instead of 10 columns you end up with a single one whose name contains 9 semicolons). It also lets you get acquainted with the data — at the very least, look at the features and their nature (quantitative, categorical, etc.).
After that, it is worth plotting histograms of the feature distributions — again, this helps to understand the nature of a feature (whether its distribution is power-law, normal, or something else). Histograms also help to spot values that differ strongly from the rest — "outliers" in the data.
Histograms are conveniently built with the plot method of a Pandas DataFrame with the argument kind='hist'.
Example. Let's plot a histogram of the teenagers' height distribution in the sample data. We use the plot method of the DataFrame data with the argument y='Height' (the feature whose distribution we are plotting).
"""
data.head(5)
data.plot(y='Weight', kind='hist',
color='green', title='Weight (lb.) distribution')
"""
Explanation: Arguments:
y='Height' — the feature whose distribution we are plotting
kind='hist' — means that a histogram is built
color='red' — the color
[2]. Look at the first 5 records using the head method of the Pandas DataFrame. Plot a histogram of the weight distribution using the plot method of the Pandas DataFrame. Make the histogram green and give the picture a title.
End of explanation
"""
def make_bmi(height_inch, weight_pound):
METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
return (weight_pound / KILO_TO_POUND) / \
(height_inch / METER_TO_INCH) ** 2
data['BMI'] = data.apply(lambda row: make_bmi(row['Height'],
row['Weight']), axis=1)
"""
Explanation: One effective method of exploratory data analysis is plotting pairwise feature dependencies. This creates $m \times m$ plots (where m is the number of features): the diagonal shows histograms of the feature distributions, and the off-diagonal cells show scatter plots of pairs of features. This can be done with the $scatter_matrix$ method of a Pandas DataFrame or the pairplot method of the Seaborn library.
To illustrate this method, it is more interesting to add a third feature. Let's create the Body Mass Index (BMI) feature. To do so, we use the convenient combination of the apply method of a Pandas DataFrame and Python lambda functions.
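As a side note on the design choice: for a purely arithmetic feature like BMI, the apply/lambda combination can be replaced by vectorized column arithmetic, which is usually much faster; a minimal sketch on a toy frame (the sample values are made up):

```python
import pandas as pd

df = pd.DataFrame({'Height': [65.8, 71.5], 'Weight': [112.99, 136.49]})

METER_TO_INCH, KILO_TO_POUND = 39.37, 2.20462
# operates on whole columns at once -- no per-row Python function calls
df['BMI'] = (df['Weight'] / KILO_TO_POUND) / (df['Height'] / METER_TO_INCH) ** 2
print(df['BMI'])
```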
End of explanation
"""
sns.pairplot(data)
"""
Explanation: [3]. Build a figure showing the pairwise dependencies of the features 'Height', 'Weight' and 'BMI'. Use the pairplot method of the Seaborn library.
End of explanation
"""
def weight_category(weight):
if weight < 120:
return 1
elif weight >= 150:
return 3
else:
return 2
data['weight_cat'] = data['Weight'].apply(weight_category)
bxp = sns.boxplot(x="weight_cat", y="Height", data=data)
bxp.set_xlabel('Weight category')
bxp.set_ylabel('Height')
"""
Explanation: Often in exploratory data analysis you need to study the dependence of a quantitative feature on a categorical one (say, of salary on an employee's gender). Here "box-and-whisker" plots — the boxplots of the Seaborn library — come in handy. A box plot is a compact way to show the statistics of a real-valued feature (mean and quartiles) across the different values of a categorical feature. It also helps to track down "outliers" — observations in which the value of the real-valued feature differs strongly from the others.
[4]. Create a new feature weight_cat in the DataFrame data that takes 3 values: 1 if the weight is less than 120 pounds (~54 kg), 3 if the weight is greater than or equal to 150 pounds (~68 kg), and 2 otherwise. Build a box plot showing the dependence of height on the weight category. Use the boxplot method of the Seaborn library and the apply method of the Pandas DataFrame. Label the y-axis "Height" and the x-axis "Weight category".
End of explanation
"""
data.plot(y='Height', x='Weight', kind='scatter', title='Height vs. weight')
"""
Explanation: [5]. Build a scatter plot of height versus weight using the plot method of the Pandas DataFrame with the argument kind='scatter'. Give the picture a title.
End of explanation
"""
def error(w):
return np.sum((data['Height'] - (w[0] + w[1] * data['Weight'])) ** 2)
"""
Explanation: Task 2. Minimizing the squared error
In its simplest formulation, the problem of predicting the value of a real-valued feature from other features (the regression problem) is solved by minimizing a quadratic error function.
[6]. Write a function that, given two parameters $w_0$ and $w_1$, computes the squared error of approximating the dependence of height $y$ on weight $x$ with the straight line $y = w_0 + w_1 * x$:
$$error(w_0, w_1) = \sum_{i=1}^n {(y_i - (w_0 + w_1 * x_i))}^2 $$
Here $n$ is the number of observations in the dataset, and $y_i$ and $x_i$ are the height and weight of the $i$-th person in the dataset.
End of explanation
"""
x = np.linspace(60,180)
y1 = 60 + 0.05 * x
y2 = 50 + 0.16 * x
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Height vs. weight')
plt.xlabel('Weight')
plt.ylabel('Height')
plt.plot(x,y1)
plt.plot(x,y2)
"""
Explanation: So we are solving the following problem: how do we draw a straight line through the cloud of points corresponding to the observations in our dataset, in the space of the features "Height" and "Weight", so as to minimize the functional from item 6? To start, let's display a couple of lines and make sure they capture the height-weight dependence poorly.
[7]. On the plot from item 5 of Task 1, draw the two straight lines corresponding to the parameter values ($w_0, w_1) = (60, 0.05)$ and ($w_0, w_1) = (50, 0.16)$. Use the plot method from matplotlib.pyplot and the linspace method of the NumPy library. Label the axes and the plot.
End of explanation
"""
w1 = np.linspace(-10, 10)
w0 = [50] * len(w1)
w = zip(w0, w1)
e = []
for weight in w:
e.append(error(weight))
plt.plot(w1, e)
plt.xlabel('w1')
plt.ylabel('error')
plt.title('Error as a function of w1 at w0 = 50')
"""
Explanation: Minimizing a quadratic error function is a relatively simple task because the function is convex. Many optimization methods exist for such a problem. Let's look at how the error function depends on one parameter (the slope of the line) when the other parameter (the intercept) is fixed.
[8]. Plot the error function computed in item 6 as a function of the parameter $w_1$ with $w_0$ = 50. Label the axes and the plot.
End of explanation
"""
from scipy.optimize import minimize_scalar
def error50(w1):
return np.sum((data['Height']-(50+w1*data['Weight']))**2)
w1_opt = minimize_scalar(error50, bounds=(-5,5), method='bounded')
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Optimal slope at w0 = 50')
plt.plot(x, 50 + w1_opt.x * x)
"""
Explanation: Now let's find, with an optimization method, the "optimal" slope of the straight line approximating the dependence of height on weight, with the coefficient fixed at $w_0 = 50$.
[9]. Using the minimize_scalar method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_1$ in the range [-5,5]. On the plot from item 5 of Task 1, draw the straight line corresponding to the parameter values ($w_0$, $w_1$) = (50, $w_1_opt$), where $w_1_opt$ is the optimal value of the parameter $w_1$ found above.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
"""
Explanation: When analyzing multidimensional data, you often want to get an intuitive idea of the nature of the data through visualization. Alas, when the number of features exceeds 3, such pictures cannot be drawn. In practice, to visualize data in 2D and 3D, 2 or, respectively, 3 principal components are extracted from the data (we will see exactly how this is done later in the course) and the data is displayed on a plane or in a volume.
Let's see how to draw 3D pictures in Python, using the example of the function $z(x,y) = sin(\sqrt{x^2+y^2})$ for values of $x$ and $y$ from the interval [-5,5] with a step of 0.25.
End of explanation
"""
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
# Create NumPy arrays with the point coordinates along the X and Y axes.
# We use the meshgrid method, which builds coordinate matrices
# from the coordinate vectors. Then define the desired function Z(x, y).
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
Z = np.sin(np.sqrt(X**2 + Y**2))
# Finally, we use the *plot_surface* method of the
# Axes3DSubplot object. We also label the axes.
surf = ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
"""
Explanation: We create objects of type matplotlib.figure.Figure (the figure) and matplotlib.axes._subplots.Axes3DSubplot (the axes).
End of explanation
"""
w0 = np.arange(-100, 100.25, 0.25)
w1 = np.arange(-5, 5, 0.25)
w0,w1 = np.meshgrid(w0, w1)
def error_arr(w0,w1):
a=w0.shape[0]
b=w0.shape[1]
Z=np.zeros((a,b))
for i in range(a):
for j in range(b):
Z[i,j]=error((w0[i,j],w1[i,j]))
return Z
z = error_arr(w0,w1)
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(w0, w1, z)
ax.set_xlabel('Intercept')
ax.set_ylabel('Slope')
ax.set_zlabel('Error')
plt.show()
"""
Explanation: [10]. Build a 3D plot of the error function computed in item 6 as a function of the parameters $w_0$ and $w_1$. Label the $x$ axis «Intercept», the $y$ axis «Slope», and the $z$ axis «Error».
End of explanation
"""
from scipy.optimize import minimize
res = minimize(error, [0,0], method='L-BFGS-B', bounds=[(-100,100),(-5,5)])
w_opt = res.x
print("w0 = %s\nw1 = %s" % (w_opt[0], w_opt[1]))
plt.figure()
data.plot(y='Height', x='Weight', kind='scatter', color='green', title='Optimal line')
plt.plot(x,w_opt[0]+w_opt[1]*x)
"""
Explanation: [11]. Using the minimize method from scipy.optimize, find the minimum of the function defined in item 6 for values of the parameter $w_0$ in the range [-100,100] and of $w_1$ in the range [-5, 5]. The starting point is ($w_0$, $w_1$) = (0, 0). Use the L-BFGS-B optimization method (the method argument of minimize). On the plot from item 5 of Task 1, draw the straight line corresponding to the optimal parameter values $w_0$ and $w_1$ found. Label the axes and the plot.
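As a sanity check, for ordinary least squares the optimum can also be obtained in closed form, e.g. with np.polyfit, so the L-BFGS-B result can be compared against it; a minimal sketch on synthetic data (the true coefficients and noise level are made up):

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.linspace(80, 200, 100)
y = 60.0 + 0.08 * x + rng.normal(scale=0.5, size=x.size)  # true w0=60, w1=0.08

# closed-form least-squares fit; polyfit returns [slope, intercept]
w1_hat, w0_hat = np.polyfit(x, y, deg=1)
print(w0_hat, w1_hat)
```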
End of explanation
"""
|
nproctor/phys202-2015-work | assignments/assignment08/InterpolationEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
"""
Explanation: Interpolation Exercise 1
End of explanation
"""
f = np.load("trajectory.npz")
t = f['t']
x = f['x']
y = f['y']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
"""
Explanation: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:
t which has discrete values of time t[i].
x which has values of the x position at those times: x[i] = x(t[i]).
y which has values of the y position at those times: y[i] = y(t[i]).
Load those arrays into this notebook and save them as variables x, y and t:
End of explanation
"""
X = interp1d(t, x, kind='cubic')
Y = interp1d(t, y, kind='cubic')
newt = np.linspace(min(t), max(t), 200)
newx = X(newt)
newy = Y(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
"""
Explanation: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:
newt which has 200 points between ${t_{min},t_{max}}$.
newx which has the interpolated values of $x(t)$ at those times.
newy which has the interpolated values of $y(t)$ at those times.
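The kind argument matters for accuracy; a quick sketch comparing linear and cubic interpolation of a smooth function (the sample sizes are arbitrary):

```python
import numpy as np
from scipy.interpolate import interp1d

t = np.linspace(0, 2 * np.pi, 20)
x = np.sin(t)

f_lin = interp1d(t, x, kind='linear')
f_cub = interp1d(t, x, kind='cubic')

# evaluate both interpolants on a fine grid and compare to the truth
fine = np.linspace(0, 2 * np.pi, 500)
err_lin = np.max(np.abs(f_lin(fine) - np.sin(fine)))
err_cub = np.max(np.abs(f_cub(fine) - np.sin(fine)))
print(err_lin, err_cub)
```

On smooth data the cubic spline tracks the curve far more closely than piecewise-linear interpolation.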
End of explanation
"""
plt.figure(figsize=(12,5))
plt.plot(t, x, 'b', linestyle='', marker='o', label="Original points for x(t)")
plt.plot(t, y, 'r', linestyle = '', marker='o', label="Original points for y(t)")
plt.plot(newt, newx, 'c-', label="Interpolated x(t)")
plt.plot(newt, newy, 'm-', label="Interpolated y(t)")
plt.grid(False)
ax = plt.gca()
ax.set_axis_bgcolor("white")
plt.xlabel("Time", fontsize=14)
plt.title("X and Y Trajectory", fontsize = 14)
plt.ylabel("Position", fontsize = 14)
plt.legend()
plt.show()
assert True # leave this to grade the trajectory plot
"""
Explanation: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:
For the interpolated points, use a solid line.
For the original points, use circles of a different color and no line.
Customize your plot to make it effective and beautiful.
End of explanation
"""
|
dmolina/es_intro_python | 07-Control-Flow-Statements.ipynb | gpl-3.0 | x = -15
if x == 0:
print(x, "is zero")
elif x > 0:
print(x, "is positive")
elif x < 0:
print(x, "is negative")
else:
print(x, "is unlike anything I've ever seen...")
"""
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="fig/cover-small.jpg">
This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub.
The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook.
<!--NAVIGATION-->
< Built-In Data Structures | Contents | Defining and Using Functions >
Control Flow
Control flow is where the rubber really meets the road in programming.
Without it, a program is simply a list of statements that are sequentially executed.
With control flow, you can execute certain code blocks conditionally and/or repeatedly: these basic building blocks can be combined to create surprisingly sophisticated programs!
Here we'll cover conditional statements (including "if", "elif", and "else"), loop statements (including "for" and "while" and the accompanying "break", "continue", and "pass").
Conditional Statements: if-elif-else:
Conditional statements, often referred to as if-then statements, allow the programmer to execute certain pieces of code depending on some Boolean condition.
A basic example of a Python conditional statement is this:
End of explanation
"""
for N in [2, 3, 5, 7]:
print(N, end=' ') # print all on same line
"""
Explanation: Note especially the use of colons (:) and whitespace to denote separate blocks of code.
Python adopts the if and else often used in other languages; its more unique keyword is elif, a contraction of "else if".
In these conditional clauses, elif and else blocks are optional; additionally, you can optionally include as few or as many elif statements as you would like.
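The introduction also mentions pass: it does nothing, and mostly serves as a syntactic placeholder wherever Python requires a block. A minimal sketch:

```python
x = -15
if x < 0:
    pass  # nothing to do for this case yet, but the if needs a body
else:
    print(x, "is non-negative")

def todo():
    pass  # stub: a function body cannot be empty

print(todo())
```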
for loops
Loops in Python are a way to repeatedly execute some code statement.
So, for example, if we'd like to print each of the items in a list, we can use a for loop:
End of explanation
"""
for i in range(10):
print(i, end=' ')
"""
Explanation: Notice the simplicity of the for loop: we specify the variable we want to use, the sequence we want to loop over, and use the "in" operator to link them together in an intuitive and readable way.
More precisely, the object to the right of the "in" can be any Python iterator.
An iterator can be thought of as a generalized sequence, and we'll discuss them in Iterators.
For example, one of the most commonly-used iterators in Python is the range object, which generates a sequence of numbers:
End of explanation
"""
# range from 5 to 10
list(range(5, 10))
# range from 0 to 10 by 2
list(range(0, 10, 2))
"""
Explanation: Note that the range starts at zero by default, and that by convention the top of the range is not included in the output.
Range objects can also have more complicated values:
End of explanation
"""
i = 0
while i < 10:
print(i, end=' ')
i += 1
"""
Explanation: You might notice that the meaning of range arguments is very similar to the slicing syntax that we covered in Lists.
Note that the behavior of range() is one of the differences between Python 2 and Python 3: in Python 2, range() produces a list, while in Python 3, range() produces an iterable object.
while loops
The other type of loop in Python is a while loop, which iterates until some condition is met:
End of explanation
"""
for n in range(20):
# if the remainder of n / 2 is 0, skip the rest of the loop
if n % 2 == 0:
continue
print(n, end=' ')
"""
Explanation: The argument of the while loop is evaluated as a boolean statement, and the loop is executed until the statement evaluates to False.
break and continue: Fine-Tuning Your Loops
There are two useful statements that can be used within loops to fine-tune how they are executed:
The break statement breaks-out of the loop entirely
The continue statement skips the remainder of the current loop, and goes to the next iteration
These can be used in both for and while loops.
Here is an example of using continue to print a string of odd numbers.
In this case, the result could be accomplished just as well with an if-else statement, but sometimes the continue statement can be a more convenient way to express the idea you have in mind:
End of explanation
"""
a, b = 0, 1
amax = 100
L = []
while True:
(a, b) = (b, a + b)
if a > amax:
break
L.append(a)
print(L)
"""
Explanation: Here is an example of a break statement used for a less trivial task.
This loop will fill a list with all Fibonacci numbers up to a certain value:
End of explanation
"""
L = []
nmax = 30
for n in range(2, nmax):
for factor in L:
if n % factor == 0:
break
else: # no break
L.append(n)
print(L)
"""
Explanation: Notice that we use a while True loop, which will loop forever unless we have a break statement!
Loops with an else Block
One rarely used pattern available in Python is the else statement as part of a for or while loop.
We discussed the else block earlier: it executes if all the if and elif statements evaluate to False.
The loop-else is perhaps one of the more confusingly-named statements in Python; I prefer to think of it as a nobreak statement: that is, the else block is executed only if the loop ends naturally, without encountering a break statement.
As an example of where this might be useful, consider the following (non-optimized) implementation of the Sieve of Eratosthenes, a well-known algorithm for finding prime numbers:
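The pattern is easier to see in a minimal membership search: the else block runs only when no break fired, i.e. when the loop finished without finding the target.

```python
def contains(items, target):
    for item in items:
        if item == target:
            break
    else:  # no break: the loop ran to completion
        return False
    return True

print(contains([2, 3, 5, 7], 5), contains([2, 3, 5, 7], 4))
```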
End of explanation
"""
|
mathnathan/notebooks | mpfi/Probability of an Image.ipynb | mit | x1 = np.random.uniform(size=500)
x2 = np.random.uniform(size=500)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x1,x2, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25)
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('images_in_2dspace.pdf')
"""
Explanation: Introduction
What are the underlying biophysics that govern astrocyte behavior? To explore this question we have at our disposal a large dataset of observations. A key concept we must face in the analysis of this data is that of uncertainty. Sources of uncertainty will be noise in our measurements due to the recording apparatus, the finite precision of our measurements, as well as the intrinsic stochasticity of the process being measured. Perhaps the most important source of uncertainty we will consider is due to there being sources of variability that are themselves unobserved. Probability theory provides us with a framework to reason in the presence of uncertainty and information theory allows us to quantify uncertainty. We will make use of both in our exploration of the data. A precise definition of the data can be found in appendix B.
We begin by considering the data and the processes that give rise to it. The dataset is composed of microscopy recordings of astrocytes in the visual cortex of ferrets. At the highest level, we have noise being added to the data by the recording apparatus. A model for the noise has been developed and can be found in Appendix D. The next level of uncertainty comes as a consequence of discretizing the continuous process. Consider for example a military satellite tracking a vehicle. If one wishes to predict the future location of the vehicle, the prediction is limited to be within one of the discrete cells that make up its measurements. However, the true location of the vehicle could be anywhere within that grid cell. Lastly, there is intrinsic stochasticity at the molecular level that we ignore for now. We consider the fluctuations taking place at that scale to be averaged out in our observations.
The unobserved sources of variability will be our primary focus. Before we address that, let us lay down some preliminary concepts. We are going to assume that there exists some true unknown process governing the activity of an astrocyte. Our measurements can then be considered snapshots of this process at various points throughout its life. This suggests that these snapshots (our observations) are a function of the underlying data generating process. Considering the many sources of uncertainty outlined above, we will describe this process as a probability distribution. There will be many ways to interpret the data as a probability, but we will begin by considering any one frame of a video to be the result of a data generating distribution, $P_{data}(x)$. Here $x$ is considered to be an image with $n$ pixels. So $P_{data}$ is a joint distribution over each pixel of the frame with a probability density function (pdf), $p_{data}(x_1,x_2,\dots,x_n)$.
To build intuition about what $p_{data}(x)$ is and how it relates to the assumed data generating process, we will explore a simple example. Take an image with only 2 pixels... [$x_1$,$x_2$] where both $x_1$ and $x_2$ are in [0,1]. Each image can be considered a two dimensional point in $\mathbb{R}^2$. All possible images would occupy a square in the 2 dimensional plane. 500 points are shown as follows
End of explanation
"""
im = [(0.25, 0.85)]
plt.imshow(im, cmap='gray',vmin=0,vmax=1)
plt.tick_params(
    axis='both',          # changes apply to both axes
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
left='off',
right='off'
)
plt.xticks([])
plt.yticks([])
plt.xlabel('Pixel 1 = 0.25 Pixel 2 = 0.85')
plt.savefig('sample_2dspace_image.pdf')
"""
Explanation: Any one point inside the unit square would represent an image. For example the image associated with the point $(0.25,0.85)$ is shown below.
End of explanation
"""
x1 = lambda x2: 0.5*np.cos(2*np.pi*x2)+0.5
x2 = np.linspace(0,1,200)
eps = np.random.normal(scale=0.1, size=200)
fig = plt.figure();
ax = fig.add_subplot(1,1,1);
ax.scatter(x2,x1(x2)+eps, edgecolor='black', s=80);
ax.grid();
ax.set_axisbelow(True);
ax.set_xlim(-0.25,1.25); ax.set_ylim(-0.25,1.25); plt.axes().set_aspect('equal')
ax.set_xlabel('Pixel 2'); ax.set_ylabel('Pixel 1'); plt.savefig('structured_images_in_2dspace.pdf')
structured_images = zip(x1(np.linspace(0,1,10)), np.linspace(0,1,10))
for im in structured_images:
plt.figure(); plt.imshow([im], cmap='gray', vmin=0, vmax=1)
"""
Explanation: Now consider the case where there is some process correlating the two variables. This would be similar to the underlying biophysics governing the activity of an astrocyte. In that case, the pixels would be correlated in some manner due to the mechanism driving the cell and we would see structure in the microscopy recordings. In this simple case, let's consider a direct correlation of the form $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}+\epsilon$ where $\epsilon$ is a noise term coming from a low variability normal distribution $\epsilon \sim N(0,\frac{1}{10})$. We see below that in this case, the images plotted in two dimensions resulting from this relationship form a distinct pattern. In addition, if we look at the images themselves, one may be able to see a pattern...
End of explanation
"""
from matplotlib.colors import LogNorm
x2 = np.random.uniform(size=100000)
eps = np.random.normal(scale=0.1, size=100000)
hist2d = plt.hist2d(x2,x1(x2)+eps, bins=50, norm=LogNorm())
plt.xlim(0.0,1.0); plt.ylim(-0.3,1.3); plt.axes().set_aspect('equal')
plt.xlabel('Pixel 2'); plt.ylabel('Pixel 1')
plt.colorbar();
plt.savefig('histogram_of_structured_images.pdf')
"""
Explanation: We will refer to the structure suggested by the two dimensional points as the 'manifold'. This is a common practice when analyzing images. A 28 by 28 dimensional image will be a point in 784 dimensional space. If we are examining images with structure, various images of the number 2 for example, then it turns out that these images will form a manifold in 784 dimensional space. In most cases, as is the case in our contrived example, this manifold exists in a lower dimensional space than that of the images themselves. The goal is to 'learn' this manifold. In our simple case we can describe the manifold as a function of only 1 variable $$f(t) = <t,\frac{1}{2} \cos(2\pi t)+\frac{1}{2}>$$ This is what we would call the underlying data generating process. In practice we usually describe the manifold in terms of a probability distribution. We will refer to the data generating distribution in our example as $p_{test}(x_1, x_2)$. Why did we choose a probability to describe the manifold created by the data generating process? How might this probability be interpreted?
Learning the actual distribution turns out to be a rather difficult task. Here we will use a common non parametric technique for describing distributions, the histrogram. Looking at a histogram of the images, or two dimensional points, will give us insight into the structure of the distribution from which they came. Notice here though that the histogram merely describes the distribution, we do not know what it is.
End of explanation
"""
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.gca(projection='3d')
X,Y = np.mgrid[0:50,0:50]
ax.plot_surface(X, Y, hist2d[0])#, linewidth=0, antialiased=False)
"""
Explanation: As our intuition might have suggested, the data generating distribution looks very similar to the structure suggested by the two dimensional images plotted above. There is high probability very near the actual curve $x_1 = \frac{1}{2} \cos(2\pi x_2)+\frac{1}{2}$ and low probability as we move away. We imposed the uncertainty via the Gaussian noise term $\epsilon$. However, in real data the uncertainty can be due to the myriad of sources outlined above. In these cases a complex probability distribution isn't an arbitrary choice for representing the data; it becomes necessary [cite Christopher Bishop 2006].
End of explanation
"""
|
SlipknotTN/udacity-deeplearning-nanodegree | sentiment-rnn/Sentiment_RNN.ipynb | mit | import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
# Create your dictionary that maps vocab words to integers here
# (set() ordering is arbitrary, so the exact integers vary between runs;
#  starting at 1 leaves 0 free for the padding token)
vocab_to_int = {word: index + 1 for index, word in enumerate(set(words))}
# Convert the reviews to integers, same shape as reviews list, but with integers
#print(reviews[:1])
#print(len(reviews))
reviews_ints = list()
for review in reviews:
single_review_ints = list()
for word in review.split():
single_review_ints.append(vocab_to_int[word])
reviews_ints.append(single_review_ints)
#print(reviews_ints[:1])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
# Convert labels to 1s and 0s for 'positive' and 'negative'
# Needs np.array to work (doesn't work as plain list)
labels = np.array([1 if label == 'positive' else 0 for label in labels.split()])
print(labels[:5])
print(len(labels))
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: If you built labels correctly, you should see the next output.
End of explanation
"""
# Filter out that review with 0 length
print(len(reviews_ints))
reviews_ints.remove([])
print(len(reviews_ints))
# Remove from labels?! labels is already 25000 (it would be 25001 if we split on '\n')
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
features = list()
for index, review_ints in enumerate(reviews_ints):
review_length = len(review_ints)
if review_length < 200:
padding = [0] * (seq_len - review_length)
padding.extend(review_ints)
features.append([])
features[-1].extend(padding)
else:
features.append([])
features[-1].extend(review_ints[:200])
features = np.asarray(features)
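One alternative, more compact way to build the same padded array (a sketch assuming reviews_ints as above; it relies on numpy's slice clipping to handle the truncation case):

```python
import numpy as np

def pad_features(reviews_ints, seq_len=200):
    # left-pad short reviews with 0s and truncate long ones to seq_len
    features = np.zeros((len(reviews_ints), seq_len), dtype=int)
    for i, row in enumerate(reviews_ints):
        # numpy clips the slice to the row width, so this also handles rows longer than seq_len
        features[i, -len(row):] = row[:seq_len]
    return features

padded = pad_features([[1, 2, 3], [5] * 250], seq_len=4)
# padded[0] -> [0, 1, 2, 3]; padded[1] -> [5, 5, 5, 5]
```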
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
features[:10,:100]
"""
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
"""
split_frac = 0.8
featuresLen = len(features)
split_idx = int(featuresLen * split_frac)
print(split_idx)
test_idx = int(featuresLen * (split_frac + (1 - split_frac) / 2))
print(test_idx)
print(len(features))
print(len(labels))
print(features.shape)
print(labels.shape)
train_x, val_x = features[:split_idx], features[split_idx : test_idx]
train_y, val_y = labels[:split_idx], labels[split_idx : test_idx]
test_x = features[test_idx:]
test_y = labels[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
#n_words = len(vocab)
# +1 because we have the padding word (index 0)
n_words = len(set(words)) + 1
print(n_words)
#Fix added in newer commit by udacity
#n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    # batch_size x review_length (integer-encoded reviews, not one-hot)
inputs_ = tf.placeholder(tf.int32, [None, seq_len], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, 1], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 300 units, the function will return a tensor with size [batch_size, 300].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
    # input_size is deprecated (it is inferred automatically). The input is embed, so batch_size x embed_size
    # Each LSTM step gets a single word (of length embed_size, because we use an embedding instead of a one-hot vector)
    # When batch_size > 1, each step gets the nth word of every review in the batch
#lstm = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size, input_size=(batch_size, embed_size))
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(cell=lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
# Initial state shape: batch_size x lstm_size.
print(initial_state)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
# IMHO:
# embed takes the "inputs" placeholder, which has shape batch_size x review_length
# tf.nn.dynamic_rnn unrolls the cell over review_length time steps
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell=cell, inputs=embed, initial_state=initial_state)
# Outputs shape is batch_size x review_length (max_time) x lstm_size.
    # It includes the output at every time step (review_length of them).
    # Each step's output has length lstm_size, and there is one per batch element.
print("outputs shape: " + str(outputs.shape))
# Last LSTM state. Final state shape is batch_size x lstm_size
print("final state shape: " + str(final_state))
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
print(len(set(words)))
with graph.as_default():
# outputs shape is batch_size x review_length x lstm_size,
# we pick the last LSTM output (batch_size x lstm_size)
print(outputs.shape)
print(outputs[:, -1].shape)
    # Squash the lstm_size output values down to a single unit and apply a sigmoid.
    predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
    # Predictions shape is batch_size x 1, so labels must be 2D as well: (batch_size, 1)
print(predictions.shape)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak | past-semesters/fall_2016/day-by-day/day19-kinematics-terminal-velocity-of-a-skydiver/Day_19_pre_class_notebook.ipynb | agpl-3.0 | from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("JXJQYpgFAyc",width=640,height=360) # Numerical integration
"""
Explanation: Day 19 Pre-class assignment
Goals for today's pre-class assignment
In this pre-class assignment, you are going to learn how to:
Numerically integrate a function
Numerically differentiate a function
Get a sense of how the result depends on the step size you use.
Assignment instructions
Watch the videos below and complete the assigned programming problems.
End of explanation
"""
# Put your code here
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("b0K8LiHyrBg",width=640,height=360) # Numerical differentiation
"""
Explanation: Question 1: Write a function that uses the rectangle rule to integrate $f(x) = \sin(x)$ from $x_{beg}= 0$ to $x_{end} = \pi$ by taking $N_{step}$ equal-sized steps $\Delta x = \frac{x_{end} - x_{beg}}{N_{step}}$. Allow $N_{step}$ and the beginning and ending of the range to be defined by user-set parameters. For values of $N_{step} = 10, 100$, and $1000$, how close are you to the true answer? (In other words, calculate the error as defined above.)
Note 1: $\int_{0}^{\pi} \sin(x) dx = \left. -\cos(x) \right|_0^\pi = 2$
Note 2: The "error" is defined as $\epsilon = |\frac{I - T}{T}|$, where I is the integrated answer, T is the true (i.e., analytic) answer, and the vertical bars denote that you take the absolute value.
End of explanation
"""
# Put your code here
"""
Explanation: Question 2: Write a function that calculates the derivative of $f(x) = e^{-2x}$ at several points between -3.0 and 3.0, using two points that are a distance $\Delta x$ from the point, x, where we want the value of the derivative. Calculate the difference between this value and the answer to the analytic solution, $\frac{df}{dx} = -2 e^{-2x}$, for $\Delta x$ = 0.1, 0.01 and 0.001 (in other words, calculate the error as defined above).
Hint: use np.linspace() to create a range of values of x that are regularly-spaced, create functions that correspond to $f(x)$ and $\frac{df}{dx}$, and use numpy to calculate the derivatives and the error. Note that if x is a numpy array, a function f(x) that returns a value will also be a numpy array. In other words, the function:
def f(x):
return np.exp(-2.0*x)
will return an array of values corresponding to the function $f(x)$ defined above if given an array of x values.
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/IMkGVL5XnxqZM8EP2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: Question 3: For the two programs above, approximately how much does the error go down as you change the step size $\Delta x$ by factors of 10?
// put your answer here
Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
"""
|
morningc/pyladies-interactive-planetary | notebooks/HELLO WORLD | matplotlib + seaborn + ipywidgets.ipynb | mit | %pylab inline
"""
Explanation: python libs for all vis things
End of explanation
"""
t = arange(0.0, 1.0, 0.01)
y1 = sin(2*pi*t)
y2 = sin(2*2*pi*t)
import pandas as pd
df = pd.DataFrame({'t': t, 'y1': y1, 'y2': y2})
df.head(10)
fig = figure(1, figsize = (10,10))
ax1 = fig.add_subplot(211)
ax1.plot(t, y1)
ax1.grid(True)
ax1.set_ylim((-2, 2))
ax1.set_ylabel('Gentle Lull')
ax1.set_title('I can plot waves')
for label in ax1.get_xticklabels():
label.set_color('r')
ax2 = fig.add_subplot(212)
ax2.plot(t, y2,)
ax2.grid(True)
ax2.set_ylim((-2, 2))
ax2.set_ylabel('Getting choppier')
l = ax2.set_xlabel('Hi PyLadies')
l.set_color('g')
l.set_fontsize('large')
show()
"""
Explanation: matplotlib
interactive vis notes: http://matplotlib.org/users/navigation_toolbar.html
End of explanation
"""
import seaborn as sns
sns.set(color_codes=True)
sns.distplot(y1)
sns.distplot(y2)
"""
Explanation: + seaborn
End of explanation
"""
from ipywidgets import widgets
from IPython.html.widgets import *
t = arange(0.0, 1.0, 0.01)
def pltsin(f):
plt.plot(t, sin(2*pi*t*f))
interact(pltsin, f=(1,10,0.1))
"""
Explanation: ipywidgets
helpful tutorial here
with matplotlib
End of explanation
"""
def pltsin(f):
sns.distplot(sin(2*pi*t*f))
interact(pltsin, f=(1,10,0.1))
"""
Explanation: with seaborn!
End of explanation
"""
|
intel-analytics/analytics-zoo | pyzoo/zoo/chronos/use-case/fsi/stock_prediction.ipynb | apache-2.0 | import numpy as np
import pandas as pd
import os
# S&P 500
FILE_NAME = 'all_stocks_5yr.csv'
SOURCE_URL = 'https://github.com/CNuge/kaggle-code/raw/master/stock_data/'
filepath = os.path.join('data', FILE_NAME)
print(filepath)
# download data
!if ! [ -d "data" ]; then mkdir data; cd data; wget https://github.com/CNuge/kaggle-code/raw/master/stock_data/individual_stocks_5yr.zip; wget https://raw.githubusercontent.com/CNuge/kaggle-code/master/stock_data/merge.sh; chmod +x merge.sh; unzip individual_stocks_5yr.zip; ./merge.sh; fi
# read data
data = pd.read_csv(filepath)
print(data[:10])
target_rows = data[data['Name']=='MMM']
print(target_rows[:10])
# extract close value
close_val = target_rows[['close']].values
print(close_val[:10])
# Visualize data
import matplotlib.pyplot as plt
plt.plot(close_val, color='blue', label='MMM daily price Raw')
plt.xlabel("Time Period")
plt.ylabel("Stock Price")
plt.legend()
plt.show()
"""
Explanation: Stock Price Prediction
In this notebook, we demonstrate a reference use case where we use historical stock price data to predict the future price. The dataset we use is the daily stock price of S&P500 stocks during 2013-2018 (data source). We demostrate how to do univariate forecasting using the past 80% of the total days' MMM price to predict the future 20% days' daily price.
Reference: https://github.com/jwkanggist/tf-keras-stock-pred
Get Data
We will use the close prices of MMM stock for our experiment. We will
1. Download the raw dataset and load it into a dataframe.
2. Extract the close prices of MMM stock from the dataframe into a numpy array.
End of explanation
"""
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import MinMaxScaler
df = target_rows[['date', 'close']]
tsdata_train, _, tsdata_test = TSDataset.from_pandas(df, dt_col="date", target_col="close", with_split=True, test_ratio=0.2)
minmax_scaler = MinMaxScaler()
for tsdata in [tsdata_train, tsdata_test]:
tsdata.scale(minmax_scaler, fit=(tsdata is tsdata_train))\
.roll(lookback=50, horizon=1)
X_train, y_train = tsdata_train.to_numpy()
X_test, y_test = tsdata_test.to_numpy()
X_train.shape, y_train.shape, X_test.shape, y_test.shape
"""
Explanation: Data Pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
For the stock price data we're using, the processing contains 2 parts:
Data normalization, such that the normalized stock prices fall in the range of 0 to 1
Extraction of time-series windows of a given size
We use the built-in TSDataset to complete the whole processing.
End of explanation
"""
from zoo.chronos.forecaster.lstm_forecaster import LSTMForecaster
"""
Explanation: Time series forecasting
We use LSTMForecaster for forecasting.
End of explanation
"""
# Hyperparameters
feature_dim = X_train.shape[-1]
target_dim = 1
hidden_dim = 10
learning_rate = 0.01
batch_size = 16
epochs = 50
# build model
forecaster = LSTMForecaster(past_seq_len=X_train.shape[1],
input_feature_num=feature_dim,
output_feature_num=target_dim,
hidden_dim=hidden_dim,
lr=learning_rate,
)
"""
Explanation: First we initiate a LSTMForecaster.
feature_dim should match the dimension of the input data, so we just use the last dimension of train input data shape
target_dim equals the dimension of the output data, here we set target_dim=1 for univariate forecasting.
End of explanation
"""
%%time
forecaster.fit(data=(X_train, y_train), batch_size=batch_size, epochs=epochs)
"""
Explanation: Then we use fit to train the model. Wait some time for it to finish.
End of explanation
"""
# make prediction
y_pred = forecaster.predict(X_test)
"""
Explanation: After training is finished, you can use the forecaster to do prediction and evaluation.
End of explanation
"""
y_pred_unscale = tsdata_test.unscale_numpy(y_pred)
y_test_unscale = tsdata_test.unscale_numpy(y_test)
"""
Explanation: Since we used a min-max scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
End of explanation
"""
# evaluate with mean_squared_error
from zoo.orca.automl.metrics import Evaluator
print("mean_squared error is", Evaluator.evaluate("mse", y_test_unscale, y_pred_unscale, multioutput='uniform_average'))
"""
Explanation: Calculate the mean square error.
End of explanation
"""
# Plot predictions
plt.plot(y_test_unscale[:, :, 0], color='blue', label="MMM daily price Raw")
plt.plot(y_pred_unscale[:, :, 0], color='red', label="MMM daily price Predicted")
plt.xlabel("Time Period")
plt.ylabel("Normalized Stock Price")
plt.legend()
plt.show()
"""
Explanation: Visualize the prediction.
End of explanation
"""
|
FedericoMuciaccia/SistemiComplessi | src/Fede.ipynb | mit | # import math
import numpy
def euclideanDistance(x, y):
    return numpy.sqrt(numpy.square(x) + numpy.square(y))
import geopy # TODO AttributeError: 'module' object has no attribute 'distance'
from geopy import distance, geocoders
# distances in km
# geopy.distance.vincenty(A, B) # on an oblate spheroid
# geopy.distance.great_circle(A, B) # on a sphere
# test points
# A = (41.49008, -71.312796)
# B = (41.499498, -81.695391)
def geolocate(place): # string
"""
return coordinates for addresses and toponyms
"""
geolocator = geopy.geocoders.Nominatim()
location = geolocator.geocode(place)
    # the data is given as (latitude, longitude), but should be read as (y, x),
    # i.e. visualized with x=longitude, y=latitude
    return (location.latitude, location.longitude)  # coordinates
def geodesicDistance(A, B = geolocate("Colosseo")):
"""
return the Vincenty's geodesic distance in meters
default place = Colosseum
"""
# colosseo = (41.890183, 12.492369)
return geopy.distance.vincenty(A, B).meters
def isInRome(place):
"""
return True if the place is less than 10 km away from Colosseum
"""
    raggioRaccordoAnulare = 10000  # in meters
return geodesicDistance(place) <= raggioRaccordoAnulare
name = "Sapienza, Roma"
print(geodesicDistance(geolocate(name)))
print(isInRome(geolocate(name)))
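geodesicDistance above needs geopy; as a self-contained cross-check, the great-circle distance can be computed by hand with the haversine formula. This sketch assumes a spherical Earth with radius 6371 km, so it will differ slightly from Vincenty's spheroid result:

```python
import math

def haversineDistance(A, B, R=6371000.0):
    """great-circle distance in meters between two (lat, lon) points in degrees"""
    lat1, lon1 = map(math.radians, A)
    lat2, lon2 = map(math.radians, B)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

one_degree = haversineDistance((0.0, 0.0), (1.0, 0.0))  # roughly 111.2 km
```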
colosseo = geopy.point.Point(41.890183, 12.492369)
# TODO plot a map with the coordinates in the dataframe
# import mpl_toolkits.basemap
# from mpl_toolkits.basemap import Basemap
# mappa = Basemap(projection='stere',lat_0=41.890183,lon_0=12.492369,resolution='l')
# mappa.drawcoastlines(linewidth=0.25)
# mappa.drawcountries(linewidth=0.25)
# mappa.fillcontinents(color='coral',lake_color='aqua')
# mappa.drawmapboundary(fill_color='aqua')
# mappa.drawmeridians(numpy.arange(0,360,30))
# mappa.drawparallels(numpy.arange(-90,90,30))
# x, y = map(lons*180./np.pi, lats*180./np.pi)
# cs = map.contour(x,y,wave+mean,15,linewidths=1.5)
# plt.title('contour lines over filled continent background')
# plt.show()
# m = Basemap(width=100000,height=100000,projection='lcc',resolution=None,lat_0=41.890183,lon_0=12.492369)
#m.drawcoastlines()
#m.drawmapboundary(fill_color='aqua')
#m.fillcontinents(color='coral',lake_color='aqua')
# m.bluemarble()
## m.shadedrelief()
# resolution: c (crude), l (low), i (intermediate), h (high), f (full)
# plt.show()
copertura = dataframe[["range"]]
dataframe.plot(kind="scatter", x="lon", y="lat", s=copertura/1000, alpha=0.5) # TODO
# pyplot.show()
"""
Explanation: geolocation
End of explanation
"""
isInItaly = dataframe.mcc == 222
isReliable = dataframe.samples > 1
# Crit3 = isInRome((dataframe.lat, dataframe.lon))
# AllCriteria = isInItaly & isReliable & Crit3
# dataframe[AllCriteria]
italia = dataframe[isInItaly & isReliable]
# TODO reorder the indices, removing the gaps
italia.plot(kind="scatter", x="lon", y="lat", label="Italy")
# pyplot.show()
"""
Explanation: data selection
the filtering is initially done by mobile country code (Italy)
python
mcc == 222
and then the values considered unreliable, i.e. with only one measurement, are discarded
python
samples > 1
End of explanation
"""
import networkx
G = networkx.Graph()
G.add_node("Fede")
G.add_node("Iuri")
G.add_edge(1,2)
G.add_edge('a','b')
G.add_edge('a','c')
G.add_edge('c','d')
G.add_edge('c','e')
G.add_edge('c','f')
G.add_edge('a','d')
print(G.nodes())
print(G.edges())
position=networkx.spring_layout(G)
networkx.draw_networkx_nodes(G,position,node_size=300)
networkx.draw_networkx_edges(G,position,width=5)
networkx.draw_networkx_labels(G,position,font_size=15,font_family='sans-serif')
# pyplot.axis("off")
position
H=networkx.path_graph(10)
networkx.draw_networkx_nodes(G,position,node_size=300)
networkx.draw_random(G)
dataframePiccolo = italia[0:10]
dataframePiccolo["cell"]
dataframePiccolo = roma[0:10]
grafoPiccolo = networkx.Graph()
grafoPiccolo.add_nodes_from(dataframePiccolo["cell"])
# grafoPiccolo.add_nodes_from(dataframePiccolo.iterrows())
# grafoPiccolo.add_nodes_from(dataframePiccolo.itertuples())
networkx.draw(grafoPiccolo)
import numpy
matriceDiAdiacenza = numpy.zeros((10,10), dtype=int)
# matrix indices start from zero
matriceDiAdiacenza
#dataframe = pandas.read_csv("../Siscomp_datas/cell_towers.csv")
romaPiccolo = roma[0:50]
#dataframe
#celle = dataframe[['cell', 'lat', 'lon', 'range']].values
#print celle
coordinate = romaPiccolo[['lat', 'lon']].values
raggio = romaPiccolo['range'].values
#print coordinate
#print raggio
def geodesicDistance(A, B):
return geopy.distance.vincenty(A, B).meters
def sommaRange(A, B):
return A+B
def sonoLinkati(A, rangeA, B, rangeB):
return geodesicDistance(A, B) <= sommaRange(rangeA, rangeB)
def linkVettori(rigA, rigB):
return sonoLinkati((rigA['lat'], rigA['lon']), rigA['range'], (rigB['lat'], rigB['lon']), rigB['range'])
dimensioni = raggio.size
a = numpy.zeros((dimensioni, dimensioni), dtype=int)
# print(a)
for i in range(raggio.size):
    for j in range(raggio.size):
        if geodesicDistance(coordinate[i], coordinate[j]) <= raggio[i] + raggio[j]:
            a[i, j] = 1
        if i == j:
            a[i, j] = 0
print(a)
#for i in celle:
# for j in celle:
# if linkVettori(i, j):
# a[i,j] = 1
#ridotto = dataframe[['cell', 'lat', 'lon', 'range']]
#b = numpy.zeros((50,50))
#for i in ridotto.iterrows():
# for j in ridotto.iterrows():
# if linkVettori(i, j):
# a[ridotto["index"],ridotto["index"]] = 1
# A = numpy.reshape(numpy.random.random_integers(0,1,size=100),(10,10))
# D = networkx.DiGraph(A)
# networkx.draw(D)
F50 = networkx.Graph(a)
# position=networkx.spring_layout(F50)
# networkx.draw(F50)
# networkx.draw_networkx_nodes(F50,position,node_size=300)
# networkx.draw_networkx_edges(F50,position,width=5)
# networkx.draw_networkx_labels(F50,position,font_size=15,font_family='sans-serif')
networkx.draw_random(F50)
# networkx.degree(F50)
grado = list(dict(F50.degree()).values())  # works with both old and new networkx degree APIs
def degreeDistribution(gradi):
pyplot.hist(gradi, bins=max(gradi)-min(gradi), histtype='step')
pyplot.title('Degree distribution')
pyplot.xlabel("Degree")
pyplot.ylabel("Frequency")
# return
# histtype='bar', alpha=0.5
# bins=max(grado)-min(grado)
distribuzione = degreeDistribution(grado)
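The degree distribution plotted above can be sanity-checked by hand. A minimal sketch on a toy adjacency matrix (made-up data, independent of the antenna graph):

```python
from collections import Counter

# toy symmetric adjacency matrix: the path graph 0-1-2-3
adj = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]
degrees = [sum(row) for row in adj]  # degree of each node
distribution = Counter(degrees)      # degree -> number of nodes
```

Counter gives the same counts the histogram bins would.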
"""
Explanation: Iuri did the selection on Rome and created a new csv file
TODO integrate his code
```python
isInRome((dataframe.lat, dataframe.lon))
italyDataframe["lat"].apply(sum)
dataframe["distanze"] = arrayDistanzeCalcolate
.apply()
.applymap()
.resample()
.transform()
.groupby
code_groups[['data']].transform(sum)
```
TODO Capocci
<ul>
<li><input type="checkbox" checked>MLS data</li>
<li><input type="checkbox" checked>Rome filter</li>
<li><input type="checkbox" checked>graph</li>
<li><input type="checkbox" checked>degree distribution P(k)</li>
<li><input type="checkbox" checked>data broken down by carrier, radio channel, etc.</li>
<li><input type="checkbox" checked>percolation threshold</li>
</ul>
Building the graph with NetworkX
TODO a weighted graph could also be built, weighted on signal strength
TODO:
put a terrain-map background behind the scatterplot
(do it with the Google Maps or OpenStreetMap APIs)
try graph APIs to build the antenna graph
add a database column for antenna distances and coverages
End of explanation
"""
import numpy
# raggi = roma['range'].values
raggiPositivi = roma.range >= 1
raggiBuoni = roma[raggiPositivi].range.values
#distribuzioneRange = pyplot.hist(raggi, 100)
#distribuzioneRangeLogLog = pyplot.hist(numpy.log2(raggiBuoni), 100, log=True)
raggi = roma[raggiPositivi].range.values
pyplot.figure(1)
pyplot.subplot(211)
distribuzioneRange = pyplot.hist(raggi,bins=100)
pyplot.title('Range distribution')
pyplot.xlabel("Range (m)")
pyplot.ylabel("Frequency")
pyplot.subplot(212)
distribuzioneRangeLogLog = pyplot.hist(numpy.log2(raggiBuoni), 100, log=True)
pyplot.title('Range distribution (log-log)')
pyplot.xlabel("log2(range)")
pyplot.ylabel("Frequency (log10 scale)")
massimoRange = max(distribuzioneRange[1])
massimoRange
#numpy.nan in raggi
#min(raggi)
# TODO check whether the antenna with the largest range sits on Monte Mario
# TODO draw a geographic map of the giant-range antennas to see where they are placed
"""
Explanation: Distribution of the antennas' coverage radii
TODO do the exponential fit
there are many "small" antennas and few "large" antennas
there are even some antennas with a giant coverage radius of $\sim 10 Km$
(i.e. as much as the whole radius of the Grande Raccordo Anulare, and hence of our entire data sample)
these will probably be hubs, if our network turns out to be complex
End of explanation
"""
from joblib import Parallel, delayed
import multiprocessing
inputs = range(10)
def processInput(i):
return i * i
num_cores = multiprocessing.cpu_count()
results = Parallel(n_jobs=num_cores)(delayed(processInput)(i) for i in inputs)
results
def matriceSuperiore(datiCoordinate, datiRaggi):
    a = numpy.zeros((datiRaggi.size, datiRaggi.size))
    for i in range(datiRaggi.size):
        for j in range(datiRaggi.size - i - 1):
            if geodesicDistance(datiCoordinate[i], datiCoordinate[j+i+1]) <= datiRaggi[i] + datiRaggi[j+i+1]:
                a[i, j+i+1] = 1
                a[j+i+1, i] = 1
    return a
# iterate over a numpy matrix
def matriceSimilSimmetrica(N):
    """
    create a lower triangular matrix
    (for now including the diagonal, for simplicity)
    """
    a = [None] * N
    for i in range(N):
        a[i] = [0] * (i + 1)
    return a
b = numpy.zeros((5, 5), dtype=int)
for i in numpy.nditer(b):
    print(i)
#tri = numpy.zeros((10, 10))
#dm = tri[numpy.triu_indices(10, 1)]
#dm
#tri[(1,2), (4,5)]
triangolo = matriceSimilSimmetrica(N)
triangolo
# listofzeros = [0] * n
"""
Explanation: TODO make the georeferenced scatterplot with
basemap and cartopy:
http://matplotlib.org/basemap/
http://scitools.org.uk/cartopy/docs/latest/
http://scitools.org.uk/cartopy/docs/latest/gallery.html
or with the APIs or bindings for OpenStreetMap or Google Maps
multiprocessing and parallel computing
End of explanation
"""
|
bollwyvl/yamlmagic | README.ipynb | bsd-3-clause | %reload_ext yamlmagic
"""
Explanation: yamlmagic
an IPython magic for capturing data in YAML into a running IPython kernel.
Install
From the command line (or with ! in a notebook cell):
bash
pip install yamlmagic
Enable
Ad-hoc
In the notebook, you can use the %load_ext or %reload_ext line magic.
End of explanation
"""
%%yaml
a_toplevel_key: 1
"""
Explanation: Configuration
In your profile's ipython_kernel_config.py, you can add the following line to automatically load yamlmagic into all your running kernels:
python
c.InteractiveShellApp.extensions = ['yaml_magic']
Use
The %%yaml cell magic will either act as a simple parser:
End of explanation
"""
_
"""
Explanation: which can be accessed by the special last result variable _:
End of explanation
"""
%%yaml x
- a: 1
b: 2
x
"""
Explanation: Or will update a named variable with the parsed document:
End of explanation
"""
from yaml import Loader
class FooLoader(Loader):
# some special things you have built
pass
%%yaml --loader FooLoader
- a: !!python/float 1
b: !!python/float 2
"""
Explanation: By default, yaml.SafeLoader will be used, which won't allow the powerful but dangerous (and unportable) !python/ tags. If you'd like to use them, provide the -l (or --loader) argument with a BaseLoader subclass available via a local variable...
End of explanation
"""
%%yaml --loader yaml.Loader
- a: !!python/float 1
b: !!python/float 2
"""
Explanation: ...or dotted-notation path to a loader:
End of explanation
"""
|
DJCordhose/ai | notebooks/2019_tf/time_series.ipynb | mit | # univariate data preparation
import numpy as np
# split a univariate sequence into samples
def split_sequence(sequence, n_steps):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the sequence
if end_ix > len(sequence)-1:
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
# define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# choose a number of time steps
n_steps = 3
# split into samples
X, y = split_sequence(raw_seq, n_steps)
# summarize the data
list(zip(X, y))
X
"""
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/2019_tf/time_series.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Time Series / Sequences
Example, some code and a lot of inspiration taken from: https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/
Univariate Sequences
just one variable per time step
Challenge
We have a known series of events, possibly in time and you want to know what is the next event. Like this
[10, 20, 30, 40, 50, 60, 70, 80, 90]
End of explanation
"""
# reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))
X
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, LSTM, GRU, SimpleRNN, Bidirectional
from tensorflow.keras.models import Sequential, Model
model = Sequential()
model.add(SimpleRNN(units=50, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"))
model.add(Dense(units=1, name="Linear_Output"))
model.compile(optimizer='adam', loss='mse')
%time history = model.fit(X, y, epochs=500, verbose=0)
import matplotlib.pyplot as plt
plt.plot(history.history['loss'])
# this does not look too bad
X_sample = np.array([[10, 20, 30], [70, 80, 90]])
X_sample = X_sample.reshape((X_sample.shape[0], X_sample.shape[1], n_features))
X_sample
y_pred = model.predict(X_sample)
y_pred
def predict(model, samples, n_features=1):
input = np.array(samples)
input = input.reshape((input.shape[0], input.shape[1], n_features))
y_pred = model.predict(input)
return y_pred
# do not look too close, though
predict(model, [[100, 110, 120], [200, 210, 220], [200, 300, 400]])
"""
Explanation: Converting shapes
one of the most frequent, yet most tedious steps
match between what you have and what an interface needs
expected input of an RNN: a 3D tensor with shape (samples, timesteps, input_dim)
we have: (samples, timesteps)
reshape on np arrays can do all that
End of explanation
"""
# https://keras.io/layers/recurrent/
# input: (samples, timesteps, input_dim)
# output: (samples, units)
# let's have a look at the actual output for an example
rnn_layer = model.get_layer("RNN_Input")
model_stub = Model(inputs = model.input, outputs = rnn_layer.output)
hidden = predict(model_stub, [[10, 20, 30]])
hidden
"""
Explanation: Input and output of an RNN layer
End of explanation
"""
# https://arxiv.org/ftp/arxiv/papers/1701/1701.05923.pdf
# n = output dimension
# m = input dimension
# Total number of parameters for
# Simple RNN = n**2 + nm + n
# GRU = 3 × (n**2 + nm + n)
# LSTM = 4 × (n**2 + nm + n)
rnn_units = 1
model = Sequential()
model.add(SimpleRNN(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"))
# model.add(GRU(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"))
model.summary()
output_dimension = rnn_units
input_dimension = n_features
parameters = 1 * (output_dimension ** 2 + output_dimension * input_dimension + output_dimension)
parameters
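The three counting rules can be wrapped in one helper. Note this follows the cited paper's convention; bias layouts vary between implementations (for example, TF2's GRU default reset_after=True adds an extra bias term), so model.summary() may report slightly different numbers:

```python
def recurrent_params(units, input_dim, gates=1):
    # gates: 1 for a simple RNN, 3 for a GRU, 4 for an LSTM
    return gates * (units ** 2 + units * input_dim + units)

simple_rnn = recurrent_params(50, 1, gates=1)  # 2600
gru = recurrent_params(50, 1, gates=3)         # 7800
lstm = recurrent_params(50, 1, gates=4)        # 10400
```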
# from only a single output for the final timestep
# ideal for feeding into something that *does not* handle timesteps
rnn_units = 1
model = Sequential([
SimpleRNN(units=rnn_units, activation='relu', input_shape=(n_steps, n_features))
])
predict(model, [[10, 20, 30]])
"""
Explanation: What do we see?
each unit (50) has a single output
as a side note, you can nicely see the ReLU nature of the output
so the timesteps are lost
we are only looking at the final output
still with each timestep, the layer does produce a unique output we can use
We need to look into RNNs a bit more deeply now
RNNs - Networks with Loops
<img src='https://djcordhose.github.io/ai/img/nlp/colah/RNN-rolled.png' height=200>
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Unrolling the loop
<img src='https://djcordhose.github.io/ai/img/nlp/colah/RNN-unrolled.png'>
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Simple RNN internals
<img src='https://djcordhose.github.io/ai/img/nlp/fchollet_rnn.png'>
$output_t = \tanh(W input_t + U output_{t-1} + b)$
From Deep Learning with Python, Chapter 6, François Chollet, Manning: https://livebook.manning.com/#!/book/deep-learning-with-python/chapter-6/129
Activation functions
<img src='https://djcordhose.github.io/ai/img/sigmoid-activation.png' height=200>
Sigmoid compressing between 0 and 1
<img src='https://djcordhose.github.io/ai/img/tanh-activation.png' height=200>
Hyperbolic tangent: like the sigmoid, but compressing between -1 and 1, thus allowing for negative values as well
Advanced part follows
End of explanation
"""
# to one output for each timestep
# ideal for feeding into something that *expects* timesteps
rnn_units = 1
model = Sequential([
SimpleRNN(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), return_sequences=True)
])
# https://keras.io/layers/recurrent/
# input: (samples, timesteps, input_dim)
# output with return_sequences: (samples, timesteps, units)
predict(model, [[10, 20, 30]])
rnn_units = 50
model = Sequential([
SimpleRNN(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), return_sequences=True, name="RNN_Input"),
SimpleRNN(units=rnn_units, activation='relu', name="RNN_Latent"),
Dense(units=1, name="Linear_Output")
])
model.compile(optimizer='adam', loss='mse')
model.summary()
%time history = model.fit(X, y, epochs=500, verbose=0)
plt.plot(history.history['loss'])
predict(model, [[10, 20, 30], [70, 80, 90], [100, 110, 120], [200, 210, 220], [200, 300, 400]])
"""
Explanation: Multi Layer RNNs
End of explanation
"""
rnn_units = 50
model = Sequential([
Bidirectional(SimpleRNN(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input")),
Dense(units=1, name="Linear_Output")
])
model.compile(optimizer='adam', loss='mse')
%time history = model.fit(X, y, epochs=500, verbose=0)
plt.plot(history.history['loss'])
predict(model, [[10, 20, 30], [70, 80, 90], [100, 110, 120], [200, 210, 220], [200, 300, 400]])
"""
Explanation: Bidirectional RNNs
End of explanation
"""
rnn_units = 50
model = Sequential([
LSTM(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"),
Dense(units=1, name="Linear_Output")
])
model.compile(optimizer='adam', loss='mse')
model.summary()
output_dimension = rnn_units
input_dimension = n_features
parameters = 4 * (output_dimension ** 2 + output_dimension * input_dimension + output_dimension)
parameters
%time history = model.fit(X, y, epochs=500, verbose=0)
plt.plot(history.history['loss'])
predict(model, [[10, 20, 30], [70, 80, 90], [100, 110, 120], [200, 210, 220], [200, 300, 400]])
rnn_units = 50
model = Sequential([
GRU(units=rnn_units, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"),
Dense(units=1, name="Linear_Output")
])
model.compile(optimizer='adam', loss='mse')
model.summary()
output_dimension = rnn_units
input_dimension = n_features
parameters = 3 * (output_dimension ** 2 + output_dimension * input_dimension + output_dimension)
parameters
%time history = model.fit(X, y, epochs=500, verbose=0)
plt.plot(history.history['loss'])
predict(model, [[10, 20, 30], [70, 80, 90], [100, 110, 120], [200, 210, 220], [200, 300, 400]])
"""
Explanation: LSTMs / GRUs
mainly beneficial for long sequences
but also 3-4 times more expensive
might not have better results for short sequences like these
End of explanation
"""
in_seq1 = [10, 20, 30, 40, 50, 60, 70, 80, 90]
in_seq2 = [15, 25, 35, 45, 55, 65, 75, 85, 95]
out_seq = [in1 + in2 for in1, in2 in zip(in_seq1, in_seq2)]
out_seq
# convert to [rows, columns] structure
in_seq1 = np.array(in_seq1).reshape((len(in_seq1), 1))
in_seq2 = np.array(in_seq2).reshape((len(in_seq2), 1))
out_seq = np.array(out_seq).reshape((len(out_seq), 1))
out_seq
# horizontally stack columns
dataset = np.hstack((in_seq1, in_seq2, out_seq))
dataset
# split a multivariate sequence into samples
def split_sequences(sequences, n_steps):
X, y = list(), list()
for i in range(len(sequences)):
# find the end of this pattern
end_ix = i + n_steps
# check if we are beyond the dataset
if end_ix > len(sequences):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
# choose a number of time steps
n_steps = 3
# convert into input/output
X, y = split_sequences(dataset, n_steps)
# summarize the data
list(zip(X, y))
# the dataset knows the number of features, e.g. 2
n_features = X.shape[2]
# define model
model = Sequential()
model.add(GRU(units=50, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"))
model.add(Dense(units=1, name="Linear_Output"))
model.compile(optimizer='adam', loss='mse')
# fit model
%time history = model.fit(X, y, epochs=500, verbose=0)
import matplotlib.pyplot as plt
plt.yscale('log')
plt.plot(history.history['loss'])
def predict_multi(model, samples):
input = np.array(samples)
input = input.reshape(1, input.shape[0], input.shape[1])
y_pred = model.predict(input)
return y_pred
predict_multi(model, [[80, 85], [90, 95], [100, 105]])
predict_multi(model, [[10, 15], [20, 25], [30, 35]])
predict_multi(model, [[180, 185], [190, 195], [200, 205]])
"""
Explanation: Multivariate LSTM Models
Multiple Input Series
End of explanation
"""
y += 20
list(zip(X, y))
model = Sequential()
model.add(GRU(units=50, activation='relu', input_shape=(n_steps, n_features), name="RNN_Input"))
model.add(Dense(units=1, name="Linear_Output"))
model.compile(optimizer='adam', loss='mse')
# train a little bit longer, as this should be harder now
%time history = model.fit(X, y, epochs=2000, verbose=0)
import matplotlib.pyplot as plt
plt.yscale('log')
plt.plot(history.history['loss'])
predict_multi(model, [[80, 85], [90, 95], [100, 105]])
predict_multi(model, [[10, 15], [20, 25], [30, 35]])
predict_multi(model, [[180, 185], [190, 195], [200, 205]])
"""
Explanation: Let's make this a little bit harder
previously, output y could be inferred from the final timestep
now we try to infer the following output
End of explanation
"""
# split a univariate sequence into samples
def split_sequence(sequence, n_steps_in, n_steps_out):
X, y = list(), list()
for i in range(len(sequence)):
# find the end of this pattern
end_ix = i + n_steps_in
out_end_ix = end_ix + n_steps_out
# check if we are beyond the sequence
if out_end_ix > len(sequence):
break
# gather input and output parts of the pattern
seq_x, seq_y = sequence[i:end_ix], sequence[end_ix:out_end_ix]
X.append(seq_x)
y.append(seq_y)
return np.array(X), np.array(y)
# define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# choose a number of time steps
n_steps_in, n_steps_out = 3, 2
# split into samples
X, y = split_sequence(raw_seq, n_steps_in, n_steps_out)
# summarize the data
for input, output in zip(X, y):
print (input, output)
# reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))
# define model
model = Sequential()
model.add(GRU(100, activation='relu', input_shape=(n_steps_in, n_features)))
# model.add(GRU(100, activation='relu', return_sequences=True, input_shape=(n_steps_in, n_features)))
# model.add(GRU(100, activation='relu'))
model.add(Dense(n_steps_out))
model.compile(optimizer='adam', loss='mse')
# fit model
%time history = model.fit(X, y, epochs=500, verbose=0)
import matplotlib.pyplot as plt
plt.yscale('log')
plt.plot(history.history['loss'])
X_sample = np.array([70, 80, 90]).reshape((1, n_steps_in, n_features))
y_pred = model.predict(X_sample)
print(y_pred)
X_sample = np.array([10, 20, 30]).reshape((1, n_steps_in, n_features))
y_pred = model.predict(X_sample)
print(y_pred)
"""
Explanation: Multi-Step LSTM Models
this might just as well be an encoder / decoder approach
End of explanation
"""
|
CalPolyPat/Python-Workshop | Python Workshop/MatPlotLib.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
#^^^This line tells Jupyter to render the plots inside the notebook^^^
"""
Explanation: MatPlotLib
Matplotlib is another add-on library. It allows us to plot anything we desire. But first, as with NumPy, we need to import it.
End of explanation
"""
plt.plot?
"""
Explanation: The main command used in matplotlib is the plot command. To find out how to use it, we will use some of the sweet functionality of the Jupyter notebook. To retrieve the documentation on any command, type the command into a cell with a question mark after it.
End of explanation
"""
x = np.linspace(0, np.pi*2, 10)
y = np.sin(x)
plt.plot(x,y)
x = np.linspace(0, np.pi*2, 100)
y = np.sin(x)
plt.plot(x,y)
"""
Explanation: As we can see, there is a lot of information. The important part is the four indented lines showing you what parameters you can use when you call plot. Let's start plotting some things.
End of explanation
"""
x = np.linspace(0,10,1000) #Wonderful measurements
y = x**np.exp(np.sin(x**2))-np.sin(x*np.log(x)) #Wonderful Calculations
plt.plot(x,y)
"""
Explanation: That is really all there is to matplotlib. All that is left to discuss is making plots pretty. Let's say that you have made some wonderful measurements of some physical quantity and calculated some other physical quantity from them. Then you drew some amazing conclusion and you are sure your advisor will love it! Let's model this in the next cell.
End of explanation
"""
plt.plot(x,y, label="Some Physics or Something")
plt.xlabel("I dun measured this!")
plt.ylabel("I dun calculated this!")
plt.title("I dun made this plot!")
plt.legend()
"""
Explanation: Aha! According to the plot above, my Grand Unified Theory is complete. When your advisor looks at the graph, they might pass out due to its poor quality. This graph needs x labels, y labels, a title and maybe even a legend! Doing those things is astoundingly easy! Let's check it out.
End of explanation
"""
|
tensorflow/datasets | docs/determinism.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import re
import tensorflow_datasets as tfds
imagenet = tfds.builder('imagenet2012')
num_shards = imagenet.info.splits['train'].num_shards
num_examples = imagenet.info.splits['train'].num_examples
print(f'imagenet has {num_shards} shards ({num_examples} examples)')
"""
Explanation: TFDS and determinism
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/datasets/determinism"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/determinism.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/datasets/blob/master/docs/determinism.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/datasets/docs/determinism.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This document explains:
The TFDS guarantees on determinism
In which order does TFDS read examples
Various caveats and gotchas
Setup
Datasets
Some context is needed to understand how TFDS reads the data.
During generation, TFDS writes the original data into standardized .tfrecord files. For big datasets, multiple .tfrecord files are created, each containing multiple examples. We call each .tfrecord file a shard.
This guide uses imagenet which has 1024 shards:
End of explanation
"""
#@title
def load_dataset(builder, **as_dataset_kwargs):
"""Load the dataset with the tfds_id."""
read_config = as_dataset_kwargs.pop('read_config', tfds.ReadConfig())
read_config.add_tfds_id = True # Set `True` to return the 'tfds_id' key
return builder.as_dataset(read_config=read_config, **as_dataset_kwargs)
def print_ex_ids(
builder,
*,
take: int,
skip: int = None,
**as_dataset_kwargs,
) -> None:
"""Print the example ids from the given dataset split."""
ds = load_dataset(builder, **as_dataset_kwargs)
if skip:
ds = ds.skip(skip)
ds = ds.take(take)
exs = [ex['tfds_id'].numpy().decode('utf-8') for ex in ds]
exs = [id_to_int(tfds_id, builder=builder) for tfds_id in exs]
print(exs)
def id_to_int(tfds_id: str, builder) -> int:
"""Format the tfds_id in a more human-readable."""
match = re.match(r'\w+-(\w+).\w+-(\d+)-of-\d+__(\d+)', tfds_id)
split_name, shard_id, ex_id = match.groups()
split_info = builder.info.splits[split_name]
return sum(split_info.shard_lengths[:int(shard_id)]) + int(ex_id)
"""
Explanation: Finding the dataset examples ids
You can skip to the following section if you only want to know about determinism.
Each dataset example is uniquely identified by an id (e.g. 'imagenet2012-train.tfrecord-01023-of-01024__32'). You can recover this
id by passing read_config.add_tfds_id = True which will add a 'tfds_id' key in the dict from the tf.data.Dataset.
In this tutorial, we define a small util which will print the example ids of the dataset (converted to integers to be more human-readable):
End of explanation
"""
# Same as: imagenet.as_dataset(split='train').take(20)
print_ex_ids(imagenet, split='train', take=20)
print_ex_ids(imagenet, split='train', take=20)
"""
Explanation: Determinism when reading
This section explains the determinism guarantees of tfds.load.
With shuffle_files=False (default)
By default, TFDS yields examples deterministically (shuffle_files=False).
End of explanation
"""
print_ex_ids(imagenet, split='train[67%:84%]', take=20)
print_ex_ids(imagenet, split='train[67%:84%]', take=20)
"""
Explanation: For performance, TFDS reads multiple shards at the same time using tf.data.Dataset.interleave. We see in this example that TFDS switches to shard 2 after reading 16 examples (..., 14, 15, 1251, 1252, ...). More on interleave below.
Similarly, the subsplit API is also deterministic:
End of explanation
"""
print_ex_ids(imagenet, split='train', shuffle_files=True, take=20)
print_ex_ids(imagenet, split='train', shuffle_files=True, take=20)
"""
Explanation: If you're training for more than one epoch, the above setup is not recommended as all epochs will read the shards in the same order (so randomness is limited to the ds = ds.shuffle(buffer) buffer size).
With shuffle_files=True
With shuffle_files=True, shards are shuffled for each epoch, so reading is not deterministic anymore.
End of explanation
"""
print_ex_ids(imagenet, split='train', take=20)
"""
Explanation: Note: Setting shuffle_files=True also disables deterministic ordering in tf.data.Options to give a performance boost, so even small datasets which only have a single shard (like mnist) become non-deterministic.
See recipe below to get deterministic file shuffling.
Determinism caveat: interleave args
Changing read_config.interleave_cycle_length or read_config.interleave_block_length will change the example order.
TFDS relies on tf.data.Dataset.interleave to only load a few shards at once, improving the performance and reducing memory usage.
The example order is only guaranteed to be the same for a fixed value of the interleave args. See the interleave doc to understand what cycle_length and block_length correspond to.
cycle_length=16, block_length=16 (default, same as above):
End of explanation
"""
read_config = tfds.ReadConfig(
interleave_cycle_length=3,
interleave_block_length=2,
)
print_ex_ids(imagenet, split='train', read_config=read_config, take=20)
"""
Explanation: cycle_length=3, block_length=2:
End of explanation
"""
print_ex_ids(imagenet, split='train', take=25) # tfds.load(..., split='train').take(25)
print_ex_ids(imagenet, split='train[:25]', take=-1) # tfds.load(..., split='train[:25]')
"""
Explanation: In the second example, we see that the dataset reads 2 (block_length=2) examples from a shard, then switches to the next shard. Every 2 * 3 (cycle_length=3) examples, it goes back to the first shard (shard0-ex0, shard0-ex1, shard1-ex0, shard1-ex1, shard2-ex0, shard2-ex1, shard0-ex2, shard0-ex3, shard1-ex2, shard1-ex3, shard2-ex2,...).
Subsplit and example order
Each example has an id 0, 1, ..., num_examples-1. The subsplit API selects a slice of examples (e.g. train[:x] selects 0, 1, ..., x-1).
However, within the subsplit, examples are not read in increasing id order (due to shards and interleave).
More specifically, ds.take(x) and split='train[:x]' are not equivalent!
This can be seen easily in the above interleave example where examples come from different shards.
End of explanation
"""
read_config = tfds.ReadConfig(
shuffle_seed=32,
)
# Deterministic order, different from the default shuffle_files=False above
print_ex_ids(imagenet, split='train', shuffle_files=True, read_config=read_config, take=22)
print_ex_ids(imagenet, split='train', shuffle_files=True, read_config=read_config, take=22)
"""
Explanation: After the 16 (block_length) examples, .take(25) switches to the next shard while train[:25] continues reading examples from the first shard.
Recipes
Get deterministic file shuffling
There are 2 ways to have deterministic shuffling:
Setting the shuffle_seed. Note: This requires changing the seed at each epoch, otherwise shards will be read in the same order between epochs.
End of explanation
"""
def _reverse_order(file_instructions):
return list(reversed(file_instructions))
read_config = tfds.ReadConfig(
experimental_interleave_sort_fn=_reverse_order,
)
# Last shard (01023-of-01024) is read first
print_ex_ids(imagenet, split='train', read_config=read_config, take=5)
"""
Explanation: Using experimental_interleave_sort_fn: This gives full control over which shards are read and in which order, rather than relying on ds.shuffle order.
End of explanation
"""
read_config = tfds.ReadConfig(
interleave_cycle_length=1, # Read shards sequentially
)
print_ex_ids(imagenet, split='train', read_config=read_config, skip=40, take=22)
# If the job get pre-empted, using the subsplit API will skip at most `len(shard0)`
print_ex_ids(imagenet, split='train[40:]', read_config=read_config, take=22)
"""
Explanation: Get deterministic preemptable pipeline
This one is more complicated. There is no easy, satisfactory solution.
Without ds.shuffle and with deterministic shuffling, in theory it should be possible to count the examples which have been read and deduce which examples have been read within each shard (as a function of
cycle_length, block_length and shard order). Then the skip, take for
each shard could be injected through experimental_interleave_sort_fn.
With ds.shuffle it's likely impossible without replaying the full training pipeline. It would require saving the
ds.shuffle buffer state to deduce which examples have been read. Examples
could be non-contiguous (e.g. shard5_ex2, shard5_ex4 read but not shard5_ex3).
With ds.shuffle, one way would be to save all shard_ids/example_ids read (deduced from tfds_id), then deduce the file instructions from that.
The simplest case of the first approach (no ds.shuffle) is to have .skip(x).take(y) match train[x:x+y]. It requires:
Set cycle_length=1 (so shards are read sequentially)
Set shuffle_files=False
Do not use ds.shuffle
It should only be used on a huge dataset where training runs for
only 1 epoch. Examples would be read in the default shuffle order.
End of explanation
"""
imagenet.info.splits['train[44%:45%]'].file_instructions
"""
Explanation: Find which shards/examples are read for a given subsplit
With the tfds.core.DatasetInfo, you have direct access to the read instructions.
End of explanation
"""
|
kkai/perception-aware | 4.visualization/Plotting Basics.ipynb | mit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
#use a nicer plotting style
plt.style.use(u'seaborn-notebook')
print(plt.style.available)
"""
Explanation: Plotting Basics
Let's start by importing some plotting functions (don't worry about the warning ... we should use something else, but this is easier for the time being).
End of explanation
"""
data = pd.read_csv('./data/assembly.dat',delimiter='\t',skiprows=11,names=['s','usec','ax','ay','az','gx','gy','gz','mx','my','mz','label'])
#to get an overview of the data you can use describe in pandas
data.describe()
data[['gx','gy','gz']].plot()
data[['ax','ay','az']].plot()
#simple line plot
plt.plot(data['ax'])
plt.xlim(4000,7000)
fig= figure()
plot(data['ax'], label='acceleration x')
plot(data['gx'], label='gyrscope x')
legend(loc='upper right')
"""
Explanation: Motion Data
The assembly.dat file contains a recording from an assembly session.
A person was doing the following activities:
hammering in nails (label 1)
screwdring (label 2)
sandpapering (label 3)
sawing (label 4)
Label 0 is for doing none of the activities above. The sensor is attached to the subjects right wrist (x axis pointing towards the fingers).
The sensor was sampled with 100 Hz, it's raw sensor data (not calibrated). The data structure of the file is shown in the header of the file.
Import the data
The file uses tabs as separators and we need to skip 11 rows (the description of the content).
Also we need to give the description for each column in the names variable.
End of explanation
"""
#to explore the hist command a bit more ... use this list
alist =[1,2,3,4,5,5,5,4,4,4,4,4,4,5,6,4,4,4,4,4,4,4,4,3,2,2,8,8,8]
hist(alist)
#if you want to plot the density function with it ... pandas is nice:
d = pd.Series(alist)
d.hist()
d.plot(kind='kde', style='k--')
#try it with the gryo and accelerometer data
data['ax'].plot(kind='kde', style='b--')
data['ay'].plot(kind='kde', style='g--')
data['az'].plot(kind='kde', style='r--')
"""
Explanation: for more details on plotting options see also: http://matplotlib.org/users/pyplot_tutorial.html
We can change the size and DPI as follows:
Histograms
End of explanation
"""
y = [3, 10, 7, 5, 3, 4.5, 6, 8.1]
N = len(y)
x = range(N)
width = 1/1.5
plt.bar(x, y, width, color="blue")
plt.barh(x, y, width, color="blue")
"""
Explanation: try it with the gyroscope data ... what's the difference?
Barcharts
End of explanation
"""
gyro_hammer = data[data['label']==1][['gx','gy','gz']]
gyro_screw = data[data['label']==2][['gx','gy','gz']]
gyro_sand = data[data['label']==3][['gx','gy','gz']]
gyro_saw = data[data['label']==4][['gx','gy','gz']]
gyro_hammer.plot()
gyro_screw.plot()
gyro_sand.plot()
gyro_saw.plot()
#calculating a sliding window ...
from pandas.stats.moments import rolling_apply
print size(gyro_screw)/10.0
method = median
wsize = 10
feat1 = rolling_apply(gyro_screw, wsize, method).dropna()
feat2 = rolling_apply(gyro_hammer,wsize, method).dropna()
feat3 = rolling_apply(gyro_sand, wsize, method).dropna()
feat4 = rolling_apply(gyro_saw, wsize, method).dropna()
scatter(feat1['gz'], feat1['gy'])
scatter(feat2['gz'], feat2['gy'],color='red')
scatter(feat3['gz'], feat3['gy'],color='green')
scatter(feat4['gz'], feat4['gy'],color='yellow')
#3d is usually worse but you can also do 3d plots
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = figure(figsize=(14,6))
ax = axes(projection='3d')
ax.scatter(feat1['gx'], feat1['gy'], feat1['gz'])
ax.scatter(feat2['gx'], feat2['gy'], feat2['gz'],c='red')
"""
Explanation: Feature Calculation and Selection
In the next step we will take the gyro data and calculate some features on them.
End of explanation
"""
#don't look here ... I split the data in test and training set
#bad code :)
l_1 = len(feat1)
l_2 = len(feat2)
l_3 = len(feat3)
l_4 = len(feat4)
#X = feat1.append(feat2).append(feat3).append(feat4)
#Y = [1.0] * l_1 + [2.0]* l_2 + [3] * l_3 + [4] * l_4
X = feat1[0:l_1/2].append(feat2[0:l_2/2]).append(feat3[0:l_3/2]).append(feat4[0:l_4/2])
Y = [1]*(l_1/2) + [2]*(l_2/2)+[3]*(l_3/2) + [4]*(l_4/2)
T = feat1[l_1/2:l_1].append(feat2[l_2/2:l_2]).append(feat3[l_3/2:l_3]).append(feat4[l_4/2:l_4])
t_gg = [1]*len(feat1[l_1/2:l_1]) + [2]*len(feat2[l_2/2:l_2]) + [3]*len(feat3[l_3/2:l_3]) + [4]*len(feat1[l_4/2:l_4])
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
dt = DecisionTreeClassifier()
dt.fit(X,Y)
res = dt.predict(T)
plot(res+0.2, 'r.')
plot(t_gg, 'b.')
print dt.score(T, t_gg)
from sklearn.utils import shuffle
from sklearn.cross_validation import StratifiedKFold, cross_val_score
Xn, yn = shuffle(T, t_gg)
skf = StratifiedKFold(yn, 10)
print cross_val_score(dt, Xn, yn, cv=skf)
"""
Explanation: Only for advanced students: Classification
In the following we will test how well our features can automatically classify the different activities.
End of explanation
"""
|
akseshina/dl_course | seminar_1/homework_task1.ipynb | gpl-3.0 | def task_1a_np(x, y):
return np.where(x > y, x + y, x - y)
X = tf.placeholder(tf.float64)
Y = tf.placeholder(tf.float64)
out = tf.cond(tf.greater(X, Y), lambda: tf.add(X, Y), lambda: tf.subtract(X, Y))
with tf.Session() as sess:
for xx, yy in np.random.uniform(size=(50, 2)):
actual = sess.run(out, feed_dict={X:xx, Y:yy})
expected = task_1a_np(xx, yy)
if actual != expected:
print('Fail')
else:
print('Success')
"""
Explanation: Hint: Use dtype=tf.float64 if you want to have the same precision as numpy for testing<br>
Hint: You might want to use tf.InteractiveSession for convenience
1a: Create two random 0-d tensors x and y of any distribution. <br>
Create a TensorFlow object that returns x + y if x > y, and x - y otherwise. <br>
Hint: look up tf.cond() <br>
I do the first problem for you <br>
End of explanation
"""
def task_1b_np(x, y):
return np.select(condlist=[x < y, x > y],
choicelist=[x + y, x - y],
default=0)
X = tf.placeholder(tf.float64)
Y = tf.placeholder(tf.float64)
out = tf.case([(tf.less(X, Y), lambda: tf.add(X, Y)),
(tf.greater(X, Y), lambda: tf.subtract(X, Y))],
default = lambda: tf.constant(0., dtype=tf.float64))
with tf.Session() as sess:
for xx, yy in np.random.uniform(-1, 1, size=(50, 2)):
actual = sess.run(out, feed_dict={X:xx, Y:yy})
expected = task_1b_np(xx, yy)
if actual != expected:
print('Fail')
else:
print('Success')
"""
Explanation: 1b: Create two 0-d tensors x and y randomly selected from the range [-1, 1).<br>
Return x + y if x < y, x - y if x > y, 0 otherwise.<br>
Hint: Look up tf.case().<br>
End of explanation
"""
def task_1c_np():
x = np.array([[0, -2, -1], [0, 1, 2]])
y = np.zeros_like(x)
return x == y
X = tf.constant([[0, -2, -1], [0, 1, 2]])
Y = tf.zeros_like(X)
out = tf.equal(X, Y)
with tf.Session() as sess:
actual = sess.run(out)
expected = task_1c_np()
if not np.array_equal(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1c: Create the tensor x of the value [[0, -2, -1], [0, 1, 2]] <br>
and y as a tensor of zeros with the same shape as x. <br>
Return a boolean tensor that yields Trues if x equals y element-wise. <br>
Hint: Look up tf.equal(). <br>
End of explanation
"""
def task_1d_np(x):
return x[x > 30].reshape(-1, 1)
X = tf.placeholder(tf.float64)
out = tf.gather(X, tf.where(X > 30))
with tf.Session() as sess:
for xx in np.random.uniform(size=(50, 1)):
actual = sess.run(out, feed_dict={X:xx})
expected = task_1d_np(xx)
if not np.array_equal(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1d:<br>
Get the indices of elements in x whose values are greater than 30.<br>
Hint: Use tf.where().<br>
Then extract elements whose values are greater than 30.<br>
Hint: Use tf.gather().<br>
End of explanation
"""
def task_1e_np():
return np.diag(np.arange(1, 7))
out = tf.diag(tf.range(1, 7))
with tf.Session() as sess:
actual = sess.run(out)
expected = task_1e_np()
if not np.array_equal(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1e: Create a diagonal 2-d tensor of size 6 x 6 with the diagonal values of 1,<br>
2, ..., 6<br>
Hint: Use tf.range() and tf.diag().<br>
End of explanation
"""
def task_1f_np(x):
return np.linalg.det(x)
X = tf.placeholder(tf.float64, shape=(10, 10))
out = tf.matrix_determinant(X)
with tf.Session() as sess:
for xx in np.random.uniform(size=(50, 10, 10)):
actual = sess.run(out, feed_dict={X:xx})
expected = task_1f_np(xx)
if not math.isclose(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1f: Create a random 2-d tensor of size 10 x 10 from any distribution.<br>
Calculate its determinant.<br>
Hint: Look at tf.matrix_determinant().<br>
End of explanation
"""
def task_1g_np():
x = [5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9]
_, idx = np.unique(x, return_index=True)
return np.take(x, sorted(idx))
X = tf.constant([5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9])
out = tf.unique(X)
with tf.Session() as sess:
print(sess.run(out))
with tf.Session() as sess:
actual, _ = sess.run(out)
expected = task_1g_np()
if not np.array_equal(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1g: Create tensor x with value [5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9].<br>
Return the unique elements in x<br>
Hint: use tf.unique(). Keep in mind that tf.unique() returns a tuple.<br>
End of explanation
"""
def task_1h_np(x, y):
average = np.mean(x - y)
mse = np.mean((x - y) ** 2)
asum = np.sum(np.abs(x - y))
return mse if average < 0 else asum
X = tf.placeholder(tf.float64)
Y = tf.placeholder(tf.float64)
out = tf.cond(tf.less(tf.reduce_mean(X - Y), 0),
lambda: tf.reduce_mean(tf.square(X - Y)),
lambda: tf.reduce_sum(tf.abs(X - Y)))
with tf.Session() as sess:
for xx, yy in np.random.normal(size=(50, 2, 300)):
actual = sess.run(out, feed_dict={X:xx, Y:yy})
expected = task_1h_np(xx, yy)
if not math.isclose(actual, expected):
print('Fail')
else:
print('Success')
"""
Explanation: 1h: Create two tensors x and y of shape 300 from any normal distribution,<br>
as long as they are from the same distribution.<br>
Use tf.cond() to return:<br>
- The mean squared error of (x - y) if the average of all elements in (x - y)<br>
is negative, or<br>
- The sum of absolute value of all elements in the tensor (x - y) otherwise.<br>
Hint: see the Huber loss function in the lecture slides 3.<br>
End of explanation
"""
|
astroai/starnet | Quantifying+Persistence-v2.ipynb | bsd-2-clause | f = '/home/spiffical/data/stars/apStar_visits_quantifypersist_med.txt'
persist_vals_med1=[]
persist_vals_med2=[]
fibers_med = []
snr_combined_med = []
starflags_indiv_med = []
loc_ids_med = []
ap_ids_med = []
fi = open(f)
for j, line in enumerate(fi):
# Get values
line = line.split()
persist1 = float(line[-2])
persist2 = float(line[-1])
fiber = int(line[-3])
snr_comb = float(line[-6])
starflag_indiv = float(line[0])
loc_id = line[-4]
ap_id = line[-5]
# Append to lists
persist_vals_med1.append(persist1)
persist_vals_med2.append(persist2)
fibers_med.append(fiber)
snr_combined_med.append(snr_comb)
starflags_indiv_med.append(starflag_indiv)
loc_ids_med.append(loc_id)
ap_ids_med.append(ap_id)
fi.close()
"""
Explanation: Analysis with median derived value. Checking for jump between (blue to green) and (green to red) chips
Method of persistence determination:
found median of last 100 points in chip 1
found median and std. dev. of first 100 points in chip 2
persist_val = ( median(chip1) - median(chip2) ) / std(chip2)
Each line in the file has the format:
(starflag_indiv, starflag_comb, aspcapflag, targflag_1, targflag_2, SNR_visit, SNR_combined, ap_id, loc_id, fiber, (bluegreen)persist, (greenred)persist)
End of explanation
"""
# Get rid of nans and infs in (bluegreen) jump
nan_list2 = np.isnan(persist_vals_med1)
inf_list2 = np.isinf(persist_vals_med1)
comb_list = np.invert([a or b for a,b in zip(nan_list2, inf_list2)]) # invert so we keep non-nans
persist_vals_med1 = np.asarray(persist_vals_med1)[comb_list]
fibers_med1 = np.asarray(fibers_med)[comb_list]
snr_combined_med1 = np.asarray(snr_combined_med)[comb_list]
starflags_indiv_med1 = np.asarray(starflags_indiv_med)[comb_list]
loc_ids_med1 = np.asarray(loc_ids_med)[comb_list]
ap_ids_med1 = np.asarray(ap_ids_med)[comb_list]
# Get rid of nans and infs in (greenred) jump
nan_list3 = np.isnan(persist_vals_med2)
inf_list3 = np.isinf(persist_vals_med2)
comb_list2 = np.invert([a or b for a,b in zip(nan_list3, inf_list3)]) # invert so we keep non-nans
persist_vals_med2 = np.asarray(persist_vals_med2)[comb_list2]
fibers_med2 = np.asarray(fibers_med)[comb_list2]
snr_combined_med2 = np.asarray(snr_combined_med)[comb_list2]
starflags_indiv_med2 = np.asarray(starflags_indiv_med)[comb_list2]
loc_ids_med2 = np.asarray(loc_ids_med)[comb_list2]
ap_ids_med2 = np.asarray(ap_ids_med)[comb_list2]
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(persist_vals_med1, bins=4000, alpha=0.6, label='Blue to green')
ax.hist(persist_vals_med2, bins=4000, alpha=0.6, label='Green to red' )
ax.set_xlim((min(persist_vals_med2), 30))
ax.set_xlabel(r'Persistence value ($\sigma$)', size=20)
ax.set_ylabel('# of spectra', size=20)
ax.legend(loc=0, prop={'size':15})
plt.show()
"""
Explanation: First let's examine the range of persistence values
End of explanation
"""
# High (>3sigma) blue to green persistence
high_persist_med1_indx3 = np.abs(persist_vals_med1)>3
high_persist_med1_vals3 = persist_vals_med1[high_persist_med1_indx3]
high_persist_med_fibers3 = fibers_med1[high_persist_med1_indx3]
high_persist_med_snr3 = snr_combined_med1[high_persist_med1_indx3]
high_persist_med_starflag3 = starflags_indiv_med1[high_persist_med1_indx3]
high_persist_med_ap3 = ap_ids_med1[high_persist_med1_indx3]
high_persist_med_loc3 = loc_ids_med1[high_persist_med1_indx3]
# Really high (>8sigma) blue to green persistence
high_persist_med1_indx8 = np.abs(persist_vals_med1)>8
high_persist_med1_vals8 = persist_vals_med1[high_persist_med1_indx8]
high_persist_med_fibers8 = fibers_med1[high_persist_med1_indx8]
high_persist_med_snr8 = snr_combined_med1[high_persist_med1_indx8]
high_persist_med_starflag8 = starflags_indiv_med1[high_persist_med1_indx8]
high_persist_med_ap8 = ap_ids_med1[high_persist_med1_indx8]
high_persist_med_loc8 = loc_ids_med1[high_persist_med1_indx8]
# High (>3sigma) green to red persistence
high_persist_med2_indx3 = np.abs(persist_vals_med2)>3
high_persist_med2_vals3 = persist_vals_med2[high_persist_med2_indx3]
high_persist_med2_fibers3 = fibers_med2[high_persist_med2_indx3]
high_persist_med2_snr3 = snr_combined_med2[high_persist_med2_indx3]
high_persist_med2_starflag3 = starflags_indiv_med2[high_persist_med2_indx3]
high_persist_med2_ap3 = ap_ids_med2[high_persist_med2_indx3]
high_persist_med2_loc3 = loc_ids_med2[high_persist_med2_indx3]
# Really high (>8sigma) green to red persistence
high_persist_med2_indx8 = np.abs(persist_vals_med2)>8
high_persist_med2_vals8 = persist_vals_med2[high_persist_med2_indx8]
high_persist_med2_fibers8 = fibers_med2[high_persist_med2_indx8]
high_persist_med2_snr8 = snr_combined_med2[high_persist_med2_indx8]
high_persist_med2_starflag8 = starflags_indiv_med2[high_persist_med2_indx8]
high_persist_med2_ap8 = ap_ids_med2[high_persist_med2_indx8]
high_persist_med2_loc8 = loc_ids_med2[high_persist_med2_indx8]
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(high_persist_med_fibers3, bins=300, alpha=0.6, label='Blue to green')
ax.hist(high_persist_med2_fibers3, bins=300, alpha=0.6, label='Green to red')
ax.set_xlabel('Fiber #', size=20)
ax.set_ylabel(r'# of spectra with persistence > 3$\sigma$', size=20)
ax.set_xlim((-5,305))
ax.annotate("Total # of affected spectra: "+str(len(high_persist_med2_fibers3) + len(high_persist_med_fibers3)),
xy=(0.3, 0.9), xycoords="axes fraction", size=15)
ax.legend(loc=0, prop={'size':15})
plt.show()
fig, ax = plt.subplots(figsize=(12,12))
ax.hist(high_persist_med_fibers8, bins=300, alpha=0.6, label='Blue to green')
ax.hist(high_persist_med2_fibers8, bins=300, alpha=0.6, label='Green to red')
ax.set_xlabel('Fiber #', size=20)
ax.set_ylabel(r'# of spectra with persistence > 8$\sigma$', size=20)
ax.annotate("Total # of affected spectra: "+str(len(high_persist_med2_fibers8) + len(high_persist_med_fibers8)),
xy=(0.3, 0.9), xycoords="axes fraction", size=15)
ax.set_xlim((-5,305))
ax.legend(loc=0, prop={'size':15})
plt.show()
"""
Explanation: Now let's see which fibers are being affected
End of explanation
"""
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med_starflag3[high_persist_med_starflag3==2**bit]))
print 'total in my database: %s ' %len(high_persist_med_starflag3)
print 'fraction: ', len(high_persist_med_starflag3[high_persist_med_starflag3==2**bit])*1./len(high_persist_med_starflag3)
"""
Explanation: Let's see if the persistence flags in apStar match the stars I've found
STARFLAG with bit 9 corresponds to >20% of spectrum falling in high persistence region
... persistence(bluegreen) > 3
End of explanation
"""
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med_starflag8[high_persist_med_starflag8==2**bit]))
print 'total in my database: %s ' %len(high_persist_med_starflag8)
print 'fraction: ', len(high_persist_med_starflag8[high_persist_med_starflag8==2**bit])*1./len(high_persist_med_starflag8)
"""
Explanation: ... persistence(bluegreen) > 8
End of explanation
"""
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med2_starflag3[high_persist_med2_starflag3==2**bit]))
print 'total in my database: %s ' %len(high_persist_med2_starflag3)
print 'fraction: ', len(high_persist_med2_starflag3[high_persist_med2_starflag3==2**bit])*1./len(high_persist_med2_starflag3)
"""
Explanation: ... persistence(greenred) > 3
End of explanation
"""
bit=9
print '# with bit %s in apogee (from my database): %s' %(bit, len(high_persist_med2_starflag8[high_persist_med2_starflag8==2**bit]))
print 'total in my database: %s ' %len(high_persist_med2_starflag8)
print 'fraction: ', len(high_persist_med2_starflag8[high_persist_med2_starflag8==2**bit])*1./len(high_persist_med2_starflag8)
"""
Explanation: ... persistence(greenred) > 8
End of explanation
"""
|
Summer-MIND/mind_2017 | Tutorials/hyperalignment/hyperalignment_tutorial.ipynb | mit | %matplotlib inline
import numpy as np
from scipy.spatial.distance import pdist, cdist
from mvpa2.datasets.base import Dataset
from mvpa2.mappers.zscore import zscore
from mvpa2.misc.surfing.queryengine import SurfaceQueryEngine
from mvpa2.algorithms.searchlight_hyperalignment import SearchlightHyperalignment
from mvpa2.base.hdf5 import h5save, h5load
# Alternatively, all those above can be imported using
# from mvpa2.suite import *
import matplotlib.pyplot as plt
from mvpa2.support.nibabel.surf import read as read_surface
"""
Explanation: Hyperalignment Tutorial
This Jupyter notebook is an example of using searchlight hyperalignment (Guntupalli et al., 2016) on fMRI movie data and benchmarking its performance.
In this example, we will use some minimal data from the Guntupalli et al. (2016) paper to save computation time. This minimal dataset contains 3 subjects, 2 movie runs per subject, and left hemisphere data only. The data have been preprocessed with motion correction, surface-based alignment, and denoising.
0. Preparations
We will use the docker image from https://github.com/Summer-MIND/mind-tools
Reopen the container by typing
docker start MIND && docker attach MIND
in the command line. (Or
docker run -it -p 9999:9999 --name MIND -v ~/Desktop:/mnt ejolly/mind-tools
if you haven't used it before).
Then, within the docker container, let's create the directory and download the tutorial data.
mkdir /mnt/hyperalignment
cd /mnt/hyperalignment
wget http://discovery.dartmouth.edu/~fma/hyper_data.tar.gz
wget http://discovery.dartmouth.edu/~fma/hyperalignment_tutorial.ipynb
tar xzvf hyper_data.tar.gz
Finally, prepare the python packages we will use. Here we will use python2 because PyMVPA dependency h5py is not compatible with python3.
source activate py27
pip install h5py nibabel pprocess pymvpa2
After all these, you can start a jupyter notebook using
jupyter notebook --port=9999 --no-browser --ip=0.0.0.0 --allow-root
And copy the url from the terminal to your web browser.
1. Import python functions and classes
End of explanation
"""
dss_train = []
dss_test = []
subjects = ['rid000005', 'rid000011', 'rid000014']
for subj in subjects:
ds = Dataset(np.load('raiders/{subj}_run00_lh.npy'.format(subj=subj)))
ds.fa['node_indices'] = np.arange(ds.shape[1], dtype=int)
zscore(ds, chunks_attr=None)
dss_train.append(ds)
ds = Dataset(np.load('raiders/{subj}_run01_lh.npy'.format(subj=subj)))
ds.fa['node_indices'] = np.arange(ds.shape[1], dtype=int)
zscore(ds, chunks_attr=None)
dss_test.append(ds)
# Each run has 336 time points and 10242 features per subject.
print(dss_train[0].shape)
print(dss_test[0].shape)
"""
Explanation: 2. Read data
The data are read from numpy npy files and wrapped as Datasets. Features (vertices) are normalized to have unit variance.
End of explanation
"""
sl_radius = 5.0
qe = SurfaceQueryEngine(read_surface('fsaverage.lh.surf.gii'), radius=sl_radius)
hyper = SearchlightHyperalignment(
queryengine=qe,
compute_recon=False, # We don't need to project back from common space to subject space
nproc=1, # Number of processes to use. Change "Docker - Preferences - Advanced - CPUs" accordingly.
)
"""
Explanation: 3. Create SearchlightHyperalignment instance
The QueryEngine is used to find voxels/vertices within a searchlight. This SurfaceQueryEngine uses a searchlight radius of 5 mm based on the fsaverage surface.
End of explanation
"""
# mappers = hyper(dss_train)
# h5save('mappers.hdf5.gz', mappers, compression=9)
mappers = h5load('mappers.hdf5.gz') # load pre-computed mappers
"""
Explanation: 4. Create common template space with training data
This step may take a long time. In my case it's 10 minutes with nproc=1.
End of explanation
"""
dss_aligned = [mapper.forward(ds) for ds, mapper in zip(dss_test, mappers)]
_ = [zscore(ds, chunks_attr=None) for ds in dss_aligned]
"""
Explanation: 5. Project testing data to the common space
End of explanation
"""
def compute_average_similarity(dss, metric='correlation'):
"""
Returns
=======
sim : ndarray
A 1-D array with n_features elements, each element is the average
pairwise correlation similarity on the corresponding feature.
"""
n_features = dss[0].shape[1]
sim = np.zeros((n_features, ))
for i in range(n_features):
data = np.array([ds.samples[:, i] for ds in dss])
dist = pdist(data, metric)
sim[i] = 1 - dist.mean()
return sim
sim_test = compute_average_similarity(dss_test)
sim_aligned = compute_average_similarity(dss_aligned)
plt.figure(figsize=(6, 6))
plt.scatter(sim_test, sim_aligned)
plt.xlim([-.2, .5])
plt.ylim([-.2, .5])
plt.xlabel('Surface alignment', size='xx-large')
plt.ylabel('SL Hyperalignment', size='xx-large')
plt.title('Average pairwise correlation', size='xx-large')
plt.plot([-1, 1], [-1, 1], 'k--')
plt.show()
"""
Explanation: 6. Benchmark inter-subject correlations
End of explanation
"""
def movie_segment_classification_no_overlap(dss, window_size=6, dist_metric='correlation'):
"""
Parameters
==========
dss : list of ndarray or Datasets
window_size : int, optional
dist_metric : str, optional
Returns
=======
cv_results : ndarray
An n_subjects x n_segments boolean array, 1 means correct classification.
"""
dss = [ds.samples if hasattr(ds, 'samples') else ds for ds in dss]
def flattern_movie_segment(ds, window_size=6):
n_seg = ds.shape[0] // window_size
ds = ds[:n_seg*window_size, :].reshape((n_seg, window_size, -1))
ds = ds.reshape((n_seg, -1))
return ds
dss = [flattern_movie_segment(ds, window_size=window_size) for ds in dss]
n_subj, n_seg = len(dss), dss[0].shape[0]
ds_sum = np.sum(dss, axis=0)
cv_results = np.zeros((n_subj, n_seg), dtype=bool)
for i, ds in enumerate(dss):
dist = cdist(ds, (ds_sum - ds) / float(n_subj - 1), dist_metric)
predicted = np.argmin(dist, axis=1)
acc = (predicted == np.arange(n_seg))
cv_results[i, :] = acc
return cv_results
acc_test = movie_segment_classification_no_overlap(dss_test)
acc_aligned = movie_segment_classification_no_overlap(dss_aligned)
print('Classification accuracy with surface alignment: %.1f%%' % (acc_test.mean()*100, ))
print('Classification accuracy with SL hyperalignment: %.1f%%' % (acc_aligned.mean()*100, ))
print('Classification accuracy with surface alignment per subject:', acc_test.mean(axis=1))
print('Classification accuracy with SL hyperalignment per subject:', acc_aligned.mean(axis=1))
"""
Explanation: 7. Benchmark movie segment classifications
End of explanation
"""
repo_name: hankcs/HanLP | path: plugins/hanlp_demo/hanlp_demo/zh/con_mtl.ipynb | license: apache-2.0
!pip install hanlp -U
"""
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/con_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fcon_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
Whether on Windows, Linux, or macOS, HanLP can be installed with a single command:
End of explanation
"""
import hanlp
hanlp.pretrained.mtl.ALL # MTL multi-task models; the tasks are given in each model's name, and the language is the last field of the name (or see the corresponding corpus)
"""
Explanation: Loading a model
HanLP's workflow starts with loading a model. Model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
End of explanation
"""
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)
"""
Explanation: Call hanlp.load to load a model; it is downloaded automatically to a local cache. Natural language processing consists of many tasks, and tokenization is only the most basic one. Rather than creating a separate model for each task, we can use HanLP's joint model to complete multiple tasks at once:
End of explanation
"""
doc = HanLP(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', '阿婆主来到北京立方庭参观自然语义科技公司。'], tasks='con')
"""
Explanation: Constituency parsing
The fewer the tasks, the faster the run. For example, to perform only constituency parsing:
End of explanation
"""
print(doc)
"""
Explanation: The return value is a Document:
End of explanation
"""
doc.pretty_print()
"""
Explanation: doc['con'] is of type Tree, a subclass of list.
Visualize the constituency tree:
End of explanation
"""
print(doc['con'][0])
"""
Explanation: Convert the first constituency tree to bracketed format:
End of explanation
"""
doc['con'][0].to_list()
"""
Explanation: Convert the first constituency tree to list format:
End of explanation
"""
HanLP([
["HanLP", "为", "生产", "环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"],
["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"]
], tasks='con', skip_tasks='tok*').pretty_print()
"""
Explanation: Perform constituency parsing on pre-tokenized sentences:
End of explanation
"""
repo_name: kristianperkins/nbrequests | path: example_nbrequests.ipynb | license: apache-2.0
# autoreload for development
%load_ext autoreload
%autoreload 1
%aimport nbrequests
import requests
from nbrequests import display_request
"""
Explanation: Example nbrequests
Pretty printing requests/responses from the python requests library in Jupyter notebook.
End of explanation
"""
r = requests.get('http://httpbin.org/get')
display_request(r)
"""
Explanation: GET request
Execute requests.get('...') and format the result in the notebook output using display_request(r).
End of explanation
"""
r = requests.post('http://httpbin.org/post', data='text data')
display_request(r)
r = requests.post('http://httpbin.org/post', data=b'binary data')
display_request(r)
"""
Explanation: POST request
End of explanation
"""
import json
r = requests.post('http://httpbin.org/post', json={"a": "b"})
display_request(r)
"""
Explanation: JSON POST request
JSON is currently rendered using Renderjson.
End of explanation
"""
r = requests.get('http://httpbin.org/status/500')
display_request(r)
"""
Explanation: 500 error response
End of explanation
"""
repo_name: matthijsvk/multimodalSR | path: code/Experiments/Tutorials/EbenOlsen_TheanoLasagne/1 - Theano Basics/.ipynb_checkpoints/Exercises-checkpoint.ipynb | license: mit
# Uncomment and run this cell for one solution
# %load ./spoilers/logistic.py
"""
Explanation: Exercises
1. Logistic function
Create an expression for the logistic function $s(x) = \frac{1}{1+exp(-x)}$. Plot the function and its derivative, and verify that $\frac{ds}{dx} = s(x)(1-s(x))$.
End of explanation
"""
# Uncomment and run this cell for one solution
#%load spoilers/fib.py
"""
Explanation: 2. Fibonacci sequence
Calculate the 3rd to 10th terms of the sequence, defined by the recurrence relation $F_n = F_{n-2} + F_{n-1}$, with $F_1=1$ and $F_2=1$.
End of explanation
"""
board = theano.shared(np.zeros((100, 100), dtype='uint8'))
initial = np.random.binomial(1, 0.1, size=(100, 100)).astype('uint8')
board.set_value(initial)
# Create a function f that updates board with new values and return the current state
# Uncomment the line below and run for a solution
#%load spoilers/life.py
# After creating your f function, run this cell to animate the output
%matplotlib notebook
import matplotlib.pyplot as plt
from IPython import display
import time
for i in range(50):
plt.gca().cla()
current = f()
plt.imshow(current, interpolation='nearest', cmap='gray')
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(0.1)
"""
Explanation: 3. Game of Life
Implement Conway's Game of Life with periodic boundary conditions (wrapping borders).
End of explanation
"""
repo_name: jegibbs/phys202-2015-work | path: assignments/assignment05/InteractEx04.ipynb | license: mit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
"""
Explanation: Interact Exercise 4
Imports
End of explanation
"""
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
"""
x = np.linspace(-1.0, 1.0, size)
# N(0, sigma**2) random noise; guard the sigma=0.0 case (no noise at all)
if sigma == 0.0:
    noise = np.zeros(size)
else:
    noise = np.random.normal(0.0, sigma, size)
y = m * x + b + noise
return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
"""
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
"""
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
# YOUR CODE HERE
raise NotImplementedError()
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
"""
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # use this cell to grade the plot_random_line interact
"""
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation
"""
repo_name: google/spectral-density | path: tf/mnist_spectral_density.ipynb | license: apache-2.0
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import experiment_utils
import lanczos_experiment
import tensorflow_datasets as tfds
sys.path.insert(0, os.path.abspath("./../jax"))
import density
COLAB_PATH = '/tmp/spectral-density'
TRAIN_PATH = os.path.join(COLAB_PATH, 'train')
LANCZOS_PATH = os.path.join(COLAB_PATH, 'lanczos')
os.makedirs(TRAIN_PATH)
os.makedirs(LANCZOS_PATH)
IMAGE_SIZE = 28
NUM_CLASSES = 10
BATCH_SIZE = 32
LEARNING_RATE = 0.02
NUM_TRAIN_STEPS = 10000
NUM_SUMMARIZE_STEPS = 1000
NUM_LANCZOS_STEPS = 90
def data_fn(num_epochs=None, shuffle=False, initializable=False):
"""Returns tf.data dataset for MNIST."""
dataset = tfds.load(name="mnist", split=tfds.Split.TRAIN)
dataset = dataset.repeat(num_epochs)
if shuffle:
dataset = dataset.shuffle(buffer_size=1024)
dataset = dataset.batch(BATCH_SIZE)
if initializable:
iterator = dataset.make_initializable_iterator()
init_op = iterator.initializer
else:
iterator = dataset.make_one_shot_iterator()
init_op = None
output = iterator.get_next()
images = (tf.to_float(output['image']) - 128) / 128.0
one_hot_labels = tf.one_hot(output['label'], NUM_CLASSES)
return images, one_hot_labels, init_op
def model_fn(features, one_hot_labels):
"""Builds MLP for MNIST and computes loss.
Args:
features: a [batch_size, height, width, channels] float32 tensor.
one_hot_labels: A [batch_size, NUM_CLASSES] int tensor.
Returns:
A scalar loss tensor, and a [batch_size, NUM_CLASSES] prediction tensor.
"""
net = tf.reshape(features, [BATCH_SIZE, IMAGE_SIZE * IMAGE_SIZE])
net = tf.layers.dense(net, 256, activation=tf.nn.relu)
net = tf.layers.dense(net, 256, activation=tf.nn.relu)
net = tf.layers.dense(net, NUM_CLASSES)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
logits=net, labels=one_hot_labels))
return loss, tf.nn.softmax(net)
"""
Explanation: MNIST Hessian Spectral Density Calculator
This notebook trains a simple MLP for MNIST, runs the Lanczos algorithm on its full-batch Hessian, and then plots the spectral density. This shows how to use the python TensorFlow LanczosExperiment class.
End of explanation
"""
tf.reset_default_graph()
images, one_hot_labels, _ = data_fn(num_epochs=None, shuffle=True, initializable=False)
loss, predictions = model_fn(images, one_hot_labels)
accuracy = tf.reduce_mean(tf.to_float(tf.equal(tf.math.argmax(predictions, axis=1),
tf.math.argmax(one_hot_labels, axis=1))))
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss)
saver = tf.train.Saver(max_to_keep=None)
# Simple training loop that saves the model checkpoint every 1000 steps.
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(NUM_TRAIN_STEPS):
if i % NUM_SUMMARIZE_STEPS == 0:
saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'), global_step=i)
outputs = sess.run([loss, train_op])
if i % NUM_SUMMARIZE_STEPS == 0:
print 'Step: ', i, 'Loss: ', outputs[0]
# Save a final checkpoint.
saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'),
global_step=NUM_TRAIN_STEPS)
# Check that the model fits the training data.
with tf.Session() as sess:
saver.restore(sess, os.path.join(TRAIN_PATH, 'model.ckpt-10000'))
minibatch_accuracy = 0.0
for i in range(100):
minibatch_accuracy += sess.run(accuracy) / 100
print 'Accuracy on training data:', minibatch_accuracy
"""
Explanation: Train a MNIST model.
End of explanation
"""
tf.reset_default_graph()
checkpoint_to_load = os.path.join(TRAIN_PATH, 'model.ckpt-10000')
# For Lanczos, the tf.data pipeline should have some very specific characteristics:
# 1. It should stop after a single epoch.
# 2. It should be deterministic (i.e., no data augmentation).
# 3. It should be initializable (we use it to restart the pipeline for each Lanczos iteration).
images, one_hot_labels, init = data_fn(num_epochs=1, shuffle=False, initializable=True)
loss, _ = model_fn(images, one_hot_labels)
# Setup for Lanczos mode.
restore_specs = [
experiment_utils.RestoreSpec(tf.trainable_variables(),
checkpoint_to_load)]
# This callback is used to restart the tf.data pipeline for each Lanczos
# iteration on each worker (the chief has a slightly different callback). You
# can check the logs to see the status of the computation: new
# phases of Lanczos iteration are indicated by "New phase i", and local steps
# per worker are logged with "Local step j".
def end_of_input(sess, train_op):
try:
sess.run(train_op)
except tf.errors.OutOfRangeError:
sess.run(init)
return True
return False
# This object stores the state for the phases of the Lanczos iteration.
experiment = lanczos_experiment.LanczosExperiment(
loss,
worker=0, # These two flags will change when the number of workers > 1.
num_workers=1,
save_path=LANCZOS_PATH,
end_of_input=end_of_input,
lanczos_steps=NUM_LANCZOS_STEPS,
num_draws=1,
output_address=LANCZOS_PATH)
# For distributed training, there are a few options:
# Multi-gpu single worker: Partition the tf.data per tower of the model, and pass the aggregate
# loss to the LanczosExperiment class.
# Multi-gpu multi worker: Set num_workers in LanczosExperiment to be equal to the number of workers.
# These have to be ordered.
train_op = experiment.get_train_op()
saver = experiment.get_saver(checkpoint_to_load, restore_specs)
init_fn = experiment.get_init_fn()
train_fn = experiment.get_train_fn()
local_init_op = tf.group(tf.local_variables_initializer(), init)
train_step_kwargs = {}
# The LanczosExperiment class is designed with slim in mind since it gives us
# very specific control of the main training loop.
tf.contrib.slim.learning.train(
train_op,
train_step_kwargs=train_step_kwargs,
train_step_fn=train_fn,
logdir=LANCZOS_PATH,
is_chief=True,
init_fn=init_fn,
local_init_op=local_init_op,
global_step=tf.zeros([], dtype=tf.int64), # Dummy global step.
saver=saver,
save_interval_secs=0, # The LanczosExperiment class controls saving.
summary_op=None, # DANGER DANGER: Do not change this.
summary_writer=None)
# This cell takes a little time to run: maybe 7 mins.
"""
Explanation: Run Lanczos on the MNIST model.
End of explanation
"""
# Outputs are saved as numpy saved files. The most interesting ones are
# 'tridiag_1' and 'lanczos_vec_1'.
with open(os.path.join(LANCZOS_PATH, 'tridiag_1'), 'rb') as f:
tridiagonal = np.load(f)
# For legacy reasons, we need to squeeze tridiagonal.
tridiagonal = np.squeeze(tridiagonal)
# Note that the output shape is [NUM_LANCZOS_STEPS, NUM_LANCZOS_STEPS].
print tridiagonal.shape
# The function tridiag_to_density computes the density (i.e., trace estimator
# the standard Gaussian c * exp(-(x - t)**2.0 / 2 sigma**2.0) where t is
# from a uniform grid. Passing a reasonable sigma**2.0 to this function is
# important -- somewhere between 1e-3 and 1e-5 seems to work best.
density, grids = density.tridiag_to_density([tridiagonal])
# We add a small epsilon to make the plot not ugly.
plt.semilogy(grids, density + 1.0e-7)
plt.xlabel('$\lambda$')
plt.ylabel('Density')
plt.title('MNIST hessian eigenvalue density at step 10000')
"""
Explanation: Visualize the Hessian eigenvalue density.
End of explanation
"""
repo_name: nick-youngblut/SIPSim | path: ipynb/bac_genome/priming_exp/.ipynb_checkpoints/CD-HIT-checkpoint.ipynb | license: mit
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'CD-HIT')
rnammerDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/rnammer/'
otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
"""
Explanation: Goal:
Running CD-HIT priming_exp OTUs & bac_genome1210 16S rRNA gene dataset
cutoff 97% seqID
ID OTUs with taxa from both datasets
target genomes
User variables
End of explanation
"""
import re
import glob
import itertools
import random
from pprint import pprint
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.makedirs(workDir)
"""
Explanation: Init
End of explanation
"""
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter to just bulk soil samples
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
filter(count > 0) %>%
select(OTUId) %>%
distinct() %>%
arrange(OTUId)
tbl.h %>% head
%%R -i workDir
outFile = paste(c(workDir, 'bulk_OTUs.txt'), collapse='/')
write.table(tbl.h, outFile, row.names=F, quote=F, sep='\t', col.names=F)
"""
Explanation: Writing out a list of all bulk soil OTUs
Only bulk soil OTUs used because they are the only ones with relative abundance info on the 'original' community
End of explanation
"""
# loading bulk OTU list
inFile = os.path.join(workDir, 'bulk_OTUs.txt')
bulk_OTUs = []
with open(inFile, 'rb') as iFH:
for line in iFH:
line = line.rstrip().split('\t')
if line[0] != 'OTUId':
bulk_OTUs.append(line[0])
bulk_OTUs = set(bulk_OTUs)
print 'Number of OTUs: {}'.format(len(bulk_OTUs))
# loading OTU reps
OTU_seq = {}
with open(otuRepFile, 'rb') as iFH:
taxon = None
for line in iFH:
line = line.rstrip()
if line.startswith('>'):
taxon = line.lstrip('>')
if taxon in bulk_OTUs:
OTU_seq[taxon] = ''
else:
taxon = None
else:
if taxon is not None:
OTU_seq[taxon] += line
print 'Number of OTU rep sequences: {}'.format(len(OTU_seq.keys()))
# writing out filtered OTUs
bulk_OTU_rep = os.path.join(workDir, 'otusn.pick.bulk.fasta')
with open(bulk_OTU_rep, 'wb') as oFH:
for taxon,seq in OTU_seq.items():
oFH.write('>{}\n{}\n'.format(taxon, seq))
print 'File written: {}'.format(bulk_OTU_rep)
"""
Explanation: Filtering OTU rep sequences to just OTUs in bulk soil samples
Only bulk soil OTUs used because they are the only ones with relative abundance info on the 'original' community
End of explanation
"""
# priming_exp otu rep sequences
!printf "Number of sequences: "
!cd $workDir; \
grep -c ">" $bulk_OTU_rep
# bac_genome sequences
!cd $workDir; \
ln -s -f $rnammerDir/bac_genome1210_16S.fna .
!printf "Number of sequences: "
!cd $workDir; \
grep -c ">" $rnammerDir/bac_genome1210_16S.fna
# concatenating sequences
!cd $workDir; \
cat $bulk_OTU_rep bac_genome1210_16S.fna > ssu_all.fna
!printf "Number of sequences: "
!cd $workDir; \
grep -c ">" ssu_all.fna
"""
Explanation: Notes
Just writing OTU rep sequences found in any bulk soil samples
Symlinking sequences
End of explanation
"""
!cd $workDir; \
cd-hit-est -i ssu_all.fna -o ssu_all_cdhit -c 0.97 -d 0
"""
Explanation: CD-HIT run
End of explanation
"""
inFile = os.path.join(workDir, 'ssu_all_cdhit.clstr')
tbl = {}
with open(inFile, 'rb') as inFH:
clst_id = None
for line in inFH:
line = line.rstrip()
if line.startswith('>'):
clst_id = line.lstrip('>Cluster ')
tbl[clst_id] = []
else:
tbl[clst_id].append(re.split('\t|, ', line))
print "Number of clusters loaded: {}".format(len(tbl.keys()))
# clusters that have '>OTU' and '>rRNA' (OTU and genome)
def shared_clust(x):
otus = any([y[2].startswith('>OTU') for y in x])
genomes = any([y[2].startswith('>rRNA') for y in x])
return otus == True and genomes == True
tbl_f = {x:y for x,y in tbl.items() if shared_clust(y)}
print "Number of clusters with OTUs and genomes: {}".format(len(tbl_f.keys()))
"""
Explanation: Finding clusters with sequences from both datasets
End of explanation
"""
# loading tax file
tax = {}
with open(otuTaxFile, 'rb') as inFH:
for line in inFH:
line = line.rstrip()
if not line.startswith('OTU'):
continue
otu,cls,boot,_ = line.split('\t')
cls = [x.lstrip(' __') for x in cls.split(';')]
for i in range(8):
try:
len(cls[i])
except IndexError:
cls.append('Unclassified')
tax[otu] = cls
def printDict(d, n=10):
cnt = 0
for x,y in d.items():
pprint(x)
print(y)
cnt += 1
if cnt >= n:
break
printDict(tax, n=3)
# adding taxonomic classifications to OTUs in target clusters
for clstr,x in tbl_f.items():
for y in x:
ID = y[2].lstrip('>')
ID = re.sub('\.\.\..+','', ID)
#print 'ID: "{}"'.format(ID)
try:
y.append(tax[ID])
except KeyError:
y.append(None)
# gut check: manual check of OTU classifications & genome names
for clstr,x in tbl_f.items():
print 'Cluster: {}'.format(clstr)
for y in x:
ID = y[2].lstrip('>')
if ID.startswith('OTU'):
# classifications
try:
print ':'.join(y[3])[:100]
except IndexError:
print ':'.join(y[3])
elif ID.startswith('rRNA'):
# genome names
try:
print ID[:100]
except IndexError:
print ID
"""
Explanation: Getting taxonomic classification of OTUs in target clusters
End of explanation
"""
def write_targets_OLD(tbl, fh):
fh.write('\t'.join(['cluster', 'ssu_ID', 'target_genome', 'OTU', 'OTU_taxonomy']) + '\n')
for clstr,x in tbl.items():
# parsing cluster
targets = []
otus = []
for y in x:
ID = y[2].lstrip('>')
ID = re.sub('\.\.\..+','', ID)
if ID.startswith('OTU'):
otu = [ID, ':'.join(y[3])]
otus.append(otu)
elif ID.startswith('rRNA'):
targets.append(ID)
# writing out list
for target in targets:
for otu in otus:
genome = target.lstrip('rRNA_')
genome = re.sub('_\d+-\d+_DIR.+', '', genome)
fh.write('\t'.join([clstr, target, genome] + otu) + '\n')
def write_targets(tbl, fh):
fh.write('\t'.join(['cluster', 'ssu_ID', 'target_genome', 'OTU', 'OTU_taxonomy']) + '\n')
for clstr,rows in tbl.items():
# parsing cluster; getting all OTUs and genome_IDs
targets = []
otus = []
for row in rows:
ID = row[2].lstrip('>')
ID = re.sub('\.\.\..+','', ID)
if ID.startswith('OTU'):
otu = [ID, ':'.join(row[3])]
otus.append(otu)
elif ID.startswith('rRNA'):
targets.append(ID)
# writing out list
## one 1 randomly selected genome is associated with OTU
random.shuffle(targets)
for otu, target in zip(otus, itertools.cycle(targets)):
genome = target.lstrip('rRNA_')
genome = re.sub('_\d+-\d+_DIR.+', '', genome)
fh.write('\t'.join([clstr, target, genome] + otu) + '\n')
#for otu in otus:
#for target in targets:
# genome = target.lstrip('rRNA_')
# genome = re.sub('_\d+-\d+_DIR.+', '', genome)
# fh.write('\t'.join([clstr, target, genome] + otu) + '\n')
outFile = os.path.join(workDir, 'target_taxa.txt')
with open(outFile, 'wb') as oFH:
write_targets(tbl_f, oFH)
"""
Explanation: Notes:
At least most of the taxonomic classifications make sense for the genomes in each cluster
Writing out a list of target genomes and their corresponding OTUs
If an OTU has multiple associations with a genome, select 1 at random
i.e., a 1-to-1 association
End of explanation
"""
!printf "Number of clusters: "
!cd $workDir; \
tail -n +2 target_taxa.txt | cut -f 1 | sort -u | wc -l
!printf "Number of target genomes: "
!cd $workDir; \
tail -n +2 target_taxa.txt | cut -f 3 | sort -u | wc -l
"""
Explanation: Gut check on written file
End of explanation
"""
# making table of genome file names & genome sequence IDs
p = os.path.join(genomeDir, '*.fasta')
genomeFiles = glob.glob(p)
assert len(genomeFiles) == 1210, 'There should be 1210 genome files'
# Getting seq names in all genome files
fileSeq = {}
cnt = 0
for f in genomeFiles:
fileSeq[f] = []
with open(f, 'rb') as iFH:
for line in iFH:
if line.startswith('>'):
line = line.lstrip('>').rstrip()
fileSeq[f].append(line)
cnt += 1
if cnt % 100 == 0:
sys.stderr.write('Number of files processed: {}\n'.format(cnt))
printDict(fileSeq, n=5)
outFile = os.path.join(workDir, 'genomeFile_seqID.txt')
with open(outFile, 'wb') as oFH:
for f,seqIDs in fileSeq.items():
for seqID in seqIDs:
line = '\t'.join([seqID, f])
oFH.write(line + '\n')
"""
Explanation: Get genome files with associated genome sequence names
End of explanation
"""
%%bash -s "$workDir"
cd $1
grep -f <(tail -n +2 target_taxa.txt| cut -f 3) genomeFile_seqID.txt > genomeFile_seqID_filt.txt
printf "Number of genomes in filtered list: "
wc -l genomeFile_seqID_filt.txt
"""
Explanation: Filtering genome file list by target genomes
End of explanation
"""
x,y = os.path.split(workDir)
newGenomeDir = os.path.join(x, 'genomes')
if os.path.isdir(newGenomeDir):
p = os.path.join(newGenomeDir, '*')
files = glob.glob(p)
for f in files:
os.unlink(f)
os.rmdir(newGenomeDir)
os.makedirs(newGenomeDir)
inFile = os.path.join(workDir, 'genomeFile_seqID_filt.txt')
exts = ['.fasta','.fasta.2bit','.fasta.flat','.fasta.gdx','.fasta.sqlite3.db','.fasta.uni']
with open(inFile, 'rb') as iFH:
for line in iFH:
seqID,fastaFile = line.rstrip().split('\t')
assert os.path.isfile(fastaFile), '"{}" not found'.format(fastaFile)
base,ext = os.path.splitext(fastaFile)
path,fileBase = os.path.split(base)
newBase = os.path.join(newGenomeDir, fileBase)
for x in exts:
if not os.path.islink(newBase + x):
os.symlink(base + '.fasta', newBase + x)
!printf "Number of fasta files in genomes dir: "
!cd $newGenomeDir; \
ls *fasta | wc -l
"""
Explanation: Symlinking genome files (& associated files)
End of explanation
"""
taxonMapFile = os.path.join(workDir, 'target_taxa.txt')
genomeFilterFile = os.path.join(workDir, 'genomeFile_seqID_filt.txt')
abundFile = os.path.join(workDir, '../exp_info', 'bulk_OTU_abund_stats.txt')
%%R -i taxonMapFile -i abundFile -i genomeFilterFile
taxonMap = read.delim(taxonMapFile, sep='\t') %>%
select(target_genome, OTU)
taxonMap %>% nrow %>% print
taxonMap %>% head(n=3) %>% print
message('----------------')
genomeFilter = read.delim(genomeFilterFile, sep='\t', header=F)
genomeFilter %>% nrow %>% print
genomeFilter %>% head(n=3) %>% print
message('----------------')
abund = read.delim(abundFile, sep='\t')
abund %>% nrow %>% print
abund %>% head(n=3) %>% print
%%R
tbl.j = inner_join(taxonMap, genomeFilter, c('target_genome' = 'V1'))
tbl.j %>% nrow %>% print
tbl.j %>% head
%%R
tbl.j2 = inner_join(tbl.j, abund, c('OTU' = 'OTUId'))
tbl.j2 %>% nrow %>% print
tbl.j2 %>% head
%%R -i workDir
outFile = paste(c(workDir, 'taxa_bulk-comm_abunds.txt'), collapse='/')
write.table(tbl.j2, outFile, sep='\t', quote=F, row.names=F)
"""
Explanation: Notes:
Number of genome files is less than the number of taxa in genomeFile_seqID_filt.txt because:
a single taxon can correspond to multiple chromosome sequences in the same genome file
All genome files are set
OLD
Making a table of genome (taxon) relative abundances
Will be used for community simulation
End of explanation
"""
!cd $workDir; \
tail -n +2 taxa_bulk-comm_abunds.txt | cut -f 2,3 > ../genomes/genome_index_bulkSoil_OTU.txt
"""
Explanation: Notes
USING this file for community simulation
associating OTUs with genome files & abundances
Creating a genome-file index
End of explanation
"""
repo_name: geoscixyz/computation | path: docs/case-studies/PF/Kevitsa_Grav_Inv.ipynb | license: mit
# As usual, we need to load some libraries
from SimPEG import Mesh, Utils, Maps, PF
from SimPEG import mkvc, Regularization, DataMisfit, Optimization, InvProblem, Directives,Inversion
from SimPEG.Utils import mkvc
from SimPEG.Utils.io_utils import download
import numpy as np
import scipy as sp
import os
%pylab inline
# Download data from the cloud
url = "https://storage.googleapis.com/simpeg/kevitsa_synthetic/"
cloudfiles = [
'Mesh_global_100m_padded.msh','GravSim.dat',
'Kevitsa.topo', 'SimPEG_GRAV.inp'
]
keys = ['mesh', 'data', 'topo', 'input']
# Download to ./KevitsaGrav
files = download([url+f for f in cloudfiles], folder='./KevitsaGrav', overwrite=True)
files = dict(zip(keys, files)) # allows us to name the files
# Read in the input file which included all parameters at once (mesh, topo, model, survey, inv param, etc.)
inputFile = files['input'] # input file was the last downloaded
driver = PF.GravityDriver.GravityDriver_Inv()
driver.basePath = './KevitsaGrav'
# All the parameters in the input files can be access via the driver object
# For example, to get the survey:
obs = driver.readGravityObservations(files['data'])
mesh = Mesh.TensorMesh.readUBC(files['mesh'])
"""
Explanation: Kevitsa Gravity Inversion
Objective:
In this tutorial we will invert simulated gravity data over the Kevitsa Ni-Cu-PGE deposit.
The two main objectives are
Walk through the gravity inversion workflow with SimPEG
Determine the depth resolution of the gravity survey
Ultimately, we want to know if the recovered compact density model can delineate the shape and vertical extent of the Kevitsa intrusion. We will compare our density model to the published geological horizons picked from 2D/3D reflection seismic data.
References
Koivisto, E., Malehmir, A., Hellqvist, N., Voipio, T. and Wijns, C. (2015), Building a 3D model of lithological contacts and near-mine structures in the Kevitsa mining and exploration site, Northern Finland: constraints from 2D and 3D reflection seismic data. Geophysical Prospecting, 63: 754–773. doi:10.1111/1365-2478.12252
Here goes nothing...
End of explanation
"""
# The gridded data holds 20k+ observation points, too large for a quick inversion
# Let's grab a random subset
nD = 500
indx = randint(0,high=obs.dobs.shape[0],size=nD)
# Create a new downsampled survey
locXYZ = obs.srcField.rxList[0].locs[indx,:]
rxLoc = PF.BaseGrav.RxObs(locXYZ)
srcField = PF.BaseGrav.SrcField([rxLoc])
survey = PF.BaseGrav.LinearSurvey(srcField)
survey.dobs = obs.dobs[indx]
survey.std = obs.std[indx]
ph = PF.Gravity.plot_obs_2D(survey.srcField.rxList[0].locs, survey.dobs,'Observed Data')
# Create a mesh; we will start coarse. Feel free to refine
# the mesh, but make sure you have enough memory and coffee breaks...
dx = 200.
npad = 5
hxind = [(dx, npad, -1.3), (dx, 65), (dx, npad, 1.3)]
hyind = [(dx, npad, -1.3), (dx, 45), (dx, npad, 1.3)]
hzind = [(dx, npad, -1.3), (150, 15), (10, 10, -1.3), (10,5)]
# Create the mesh and move the location to the center of the data
mesh = Mesh.TensorMesh([hxind, hyind, hzind], 'CC0')
mesh._x0 += [np.mean(locXYZ[:,0]), np.mean(locXYZ[:,1]), np.max(locXYZ[:,2])-np.sum(mesh.hz)]
ax = mesh.plotGrid()
# We will get the topography from the input file
topo = np.genfromtxt(files['topo'], skip_header=1)
# Find the active cells
actv = Utils.surface2ind_topo(mesh, topo, 'N')
actv = np.asarray(
[inds for inds, elem in enumerate(actv, 1) if elem], dtype=int
) - 1
nC = len(actv)
print("Number of data points: " + str(nD))
print("Number of model cells: " + str(nC))
"""
Explanation: Setup
The relation between density and the gravity field is well known, thanks to the classic work of Newton in 1686. Since we generally only measure the vertical component of the field, this relationship can be written as:
$$g_z(\vec r) = \gamma \int_{V} \rho(\vec r_0) \left(\frac{z - z_0}{{|\vec r - \vec r_0|}^3}\right) \; dV $$
where $\rho$ is the anomalous density and $\gamma$ is the Newton's gravitational constant.
This integral can be evaluated analytically for simple prisms, giving rise to a linear system of equations relating a discrete Earth to the observed data:
$$ \mathbf{d}_z = \mathbf{F} \; \boldsymbol{\rho} $$
End of explanation
"""
# Create active map to go from reduce set to full
actvMap = Maps.InjectActiveCells(mesh, actv, -100)
# Create reduced identity map
idenMap = Maps.IdentityMap(nP=nC)
mstart = np.ones(nC)*1e-4
# Create gravity problem
prob = PF.Gravity.GravityIntegral(mesh, rhoMap=idenMap, actInd=actv)
survey.pair(prob)
# Make depth weighting,
# this will also require the calculation of the forward operator ... time for coffee
wr = np.sum(prob.G**2., axis=0)**0.5
wr = (wr/np.max(wr))
"""
Explanation: Forward system:
Now that we have all our spatial components, we can create our linear system relating the data and anomalous density:
$$ d^{obs} = \mathbf{F\; \rho}$$
where $\mathbf{F} \in \mathbb{R}^{nd \times nc}$ is our $forward$ operator.
End of explanation
"""
# % Create inversion objects
reg = Regularization.Sparse(mesh, indActive=actv, mapping=idenMap)
reg.cell_weights = wr
reg.norms = [0,2,2,2]
opt = Optimization.ProjectedGNCG(maxIter=100, lower=-.5,upper=0.5, maxIterLS = 20, maxIterCG= 10, tolCG = 1e-3)
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.W = 1./survey.std
# This is where the misfit function and regularization are put together
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)
# Here are few directives to make the inversion work and apply sparsity.
# After the l2, beta is re-adjusted on the fly to stay near the target misfit
betaest = Directives.BetaEstimate_ByEig()
IRLS = Directives.Update_IRLS(f_min_change=1e-4, minGNiter=3)
update_Jacobi = Directives.Update_lin_PreCond()
inv = Inversion.BaseInversion(invProb, directiveList=[betaest, IRLS,
update_Jacobi])
# Run the inversion
mrec = inv.run(mstart)
"""
Explanation: Inverse problem
We have generated synthetic data, we now want to see if we can solve the inverse problem and recover our synthetic density model. Using the usual formulation, we seek a model that can reproduce the data, let’s say a least-squares measure of data fit of the form:
\begin{equation}
\phi_d = \|\mathbf{W}_d \left( \mathbb{F}[\mathbf{m}] - \mathbf{d}^{obs} \right)\|_2^2
\end{equation}
The inverse problem is hard because we don’t have great data coverage, and the Earth is big, and there is usually noise in the data. So we need to add something to regularize it.
The simplest way to do it is to penalize solutions that won’t make sense geologically. For example we can assume that the model is smooth and that anomalous density should remain small.
The usual smooth inversion function use an l2-norm measure:
\begin{equation}
\phi_d = \|\mathbf{W}_d \left( \mathbb{F}[\mathbf{m}] - \mathbf{d}^{obs} \right)\|_2^2 \\
\phi_m = \beta \Big [ {\| \mathbf{W}_s \;( \mathbf{m - m^{ref}})\|}^2_2 + \sum_{i = x,y,z} {\| \mathbf{W}_i \; \mathbf{G}_i \; \mathbf{m}\|}^2_2 \Big ]\;,
\end{equation}
The full objective function to be minimized can be written as:
\begin{equation}
\phi(m) = \phi_d + \beta \phi_m\;,
\end{equation}
which will yield our usual small and smooth models.
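For the pure $l_2$ case, minimising $\phi_d + \beta \phi_m$ reduces to solving the normal equations $(\mathbf{F}^T\mathbf{F} + \beta \mathbf{W}^T\mathbf{W})\,\mathbf{m} = \mathbf{F}^T\mathbf{d}$. A toy sketch (random operator and data, smallest-model term only — not what SimPEG does internally):

```python
import numpy as np

rng = np.random.default_rng(2)
F = rng.normal(size=(5, 8))            # stand-in forward operator
d = rng.normal(size=5)                 # stand-in observed data
beta = 1e-1
W = np.eye(8)                          # smallest-model regularisation only

# Normal equations of phi(m) = ||F m - d||^2 + beta ||W m||^2
m = np.linalg.solve(F.T @ F + beta * W.T @ W, F.T @ d)
```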
We propose a fancier regularization function that allows us to recover sparse and blocky solutions.
Starting with the well known Ekblom norm:
\begin{equation}
\phi_m = \sum_{i=1}^{nc} {(x_i^2 + \epsilon^2)}^{p/2} \;,
\end{equation}
where $x_i$ denotes some function of the model parameter, and $\epsilon$ is a small value to avoid singularity as $m\rightarrow0$.
For $p=2$, we get the usual least-squares measure and we recover the regularization presented above. For $p \leq 1$, the function becomes non-linear, which requires some tweaking.
We can linearize the function by updating the penalty function iteratively, commonly known as an Iterative Re-weighted Least-Squares (IRLS) method:
\begin{equation}
\phi_m^{(k)} = \frac{1}{2}\sum_{i=1}^{nc} r_i \; x_i^2
\end{equation}
where we added the superscript $\square^{(k)}$ to denote the IRLS iterations. The weights $r(x)$ are computed from model values obtained at a previous iteration such that:
\begin{equation}
{r}_i ={\Big( {({x_i}^{(k-1)})}^{2} + \epsilon^2 \Big)}^{p/2 - 1} \;,
\end{equation}
where ${r}(x) \in \mathbb{R}^{nc}$.
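The weight expression above is easy to sketch numerically (the model values below are made up for illustration):

```python
import numpy as np

def irls_weights(x, p, eps):
    """r_i = (x_i^2 + eps^2)^(p/2 - 1), from the expression above."""
    return (x**2 + eps**2)**(p / 2.0 - 1.0)

x = np.array([0.0, 0.01, 1.0])
r = irls_weights(x, p=0.0, eps=1e-2)   # p=0 promotes sparsity
# Small model values receive large weights (heavy penalty),
# large model values receive small weights (little penalty).
```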
In matrix form, our objective function simply becomes:
\begin{equation}
\phi(m) = \|\mathbf{W}_d \left( \mathbb{F}[\mathbf{m}] - \mathbf{d}^{obs} \right)\|_2^2 + \beta \Big [ {\| \mathbf{W}_s \;\mathbf{R}_s\;( \mathbf{m - m^{ref}})\|}^2_2 + \sum_{i = x,y,z} {\| \mathbf{W}_i\; \mathbf{R}_i \; \mathbf{G}_i \; \mathbf{m}\|}^2_2 \Big ]\;,
\end{equation}
where the IRLS weights $\mathbf{R}_s$ and $\mathbf{R}_i$ are diagonal matrices defined as:
\begin{equation}
\begin{split}
{R}_{s_{jj}} &= \sqrt{\eta_p}{\Big[ {({m_j}^{(k-1)})}^{2} + \epsilon_p^2 \Big]}^{(p/2 - 1)/2} \\
{R}_{i_{jj}} &= \sqrt{\eta_q}{\Big[ {\left ({{(G_i\;m^{(k-1)})}_j }\right)}^{2} + \epsilon_q^2 \Big]}^{(q/2 - 1)/2} \\
\eta_p &= {\epsilon_p}^{(1-p/2)} \\
\eta_q &= {\epsilon_q}^{(1-q/2)} \;,
\end{split}
\end{equation}
we added two scaling parameters $\eta_p$ and $\eta_q$ for reasons that we won't discuss here, but which turn out to be important for stable solves.
In order to initialize the IRLS and get an estimate for the stabilizing parameters $\epsilon_p$ and $\epsilon_q$, we first invert with the smooth $l_2$-norm.
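Putting the pieces together, a bare-bones IRLS loop on a toy sparse problem might look like this (a sketch under simplifying assumptions — fixed $\beta$, smallest-model term only, made-up sizes — not the SimPEG directive):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.normal(size=(20, 50))          # stand-in forward operator
m_true = np.zeros(50)
m_true[[5, 30]] = 1.0                  # two-cell "blocky" model
d = F @ m_true                         # noise-free toy data

p, eps, beta = 0.0, 1e-3, 1e-2
# Initialize with the smooth l2 solution
m = np.linalg.solve(F.T @ F + beta * np.eye(50), F.T @ d)
for _ in range(10):
    r = (m**2 + eps**2)**(p / 2 - 1)      # IRLS weights from previous iterate
    R = np.diag(eps**(1 - p / 2) * r)     # includes the eta_p scaling
    m = np.linalg.solve(F.T @ F + beta * R, F.T @ d)
```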
The whole IRLS process is implemented with a directive added to the inversion workflow (see below).
End of explanation
"""
# Here is a quick script to slice through the final model
import ipywidgets as widgets
def ModSlicer(mesh, model):
def plotIt(m, normal, panel, vmin, vmax):
ypanel = int(mesh.vnC[1]/2)
plt.figure(figsize=(10, 8))
ax = plt.subplot(211)
ph = mesh.plotSlice(model[m], ax=ax, normal=normal, ind=int(panel),
grid=True,
clim=(vmin,vmax), pcolorOpts={'cmap': 'jet', })
# Set default limits
if normal == 'X':
Xlim = [mesh.vectorNy.min(), mesh.vectorNy.max()]
Ylim = [mesh.vectorNz.min(), mesh.vectorNz.max()]
elif normal == 'Y':
Xlim = [mesh.vectorNx.min(), mesh.vectorNx.max()]
Ylim = [mesh.vectorNz.min(), mesh.vectorNz.max()]
else:
Xlim = [mesh.vectorNx.min(), mesh.vectorNx.max()]
Ylim = [mesh.vectorNy.min(), mesh.vectorNy.max()]
ax.set_xlim(Xlim)
ax.set_ylim(Ylim)
ax.set_aspect('equal')
plt.colorbar(ph[0])
plt.title('Plan lp-model.')
plt.gca().set_aspect('equal')
plt.ylabel('y')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
out = widgets.interactive(plotIt,
m = widgets.ToggleButtons(
options=['l2', 'lp'],
description='Model:'),
normal = widgets.ToggleButtons(
options=['X', 'Y', 'Z'],
description='Normal:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description'),
panel = widgets.FloatSlider(min=0, max=mesh.vnC.max(), step=1,value=1, continuous_update=False),
vmin = widgets.FloatSlider(min=model['l2'][~np.isnan(model['l2'])].min(), max=model['l2'][~np.isnan(model['l2'])].max(), step=0.001,value=model['l2'][~np.isnan(model['l2'])].min(), continuous_update=False),
vmax = widgets.FloatSlider(min=model['l2'][~np.isnan(model['l2'])].min(), max=model['l2'][~np.isnan(model['l2'])].max(), step=0.001,value=model['l2'][~np.isnan(model['l2'])].max(), continuous_update=False),
)
return out
# Plot the result
m_lp = actvMap * mrec
m_lp[m_lp == -100] = np.nan
m_l2 = actvMap*IRLS.l2model
m_l2[m_l2 == -100] = np.nan
model = {'l2': m_l2, 'lp': m_lp}
# Execute the plotting function
ModSlicer(mesh, model)
"""
Explanation: View the inversion results
End of explanation
"""
|
UWSEDS/LectureNotes | PreFall2018/Unit-Tests/unit-tests-completed.ipynb | bsd-2-clause | import numpy as np
# Code Under Test
def entropy(ps):
if not np.isclose(np.sum(ps), 1.0):
        raise ValueError("Probabilities do not sum to 1.")
items = ps * np.log(ps)
return -np.sum(items)
# Smoke test
probs = [
[0.1, 0.8, 0.1],
[0.1, 0.9],
[0.5, 0.5],
[1.0]
]
for prob in probs:
try:
entropy(prob)
except:
print("%s failed." % str(prob))
print ("Testing completed.")
"""
Explanation: Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamic languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that has a function that is undefined. This would not be detected until that line of code is executed.
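The claim is easy to demonstrate: defining a function that references an undefined name raises nothing; only calling it does:

```python
def buggy():
    return no_such_function()   # not checked at definition time

# Defining `buggy` raised nothing; calling it raises NameError:
try:
    buggy()
except NameError as err:
    print("Caught:", err)
```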
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
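As a small taste of what mocking looks like (this sketch uses the standard-library `unittest.mock`; the `report`/`fetch` names are made up for illustration):

```python
from unittest import mock

def report(fetch):
    """Code under test: formats whatever `fetch` returns."""
    return "value=%d" % fetch()

# Replace the lower-level function with a mock so its bugs
# cannot cause failures in this test of `report`.
fake_fetch = mock.Mock(return_value=42)
assert report(fake_fetch) == "value=42"
fake_fetch.assert_called_once()
```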
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
End of explanation
"""
# One-shot test. Need to know the correct answer.
entries = [
[0, [1]],
]
for entry in entries:
ans = entry[0]
prob = entry[1]
if not np.isclose(entropy(prob), ans):
print("Test failed!")
print ("Test completed!")
"""
Explanation: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
End of explanation
"""
# Edge test. This is something that should cause an exception.
entropy([0.5])
"""
Explanation: Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)
One edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.
End of explanation
"""
# Pattern test
def test_equal_probabilities(n):
prob = 1.0/n
ps = np.repeat(prob , n)
if not np.isclose(entropy(ps), -np.log(prob)):
import pdb; pdb.set_trace()
print ("Bad result.")
else:
print("Worked!")
# Run a test
test_equal_probabilities(100)
"""
Explanation: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
End of explanation
"""
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 1)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
self.assertEqual(1, 2)
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# Function the handles test loading
#def test_setup(argument ?):
"""
Explanation: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure:
- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.
- The infrastructure provides a uniform way to report test results, and to handle test failures.
- A test infrastructure can tell you about coverage so you know what tests to add.
We'll be using the unittest framework. This is a separate Python package. Using this infrastructure, requires the following:
1. import the unittest module
1. define a class that inherits from unittest.TestCase
1. write methods that run the code to be tested and check the outcomes.
The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test".
Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.
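Note that for floating-point results such as entropy values, `self.assertEqual` is usually too strict; `unittest` provides `assertAlmostEqual` for approximate comparison:

```python
import math
import unittest

class FloatTests(unittest.TestCase):
    def test_entropy_of_fair_coin(self):
        # assertEqual would be too strict for floats; compare to 7 places
        self.assertAlmostEqual(-math.log(0.5), 0.6931471805599453, places=7)

suite = unittest.TestLoader().loadTestsFromTestCase(FloatTests)
result = unittest.TextTestRunner().run(suite)
```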
End of explanation
"""
# Implementating a pattern test. Use functions in the test.
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_equal_probability(self):
def test(count):
"""
Invokes the entropy function for a number of values equal to count
that have the same probability.
:param int count:
"""
raise RuntimeError ("Not implemented.")
#
test(2)
test(20)
test(200)
#test_setup(TestEntropy)
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
"""Write the full set of tests."""
"""
Explanation: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedagogical reasons. It is NOT something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
End of explanation
"""
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
try:
entropy([0.1, 0.5])
self.assertTrue(False)
except ValueError:
self.assertTrue(True)
#test_setup(TestEntropy)
"""
Explanation: Testing For Exceptions
Edge test cases often involves handling exceptions. One approach is to code this directly.
End of explanation
"""
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
with self.assertRaises(ValueError):
entropy([0.1, 0.5])
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
"""
Explanation: unittest provides help with testing exceptions.
End of explanation
"""
import unittest
# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):
def test_oneshot(self):
self.assertEqual(geomean([1,1]), 1)
def test_oneshot2(self):
self.assertEqual(geomean([3, 3, 3]), 3)
#test_setup(TestGeomean)
#def geomean(argument?):
# return ?
"""
Explanation: Test Files
Although I presented the elements of unittest in a notebook. your tests should be in a file. If the name of module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
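For instance, a minimal test file for the entropy module might look like this (the stub below stands in for importing the real code under test; the names are illustrative):

```python
# Sketch of the contents of a hypothetical test_entropy.py
import unittest
# from foo import entropy          # in a real test file, import the code under test

def entropy_stub(ps):              # stand-in so this sketch is self-contained
    return 0.0

class TestEntropyStub(unittest.TestCase):
    def test_certain_outcome(self):
        # A single-point distribution has zero entropy
        self.assertEqual(entropy_stub([1.0]), 0.0)

if __name__ == '__main__':
    unittest.main()
```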
Discussion
Question: What tests would you write for a plotting function?
Test Driven Development
Start by writing the tests. Then write the code.
We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.
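One possible implementation that satisfies such tests (a sketch — the point of the exercise is to write the tests first):

```python
import math

def geomean(xs):
    """Geometric mean: the n-th root of the product of n numbers."""
    product = 1.0
    for x in xs:
        product *= x
    return product ** (1.0 / len(xs))

assert geomean([1, 1]) == 1
assert math.isclose(geomean([3, 3, 3]), 3)
```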
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.23/_downloads/70e603ce6ceb1fd2cb094ccee99a1920/resolution_metrics_eegmeg.ipynb | bsd-3-clause | # Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm.resolution_matrix import make_inverse_resolution_matrix
from mne.minimum_norm.spatial_resolution import resolution_metrics
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects/'
fname_fwd_emeg = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif'
# read forward solution with EEG and MEG
forward_emeg = mne.read_forward_solution(fname_fwd_emeg)
# forward operator with fixed source orientations
forward_emeg = mne.convert_forward_solution(forward_emeg, surf_ori=True,
force_fixed=True)
# create a forward solution with MEG only
forward_meg = mne.pick_types_forward(forward_emeg, meg=True, eeg=False)
# noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# evoked data for info
evoked = mne.read_evokeds(fname_evo, 0)
# make inverse operators from the forward solutions for EEG+MEG and MEG-only
inv_emeg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_emeg, noise_cov=noise_cov, loose=0.,
depth=None)
inv_meg = mne.minimum_norm.make_inverse_operator(
info=evoked.info, forward=forward_meg, noise_cov=noise_cov, loose=0.,
depth=None)
# regularisation parameter
snr = 3.0
lambda2 = 1.0 / snr ** 2
"""
Explanation: Compute spatial resolution metrics to compare MEG with EEG+MEG
Compute peak localisation error and spatial deviation for the point-spread
functions of dSPM and MNE. Plot their distributions and difference of
distributions. This example mimics some results from :footcite:`HaukEtAl2019`,
namely Figure 3 (peak localisation error for PSFs, L2-MNE vs dSPM) and Figure 4
(spatial deviation for PSFs, L2-MNE vs dSPM). It shows that combining MEG with
EEG reduces the point-spread function and increases the spatial resolution of
source imaging, especially for deeper sources.
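Conceptually, the peak localisation error of a point-spread function is the distance between the true source position and the position where the PSF peaks. A toy NumPy sketch (random positions and resolution matrix, not MNE's implementation):

```python
import numpy as np

# Hypothetical source positions (n_src x 3, in metres) and resolution matrix
rng = np.random.default_rng(0)
src_pos = rng.uniform(-0.05, 0.05, size=(10, 3))
R = np.eye(10) + 0.1 * rng.normal(size=(10, 10))

# Each column of R is the point-spread function of one source
peak = np.argmax(np.abs(R), axis=0)
ple = np.linalg.norm(src_pos[peak] - src_pos, axis=1)  # metres, one per source
```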
End of explanation
"""
rm_emeg = make_inverse_resolution_matrix(forward_emeg, inv_emeg,
method='MNE', lambda2=lambda2)
ple_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='peak_err')
sd_psf_emeg = resolution_metrics(rm_emeg, inv_emeg['src'],
function='psf', metric='sd_ext')
del rm_emeg
"""
Explanation: EEG+MEG
Compute resolution matrices, localization error, and spatial deviations
for MNE:
End of explanation
"""
rm_meg = make_inverse_resolution_matrix(forward_meg, inv_meg,
method='MNE', lambda2=lambda2)
ple_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='peak_err')
sd_psf_meg = resolution_metrics(rm_meg, inv_meg['src'],
function='psf', metric='sd_ext')
del rm_meg
"""
Explanation: MEG
Do the same for MEG:
End of explanation
"""
brain_ple_emeg = ple_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=1,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_emeg.add_text(0.1, 0.9, 'PLE PSF EMEG', 'title', font_size=16)
"""
Explanation: Visualization
Look at peak localisation error (PLE) across the whole cortex for PSF:
End of explanation
"""
brain_ple_meg = ple_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=2,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_ple_meg.add_text(0.1, 0.9, 'PLE PSF MEG', 'title', font_size=16)
"""
Explanation: For MEG only:
End of explanation
"""
diff_ple = ple_psf_emeg - ple_psf_meg
brain_ple_diff = diff_ple.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=3,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_ple_diff.add_text(0.1, 0.9, 'PLE EMEG-MEG', 'title', font_size=16)
"""
Explanation: Subtract the two distributions and plot this difference:
End of explanation
"""
brain_sd_emeg = sd_psf_emeg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=4,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_emeg.add_text(0.1, 0.9, 'SD PSF EMEG', 'title', font_size=16)
"""
Explanation: These plots show that with respect to peak localization error, adding EEG to
MEG does not bring much benefit. Next let's visualise spatial deviation (SD)
across the whole cortex for PSF:
End of explanation
"""
brain_sd_meg = sd_psf_meg.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=5,
clim=dict(kind='value', lims=(0, 2, 4)))
brain_sd_meg.add_text(0.1, 0.9, 'SD PSF MEG', 'title', font_size=16)
"""
Explanation: For MEG only:
End of explanation
"""
diff_sd = sd_psf_emeg - sd_psf_meg
brain_sd_diff = diff_sd.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir, figure=6,
clim=dict(kind='value', pos_lims=(0., .5, 1.)),
smoothing_steps=20)
brain_sd_diff.add_text(0.1, 0.9, 'SD EMEG-MEG', 'title', font_size=16)
"""
Explanation: Subtract the two distributions and plot this difference:
End of explanation
"""
|
sudov/numpy_pandas_learning | ipython_notebook_tutorial.ipynb | mit | # Hit shift + enter or use the run button to run this cell and see the results
print('hello world')
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
"""
Explanation: Text Using Markdown
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar.
Code cells
One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell.
End of explanation
"""
# If you run this cell, you should see the values displayed as a table.
# Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course.
import pandas as pd
df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]})
df
# If you run this cell, you should see a scatter plot of the function y = x^2
%pylab inline
import matplotlib.pyplot as plt
xs = range(-30, 31)
ys = [x ** 2 for x in xs]
plt.scatter(xs, ys)
"""
Explanation: Nicely formatted results
IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in
the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a
preview of what IPython notebook can do.
End of explanation
"""
class_name = "Intro to Data Analysis"
message = class_name + " is awesome!"
message
"""
Explanation: Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
"""
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.