cschnaars/intro-to-coding-in-python | notebooks/intro_to_coding_in_python_part_2_lists_and_dictionaries_no_code.ipynb | mit
my_friends
"""
Explanation: Introduction to Coding in Python, Part 2
Investigative Reporters and Editors Conference, New Orleans, June 2016<br />
By Aaron Kessler and Christopher Schnaars<br />
Lists
A list is a mutable (meaning it can be changed), ordered collection of objects. Everything in Python is an object, so a list can contain not only strings and numbers, but also functions and even other lists.
Let's make a list of new friends we've made at the IRE conference: Aaron, Sue, Chris and Renee. We'll call our list my_friends. To create a list, put a comma-separated list of strings (our friends' names) inside square brackets ([]). These brackets are how we tell Python we're building a list. See if you can figure out what to do in the box below. If you can't figure it out, don't sweat it, and read on for the answer:
Did you get it? The answer: my_friends = ['Aaron', 'Sue', 'Chris', 'Renee']
Type my_friends in the box below, and you'll see Python remembers the order of the names:
We met Cora at an awesome Python class we just attended, so let's add her to our list of friends. To do that, we're going to use a method called append. A method is a bit of code associated with a Python object (in this case, our list) to provide some built-in functionality. Every time you create a list in Python, you get the functionality of the append method (and a bunch of other methods, too) for free.
To use the append method, type the name of your list, followed by a period and the name of the method, and then put the string we want to add to our list ('Cora') in parentheses. Try it:
End of explanation
"""
# The first two names in the list (['Aaron', 'Sue'])
# The second and third names in the list (['Sue', 'Chris'])
# The second and fourth names in the list (['Sue', 'Renee'])
# The last three names in the list, in reverse order (['Cora', 'Renee', 'Chris'])
"""
Explanation: In a list, you can retrieve a single item by index. To do this, type the name of your list, followed by the numeric position of the item you want inside square brackets. There's just one sticking point: Indices in Python are zero-based, which means the first item is at position 0, the second item is at position 1 and so on. If that sounds confusing, don't worry about it. There actually are very good, logical reasons for this behavior that we won't dive into here. For now, just accept our word that you'll get used to it and see if you can figure out what to type to get the first name in our list (Aaron):
You can retrieve a contiguous subset of names from your list, called a slice. To do this, type the name of your list and provide up to three parameters in square brackets, separated by colons. Just leave any parameter you don't need blank, and Python will use its default value. These parameters, in order, are:
<ul><li>The index of the first item you want. Default is 0.</li>
<li>The index of the first item you *don't* want. You can set this to a negative number to skip a specific number of items at the end of your `list`. For example, a value of -1 here would stop at the next-to-last item in your `list`. Default is the number of items in your `list` (also called the *length*).</li>
<li>The *step* value, which we can use to skip over names in our `list`. For example, if you want every other name, you could set the *step* to 2. If you want to go backwards through your `list`, set this to a negative number. Default is 1.</li></ul>
Use the boxes below to see if you can figure out how to retrieve these lists of names. There could be more than one way to answer each question:
<ol><li>The first two names in the list (`['Aaron', 'Sue']`)</li>
<li>The second and third names in the list (`['Sue', 'Chris']`)</li>
<li>The second and fourth names in the list (`['Sue', 'Renee']`)</li>
<li>The last three names in the list, in reverse order (`['Cora', 'Renee', 'Chris']`)</li></ol>
End of explanation
"""
my_friends = ['Aaron', 'Sue', 'Chris', 'Renee', 'Cora']
your_friends = my_friends
print('My friends are: ')
print(my_friends)
print('\nAnd your friends are: ') # \n is code for newline.
print(your_friends)
"""
Explanation: Mutable objects
In many programming languages, it's common to assign a value (such as an integer or string) to a variable. Python works a bit differently. In our example above, Python creates a list in memory to house the names of our friends and then creates the object my_friends to point to the location in memory where this list is located. Why is that important? Well, for one thing, it means that if we assign our list to a second variable, Python keeps only one list in memory and just creates a second pointer to it. While this is not a concern in our example code, it could save a lot of computer memory for a large list containing hundreds or even thousands of objects. Consider this code:
End of explanation
"""
your_friends
"""
Explanation: Here's where mutability will bite you, if you're not careful. You haven't met Cora yet and don't know how nice she is, so you decide to remove her from your list of friends, at least for now. See if you can figure out what to type in the box below. You want to use the remove method to remove 'Cora' from your_friends. Use the second box below to verify Cora has been removed:
End of explanation
"""
my_friends
"""
Explanation: Perfect! Or is it? Let's take another look at my_friends:
End of explanation
"""
my_friends
your_friends
"""
Explanation: Uh-oh! You've unfriended Cora for me too! Remember that my_friends and your_friends are just pointers to the same list, so when you change one, you're really changing both. If you want the two lists to be independent, you must explicitly make a copy using, you guessed it, the copy method. In the box below:
<ul><li>Add Cora back to `my_friends`.</li>
<li>Use the `copy` method to assign a copy of `my_friends` to `your_friends`.</li>
<li>Remove Cora from `your_friends`.</li></ul>
You can use the second and third boxes below to test whether your code is correct.
End of explanation
"""
friend = {'last_name': 'Schnaars', 'first_name': 'Christopher', 'works_for': 'USA Today', 'favorite_food': 'spam'}
"""
Explanation: Dictionaries
In Python, a dictionary is a mutable, unordered collection of key-value pairs. Consider:
End of explanation
"""
friend
"""
Explanation: Note that our data is enclosed in curly braces, which tell Python you are building a dictionary.<br />
<br />
Now notice what happens when we ask Python to spit this information back to us:
End of explanation
"""
friend
"""
Explanation: Notice that Python did not return the list of key-value pairs in the same order as we entered them. Remember that dictionaries are unordered collections. This might bother you, but it shouldn't. You'll find in practice it is not a problem. Because key order varies, you can't access a value by index as you might with a list, so something like friend[0] will not work.
You might notice that the keys are listed in alphabetical order. This is <u>not</u> always the case. You can't assume keys will be in any particular order.
To add a new key-value pair, simply put the new key in brackets and assign the value with an = sign. Try to add the key favorite_sketch to our dictionary, and set its value to dead parrot:
End of explanation
"""
friend
"""
Explanation: To replace an existing value, simply re-assign it. Change first_name to Chris:
End of explanation
"""
coursemdetw/reveal2 | content/notebook/JSInteraction.ipynb | mit
from IPython.display import HTML
input_form = """
<div style="background-color:gainsboro; border:solid black; width:300px; padding:20px;">
Variable Name: <input type="text" id="var_name" value="foo"><br>
Variable Value: <input type="text" id="var_value" value="bar"><br>
<button onclick="set_value()">Set Value</button>
</div>
"""
javascript = """
<script type="text/Javascript">
function set_value(){
    var var_name = document.getElementById('var_name').value;
    var var_value = document.getElementById('var_value').value;
    var command = var_name + " = '" + var_value + "'";
    console.log("Executing Command: " + command);
    var kernel = IPython.notebook.kernel;
    kernel.execute(command);
}
</script>
"""
HTML(input_form + javascript)
"""
Explanation: IPython Notebook: Javascript/Python Bi-directional Communication
This notebook originally appeared as a post on
Pythonic Perambulations
by Jake Vanderplas.
<!-- PELICAN_BEGIN_SUMMARY -->
I've been working with javascript and the IPython notebook recently, and found myself
in need of a way to pass data back and forth between the Javascript runtime and the
IPython kernel. There's a bit of information about this floating around on various
mailing lists and forums, but no real organized tutorial on the subject.
Partly this is because the tools are relatively specialized,
and partly it's because the functionality I'll outline here is planned to
be obsolete
in the 2.0 release of IPython.
Nevertheless, I thought folks might be interested to hear what I've learned.
What follows are a few basic examples of moving data back and forth between
the IPython kernel and the browser's javascript.
Note that if you're viewing this statically (i.e. on the blog or on nbviewer) then
the javascript calls below will not work: to see the code in action, you'll need
to download
the notebook and open it in IPython.
<!-- PELICAN_END_SUMMARY -->
Executing Python Statements From Javascript
The key functionality needed for interaction between javascript and the
IPython kernel is the kernel object
in the IPython Javascript package.
A python statement can be executed from javascript as follows:
var kernel = IPython.notebook.kernel;
kernel.execute(command);
where command is a string containing python code.
Here is a short example where we use HTML elements and javascript callbacks
to execute a statement in the Python kernel from Javascript, using the
kernel.execute command:
End of explanation
"""
print(foo)
"""
Explanation: After pressing <button>Set Value</button> above
with the default arguments, the value of the variable
foo is set in the Python kernel, and can be
accessed from Python:
End of explanation
"""
from math import pi, sin
"""
Explanation: Examining the code, we see that
when the button is clicked, the set_value() function is called, which
constructs a simple Python statement
assigning var_value to the variable given by var_name. As mentioned
above, the key to interaction between Javascript and the notebook kernel is to use
the IPython.notebook.kernel.execute() command, passing valid Python
code in a string. We also log the result to the javascript console, which
can be helpful for Javascript debugging.
Accessing Python Output In Javascript
Executing Python statements from Javascript is one thing, but we'd
really like to be able to do something with the output.
In order to process the output of a Python statement executed in the kernel,
we need to add a callback function to the execute statement.
The full extent of callbacks is a bit involved, but the first step is
to set a callback which does something with the output attribute.
To set an output, we pass a Javascript callback object to the
execute call, looking like this:
var kernel = IPython.notebook.kernel;
function callback(out_type, out_data){
// do_something
}
kernel.execute(command, {"output": callback});
Using this, we can execute a Python command and do something with
the result. The python command can be as simple as a variable name:
in this case, the value returned is simply the value of that variable.
To demonstrate this, we'll first import pi and sin
from the math package in Python:
End of explanation
"""
# Add an input form similar to what we saw above
input_form = """
<div style="background-color:gainsboro; border:solid black; width:600px; padding:20px;">
Code: <input type="text" id="code_input" size="50" height="2" value="sin(pi / 2)"><br>
Result: <input type="text" id="result_output" size="50" value="1.0"><br>
<button onclick="exec_code()">Execute</button>
</div>
"""
# here the javascript has a function to execute the code
# within the input box, and a callback to handle the output.
javascript = """
<script type="text/Javascript">
function handle_output(out_type, out){
    console.log(out_type);
    console.log(out);
    var res = null;
    // if output is a print statement
    if(out_type == "stream"){
        res = out.data;
    }
    // if output is a python object
    else if(out_type === "pyout"){
        res = out.data["text/plain"];
    }
    // if output is a python error
    else if(out_type == "pyerr"){
        res = out.ename + ": " + out.evalue;
    }
    // if output is something we haven't thought of
    else{
        res = "[out type not implemented]";
    }
    document.getElementById("result_output").value = res;
}

function exec_code(){
    var code_input = document.getElementById('code_input').value;
    var kernel = IPython.notebook.kernel;
    var callbacks = {'output' : handle_output};
    document.getElementById("result_output").value = "";  // clear output box
    var msg_id = kernel.execute(code_input, callbacks, {silent:false});
    console.log("button pressed");
}
</script>
"""
HTML(input_form + javascript)
"""
Explanation: And then we'll manipulate this value via Javascript:
End of explanation
"""
%pylab inline
from IPython.display import HTML
from cStringIO import StringIO
# We'll use HTML to create a control panel with an
# empty image and a number of navigation buttons.
disp_html = """
<div class="animation" align="center">
<img id="anim_frame" src=""><br>
<button onclick="prevFrame()">Prev Frame</button>
<button onclick="reverse()">Reverse</button>
<button onclick="pause()">Pause</button>
<button onclick="play()">Play</button>
<button onclick="nextFrame()">Next Frame</button>
</div>
"""
# now the javascript to drive it. The nextFrame() and prevFrame()
# functions will call the kernel and pull-down the frame which
# is generated. The play() and reverse() functions use timeouts
# to repeatedly call nextFrame() and prevFrame().
javascript = """
<script type="text/Javascript">
var count = -1; // keep track of frame number
var animating = 0; // keep track of animation direction
var timer = null;
var kernel = IPython.notebook.kernel;
function output(out_type, out){
    data = out.data["text/plain"];
    document.getElementById("anim_frame").src = data.substring(1, data.length - 1);
    if(animating > 0){
        timer = setTimeout(nextFrame, 0);
    }
    else if(animating < 0){
        timer = setTimeout(prevFrame, 0);
    }
}

var callbacks = {'output' : output};

function pause(){
    animating = 0;
    if(timer){
        clearInterval(timer);
        timer = null;
    }
}

function play(){
    pause();
    animating = +1;
    nextFrame();
}

function reverse(){
    pause();
    animating = -1;
    prevFrame();
}

function nextFrame(){
    count += 1;
    var msg_id = kernel.execute("disp._get_frame_data(" + count + ")", callbacks, {silent:false});
}

function prevFrame(){
    count -= 1;
    var msg_id = kernel.execute("disp._get_frame_data(" + count + ")", callbacks, {silent:false});
}

// display the first frame
setTimeout(nextFrame, 0);
</script>
"""
# Here we create a class whose HTML representation is the above
# HTML and javascript. Note that we've hard-coded the global
# variable name `disp` in the Javascript, so you'll have to assign
# the resulting object to this name in order to view it.
class DisplayAnimation(object):
    def __init__(self, anim):
        self.anim = anim
        self.fig = anim._fig
        plt.close(self.fig)

    def _get_frame_data(self, i):
        self.anim._draw_frame(i)
        buffer = StringIO()
        self.fig.savefig(buffer, format='png')
        buffer.reset()
        data = buffer.read().encode('base64')
        return "data:image/png;base64,{0}".format(data.replace('\n', ''))

    def _repr_html_(self):
        return disp_html + javascript
"""
Explanation: Pressing <button>Execute</button> above will call kernel.execute
with the contents of the Code box, passing a callback which
displays the result in the result box.
The callback has so many conditionals because there are several types
of outputs we need to handle. Note that the output handler is given as the output
attribute of a Javascript object, and passed to the kernel.execute function.
Again, we use console.log to allow us to inspect the objects
using the Javascript console.
Application: An On-the-fly Matplotlib Animation
In a previous post
I introduced a javascript viewer for matplotlib animations. This viewer pre-computes all the matplotlib
frames, embeds them in the notebook, and offers some tools to view them.
Here we'll explore a different strategy: rather than precomputing all the frames before displaying them,
we'll use the javascript/python kernel communication and generate the frames as needed.
Note that if you're viewing this statically (e.g. in nbviewer or on my blog), it will be relatively
unexciting: with no IPython kernel available, calls to the kernel will do nothing.
To see this in action, please
download the notebook and open it with a running IPython notebook instance.
End of explanation
"""
from matplotlib import animation
fig = plt.figure()
ax = plt.axes(xlim=(0, 10), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)
def init():
    line.set_data([], [])
    return line,

def animate(i):
    x = np.linspace(0, 10, 1000)
    y = np.cos(i * 0.02 * np.pi) * np.sin(x - i * 0.02 * np.pi)
    line.set_data(x, y)
    return line,
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=30)
# For now, we need to name this `disp` for it to work
disp = DisplayAnimation(anim)
disp
"""
Explanation: This code should be considered a proof-of-concept: in particular, it
requires the display object to be named disp in the global namespace.
But making it more robust would be a relatively simple process.
Here we'll test the result by creating a simple animation and displaying it dynamically:
End of explanation
"""
def save_to_mem(fig):
    buffer = StringIO()
    fig.savefig(buffer, format='png')
    buffer.reset()
    data = buffer.read().encode('base64')
    return "data:image/png;base64,{0}".format(data.replace('\n', ''))
fig, ax = plt.subplots()
ax.plot(rand(200))
%timeit save_to_mem(fig)
"""
Explanation: Once again, if you're viewing this statically, you'll see nothing above
the buttons. The kernel needs to be running in order to see this: you can
download the notebook and run it to see the results (To see a
statically-viewable version of the animation, refer to the
previous post).
But I assure you,
it works! We've created an animation viewer which uses bi-directional
communication between javascript and matplotlib to generate the frames in
real-time.
Note that this is still rather limited, and should be considered a proof-of-concept
more than a finished result. In particular, on my four-year-old linux box, I
can only achieve a frame-rate of about 10 frames/sec.
Part of this is due to the reliance on png images saved within matplotlib,
as we can see by profiling the function:
End of explanation
"""
cwhanse/pvlib-python | docs/tutorials/tmy.ipynb | bsd-3-clause
# built in python modules
import datetime
import os
import inspect
# python add-ons
import numpy as np
import pandas as pd
# plotting libraries
%matplotlib inline
import matplotlib.pyplot as plt
try:
    import seaborn as sns
except ImportError:
    pass
import pvlib
"""
Explanation: TMY tutorial
This tutorial shows how to use the pvlib.iotools module to read data from TMY2 and TMY3 files.
This tutorial has been tested against the following package versions:
* pvlib 0.3.0
* Python 3.5.1
* IPython 4.1
* pandas 0.18.0
Authors:
* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015, March 2016.
Import modules
End of explanation
"""
# Find the absolute file path to your pvlib installation
pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib)))
"""
Explanation: pvlib comes packaged with a TMY2 and a TMY3 data file.
End of explanation
"""
tmy3_data, tmy3_metadata = pvlib.iotools.read_tmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'))
tmy2_data, tmy2_metadata = pvlib.iotools.read_tmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2'))
"""
Explanation: Import the TMY data using the functions in the pvlib.iotools module.
End of explanation
"""
print(tmy3_metadata)
tmy3_data.head(5)
tmy3_data['GHI'].plot();
"""
Explanation: Print the TMY3 metadata and the first 5 lines of the data.
End of explanation
"""
tmy3_data, tmy3_metadata = pvlib.iotools.read_tmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv'), coerce_year=1987)
tmy3_data['GHI'].plot();
"""
Explanation: The TMY readers have an optional argument to coerce the year to a single value.
End of explanation
"""
print(tmy2_metadata)
print(tmy2_data.head())
"""
Explanation: Here's the TMY2 data.
End of explanation
"""
ClaudiaEsp/inet | Analysis/misc/How to use DataLoader.ipynb | gpl-2.0
from __future__ import division
from terminaltables import AsciiTable
import inet
inet.__version__
from inet import DataLoader
"""
Explanation: <H1>How to use DataLoader</H1>
<P>This is an example on how to use a DataLoader object</P>
End of explanation
"""
mydataset = DataLoader('../../data/PV') # create an object with information of all connections
"""
Explanation: <H2>Object creation</H2>
The object loads the connectivity matrices in .syn format and reports the number of files loaded at construction.
End of explanation
"""
len(mydataset.experiment)
mydataset.nIN, mydataset.nPC # number of PV cells and GC cells recorded
mydataset.configuration # number of recording configurations
print(mydataset.motif) # number of connections tested and found for every type
"""
Explanation: <H2>Object attributes</H2>
The object contains a list with all experiments loaded
End of explanation
"""
mydataset.experiment[0] # example of the data from the first experiment
"""
Explanation: Details of every experiments are given in a list
End of explanation
"""
mydataset.experiment[12]['fname'] # mydataset.filename(12)
mydataset.filename(12)
mydataset.experiment[12]['matrix']
mydataset.matrix(12)
print(mydataset.experiment[12]['motif'])
mydataset.motifs(12)
"""
Explanation: and details of the recording configurations are provided
End of explanation
"""
mydataset.IN[2]
"""
Explanation: or the recording configurations in which two PV-positive cells were recorded
End of explanation
"""
y = mydataset.stats()
print(AsciiTable(y).table)
mymotifs = mydataset.motif
info = [
['Connection type', 'Value'],
['PV-PV chemical synapses', mymotifs.ii_chem_found],
['PV-PV electrical synapses', mymotifs.ii_elec_found],
[' ',' '],
['PV-PV bidirectional chemical', mymotifs.ii_c2_found],
['PV-PV divergent chemical', mymotifs.ii_div_found],
['PV-PV convergent chemical', mymotifs.ii_con_found],
['PV-PV linear chemical', mymotifs.ii_lin_found],
[''],
['PV-PV one chemical with electrical', mymotifs.ii_c1e_found],
['PV-PV bidirectional chemical with electrical', mymotifs.ii_c2e_found],
[' ',' '],
['P(PV-PV) chemical synapse', mymotifs.ii_chem_found/mymotifs.ii_chem_tested],
['P(PV-PV) electrical synapse', mymotifs.ii_elec_found/mymotifs.ii_elec_tested],
[''],
['P(PV-PV) bidirectional chemical synapse', mymotifs.ii_c2_found/mymotifs.ii_c2_tested],
['P(div) divergent chemical motifs', mymotifs.ii_div_found/mymotifs.ii_div_tested],
['P(con) convergent chemical motifs', mymotifs.ii_con_found/mymotifs.ii_con_tested],
['P(chain) linear chain motifs', mymotifs.ii_lin_found/mymotifs.ii_lin_tested],
[' ',' '],
['P(PV-PV) one chemical with electrical', mymotifs.ii_c1e_found/mymotifs.ii_c1e_tested],
['P(PV-PV) bidirectional chemical with electrical', mymotifs.ii_c2e_found/mymotifs.ii_c2e_tested],
[' ',' '],
['PV-GC chemical synapses', mymotifs.ie_found],
['GC-PV chemical synapses', mymotifs.ei_found],
[' ',' '],
['P(PV-GC) chemical synapse',mymotifs.ie_found/mymotifs.ie_tested],
['P(GC-PV) chemical synapse', mymotifs.ei_found/mymotifs.ei_tested],
[' ',' '],
]
table = AsciiTable(info)
print(table.table)
"""
Explanation: <H2> Descriptive statistics </H2>
The stats() method will return basic statistics for the whole dataset
End of explanation
"""
deepchem/deepchem | examples/tutorials/Uncertainty_In_Deep_Learning.ipynb | mit
!pip install --pre deepchem
import deepchem
deepchem.__version__
"""
Explanation: Uncertainty in Deep Learning
A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't give enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.
DeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
"""
import deepchem as dc
import numpy as np
import matplotlib.pyplot as plot
tasks, datasets, transformers = dc.molnet.load_delaney()
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)
model.fit(train_dataset, nb_epoch=20)
y_pred, y_std = model.predict_uncertainty(test_dataset)
"""
Explanation: We'll use the Delaney dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.
End of explanation
"""
# Generate some fake data and plot a regression line.
x = np.linspace(0, 5, 10)
y = 0.15*x + np.random.random(10)
plot.scatter(x, y)
fit = np.polyfit(x, y, 1)
line_x = np.linspace(-1, 6, 2)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Of course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)
To begin with, what does "uncertainty" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.
Aleatoric Uncertainty
Consider the following graph. It shows the best fit linear regression to a set of ten data points.
End of explanation
"""
plot.figure(figsize=(12, 3))
line_x = np.linspace(0, 5, 50)
for i in range(3):
    plot.subplot(1, 3, i+1)
    plot.scatter(x, y)
    fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)
    plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.
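To make the loss modification concrete, here is a hedged sketch of a per-sample heteroscedastic loss — not DeepChem's actual implementation, just the general idea: the model predicts both a value and a log-variance, and the loss weighs the squared error by the predicted variance while penalizing large variances.

```python
import numpy as np

# Hedged sketch (assumed form, not DeepChem's code): a large predicted
# log-variance down-weights the squared error but is itself penalized,
# so the model learns an honest per-sample aleatoric estimate.
def aleatoric_loss(y_true, y_pred, log_var):
    return 0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2 + 0.5 * log_var

print(aleatoric_loss(1.0, 1.0, 0.0))  # 0.0 -- perfect prediction, unit variance
```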
Epistemic Uncertainty
Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.
End of explanation
"""
abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())
plot.scatter(y_std.flatten(), abs_error)
plot.xlabel('Standard Deviation')
plot.ylabel('Absolute Error')
plot.show()
"""
Explanation: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.
The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.
Recall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.
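A minimal numpy sketch of the idea (the numbers are stand-ins, not DeepChem internals): run many stochastic forward passes, take the spread across them as the epistemic term, then combine it with an aleatoric estimate in quadrature.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 50 predictions of one sample under different dropout masks
dropout_preds = rng.normal(loc=5.0, scale=0.3, size=50)

epistemic = dropout_preds.std()  # spread across dropout masks
aleatoric = 0.4                  # assumed model-predicted aleatoric std
total = np.sqrt(aleatoric**2 + epistemic**2)
print(total >= max(aleatoric, epistemic))  # True
```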
Uncertain Uncertainty?
Now we can combine the two types of uncertainty to compute an overall estimate of the error in each output:
$$\sigma_\text{total} = \sqrt{\sigma_\text{aleatoric}^2 + \sigma_\text{epistemic}^2}$$
This is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.
Let's go back to the example we started with. We trained a model on the Delaney training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.
End of explanation
"""
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
"""
Explanation: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors. (Strictly speaking, we expect the absolute error to be less than the predicted uncertainty. Even a very uncertain number could still happen to be close to the correct value by chance. If the model is working well, there should be more points below the diagonal than above it.)
Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations.
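The check itself is easy to rehearse on synthetic data: draw normally distributed errors with a known standard deviation and count the fraction inside two and three of those standard deviations; roughly 95% and 99.7% should fall inside if everything is consistent. (A standalone sketch, not tied to the SAMPL data.)

```python
import numpy as np

rng = np.random.RandomState(42)
sigma = 0.5  # pretend this is the model's predicted uncertainty
errors = rng.normal(0.0, sigma, size=100000)

frac_2sd = np.mean(np.abs(errors) < 2 * sigma)
frac_3sd = np.mean(np.abs(errors) < 3 * sigma)
print(frac_2sd)  # roughly 0.954
print(frac_3sd)  # roughly 0.997
```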
End of explanation
"""
|
ireapps/cfj-2017 | exercises/08. Working with APIs (Part 1)-working.ipynb | mit | # build a dictionary of payload data
# turn it into a string of JSON
"""
Explanation: Let's post a message to Slack
In this session, we're going to use Python to post a message to Slack. I set up a team for us so we can mess around with the Slack API.
We're going to use a simple incoming webhook to accomplish this.
Hello API
API stands for "Application Programming Interface." An API is a way to interact programmatically with a software application.
If you want to post a message to Slack, you could open a browser and navigate to your URL and sign in with your username and password (or open the app), click on the channel you want, and start typing.
OR ... you could post your Slack message with a Python script.
Hello environmental variables
The code for this boot camp is on the public internet. We don't want anyone on the internet to be able to post messages to our Slack channels, so we're going to use an environmental variable to store our webhook.
The environmental variable we're going to use -- IRE_CFJ_2017_SLACK_HOOK -- should already be stored on your computer.
Python has a standard library module for working with the operating system called os. The os module has a data attribute called environ, a dictionary of environmental variables stored on your computer.
(Here is a new thing: Instead of using brackets to access items in a dictionary, you can use the get() method. The advantage to doing it this way: If the item you're trying to get doesn't exist in your dictionary, it'll return None instead of throwing an exception, which is sometimes a desired behavior.)
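A quick illustration of the difference, using a plain dictionary (the same behavior applies to os.environ):

```python
import os

d = {'color': 'blue'}
print(d.get('color'))    # blue
print(d.get('missing'))  # None -- no KeyError raised
# d['missing'], by contrast, would raise a KeyError

# the same pattern works for the webhook environmental variable
slack_hook = os.environ.get('IRE_CFJ_2017_SLACK_HOOK')
```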
Hello JSON
So far we've been working with tabular data -- CSVs with columns and rows. Most modern web APIs prefer to shake hands with a data structure called JSON (JavaScript Object Notation), which is more like a matryoshka doll.
Python has a standard library module for working with JSON data called json. Let's import it.
Using requests to post data
We're also going to use the requests library again, except this time, instead of using the get() method to get something off the web, we're going to use the post() method to send data to the web.
Formatting the data correctly
The JSON data we're going to send to the Slack webhook will start its life as a Python dictionary. Then we'll use the json module's dumps() method to turn it into a string of JSON.
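A sketch of that step (the 'text' key is what Slack's incoming webhooks read; the message itself is made up):

```python
import json

payload = {'text': 'Hello from Python!'}

payload_json = json.dumps(payload)
print(payload_json)        # {"text": "Hello from Python!"}
print(type(payload_json))  # it's now a plain string, ready to POST
```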
End of explanation
"""
# check to see if you have the webhook URL
# send it to slack!
# if you don't have the webhook env var, print a message to the terminal
"""
Explanation: Send it off to Slack
End of explanation
"""
|
reece/ga4gh-examples | nb/Plot SO with pivottablejs.ipynb | apache-2.0 | import pandas as pd
import ga4gh.client
print(ga4gh.__version__)
gc = ga4gh.client.HttpClient("http://localhost:8000")
region_constraints = dict(referenceName="1", start=0, end=int(1e10))
"""
Explanation: Method Overview
gc.searchDatasets() -- Returns an iterator over the Datasets on the server.
gc.searchVariantSets(datasetId) -- Returns an iterator over the VariantSets fulfilling the specified conditions from the specified Dataset.
gc.searchCallSets(variantSetId, name=None) -- Returns an iterator over the CallSets for variantSetId
gc.searchVariants(variantSetId, start=None, end=None, referenceName=None, callSetIds=None) -- Returns an iterator over the Variants fulfilling the specified conditions from the specified VariantSet.
gc.searchVariantAnnotationSets(variantSetId) -- Returns an iterator over the AnnotationSets fulfilling the specified conditions from the specified Dataset.
gc.searchVariantAnnotations(variantAnnotationSetId, referenceName=None, referenceId=None, start=None, end=None, featureIds=[], effects=[]) -- Returns an iterator over the Annotations fulfilling the specified conditions from the specified AnnotationSet.
End of explanation
"""
data_sets = pd.DataFrame(ds.toJsonDict() for ds in gc.searchDatasets())
data_sets.head()
"""
Explanation: Fetch Data Sets
End of explanation
"""
variant_sets = pd.DataFrame([
{'data_set_id': ds.id,
'variant_set_id': vs.id,
'variant_set_name': vs.name}
for ds in gc.searchDatasets()
for vs in gc.searchVariantSets(ds.id)
])
variant_sets.head()
"""
Explanation: Variant Sets for each Data Set (currently only one)
End of explanation
"""
call_sets = pd.DataFrame([
{
'data_set_id': ds.id,
'variant_set_id': vs.id,
'variant_set_name': vs.name,
'call_set_id': cs.id,
'call_set_name': cs.name,
}
for ds in gc.searchDatasets()
for vs in gc.searchVariantSets(ds.id)
for cs in gc.searchCallSets(vs.id)
])
call_sets.head()
"""
Explanation: Call Sets (by variant set)
End of explanation
"""
call_sets = pd.DataFrame([
{
'data_set_id': ds.id,
'variant_set_id': vs.id,
'variant_set_name': vs.name,
'variant_annotation_set_id': vas.id,
'variant_annotation_set_name': vas.name,
}
for ds in gc.searchDatasets()
for vs in gc.searchVariantSets(ds.id)
for vas in gc.searchVariantAnnotationSets(vs.id)
])
call_sets.head()
"""
Explanation: Variant Annotation Sets (by variant set)
End of explanation
"""
call_sets = pd.DataFrame([
{
'data_set_id': ds.id,
'variant_set_id': vs.id,
'variant_set_name': vs.name,
'n_callsets': sum(1 for _ in gc.searchCallSets(vs.id)),
'n_variants': sum(1 for _ in gc.searchVariants(vs.id, **region_constraints)),
'n_annotation_sets': sum(1 for _ in gc.searchVariantAnnotationSets(vs.id)),
'n_annotations': sum(1
for vas in gc.searchVariantAnnotationSets(vs.id)
for _ in gc.searchVariantAnnotations(vas.id, **region_constraints)
),
}
for ds in gc.searchDatasets()
for vs in gc.searchVariantSets(ds.id)
])
call_sets.head()
"""
Explanation: Variant Annotations (by variant set and region)
End of explanation
"""
|
Cushychicken/cushychicken.github.io | assets/lockin_amp_simulation.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
def sine_wave(freq, phase=0, Fs=10000):
ph_rad = (phase/360.0)*(2.0*np.pi)
return np.array([np.sin(((2 * np.pi * freq * a) / Fs) + ph_rad) for a in range(Fs)])
sine = sine_wave(100)
mean = np.array([np.mean(sine)] * len(sine))
df = pd.DataFrame({'sine':sine_wave(100),
'mean':mean})
df[:1000].plot()
"""
Explanation: I've been quietly obsessing over lock-in amplifiers ever since I read about them in Chapter 8 of The Art of Electronics. Now that I've had a few months to process the concept as a background task, I decided to whip up a Python model of a lock-in amp, just for the hell of it.
Background
Lock-in amplifiers are a type of lab equipment used for pulling really weak signals out of overpowering noise. Horowitz and Hill deem it "a method of considerable subtlety", which is refined way of saying "a cool trick of applied mathematics". A lock-in amplifier relies on the fact that, over a large time interval (much greater than any single period), the average DC value of any given sine wave is zero.
End of explanation
"""
df['sin_mixed'] = np.multiply(df.sine, df.sine)
df['mean_mixed'] = np.mean(df.sin_mixed)
df[['sin_mixed','mean_mixed']][:1000].plot()
"""
Explanation: However, this ceases to be true when two sinusoids of equal frequency and phase are multiplied together. In this case, instead of averaging out to zero, the product of the two waves has a nonzero mean value.
End of explanation
"""
df['sin_mixed_101'] = np.multiply(df.sine, sine_wave(101))
df['mean_mixed_101'] = np.mean(df.sin_mixed_101)
df[['sin_mixed_101','mean_mixed_101']].plot()
"""
Explanation: This DC voltage produced by the product of the two waves is very sensitive to changes in frequency. The plots below show that a 101Hz signal has a mean value of zero when multiplied by a 100Hz signal.
End of explanation
"""
noise_fl = np.array([(2 * np.random.random() - 1) for a in range(10000)])
df['sine_noisy'] = np.add(noise_fl, 0.1*df['sine'])
df['sin_noisy_mixed'] = np.multiply(df.sine_noisy, df.sine)
df['mean_noisy_mixed'] = df['sin_noisy_mixed'].mean()
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
df['sine_noisy'].plot(ax=axes[0])
df[['sin_noisy_mixed', 'mean_noisy_mixed']].plot(ax=axes[1])
"""
Explanation: This is really useful in situations where you have a signal of a known frequency. With the proper equipment, you can "lock in" to your known-frequency signal, and track changes to the amplitude and phase of that signal - even in the presence of overwhelming noise.
You can show this pretty easily by just scaling down one of the waves in our prior example, and burying it in noise. (This signal is about 20dB below the noise floor in this case.)
End of explanation
"""
df['mean_noisy_mixed'].plot()
"""
Explanation: It doesn't look like much at the prior altitude, but it's definitely the signal we're looking for. That's because the lock-in output scales with the amplitude of both the input signal and the reference waveform:
$$U_{out}=\frac{1}{2}V_{sig}V_{ref}\cos(\theta)$$
As a result, the lock-in amp has a small (but meaningful) amplitude:
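We can sanity-check that relation numerically: mixing a 0.1-amplitude signal with a unit-amplitude, in-phase reference should average to (1/2)(0.1)(1)cos(0) = 0.05. (This is a standalone check, so the sine_wave helper from earlier in the notebook is redefined here.)

```python
import numpy as np

# standalone re-definition of the helper used earlier in this notebook
def sine_wave(freq, phase=0.0, Fs=10000):
    ph_rad = (phase / 360.0) * (2.0 * np.pi)
    t = np.arange(Fs)
    return np.sin((2 * np.pi * freq * t) / Fs + ph_rad)

v_sig, v_ref = 0.1, 1.0
product = v_sig * sine_wave(100) * v_ref * sine_wave(100)
print(product.mean())  # ~0.05, i.e. (1/2) * 0.1 * 1.0 * cos(0)
```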
End of explanation
"""
def lowpass(x, alpha=0.001):
data = [x[0]]
for a in x[1:]:
data.append(data[-1] + (alpha*(a-data[-1])))
return np.array(data)
df['sin_mixed_lp'] = lowpass(df.sin_mixed)
df['sin_mixed_lp'].plot()
"""
Explanation: Great! We can pull really weak signals out of seemingly endless noise. So, why haven't we used this technology to revolutionize all communications with infinite signal-to-noise ratio?
Like all real systems, there's a tradeoff, and for a lock-in amplifier, that tradeoff is time. Lock-in amps rely on a persistent periodic signal - without one, there isn't anything to lock on to! That's the catch of multiplying two signals of identical frequencies together: it takes time for that DC offset component to form.
A second tradeoff of the averaging method becomes obvious when you consider how to implement the averaging in a practical manner. Since we're talking about this in the context of electronics: one of the simplest ways to average, electronically, is to just filter by frequency, and it doesn't get much simpler than a single pole lowpass filter for a nice gentle average. The result looks pretty good when applied to the product of two sine waves:
End of explanation
"""
df['sin_noisy_mixed_lp'] = lowpass(df.sin_noisy_mixed)
df['sin_noisy_mixed_lp'].plot()
"""
Explanation: ...but it starts to break down when you filter the noisy signals, which can contain large fluctuations that aren't necessarily real:
End of explanation
"""
df['sin_noisy_mixed_lp2'] = lowpass(df.sin_noisy_mixed_lp)
df['sin_noisy_mixed_lp2'].plot()
"""
Explanation: We can clean up some of that statistical noise by running the filter a second time, of course, but that takes time, and also robs the lock-in of a bit of responsiveness.
End of explanation
"""
df['sin_phase45_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=45))
df['sin_phase90_mixed'] = np.multiply(df.sine_noisy, sine_wave(100, phase=90))
df['sin_phase45_mixed_lp'] = lowpass(df['sin_phase45_mixed'])
df['sin_phase90_mixed_lp'] = lowpass(df['sin_phase90_mixed'])
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
df[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']].plot(ax=axes[0])
df[['sin_noisy_mixed_lp','sin_phase45_mixed_lp','sin_phase90_mixed_lp']][6000:].plot(ax=axes[1])
"""
Explanation: On top of all this, lock-in amps are highly sensitive to phase differences between reference and signal tones. Take a look at the plots below, where our noisy signal is mixed with waves 45 and 90 degrees offset from it.
End of explanation
"""
def cosine_wave(freq, phase=0, Fs=10000):
ph_rad = (phase/360.0)*(2.0*np.pi)
return np.array([np.cos(((2 * np.pi * freq * a) / Fs) + ph_rad) for a in range(Fs)])
df['cos_noisy_mixed'] = np.multiply(df.sine_noisy, cosine_wave(100))
df['cos_noisy_mixed_lp'] = lowpass(df['cos_noisy_mixed'])
df['noisy_quad_mag'] = np.sqrt(np.add(np.square(df['cos_noisy_mixed_lp']),
np.square(df['sin_noisy_mixed_lp'])))
df['noisy_quad_pha'] = np.arctan2(df['cos_noisy_mixed_lp'], df['sin_noisy_mixed_lp'])
fig, axes = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(12,4)
axes[0].set_title('Magnitude')
axes[1].set_title('Phase (radians)')
df['noisy_quad_mag'][8000:].plot(ax=axes[0])
df['noisy_quad_pha'][8000:].plot(ax=axes[1])
"""
Explanation: These plots illustrate that there's a component of phase sensitivity. As the phase of signal moves farther and farther out of phase with the reference, the lock-in output starts to trend downwards, closer to zero. You can see, too, why lock-ins require time to settle out to a final value - the left plot shows how signals that are greatly out of phase with one another can produce an initial signal value where none should exist! The right plot, however, shows how the filtered, 90-degree offset signal (green trace) declines over time to the correct average value of approximately zero.
Quadrature Output
Like all lab equipment, lock-in amplifiers were originally analog devices. Analog lock-ins required a bit of tedious work to get optimum performance from amplifier - typically adjusting the phase of the reference so as to be in-phase with the target signal. This could prove time consuming given the time delay required for the output to stabilize! However, advances in digital technology have since yielded some nice improvements for lock-in amplifiers:
digitally generated, near-perfect reference signals,
simultaneous sine and cosine mixing,
DSP-based output filter - easily and accurately change filter order and corner!
This easy access to both sine-mixed and cosine-mixed signals allows us to plot the output of a digital lock-in amplifier as a quadrature modulated signal, which shows changes in both the magnitude and phase of the lock-in vector:
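The payoff of mixing with both sine and cosine is easy to verify numerically: whatever the signal's phase, the two filtered outputs trace out a circle, so the magnitude sqrt(I^2 + Q^2) stays constant. (A toy check that skips the noise and filtering steps.)

```python
import numpy as np

amplitude = 0.5  # the ideal lock-in output for an in-phase signal
mags = []
for theta_deg in (0, 45, 90, 135):
    theta = np.radians(theta_deg)
    i_out = amplitude * np.cos(theta)  # sine-mixed (in-phase) output
    q_out = amplitude * np.sin(theta)  # cosine-mixed (quadrature) output
    mags.append(np.hypot(i_out, q_out))
print(mags)  # 0.5 at every phase
```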
End of explanation
"""
|
irockafe/revo_healthcare | notebooks/Effects_of_retention_time_on_classification/mz_rt_grids.ipynb | mit | import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
%matplotlib inline
"""
Explanation: <h2>Goal:</h2>
Write functions to subdivide an m/z : rt space into rt bins. See how this affects classification performance
End of explanation
"""
# Get the data
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS315/'\
'uhplc_pos/xcms_camera_results.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
not_samples=['']
"""
Explanation: Start with MTBLS315, the malaria vs. fever dataset. Could get ~0.85 AUC for the whole dataset.
End of explanation
"""
# Show me a scatterplot of m/z rt dots
# distribution along mass-axis and rt axist
plt.scatter(df['mz'], df['rt'], s=1,)
plt.xlabel('mz')
plt.ylabel('rt')
plt.title('mz vs. rt')
plt.show()
# Check out the distribution of rt-windows to choose your rt bin
rt_dist = df['rtmax'] - df['rtmin']
rt_dist.hist(bins=100)
# Choose to use 40 second bins...?
# Divide m/z and rt into windows of certain width
def subdivide_mz_rt(df, rt_bins, mz_bins, samples, not_samples):
'''
GOAL - Subdivide mz/rt space into a grid of defined widths. Sum entries of the grid
INPUT - df - dataframe from xcms (rt, mz, patient names as columns)
OUTPUT - Matrix containing summed intensities
NOTE - Do adducts play a large role here? Uncertain. Could choose your rt-widths based on
the rt-distribution
'''
# Define the bins based on separation criteria
# todo this is hacky and will always round down
rt_window = df['rt'].max() / rt_bins
rt_bounds = [(i*rt_window, i*rt_window+rt_window)
for i in range(0, rt_bins)]
mz_window = df['mz'].max() / mz_bins
mz_bounds = [(i*mz_window, i*mz_window+mz_window)
for i in range(0, mz_bins)]
# Tidy up by converting patient labels to column using melt
tidy = pd.melt(df, id_vars=not_samples,
value_vars=samples, var_name='Samples',
value_name='Intensity')
# Go through each sample and sum intensities onto grid
grid_dict = {}
for sample in samples:
# Initiate empty dict with shape mz_windo x rt_window
grid_vals = np.zeros([mz_bins, rt_bins])
df_sample = tidy[tidy['Samples'] == sample]
# use floor, b/c zero is the start of indexes, not 1
# TODO Deal with
y_vals = np.floor(df_sample['rt'].div(
df_sample['rt'].max()+1e-9)*rt_bins
).astype(int)
x_vals = np.floor(df_sample['mz'].div(
df_sample['mz'].max()+1e-9)*mz_bins
).astype(int)
df_sample['rt_bin'] = y_vals
df_sample['mz_bin'] = x_vals
# Go through each row and add intensity to the correct
# grid-entry (grid_vals is indexed [mz_bin, rt_bin])
for idx, row in df_sample.iterrows():
grid_vals[row['mz_bin'], row['rt_bin']] += row['Intensity']
# Add to grid dict
grid_dict[sample] = grid_vals
ax = sns.heatmap(grid_vals)
ax.set_xlabel('rt')
ax.set_ylabel('mz')
ax.invert_yaxis()
plt.title(sample)
plt.show()
# Create matrix to add intensities to
# Get the number of bins by finding the max values of m/z and rt
return grid_dict
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'adduct', 'isotopes', 'npeaks', 'pcgroup', 'uhplc_pos']
samples = df.columns.difference(not_samples)
print 'max:', df['mz'].max()
rt_divisor = 50 # base this on distribution of rt-widths..?
mz_bins = 5
rt_bins = 5
print mz_bins
print rt_bins
grid_dict = subdivide_mz_rt(df, rt_bins, mz_bins, samples, not_samples)
# Make shit tidy
test_df = pd.DataFrame({'mz': [10,20,30,100], 'rt':[100,200,300,1000],
'A': [1,2,3,4], 'B': [10,20,30,40],
'C': [5,15,25,35]})
print 'Original: \n', test_df
tidy_df = pd.melt(test_df, id_vars=['mz', 'rt'], value_vars=['A', 'B', 'C'],
var_name='Subject', value_name='Intensity')
print '\n\n Tidy:\n', tidy_df
feature_table_df = pd.pivot_table(tidy_df, index=['mz', 'rt'], values='Intensity',
columns='Subject')
print '\n\n Unpivoted, step 1:\n',feature_table_df
feature_table_df.reset_index(inplace=True)
print feature_table_df
abc = tidy_df[tidy_df['Subject'] == 'A']
a = np.zeros([2,6])
a[0, 2] += 3
a[0, 2] += 7
a
# divide feature table into slices of retention time
def get_rt_slice(df, rt_bounds):
'''
PURPOSE:
Given a tidy feature table with 'mz' and 'rt' column headers,
retain only the features whose rt is between rt_left
and rt_right
INPUT:
df - a tidy pandas dataframe with 'mz' and 'rt' column
headers
rt_bounds: a (rt_left, rt_right) tuple giving the boundaries of your rt slice, in seconds
'''
out_df = df.loc[ (df['rt'] > rt_bounds[0]) &
(df['rt'] < rt_bounds[1])]
return out_df
def sliding_window_rt(df, rt_width, step=None):
# NB: a default argument can't reference another parameter,
# so the default step (25% of the window width) is set here
if step is None:
step = rt_width * 0.25
# build the range of (left, right) rt windows
rt_min = np.min(df['rt'])
rt_max = np.max(df['rt'])
# define the ranges
left_bound = np.arange(rt_min, rt_max, step)
right_bound = left_bound + rt_width
rt_bounds = zip(left_bound, right_bound)
for rt_slice in rt_bounds:
rt_window = get_rt_slice(df, rt_slice)
#print rt_window.head()
print 'shape', rt_window.shape
break  # stop after the first window while debugging
# TODO Send to ml pipeline here? Or separate function?
print type(np.float64(3.5137499999999999))
a = get_rt_slice(df, (750, 1050))
print 'Original dataframe shape: ', df.shape
print '\n Shape:', a.shape, '\n\n\n\n'
sliding_window_rt(df, 100)
# Convert selected slice to feature table, X, get labels y
"""
Explanation: Almost everything is below a 30-second rt window
End of explanation
"""
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/'\
'projects'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS72/positive_mode/'\
'mtbls_no_retcor_bw2.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
df.head()
a = get_rt_slice(df, (0,100))
print df.shape
print a.shape
"""
Explanation: <h2> Show me the distribution of features from the Alzheimer's dataset </h2>
End of explanation
"""
|
turi-code/tutorials | strata-sj-2016/deep-learning/image_similarity.ipynb | apache-2.0 | ## Creating the SFrame with our path to the directory where it is saved.
image_sf = gl.SFrame(path_to_dir +
'sf_processed.sframe'
)
image_sf
image_sf.show() #Explore the data using Canvas visual explorer
pretrained_model = gl.load_model(path_to_dir +
'pretrained_model.gl')
## Let's take a look at the network topology of the pretrained model
pretrained_model['network']
"""
Explanation: Building an Image Similarity Service
Who doesn't love dresses? Dresses can be incredibly diverse in terms of look, fit, feel, material, trendiness, and quality. They help us feel good, look good, and express ourselves. Needless to say, buying and selling dresses is a big deal. Depending on who you are, what you need, what you want, and what you can afford, this can be really hard. Sometimes it's just hard because you like everything and need to make a decision.
In the tutorial demo, we saw an end-to-end application that finds similar items based on text and image metadata. We're going to focus on one of the core parts of building such an application, Image Similarity, that uses transfer learning and nearest neighbours to extract and compare features on images and determine how similar they are.
This iPython notebook consists of the following parts:
1. Loading the Data - loading existing data into an SFrame.
2. Extracting the Features - loading an existing pre-trained model into a model object and using it to extract features.
3. Calculating the Distance - Creating and training a k-nearest neighbors model on the extracted features.
4. Finding Similar Items - Using the k-nearest neighbors model to help us find similar items.
5. Saving our new model for use in deploying a predictive service! (This is covered in a separate notebook)
<img src='images/workflow1.png'></img>
End of explanation
"""
image_sf['image'][:1].show()
# Here are the features of the first image
extracted = pretrained_model.extract_features(image_sf[['image']][:1])
# NB: image_sf is an SFrame, image_sf['image'] is an SArray, and image_sf[['image']] is an SFrame
extracted
"""
Explanation: Step 2: Extract Features
<img src='images/workflow2.png'></img>
We will be using deep visual features to match the product images to each other. In order to do that, we need to load in our pre-trained ImageNet neural network model to be used as a feature extractor, and extract features from the images in the dataset.
As an example, extract the features of the first image in the dataset. The <code>extract_features</code> method takes the output from the second-to-last layer of the pretrained_model; stopping before the final classification layer preserves the rich features the network has learned, which that layer would otherwise discard.
We also observe the size of the features extracted, compared with the original image:
196608: the number of byte values per 256x256-pixel image with three RGB channels (256 x 256 x 3).
4096: the size of the penultimate fully-connected layer, and hence the number of extracted features.
<img src='images/extract_features.png'></img>
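The size reduction is just arithmetic, but it's worth making concrete:

```python
raw_bytes = 256 * 256 * 3  # pixels times three RGB channels
n_features = 4096          # width of the penultimate fully-connected layer

print(raw_bytes)                # 196608
print(raw_bytes // n_features)  # 48 -- each image summarized 48x more compactly
```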
End of explanation
"""
# extracted_features = pretrained_model.extract_features(image_sf[['image']]) #Caution! this may take a while
# image_sf['features'] = extracted_features ## adding the extracted_features to our SFrame
"""
Explanation: We now extract the features for all the images in our dataset using extract_features. This does the following:
For each image in our dataset, we progagate it through our pre-trained neural network.
At each layer of the neural network, some or all neurons are excited -- to various degrees -- by our image.
We can represent the excitement of every neuron in a given layer as a vector of all of their excitements.
There's an additional parameter layer_id, that allows you to choose any fully-connected layer to extract features from. The default is layer_id=None, which returns features from the penultimate layer of the pre-trained DNN of your choosing.
End of explanation
"""
nn_model = gl.nearest_neighbors.create(image_sf,
features=['features']) # We're using the pre-extracted features
"""
Explanation: Step 3: Calculating the Distance
<img src='images/workflow3.png'></img>
This is the last step in building the similar-items recommendation model. Using the features we extracted above, we are going to create a k-Nearest Neighbors model that measures the distance between all our features, enabling end users to find products whose images match most closely.
<img src='images/nearest_neighbors.png'></img>
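Under the hood, the core of a nearest-neighbors query is a distance computation plus a sort. A minimal numpy sketch of the idea (GraphLab's implementation uses smarter indexing; the data here is random):

```python
import numpy as np

def nearest_neighbors(query, features, k=3):
    """Return the indices of the k rows of `features` closest to
    `query` by Euclidean distance."""
    dists = np.sqrt(((features - query) ** 2).sum(axis=1))
    return np.argsort(dists)[:k]

rng = np.random.RandomState(0)
features = rng.rand(100, 64)  # stand-in for extracted image features
idx = nearest_neighbors(features[7], features, k=3)
print(idx[0])  # 7 -- every point is its own nearest neighbor
```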
End of explanation
"""
blue = image_sf[194:195]
blue['image'].show()
num_neighbors = 42  ## number of nearest neighbors to query; can be anywhere in range(1, len(image_sf))
similar_to_blue = nn_model.query(blue,
k=num_neighbors,
verbose=True)
similar_to_blue
## To get the images, we need to join the reference label in our kNN model to our main SFrame
blue_images = image_sf.join(similar_to_blue,
on={'_id':'reference_label'}
).sort('distance')
## Let's show just the first 10 nearest neighbors
blue_images['image'][0:10].show() ## notice how we're taking a length 10 slice of the query we did above, of size 42.
"""
Explanation: Find Similar Items
<img src='images/workflow4.png'></img>
Query 1: Blue, da ba dee da ba die
End of explanation
"""
interesting = image_sf[2:3]
interesting['image'].show()
similar_to_interesting = nn_model.query(interesting, k=10, verbose=True)
similar_to_interesting
## To get the images, we need to join the reference label in our kNN model to our main SFrame
interesting_images = image_sf.join(similar_to_interesting,
on={'_id':'reference_label'}
).sort('distance')
## Let's show just the first 10 nearest neighbors
interesting_images['image'].show()
"""
Explanation: Query 2: Similarity with more unique images
End of explanation
"""
from IPython.display import Image  # for displaying the web image inline
Image('http://static.ddmcdn.com/gif/blue-dress.jpg')
img = gl.Image('http://static.ddmcdn.com/gif/blue-dress.jpg')
img = gl.image_analysis.resize(img, 256, 256, 3)
sf = gl.SFrame()
sf['image'] = [img]
sf['features'] = pretrained_model.extract_features(sf)
similar_to_blue_or_gold = nn_model.query(sf, k=10, verbose=True)
blue_or_gold = image_sf.join(similar_to_blue_or_gold,
on={'_id':'reference_label'}
).sort('distance')
blue_or_gold['image'].show()
"""
Explanation: Query 3: Blue or Gold
End of explanation
"""
|
machinelearningnanodegree/stanford-cs231 | assignments/assignment2/ConvolutionalNetworks.ipynb | mit | # As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.iteritems():
print '%s: ' % k, v.shape
"""
Explanation: Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
End of explanation
"""
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print 'Testing conv_forward_naive'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
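One detail worth pinning down before writing the loops is the output shape. With padding P and stride S, each spatial dimension maps as H' = 1 + (H + 2P - F) / S; for this notebook's test case (4x4 input, 4x4 filters, stride 2, pad 1) that gives 2:

```python
H, F, P, S = 4, 4, 1, 2  # input size, filter size, pad, stride
H_out = 1 + (H + 2 * P - F) // S
print(H_out)  # 2 -- matching the 2x2 spatial size of correct_out
```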
You can test your implementation by running the following:
End of explanation
"""
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d/2:-d/2, :]
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
"""
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image-processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
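Before running the check, the core operation is easy to sketch in plain NumPy. The loop below slides a single 3x3 filter over a single-channel image, which is what each of the filters above does per channel (a minimal illustration with a made-up helper name, not the conv_forward_naive you implement):

```python
import numpy as np

def convolve2d_naive(img, kernel):
    # Correlate a 2D image with a 2D kernel, zero-padding so the
    # output keeps the input's spatial size (stride 1).
    kh, kw = kernel.shape
    p = kh // 2
    padded = np.pad(img, p, mode='constant')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# The same horizontal-edge filter used above for the blue channel
edge_filter = np.array([[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]])

img = np.zeros((6, 6))
img[3:, :] = 1.0       # bottom half bright: one horizontal edge
edges = convolve2d_naive(img, edge_filter)
# The response is strong on the rows straddling the edge and zero in flat regions
```

The grayscale filter above works the same way, except its only nonzero weight sits at the window center.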
End of explanation
"""
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9
print 'Testing conv_backward_naive function'
print 'dx error: ', rel_error(dx, dx_num)
print 'dw error: ', rel_error(dw, dw_num)
print 'db error: ', rel_error(db, db_num)
"""
Explanation: Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
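One way to reason about the backward pass before writing it: each output element was an inner product of a filter with an input window, so its upstream gradient scatters back into dw (scaled by the window) and into dx (scaled by the filter). The sketch below illustrates that idea for an explicit (x, w) pair rather than the cache object the assignment uses; the helper name and argument layout are assumptions, not the reference solution:

```python
import numpy as np

def conv_backward_sketch(dout, x, w, stride=1, pad=0):
    # Returns (dx, dw, db). Each dout[n, f, i, j] multiplies the input
    # window into dw and the filter into dx, mirroring the forward
    # inner products; db just sums dout over everything but the filter axis.
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, Hout, Wout = dout.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dxp = np.zeros_like(xp)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))
    for n in range(N):
        for f in range(F):
            for i in range(Hout):
                for j in range(Wout):
                    h0, w0 = i * stride, j * stride
                    window = xp[n, :, h0:h0 + HH, w0:w0 + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dxp[n, :, h0:h0 + HH, w0:w0 + WW] += w[f] * dout[n, f, i, j]
    dx = dxp[:, :, pad:pad + H, pad:pad + W]   # strip the padding back off
    return dx, dw, db

# Hand-checkable case: a 3x3 input, one 3x3 filter, no padding -> 1x1 output,
# so dx must equal the filter and dw must equal the input.
x = np.ones((1, 1, 3, 3))
w = np.arange(9, dtype=float).reshape(1, 1, 3, 3)
dx, dw, db = conv_backward_sketch(np.ones((1, 1, 1, 1)), x, w)
```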
End of explanation
"""
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print 'Testing max_pool_forward_naive function:'
print 'difference: ', rel_error(out, correct_out)
"""
Explanation: Max pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. Again, don't worry too much about computational efficiency.
Check your implementation by running the following:
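For orientation, the forward operation is just a max over each window. A rough sketch (the helper name and flat-argument signature are assumed, not the reference code):

```python
import numpy as np

def max_pool_forward_sketch(x, pool_h, pool_w, stride):
    # Naive max pooling over an (N, C, H, W) input.
    N, C, H, W = x.shape
    Hout = 1 + (H - pool_h) // stride
    Wout = 1 + (W - pool_w) // stride
    out = np.zeros((N, C, Hout, Wout))
    for i in range(Hout):
        for j in range(Wout):
            h0, w0 = i * stride, j * stride
            out[:, :, i, j] = x[:, :, h0:h0 + pool_h, w0:w0 + pool_w].max(axis=(2, 3))
    return out

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
out = max_pool_forward_sketch(x, 2, 2, 2)
# Each 2x2 window keeps its largest (here bottom-right) element:
# [[5, 7], [13, 15]]
```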
End of explanation
"""
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print 'Testing max_pool_backward_naive function:'
print 'dx error: ', rel_error(dx, dx_num)
"""
Explanation: Max pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
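Conceptually, gradient flows only through the element that won each max. A sketch of that routing (again with an assumed helper name, taking x directly instead of a cache):

```python
import numpy as np

def max_pool_backward_sketch(dout, x, pool_h, pool_w, stride):
    # Route each upstream gradient to the argmax of its pooling window;
    # every other input element gets zero gradient.
    N, C, H, W = x.shape
    _, _, Hout, Wout = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(Hout):
                for j in range(Wout):
                    h0, w0 = i * stride, j * stride
                    window = x[n, c, h0:h0 + pool_h, w0:w0 + pool_w]
                    mask = (window == window.max())
                    dx[n, c, h0:h0 + pool_h, w0:w0 + pool_w] += mask * dout[n, c, i, j]
    return dx

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
dx = max_pool_backward_sketch(np.ones((1, 1, 2, 2)), x, 2, 2, 2)
# Only the four window maxima (5, 7, 13 and 15) receive gradient
```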
End of explanation
"""
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print 'Testing conv_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'Difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting conv_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'Fast: %fs' % (t2 - t1)
print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
print 'dw difference: ', rel_error(dw_naive, dw_fast)
print 'db difference: ', rel_error(db_naive, db_fast)
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print 'Testing pool_forward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'fast: %fs' % (t2 - t1)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'difference: ', rel_error(out_naive, out_fast)
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print '\nTesting pool_backward_fast:'
print 'Naive: %fs' % (t1 - t0)
print 'speedup: %fx' % ((t1 - t0) / (t2 - t1))
print 'dx difference: ', rel_error(dx_naive, dx_fast)
"""
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
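Much of the speedup in fast convolution implementations comes from the im2col idea: unroll every receptive-field window into the column of a matrix, so the whole convolution becomes one large matrix multiply. The following is only an illustration of that trick, not the code in cs231n/fast_layers.py:

```python
import numpy as np

def conv_im2col_sketch(x, w, b, stride=1, pad=0):
    # Gather every receptive-field window into the column of a matrix,
    # then do the whole convolution as one matrix multiply.
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    Hout = 1 + (H + 2 * pad - HH) // stride
    Wout = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    cols = np.zeros((C * HH * WW, N * Hout * Wout))
    col = 0
    for n in range(N):
        for i in range(Hout):
            for j in range(Wout):
                h0, w0 = i * stride, j * stride
                cols[:, col] = xp[n, :, h0:h0 + HH, w0:w0 + WW].ravel()
                col += 1
    out = w.reshape(F, -1).dot(cols) + b.reshape(F, 1)   # the one big matmul
    return out.reshape(F, N, Hout, Wout).transpose(1, 0, 2, 3)

# Hand-checked example: an all-ones 2x2 filter over [[1, 2], [3, 4]] sums to 10
x = np.array([[[[1., 2.], [3., 4.]]]])
out = conv_im2col_sketch(x, np.ones((1, 1, 2, 2)), np.zeros(1))
```

The gather loop still costs time, which is why real implementations (such as the Cython extension here) push it into compiled code.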
End of explanation
"""
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print 'Testing conv_relu_pool'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print 'Testing conv_relu:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, dw)
print 'db error: ', rel_error(db_num, db)
"""
Explanation: Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.
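The sandwich pattern is plain function composition with caches threaded through. A toy affine + ReLU pair shows the shape of it (names assumed; the real sandwiches live in cs231n/layer_utils.py):

```python
import numpy as np

def affine_relu_forward_sketch(x, w, b):
    a = x.dot(w) + b            # affine
    out = np.maximum(0, a)      # relu
    cache = (x, w, a)           # stash what the backward pass will need
    return out, cache

def affine_relu_backward_sketch(dout, cache):
    x, w, a = cache
    da = dout * (a > 0)         # relu gate kills gradient where a <= 0
    dx = da.dot(w.T)
    dw = x.T.dot(da)
    db = da.sum(axis=0)
    return dx, dw, db

x = np.array([[1., -2.]])
w = np.array([[2., 0.], [0., 3.]])
out, cache = affine_relu_forward_sketch(x, w, np.zeros(2))   # -> [[2., 0.]]
dx, dw, db = affine_relu_backward_sketch(np.ones((1, 2)), cache)
```

The conv_relu_pool sandwich tested in the next cell follows exactly this shape, with the convolution and pooling layers in place of the affine.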
End of explanation
"""
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print 'Initial loss (no regularization): ', loss
model.reg = 0.5
loss, grads = model.loss(X, y)
print 'Initial loss (with regularization): ', loss
"""
Explanation: Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file cs231n/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:
Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.
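To see where the log(C) figure comes from: with tiny random weights every class score is roughly equal, so the softmax assigns probability about 1/C to the correct class and the loss is about -log(1/C) = log(C), which for C = 10 is roughly 2.302:

```python
import numpy as np

C = 10
scores = np.zeros(C)                           # all classes tied, as with tiny random weights
probs = np.exp(scores) / np.exp(scores).sum()  # softmax -> uniform, 1/C each
loss = -np.log(probs[0])                       # correct class is as likely as any other
# loss equals log(10), about 2.3026: the sanity-check target described above
```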
End of explanation
"""
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
"""
Explanation: Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking, you should use a small amount of artificial data and a small number of neurons at each layer.
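The numeric check itself is a centered difference, one coordinate at a time. A minimal version of what the eval_numerical_gradient utility does (a sketch, not its actual code):

```python
import numpy as np

def numeric_gradient(f, x, h=1e-5):
    # Centered-difference gradient of a scalar function f at x,
    # perturbing one coordinate at a time.
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fp = f(x)
        x[ix] = old - h
        fm = f(x)
        x[ix] = old                 # restore the original value
        grad[ix] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

x = np.array([1.0, -2.0, 3.0])
num = numeric_gradient(lambda v: (v ** 2).sum(), x)
# The analytic gradient of sum(x^2) is 2x; the two should agree closely
```

Because every parameter requires two full loss evaluations, this is only practical on tiny inputs, which is why the advice above says to keep the artificial data and layer sizes small.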
End of explanation
"""
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=10, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
"""
Explanation: Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
End of explanation
"""
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
"""
Explanation: Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
End of explanation
"""
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
"""
Explanation: Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
End of explanation
"""
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
"""
Explanation: Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
End of explanation
"""
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print 'Before spatial batch normalization:'
print ' Shape: ', x.shape
print ' Means: ', x.mean(axis=(0, 2, 3))
print ' Stds: ', x.std(axis=(0, 2, 3))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization:'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print 'After spatial batch normalization (nontrivial gamma, beta):'
print ' Shape: ', out.shape
print ' Means: ', out.mean(axis=(0, 2, 3))
print ' Stds: ', out.std(axis=(0, 2, 3))
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in xrange(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print 'After spatial batch normalization (test-time):'
print ' means: ', a_norm.mean(axis=(0, 2, 3))
print ' stds: ', a_norm.std(axis=(0, 2, 3))
"""
Explanation: Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different images and between different locations within the same image. Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.
Spatial batch normalization: forward
In the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:
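One convenient realization is to fold N, H, and W into a single batch axis and reuse the vanilla per-feature statistics. A train-time sketch (running-average bookkeeping omitted; the helper name is an assumption, not the function you implement):

```python
import numpy as np

def spatial_batchnorm_forward_sketch(x, gamma, beta, eps=1e-5):
    # Fold (N, H, W) into one batch axis so each of the C channels is
    # normalized over all images and all spatial positions at once.
    N, C, H, W = x.shape
    flat = x.transpose(0, 2, 3, 1).reshape(-1, C)   # (N*H*W, C)
    mu = flat.mean(axis=0)
    var = flat.var(axis=0)
    norm = (flat - mu) / np.sqrt(var + eps)
    out = gamma * norm + beta
    return out.reshape(N, H, W, C).transpose(0, 3, 1, 2)

np.random.seed(0)
x = 4 * np.random.randn(2, 3, 4, 5) + 10
out = spatial_batchnorm_forward_sketch(x, np.ones(3), np.zeros(3))
# Per-channel means are now ~0 and per-channel stds ~1
```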
End of explanation
"""
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print 'dx error: ', rel_error(dx_num, dx)
print 'dgamma error: ', rel_error(da_num, dgamma)
print 'dbeta error: ', rel_error(db_num, dbeta)
"""
Explanation: Spatial batch normalization: backward
In the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. Run the following to check your implementation using a numeric gradient check:
End of explanation
"""
# Train a really good model on CIFAR-10
"""
Explanation: Experiment!
Experiment and try to get the best performance that you can on CIFAR-10 using a ConvNet. Here are some ideas to get you started:
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deeper network? You can implement alternative architectures in the file cs231n/classifiers/convnet.py. Some good architectures to try include:
[conv-relu-pool]xN - conv - relu - [affine]xM - [softmax or SVM]
[conv-relu-pool]xN - [affine]xM - [softmax or SVM]
[conv-relu-conv-relu-pool]xN - [affine]xM - [softmax or SVM]
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
Going above and beyond
If you are feeling adventurous, there are many other features you can implement to try to improve your performance. You are not required to implement any of these; however, they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, or MaxOut.
Model ensembles
Data augmentation
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 65% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network. The final cell in this notebook should contain the training, validation, and test set accuracies for your final trained network. In this notebook you should also write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Have fun and happy training!
End of explanation
"""
|
shngli/Data-Mining-Python | Mining massive datasets/recommendation systems.ipynb | gpl-3.0 | %matplotlib inline
import numpy as np
from scipy import spatial
import pickle
"""
Explanation: Recommendation Systems
In this question you will apply these methods to a real dataset. The data contains information about TV shows. More precisely, for 9985 users and 563 popular TV shows, we know if a given user watched a given show over a 3 month period.
The data sets are:
user-shows.txt: This is the ratings matrix $R$, where each row corresponds to a user and each column corresponds to a TV show. $R_ij$ = 1 if user $i$ watched the show $j$ over a period of three months. The columns are separated by a space.
shows.txt: This is a file containing the titles of the TV shows, in the same order as the columns of $R$.
We will compare the user-user and item-item collaborative filtering recommendations for the 500th user of the dataset. Let’s call him Alex. In order to do so, we have erased the first 100 entries of Alex’s row in the matrix, and replaced them by 0s. This means that we don’t know which of the first 100 shows Alex has watched or not. Based on Alex’s behaviour on the other shows, we will give Alex recommendations on the first 100 shows. We will then see if our recommendations match what Alex had in fact watched.
Compute the matrices $P$ and $Q$.
Compute $\Gamma$ for the user-user collaborative filtering. Let $S$ denote the set of the first 100 shows (the first 100 columns of the matrix). From all the TV shows in $S$, which are the five that have the highest similarity scores for Alex? What are their similarity scores? In case of ties between two shows, choose the one with the smaller index. Do not write the index of the TV shows; write their names using the file shows.txt.
Compute the matrix $\Gamma$ for the item-item collaborative filtering. From all the TV shows in $S$, which are the five that have the highest similarity scores for Alex? In case of ties between two shows, choose the one with the smaller index. Again, hand in the names of the shows and their similarity scores.
Alex’s original row is given in the file alex.txt. For a given number $k$, the precision at top-$k$ is defined as follows: using the matrix $\Gamma$ computed previously, compute the top-$k$ TV shows in $S$ that are most similar to Alex (break ties as before). The precision is the fraction of the top-$k$ TV shows that were watched by Alex in reality.
Plot the precision at top-$k$ (defined above) as a function of $k$, for $k$ $\in$ [1, 19], with predictions obtained by the user-user collaborative filtering.
• On the same figure, plot the precision at top-$k$ as a function of $k$, for $k$ $\in$ [1, 19], with predictions obtained by the item-item collaborative filtering.
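On a toy ratings matrix, the whole user-user pipeline fits in a few lines (an illustration only; the helper name is made up and the real $R$ is 9985 x 563):

```python
import numpy as np

# Toy ratings matrix: 3 users x 4 shows
R = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 1., 1.]])
p = R.sum(axis=1)                      # user degrees (diagonal of P)
Ps = np.diag(1.0 / np.sqrt(p))         # P^(-1/2)
S = Ps.dot(R).dot(R.T).dot(Ps)         # cosine user-user similarity
gamma_uu = S.dot(R)                    # user-user recommendation scores

def precision_at_k(scores, truth, k):
    # Fraction of the k highest-scoring shows the user actually watched;
    # a stable sort on -scores breaks ties toward the smaller index.
    top = np.argsort(-scores, kind='mergesort')[:k]
    return float(truth[top].mean())
```

Each diagonal entry of $S$ is a user's cosine similarity with itself, so it comes out to exactly 1, which is a handy sanity check.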
End of explanation
"""
def loadRatings():
ratings = []
with open('user-shows.txt', 'r') as f:
for line in f:
line = map(int, line.strip().split())
ratings.append(line)
return ratings
"""
Explanation: Load the ratings matrix R
End of explanation
"""
def calcP(R):
m, _ = R.shape
P = np.zeros([m, m])
for i in range(m):
P[i, i] = np.sum(R[i])
return P
"""
Explanation: Compute the matrices P
End of explanation
"""
def calcQ(R):
_, n = R.shape
Q = np.zeros([n, n])
for i in range(n):
Q[i, i] = np.sum(R[:, i])
return Q
"""
Explanation: Compute the matrices Q
End of explanation
"""
def loadAlex():
with open('alex.txt', 'r') as f:
alex = map(int, f.readline().strip().split())
return np.array(alex)
def recommend():
print "load ratings"
R = np.matrix(loadRatings())
m, n = R.shape
print "calc P"
P = calcP(R)
print "calc Q"
Q = calcQ(R)
Ps = P
for i in range(m):
Ps[i, i] = np.sqrt(1.0 / P[i, i])
Qs = Q
for i in range(n):
Qs[i, i] = np.sqrt(1.0 / Q[i, i])
print "Compute gamma for the user-user collaborative filtering"
Guu = Ps * R * R.T * Ps * R
print np.array(Guu[499, :100]).flatten().argsort()[::-1]
print "Compute gamma for the item-item collaborative filtering"
Gii = R * Qs * R.T * R * Qs
print np.array(Gii[499, :100]).flatten().argsort()[::-1]
def plot(uu, ii, alex):
uuplot = []
for i in range(19):
cnt = 0
for s in uu[:(i+1)]:
if alex[s] == 1:
cnt += 1
uuplot.append(float(cnt) / (i + 1))
print uuplot
iiplot = []
for i in range(19):
cnt = 0
for s in ii[:(i+1)]:
if alex[s] == 1:
cnt += 1
iiplot.append(float(cnt) / (i + 1))
print iiplot
import matplotlib.pyplot as plt
plt.plot(range(1, 20), uuplot, 'bs-', range(1, 20), iiplot, 'r^-')
plt.xlabel('K')
plt.ylabel('Precision')
plt.grid(True)
plt.legend(('User-user', 'Item-item'))
if __name__ == '__main__':
# recommend()
uu = map(int, '96 74 45 60 9 68 82 5 72 62 64 59 20 90 97 35 46 76 65 25'.split())
ii = map(int, '96 74 60 45 82 9 68 5 72 20 62 59 64 90 97 35 25 65 76 2'.split())
alex = loadAlex()
plot(uu, ii, alex)
"""
Explanation: Load Alex
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch.ipynb | apache-2.0 | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex client library: Custom training tabular regression model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_tabular_regression_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom tabular regression model for batch prediction.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model, with a training pipeline, from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a batch prediction on the uploaded model. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train the TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Upload the model as a Vertex Model resource.
Make a batch prediction.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
"""
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Vertex client library, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex runs
the code from this package. In this tutorial, Vertex also saves the
trained model that results from your job in the same bucket. You can then
create an Endpoint resource based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
"""
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
"""
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
"""
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 NVIDIA Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
Note: TF releases before 2.3 with GPU support will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
"""
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
"""
Explanation: Container (Docker) image
Next, we will set the Docker container images for training and prediction.
TensorFlow 1.15
gcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest
TensorFlow 2.4
gcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest
gcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest
XGBoost
gcr.io/cloud-aiplatform/training/xgboost-cpu.1-1
Scikit-learn
gcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest
Pytorch
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest
gcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest
For the latest list, see Pre-built containers for training.
TensorFlow 1.15
gcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest
gcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest
TensorFlow 2.1
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest
TensorFlow 2.2
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest
TensorFlow 2.3
gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest
gcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest
XGBoost
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest
gcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest
Scikit-learn
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest
gcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest
For the latest list, see Pre-built containers for prediction
End of explanation
"""
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
"""
Explanation: Machine Type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75 GB of memory per vCPU
n1-highmem: 6.5 GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of vCPUs, one of [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
"""
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["job"] = create_job_client()
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
"""
Explanation: Tutorial
Now you are ready to create and train your own custom model for Boston Housing.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Model Service for Model resources.
Endpoint Service for deployment.
Job Service for batch jobs and custom training.
Prediction Service for serving.
End of explanation
"""
if TRAIN_GPU:
machine_spec = {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": TRAIN_GPU,
"accelerator_count": TRAIN_NGPU,
}
else:
machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
"""
Explanation: Train a model
There are two ways you can train a custom model using a container image:
Use a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.
Use your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.
Prepare your custom job specification
Now that your clients are ready, your first step is to create a Job Specification for your custom training job. The job specification will consist of the following:
worker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)
python_package_spec : The specification of the Python package to be installed with the pre-built container.
Prepare your machine specification
Now define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.
- machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.
- accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.
- accelerator_count: The number of accelerators.
End of explanation
"""
DISK_TYPE = "pd-ssd" # [ pd-ssd, pd-standard]
DISK_SIZE = 200 # GB
disk_spec = {"boot_disk_type": DISK_TYPE, "boot_disk_size_gb": DISK_SIZE}
"""
Explanation: Prepare your disk specification
(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.
boot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.
boot_disk_size_gb: Size of disk in GB.
End of explanation
"""
JOB_NAME = "custom_job_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_NAME, JOB_NAME)
if not TRAIN_NGPU or TRAIN_NGPU < 2:
TRAIN_STRATEGY = "single"
else:
TRAIN_STRATEGY = "mirror"
EPOCHS = 20
STEPS = 100
PARAM_FILE = BUCKET_NAME + "/params.txt"
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
"--param-file=" + PARAM_FILE,
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
"--distribute=" + TRAIN_STRATEGY,
"--param-file=" + PARAM_FILE,
]
worker_pool_spec = [
{
"replica_count": 1,
"machine_spec": machine_spec,
"disk_spec": disk_spec,
"python_package_spec": {
"executor_image_uri": TRAIN_IMAGE,
"package_uris": [BUCKET_NAME + "/trainer_boston.tar.gz"],
"python_module": "trainer.task",
"args": CMDARGS,
},
}
]
"""
Explanation: Define the worker pool specification
Next, you define the worker pool specification for your custom training job. The worker pool specification will consist of the following:
replica_count: The number of instances to provision of this machine type.
machine_spec: The hardware specification.
disk_spec: (optional) The disk storage specification.
python_package_spec: The Python training package to install on the VM instance(s), the Python module to invoke, and the command line arguments for that module.
Let's now dive deeper into the Python package specification:
-executor_image_uri: The Docker container image configured for your custom training job.
-package_uris: A list of the locations (URIs) of your Python training packages to install on the provisioned instance. The locations must be in a Cloud Storage bucket. These can be either individual Python files or a zip (archive) of an entire package. In the latter case, the job service unzips (unarchives) the contents into the Docker image.
-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task -- note that it is not necessary to append the .py suffix.
-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:
- "--model-dir=" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:
- direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
- indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
- "--epochs=" + EPOCHS: The number of epochs for training.
- "--steps=" + STEPS: The number of steps (batches) per epoch.
- "--distribute=" + TRAIN_STRATEGY" : The training distribution strategy to use for single or distributed training.
- "single": single device.
- "mirror": all GPU devices on a single compute instance.
- "multi": all GPU devices on all compute instances.
- "--param-file=" + PARAM_FILE: The Cloud Storage location for storing feature normalization values.
End of explanation
"""
if DIRECT:
job_spec = {"worker_pool_specs": worker_pool_spec}
else:
job_spec = {
"worker_pool_specs": worker_pool_spec,
"base_output_directory": {"output_uri_prefix": MODEL_DIR},
}
custom_job = {"display_name": JOB_NAME, "job_spec": job_spec}
"""
Explanation: Assemble a job specification
Now assemble the complete description for the custom job specification:
display_name: The human readable name you assign to this custom job.
job_spec: The specification for the custom job.
worker_pool_specs: The specification for the machine VM instances.
base_output_directory: This tells the service the Cloud Storage location where to save the model artifacts (when variable DIRECT = False). The service will then pass the location to the training script as the environment variable AIP_MODEL_DIR, and the path will be of the form: <output_uri_prefix>/model
End of explanation
"""
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
"""
Explanation: Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the .py file suffix.
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
"""
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[:, _], max = scale(x_train[:, _])
x_test[:, _], _ = scale(x_test[:, _])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
"""
Explanation: Task.py contents
In the next cell, you write the contents of the training script, task.py. We won't go into detail here; it's there for you to browse. In summary, the script:
Gets the directory where to save the model artifacts from the command line (--model-dir) and, if not specified, from the environment variable AIP_MODEL_DIR.
Loads the Boston Housing dataset from the TF.Keras builtin datasets.
Builds a simple deep neural network model using the TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with the number of epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature (f.write(str(params))) to the specified parameters file.
End of explanation
"""
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
"""
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tarball and store it in your Cloud Storage bucket.
End of explanation
"""
def create_custom_job(custom_job):
response = clients["job"].create_custom_job(parent=PARENT, custom_job=custom_job)
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = create_custom_job(custom_job)
"""
Explanation: Train the model
Now start the training of your custom training job on Vertex. Use this helper function create_custom_job, which takes the following parameter:
-custom_job: The specification for the custom job.
The helper function calls job client service's create_custom_job method, with the following parameters:
-parent: The Vertex location path to Dataset, Model and Endpoint resources.
-custom_job: The specification for the custom job.
You will display a handful of the fields in the returned response object; the two of most interest are:
response.name: The Vertex fully qualified identifier assigned to this custom training job. Save this identifier for use in subsequent steps.
response.state: The current state of the custom training job.
End of explanation
"""
# The full unique ID for the custom job
job_id = response.name
# The short numeric ID for the custom job
job_short_id = job_id.split("/")[-1]
print(job_id)
"""
Explanation: Now get the unique identifier for the custom job you created.
End of explanation
"""
def get_custom_job(name, silent=False):
response = clients["job"].get_custom_job(name=name)
if silent:
return response
print("name:", response.name)
print("display_name:", response.display_name)
print("state:", response.state)
print("create_time:", response.create_time)
print("update_time:", response.update_time)
return response
response = get_custom_job(job_id)
"""
Explanation: Get information on a custom job
Next, use this helper function get_custom_job, which takes the following parameter:
name: The Vertex fully qualified identifier for the custom job.
The helper function calls the job client service's get_custom_job method, with the following parameter:
name: The Vertex fully qualified identifier for the custom job.
If you recall, you got the Vertex fully qualified identifier for the custom job in the response.name field when you called the create_custom_job method, and saved the identifier in the variable job_id.
End of explanation
"""
while True:
response = get_custom_job(job_id, True)
if response.state != aip.JobState.JOB_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_path_to_deploy = None
if response.state == aip.JobState.JOB_STATE_FAILED:
break
else:
if not DIRECT:
MODEL_DIR = MODEL_DIR + "/model"
model_path_to_deploy = MODEL_DIR
print("Training Time:", response.update_time - response.create_time)
break
time.sleep(60)
print("model_to_deploy:", model_path_to_deploy)
"""
Explanation: Deployment
Training the above model may take upwards of 20 minutes.
Once your model is done training, you can calculate the actual training time by subtracting the job's create_time from its update_time. For your model, you will need to know the location of the saved model, which the Python script saved in your Cloud Storage bucket at MODEL_DIR + '/saved_model.pb'.
End of explanation
"""
import tensorflow as tf
model = tf.keras.models.load_model(MODEL_DIR)
"""
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
"""
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[:, _] = scale(x_test[:, _])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
"""
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This will return the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why it is loaded as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescaling) the data in each column by dividing each value by the maximum value of that column. This will replace each single value with a 32-bit floating point number between 0 and 1.
End of explanation
"""
model.evaluate(x_test, y_test)
"""
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
"""
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
"""
Explanation: Upload the model for serving
Next, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
How does the serving function work
When you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.
The serving function consists of two parts:
preprocessing function:
Converts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).
Performs the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.
post-processing function:
Converts the model output to the format expected by the receiving application -- e.g., compresses the output.
Packages the output for the receiving application -- e.g., adds headings, makes a JSON object, etc.
Both the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.
One thing to keep in mind when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error when compiling the serving function indicating that you are using an EagerTensor, which is not supported.
Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
End of explanation
"""
IMAGE_URI = DEPLOY_IMAGE
def upload_model(display_name, image_uri, model_uri):
model = {
"display_name": display_name,
"metadata_schema_uri": "",
"artifact_uri": model_uri,
"container_spec": {
"image_uri": image_uri,
"command": [],
"args": [],
"env": [{"name": "env_name", "value": "env_value"}],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": "",
},
}
response = clients["model"].upload_model(parent=PARENT, model=model)
print("Long running operation:", response.operation.name)
upload_model_response = response.result(timeout=180)
print("upload_model_response")
print(" model:", upload_model_response.model)
return upload_model_response.model
model_to_deploy_id = upload_model(
"boston-" + TIMESTAMP, IMAGE_URI, model_path_to_deploy
)
"""
Explanation: Upload the model
Use this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.
The helper function takes the following parameters:
display_name: A human readable name for the Model resource.
image_uri: The container image for the model deployment.
model_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.
The helper function calls the Model client service's method upload_model, which takes the following parameters:
parent: The Vertex location root path for Dataset, Model and Endpoint resources.
model: The specification for the Vertex Model resource instance.
Let's now dive deeper into the Vertex model specification model. This is a dictionary object that consists of the following fields:
display_name: A human readable name for the Model resource.
metadata_schema_uri: Since your model was built without a Vertex Dataset resource, you will leave this blank ('').
artifact_uri: The Cloud Storage path where the model is stored in SavedModel format.
container_spec: The specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier (DEPLOY_GPU != None) to use a GPU; otherwise only a CPU is allocated.
Uploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.
The helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.
End of explanation
"""
def get_model(name):
response = clients["model"].get_model(name=name)
print(response)
get_model(model_to_deploy_id)
"""
Explanation: Get Model resource information
Now let's get the model information for just your model. Use this helper function get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
This helper function calls the Vertex Model client service's method get_model, with the following parameter:
name: The Vertex unique identifier for the Model resource.
End of explanation
"""
test_item_1 = x_test[0]
test_label_1 = y_test[0]
test_item_2 = x_test[1]
test_label_2 = y_test[1]
print(test_item_1.shape)
"""
Explanation: Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Make online prediction requests to the Endpoint resource.
For batch-prediction, you:
Create a batch prediction job.
The job service will provision resources for the batch prediction request.
The results of the batch prediction request are returned to the caller.
The job service will unprovision the resources for the batch prediction request.
Make a batch prediction request
Now make a batch prediction request with your trained model.
Get test items
You will use examples out of the test (holdout) portion of the dataset as test items.
End of explanation
"""
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {serving_input: test_item_1.tolist()}
f.write(json.dumps(data) + "\n")
data = {serving_input: test_item_2.tolist()}
f.write(json.dumps(data) + "\n")
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Each instance in the prediction request is a dictionary entry of the form:
{serving_input: content}
serving_input: the name of the input layer of the underlying model.
content: The feature values of the test item as a list.
End of explanation
"""
MIN_NODES = 1
MAX_NODES = 1
"""
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests:
Single Instance: The batch prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.
Auto Scaling: The batch prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
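As a sketch, the three strategies map onto the two replica-count fields as shown below. replica_counts is a hypothetical helper written only to make the mapping concrete; it is not part of the Vertex SDK.

```python
def replica_counts(strategy, fixed_nodes=1, max_nodes=4):
    # Map a scaling strategy to (min_replica_count, max_replica_count).
    if strategy == "single":   # one compute instance
        return 1, 1
    if strategy == "manual":   # fixed pool of identical instances
        return fixed_nodes, fixed_nodes
    if strategy == "auto":     # scale between a floor and a ceiling
        return 1, max_nodes
    raise ValueError(strategy)


# This tutorial uses single-instance scaling.
MIN_NODES, MAX_NODES = replica_counts("single")
```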
End of explanation
"""
BATCH_MODEL = "boston_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl"
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME
)
"""
Explanation: Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:
display_name: The human readable name for the prediction job.
model_name: The Vertex fully qualified identifier for the Model resource.
gcs_source_uri: The Cloud Storage path to the input file -- which you created above.
gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.
parameters: Additional filtering parameters for serving prediction results.
The helper function calls the job client service's create_batch_prediction_job method, with the following parameters:
parent: The Vertex location root path for Dataset, Model and Pipeline resources.
batch_prediction_job: The specification for the batch prediction job.
Let's now dive into the specification for the batch_prediction_job:
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
dedicated_resources: The compute resources to provision for the batch prediction job.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
model_parameters: Additional filtering parameters for serving prediction results. No additional parameters are supported for custom models.
input_config: The input source and format type for the instances to predict.
instances_format: The format of the batch prediction request file: csv or jsonl.
gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
output_config: The output destination and format for the predictions.
predictions_format: The format of the batch prediction response file: csv or jsonl.
gcs_destination: The output destination for the predictions.
This call is an asynchronous operation. You will print from the response object a few select fields, including:
name: The Vertex fully qualified identifier assigned to the batch prediction job.
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
generate_explanations: Whether True/False explanations were provided with the predictions (explainability).
state: The state of the prediction job (pending, running, etc).
Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
End of explanation
"""
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
"""
Explanation: Now get the unique identifier for the batch prediction job you created.
End of explanation
"""
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
"""
Explanation: Get information on a batch prediction job
Use this helper function get_batch_prediction_job, with the following parameter:
job_name: The Vertex fully qualified identifier for the batch prediction job.
The helper function calls the job client service's get_batch_prediction_job method, with the following parameter:
name: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id.
The helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.
End of explanation
"""
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction.results*
print("Results:")
! gsutil cat $folder/prediction.results*
print("Errors:")
! gsutil cat $folder/prediction.errors*
break
time.sleep(60)
"""
Explanation: Get the predictions
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name prediction, and under that folder will be a file called prediction.results-xxxxx-of-xxxxx.
Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.
The response contains a JSON object for each instance, in the form:
dense_input: The input for the prediction.
prediction: The predicted value.
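For example, the JSONL results could be parsed locally with a few lines of standard-library Python. This is a sketch; the field names follow the dense_input/prediction form described above.

```python
import json


def parse_prediction_lines(lines):
    # Each non-empty line of prediction.results-* is one JSON object
    # holding the model input and its predicted value.
    predictions = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        predictions.append(obj["prediction"])
    return predictions
```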
End of explanation
"""
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
geoneill12/phys202-2015-work | assignments/assignment04/MatplotlibEx02.ipynb | mit | import math
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 2
Imports
End of explanation
"""
!head -n 30 open_exoplanet_catalogue.txt
"""
Explanation: Exoplanet properties
Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets.
http://iopscience.iop.org/1402-4896/2008/T130/014001
Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo:
https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue
A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data:
End of explanation
"""
data = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter = ',')
data_mass = data[:,2]
data_mass
def remove(x):
a = []
for item in x:
if not math.isnan(item):
a.append(item)
return a
h = remove(data_mass)
h
assert data.shape==(1993,24)
"""
Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data:
End of explanation
"""
plt.hist(h, bins = 1000, color = 'blue', alpha=0.7)
plt.xlim(0,30)
plt.ylim(0,300)
plt.ylabel('Number of Exoplanets')
plt.xlabel('Jupiter Masses')
assert True # leave for grading
"""
Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
Pick the number of bins for the histogram appropriately.
End of explanation
"""
data_orbit = data[:,6]
data_axis = data[:,5]
r = remove(data_orbit)
s = remove(data_axis)
s = s[0:880]  # truncate so s and r have the same length
plt.figure(figsize=(25,6))
plt.scatter(s,r, color = 'green', alpha = 0.4)
plt.xlim(0,10)
plt.ylim(0,1)
plt.xlabel('Semimajor Axis')
plt.ylabel('Orbital Eccentricity')
plt.grid(True)
assert True # leave for grading
"""
Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis.
Customize your plot to follow Tufte's principles of visualizations.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
VectorBlox/PYNQ | Pynq-Z1/notebooks/examples/tracebuffer_spi.ipynb | bsd-3-clause | from pprint import pprint
from time import sleep
from pynq import PL
from pynq import Overlay
from pynq.drivers import Trace_Buffer
from pynq.iop import Pmod_OLED
from pynq.iop import PMODA
from pynq.iop import PMODB
from pynq.iop import ARDUINO
ol = Overlay("base.bit")
ol.download()
"""
Explanation: Trace Buffer - Tracing SPI Transactions
The Trace_Buffer class can monitor the waveform and transactions on the PMODA, PMODB, and ARDUINO connectors.
This demo shows how to use this class to track SPI transactions. For this demo, users have to connect the Pmod OLED to PMODB.
Step 1: Overlay Management
Users have to import all the necessary classes. Make sure to use the right bitstream.
End of explanation
"""
oled = Pmod_OLED(PMODB)
"""
Explanation: Step 2: Instantiating OLED
Although this demo could also be run on PMODA, here we use PMODB.
End of explanation
"""
tr_buf = Trace_Buffer(PMODB,pins=[0,1,2,3],probes=['CS','MOSI','NC','CLK'],
protocol="spi",rate=20000000)
# Start the trace buffer
tr_buf.start()
# Write characters
oled.write("1 2 3 4 5 6")
# Stop the trace buffer
tr_buf.stop()
"""
Explanation: Step 3: Tracking Transactions
Instantiating the trace buffer with the SPI protocol. The SPI clock is derived from the 100 MHz IO Processor (IOP) clock; based on the settings of the IOP SPI controller, the SPI clock period is 16 times the IOP clock period. Hence we set the sample rate to 20 MHz.
After starting the trace buffer DMA, also start to write some characters. Then stop the trace buffer DMA.
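The choice of sample rate can be sanity-checked with a quick calculation, assuming the 100 MHz IOP clock and divide-by-16 SPI setting mentioned above:

```python
IOP_CLOCK_HZ = 100_000_000           # IO Processor clock
SPI_CLOCK_HZ = IOP_CLOCK_HZ // 16    # SPI period is 16x the IOP period
SAMPLE_RATE_HZ = 20_000_000          # trace buffer sample rate used above

# Samples captured per SPI clock cycle (oversampling factor)
oversampling = SAMPLE_RATE_HZ / SPI_CLOCK_HZ
```

The 20 MHz rate therefore samples each SPI clock cycle a little over three times, enough for the decoder to recover the signal edges.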
End of explanation
"""
# Configuration for PMODB
start = 25000
stop = 35000
# Parsing and decoding
tr_buf.parse("spi_trace.csv",start,stop)
tr_buf.decode("spi_trace.pd",
options=':wordsize=8:cpol=0:cpha=0')
"""
Explanation: Step 4: Parsing and Decoding Transactions
The trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script).
Then the trace buffer object can also decode the transactions using the open-source sigrok decoders. The decoded file (*.pd) is saved into the same folder as this script.
Reference:
https://sigrok.org/wiki/Main_Page
End of explanation
"""
tr_buf.display()
"""
Explanation: Step 5: Displaying the Result
The final waveform and decoded transactions are shown using the open-source wavedrom library.
Note: It may take a while for the waveforms to be displayed.
Reference:
https://www.npmjs.com/package/wavedrom
End of explanation
"""
|
scientific-visualization-2016/ClassMaterials | Week-02/03_intro_matplotlib.ipynb | cc0-1.0 | #inline to use with notebook (from pylab import *)
%pylab inline
"""
Explanation: <img src='https://www.rc.colorado.edu/sites/all/themes/research/logo.png'>
Introduction to Data Visualization with matplotlib
Thomas Hauser
<img src='https://s3.amazonaws.com/research_computing_tutorials/mpl-overview.png'>
Objectives
Understand the difference between pylab and pyplot.
Understand the basic components of a plot.
Understand style
Give you enough information to use the gallery.
Reference for several standard plots.
histogram, density, boxplot (when appropriate)
scatter, line, hexbin
contour, false-color
References
This tutorial based on some of the following excellent content.
J.R. Johansson's tutorial
Matplotlib tutorial by Jake Vanderplas
Nicolas P. Rougier's tutorial
Painlessly create beautiful matplotlib plots
Making matplotlib look like ggplot
https://github.com/jakevdp/mpld3
Harvard CS109 Data Science Class.
Alternatives
ggplot for python.
vincent: Python to Vega (and ultimately d3).
bokeh
mpld3: Render matplotlib as d3.js in the notebook.
Object and Functional Models
Functional
Emulate Matlab
Convention: implicit state (`from pylab import *`)
Object-oriented
Not a flat model.
Figure, Axes (`import matplotlib.pyplot as plt`)
Caution: redundant interface, namespace issues
Enabling plotting
IPython terminal
ipython --pylab
ipython --matplotlib
IPython notebook
%pylab inline
%matplotlib inline
ipython notebook --pylab=inline
ipython notebook --matplotlib=inline
The funtional pylab interface
Loads all of numpy and matplotlib into the global namesapce.
Great for interactive use.
End of explanation
"""
# make the plots smaller or larger
rcParams['figure.figsize'] = 10, 6
x = linspace(0, 2*pi, 100)
y = sin(x)
plot(x, y)
show()
hist(randn(1000), alpha=0.5, histtype='stepfilled')
hist(0.75*randn(1000)+1, alpha=0.5, histtype='stepfilled')
show()
#hist?
"""
Explanation: Customizing matplotlib
http://matplotlib.org/users/customizing.html
rcParams for configuration
End of explanation
"""
#restart plotting
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = 8, 4
plt.plot(range(20))
"""
Explanation: Quick, easy, simple plots.
Object-oriented pyplot interface
No global variables
Separates style from graph
Can easily have multiple subplots
End of explanation
"""
x = np.linspace(0, 2*np.pi, 100) #same as before
y = np.sin(x)
fig = plt.figure()
ax = fig.add_subplot(1,1,1) # 1 row, 1 col, graphic 1
ax.plot(x, y)
ax.set_title("sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
"""
Explanation: The figures and axes objects
First, we create a blank figure. Then we add a subplot.
End of explanation
"""
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1) # 1 row, 2 cols, graphic 1
ax2 = fig.add_subplot(1,2,2) # graphic 2
ax1.plot(x, y)
ax2.hist(np.random.randn(1000), alpha=0.5, histtype='stepfilled')
ax2.hist(0.75*np.random.randn(1000)+1, alpha=0.5, histtype='stepfilled')
fig.show()
"""
Explanation: Multiple subplots
End of explanation
"""
fig, ax = plt.subplots(2,3, figsize=(10,10))
ax[0,0].plot(x, y)
ax[0,1].plot(x, np.cos(x), color="r")
ax[0,2].hist(np.random.randn(100), alpha=0.5, color="g")
ax[1,0].plot(x, y, 'o-')
ax[1,0].set_xlim([0,np.pi])
ax[1,0].set_ylim([0,1])
ax[1,1].plot(x, np.cos(x), 'x', color="r")
ax[1,2].scatter(np.random.randn(10), np.random.randn(10), color="g")
fig.show()
"""
Explanation: The plt.subplots() command
End of explanation
"""
fig = plt.figure(figsize=(8,6))
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
ax1.plot(x, y)
fig.tight_layout()
fig.show()
?plt.subplot2grid
"""
Explanation: plt.plot?
========== ========
character color
========== ========
'b' blue
'g' green
'r' red
'c' cyan
'm' magenta
'y' yellow
'k' black
'w' white
========== ========
The subplot2grid command
End of explanation
"""
fig, axes = plt.subplots( 3, 1, sharex = True)
for ax in axes:
ax.set_axis_bgcolor('0.95')
axes[0].plot(x,y)
fig.show()
print(axes.shape)
fig, axes = plt.subplots( 2, 2, sharex = True, sharey = True)
plt.subplots_adjust( wspace = 0.3, hspace = 0.1)
fig.show()
print(axes.shape)
"""
Explanation: Sharing axis values
End of explanation
"""
from mpld3 import enable_notebook
enable_notebook()
fig, ax = plt.subplots(1,2, sharey=True, sharex=True)
print(ax.shape)
ax[0].plot(x, y, color='green')
ax[1].scatter(np.random.randn(10), np.random.randn(10), color='red')
fig.show()
"""
Explanation: How about a little d3.js with mpld3?
https://github.com/jakevdp/mpld3
End of explanation
"""
|
INGEOTEC/CursoCategorizacionTexto | 02_representacion_vectorial_de_texto.ipynb | apache-2.0 | from microtc.textmodel import norm_chars
text = "Autoridades de la Ciudad de México aclaran que el equipo del cineasta mexicano no fue asaltado, pero sí una riña ahhh."
"""
Explanation: Machine learning on large volumes of text
Mario Graff (mgraffg@ieee.org, mario.graff@infotec.mx)
Sabino Miranda (sabino.miranda@infotec.mx)
Daniela Moctezuma (dmoctezuma@centrogeo.edu.mx)
Eric S. Tellez (eric.tellez@infotec.mx)
CONACYT, INFOTEC and CentroGEO
https://github.com/ingeotec
Vector representation of text
Normalization
Tokenization (n-words, q-grams, skip-grams)
Term weighting (TFIDF)
Similarity measures
Supervised learning
General learning model; training, test, score (accuracy, recall, precision, f1)
Support vector machines (SVM)
Genetic programming (EvoDAG)
Distant supervision
$\mu$TC
Pipeline of transformations
Parameter optimization
Classifiers
Using $\mu$TC
Applications
Sentiment analysis
Authorship attribution
News classification
Spam
Gender and age
Conclusions
Natural Language Processing (NLP)
$d=s_1\cdots s_n$ is a document where $s \in \Sigma$, and $\Sigma$ is an alphabet of size $\sigma = |\Sigma|$
Twitter alone would allow: $26^{140} \simeq 1.248 \times 10^{198}$ possible messages
Rules about which symbols can be joined
The notion of terms or words, i.e., morphology
Rules about how words can be combined, i.e., syntax and grammar
An extremely hard problem
Rules
Variants
Exceptions
Errors
Concepts that are expressed differently in every language
On top of that, there is the semantic problem:
A term $s_i$ can have different meanings (antonyms)
The opposite also happens: $s_i \not= s_j$ but identical in meaning (synonyms)
In both cases, the precise meaning depends on the context
There are also approximate cases of everything above
Irony, sarcasm, etc.
... there are very many open problems. NLP is hard; in fact, it is AI-complete
Text categorization
The problem consists of determining, given a text $d$, the category(ies) it belongs to within a previously known set $C$ of categories.
More formally:
Given a set of categories $\cal{C} = {c_1, ..., c_m}$, determine the subset of categories
$C_d \in \wp(\cal{C})$ to which $d$ belongs.
Note that $C_d$ may be empty or all of $\cal{C}$.
Text classification
Text classification is a specialization of the categorization problem in which $|C_d| = 1$, that is, $d$ can be assigned to exactly one category.
It is a problem of interest to both industry and academia, with applications in many areas of knowledge.
Sentiment analysis
Authorship attribution, e.g., gender, age, style, etc.
Spam detection
News categorization
Language identification
Our approach
Because of its complexity, NLP has a large number of open problems; in particular, we focus on the classification of informally written text (e.g., Twitter).
For this, a standard pipeline is used
Theoretical approach (many simplifications)
Logic
Linguistics
Semantics
The practical approach makes many assumptions
The language is fixed
The problem is fixed
It is assumed that the more sophisticated the techniques used, the better the results
Both approaches assume the absence of errors
Our approach is based on:
* Machine learning
* Combinatorial optimization
Characteristics:
* Language independent
* Robust to errors
It is composed of:
* A set of text transformation functions
* A set of tokenizers
* Word filters
* Term weighting algorithms
Multilingual normalizers
| name | values | description |
|-----------|---------------------|--------------------------------------|
| del-punc | yes, no | Determines whether punctuation is removed |
| del-d1 | yes, no | Determines whether repeated letters are deleted |
| del-diac | yes, no | Determines whether diacritics (non-spacing symbols) are removed |
| lc | yes, no | Determines whether symbols are normalized to lowercase |
| emo | remove, group, none | Controls how emoticons are handled |
| num | remove, group, none | Controls how numbers are handled |
| url | remove, group, none | Controls how URLs are handled |
| usr | remove, group, none | Controls how user mentions are handled |
End of explanation
"""
diac = norm_chars(text, del_diac=True, del_dup=False, del_punc=False).replace('~', ' ')
Markdown("## diac\n" + diac)
dup = norm_chars(text, del_diac=False, del_dup=True, del_punc=False).replace('~', ' ')
Markdown("## dup\n" + dup)
punc = norm_chars(text, del_diac=False, del_dup=False, del_punc=True).replace('~', ' ')
Markdown("## punc\n" + punc)
from microtc.emoticons import EmoticonClassifier
from microtc.params import OPTION_GROUP, OPTION_DELETE
text = "Hoy es un día feliz :) :) o no :( "
"""
Explanation: diac, dup, and punc
Autoridades de la Ciudad de México aclaran que el equipo del cineasta mexicano no fue asaltado, pero sí una riña ahhh.
End of explanation
"""
emo = EmoticonClassifier()
group = emo.replace(text, OPTION_GROUP)
delete = emo.replace(text, OPTION_DELETE)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
from IPython.core.display import Markdown
import re
text = "@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC"
"""
Explanation: emo
Hoy es un día feliz :) :) o no :(
End of explanation
"""
lc = text.lower()
print(lc)
"""
Explanation: lc
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
"""
delete = re.sub(r"\d+\.?\d+", "", text)
group = re.sub(r"\d+\.?\d+", "_num", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
"""
Explanation: num
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
"""
delete = re.sub(r"https?://\S+", "", text)
group = re.sub(r"https?://\S+", "_url", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
"""
Explanation: url
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
"""
delete = re.sub(r"@\S+", "", text)
group = re.sub(r"@\S+", "_usr", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
"""
Explanation: usr
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
"""
from microtc.textmodel import TextModel
text = "que buena esta la platica"
model = TextModel([], token_list=[-1, -2])
"""
Explanation: Tokenizers
The tokenizers are in fact a list of tokenizers, defined as an element of $\wp{(\text{n-words} \cup \text{q-grams} \cup \text{skip-grams})} \setminus {\emptyset}$
| name | values | description |
|-----------|---------------------|--------------------------------------|
| n-words | ${1,2,3}$ | Lengths of word n-grams (n-words) |
| q-grams | ${1,2,3,4,5,6,7}$ | Lengths of character q-grams |
| skip-grams | ${(2,1), (3, 1), (2, 2), (3, 2)}$ | List of skip-grams |
configurations: 16383
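The two most common tokenizer families can be sketched in plain Python. This is a didactic illustration, not microtc's implementation; it assumes, as microtc does elsewhere in this notebook, that negative entries of token_list request word n-grams, positive entries request character q-grams, and spaces are written as '~'.

```python
def tokenize(text, token_list):
    # Plain-Python sketch of the tokenizers above (not microtc's code).
    # Negative entries produce word n-grams; positive entries produce
    # character q-grams over the text with spaces replaced by '~'.
    words = text.split()
    padded = text.replace(" ", "~")
    tokens = []
    for t in token_list:
        if t < 0:  # word n-grams of length -t
            n = -t
            tokens += ["~".join(words[i:i + n])
                       for i in range(len(words) - n + 1)]
        else:      # character q-grams of length t
            tokens += [padded[i:i + t]
                       for i in range(len(padded) - t + 1)]
    return tokens
```

For example, tokenize('que buena', [-2]) yields ['que~buena'].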
End of explanation
"""
model = TextModel([], token_list=[-1])
words = model.tokenize(text)
model = TextModel([], token_list=[-2])
biw = model.tokenize(text)
Markdown("## -1\n %s\n## -2\n%s" % (", ".join(words), ", ".join(biw)))
"""
Explanation: n-words
que buena esta la platica
End of explanation
"""
model = TextModel([], token_list=[3])
words = model.tokenize(text)
model = TextModel([], token_list=[4])
biw = model.tokenize(text)
Markdown("## 3\n %s\n## 4\n%s" % (", ".join(words), ", ".join(biw)))
"""
Explanation: q-grams
que buena esta la platica
End of explanation
"""
model = TextModel([], token_list=[(2, 1)])
words = model.tokenize(text)
model = TextModel([], token_list=[(2, 2)])
biw = model.tokenize(text)
Markdown("## (2, 1)\n %s\n## (2, 2)\n%s" % (", ".join(words), ", ".join(biw)))
"""
Explanation: skip-grams
que buena esta la platica
End of explanation
"""
docs = ["buen dia microtc", "excelente dia", "buenas tardes",
"las vacas me deprimen", "odio los lunes", "odio el trafico",
"la computadora", "la mesa", "la ventana"]
l = ["* " + x for x in docs]
Markdown("# Corpus\n" + "\n".join(l))
"""
Explanation: Why is it robust to errors?
Consider the texts $T=I_like_vanilla$ and $T' = I_lik3_vanila$
To fix ideas, suppose the Jaccard coefficient is used as the similarity measure, i.e.
$$\frac{|{{I, like, vanilla}} \cap {{I, lik3, vanila}}|}{|{{I, like, vanilla}} \cup {{I, lik3, vanila}}|} = 0.2$$
$$Q^T_3 = { I_l, _li, lik, ike, ke_, e_v, _va, van, ani, nil, ill, lla }$$
$$Q^{T'}_3 = { I_l, _li, lik, ik3, k3_, 3_v, _va, van, ani, nil, ila }$$
Under the same measure
$$\frac{|Q^T_3 \cap Q^{T'}_3|}{|Q^T_3 \cup Q^{T'}_3|} = \frac{7}{16} = 0.4375.$$
These sets are clearly more similar than the word-tokenized ones
The idea is that a learning algorithm gets a little more support to determine that $T$ is similar to $T'$
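The numbers above can be reproduced in a few lines of Python (a sketch; spaces are written as '_' to match the example):

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)


def qgrams(text, q=3):
    # Character q-grams; spaces are already written as '_' here.
    return {text[i:i + q] for i in range(len(text) - q + 1)}


T, Tp = "I_like_vanilla", "I_lik3_vanila"
word_sim = jaccard(T.split("_"), Tp.split("_"))  # 1/5 = 0.2
qgram_sim = jaccard(qgrams(T), qgrams(Tp))       # 7/16 = 0.4375
```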
Text weighting
| name | values | description |
|-----------|---------------------|--------------------------------------|
|token_min_filter | ${0.01, 0.03, 0.1, 0.30, -1, -5, -10}$ | Low-frequency filter |
|token_max_filter | ${0.9, 99, 1.0}$ | High-frequency filter |
| tfidf | yes, no | Determines whether TFIDF term weighting is applied |
About the weighting
Token weighting is fixed to TFIDF. Its name comes from the formulation $tf \times idf$.
$tf$ stands for term frequency; it is a local measure of the importance of term $t$ within document $d$. In its normalized form it is defined as:
$$tf(t,d) = \frac{freq(t, d)}{\max_{w \in d}{freq(w, d)}}$$
The more often $t$ appears in document $d$, the more important it is.
$idf$ stands for inverse document frequency; it is a global measure over the collection $D$, defined as:
$$ idf(t,d) = \log{\frac{|D|}{1+|\{d \in D: t \in d\}|}} $$
The more documents of the collection a term $t$ appears in, the more common and less discriminative it is, and therefore the less important.
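A hedged sketch of these two formulas on a few documents from the toy corpus above (illustrative only; this is not microtc's implementation):

```python
import math

def tf(term, doc):
    # normalized term frequency: count of term / count of most frequent word
    freqs = {w: doc.count(w) for w in doc}
    return freqs.get(term, 0) / max(freqs.values())

def idf(term, collection):
    # inverse document frequency over the whole collection
    df = sum(term in doc for doc in collection)
    return math.log(len(collection) / (1 + df))

corpus = [d.split() for d in ["buen dia microtc", "excelente dia", "buenas tardes"]]
# "dia" appears in 2 of 3 documents -> idf = log(3/3) = 0: common, not discriminative
# "microtc" appears in 1 document  -> idf = log(3/2) > 0: more informative
w_dia = tf("dia", corpus[0]) * idf("dia", corpus)
w_microtc = tf("microtc", corpus[0]) * idf("microtc", corpus)
```

With the `1 +` smoothing in the denominator, a term present in two of three documents gets an idf of exactly zero here, while the rarer term keeps a positive weight.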
End of explanation
"""
from microtc.textmodel import TextModel
model = TextModel(docs, token_list=[-1])
print(model[docs[0]])
"""
Explanation: TFIDF
buen dia microtc
End of explanation
"""
|
kubeflow/examples | house-prices-kaggle-competition/house-prices-orig.ipynb | apache-2.0 |
import os
import warnings
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from IPython.display import display
from pandas.api.types import CategoricalDtype
from category_encoders import MEstimateEncoder
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
# Mute warnings
warnings.filterwarnings('ignore')
"""
Explanation: Introduction
Welcome to the feature engineering project for the House Prices - Advanced Regression Techniques competition! This competition uses nearly the same data you used in the exercises of the Feature Engineering course.
Step 1 - Preliminaries
Imports and Configuration
We'll start by importing the packages we used in the exercises and setting some notebook defaults. Unhide this cell if you'd like to see the libraries we'll use:
End of explanation
"""
def load_data():
# Read data
data_dir = Path("../input/house-prices-advanced-regression-techniques/")
df_train = pd.read_csv(data_dir / "train.csv", index_col="Id")
df_test = pd.read_csv(data_dir / "test.csv", index_col="Id")
# Merge the splits so we can process them together
df = pd.concat([df_train, df_test])
# Preprocessing
df = clean(df)
df = encode(df)
df = impute(df)
# Reform splits
df_train = df.loc[df_train.index, :]
df_test = df.loc[df_test.index, :]
return df_train, df_test
"""
Explanation: Data Preprocessing
Before we can do any feature engineering, we need to preprocess the data to get it in a form suitable for analysis. The data we used in the course was a bit simpler than the competition data. For the Ames competition dataset, we'll need to:
- Load the data from CSV files
- Clean the data to fix any errors or inconsistencies
- Encode the statistical data type (numeric, categorical)
- Impute any missing values
We'll wrap all these steps up in a function, which will make it easy for you to get a fresh dataframe whenever you need one. After reading the CSV file, we'll apply three preprocessing steps, clean, encode, and impute, and then create the data splits: one (df_train) for training the model, and one (df_test) for making the predictions that you'll submit to the competition for scoring on the leaderboard.
End of explanation
"""
data_dir = Path("../input/house-prices-advanced-regression-techniques/")
df = pd.read_csv(data_dir / "train.csv", index_col="Id")
df.Exterior2nd.unique()
"""
Explanation: Clean Data
Some of the categorical features in this dataset have what are apparently typos in their categories:
End of explanation
"""
def clean(df):
df["Exterior2nd"] = df["Exterior2nd"].replace({"Brk Cmn": "BrkComm"})
# Some values of GarageYrBlt are corrupt, so we'll replace them
# with the year the house was built
df["GarageYrBlt"] = df["GarageYrBlt"].where(df.GarageYrBlt <= 2010, df.YearBuilt)
# Names beginning with numbers are awkward to work with
df.rename(columns={
"1stFlrSF": "FirstFlrSF",
"2ndFlrSF": "SecondFlrSF",
"3SsnPorch": "Threeseasonporch",
}, inplace=True,
)
return df
"""
Explanation: Comparing these to data_description.txt shows us what needs cleaning. We'll take care of a couple of issues here, but you might want to evaluate this data further.
End of explanation
"""
# The numeric features are already encoded correctly (`float` for
# continuous, `int` for discrete), but the categoricals we'll need to
# do ourselves. Note in particular, that the `MSSubClass` feature is
# read as an `int` type, but is actually a (nominative) categorical.
# The nominative (unordered) categorical features
features_nom = ["MSSubClass", "MSZoning", "Street", "Alley", "LandContour", "LotConfig", "Neighborhood", "Condition1", "Condition2", "BldgType", "HouseStyle", "RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType", "Foundation", "Heating", "CentralAir", "GarageType", "MiscFeature", "SaleType", "SaleCondition"]
# The ordinal (ordered) categorical features
# Pandas calls the categories "levels"
five_levels = ["Po", "Fa", "TA", "Gd", "Ex"]
ten_levels = list(range(10))
ordered_levels = {
"OverallQual": ten_levels,
"OverallCond": ten_levels,
"ExterQual": five_levels,
"ExterCond": five_levels,
"BsmtQual": five_levels,
"BsmtCond": five_levels,
"HeatingQC": five_levels,
"KitchenQual": five_levels,
"FireplaceQu": five_levels,
"GarageQual": five_levels,
"GarageCond": five_levels,
"PoolQC": five_levels,
"LotShape": ["Reg", "IR1", "IR2", "IR3"],
"LandSlope": ["Sev", "Mod", "Gtl"],
"BsmtExposure": ["No", "Mn", "Av", "Gd"],
"BsmtFinType1": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"BsmtFinType2": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"Functional": ["Sal", "Sev", "Maj1", "Maj2", "Mod", "Min2", "Min1", "Typ"],
"GarageFinish": ["Unf", "RFn", "Fin"],
"PavedDrive": ["N", "P", "Y"],
"Utilities": ["NoSeWa", "NoSewr", "AllPub"],
"CentralAir": ["N", "Y"],
"Electrical": ["Mix", "FuseP", "FuseF", "FuseA", "SBrkr"],
"Fence": ["MnWw", "GdWo", "MnPrv", "GdPrv"],
}
# Add a None level for missing values
ordered_levels = {key: ["None"] + value for key, value in
ordered_levels.items()}
def encode(df):
# Nominal categories
for name in features_nom:
df[name] = df[name].astype("category")
# Add a None category for missing values
if "None" not in df[name].cat.categories:
df[name].cat.add_categories("None", inplace=True)
# Ordinal categories
for name, levels in ordered_levels.items():
df[name] = df[name].astype(CategoricalDtype(levels,
ordered=True))
return df
"""
Explanation: Encode the Statistical Data Type
Pandas has Python types corresponding to the standard statistical types (numeric, categorical, etc.). Encoding each feature with its correct type helps ensure each feature is treated appropriately by whatever functions we use, and makes it easier for us to apply transformations consistently. This hidden cell defines the encode function:
End of explanation
"""
def impute(df):
for name in df.select_dtypes("number"):
df[name] = df[name].fillna(0)
for name in df.select_dtypes("category"):
df[name] = df[name].fillna("None")
return df
"""
Explanation: Handle Missing Values
Handling missing values now will make the feature engineering go more smoothly. We'll impute 0 for missing numeric values and "None" for missing categorical values. You might like to experiment with other imputation strategies. In particular, you could try creating "missing value" indicators: 1 whenever a value was imputed and 0 otherwise.
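A minimal sketch of the indicator idea (toy values; the column name is only for flavor):

```python
import pandas as pd
import numpy as np

# Mark which rows are about to be imputed, then fill them.
s = pd.Series([12.0, np.nan, 7.0], name="LotFrontage")
indicator = s.isna().astype(int)   # 1 where a value will be imputed
filled = s.fillna(0)
```

The indicator column lets the model learn that an imputed 0 means "unknown" rather than a true measurement of zero.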
End of explanation
"""
df_train, df_test = load_data()
"""
Explanation: Load Data
And now we can call the data loader and get the processed data splits:
End of explanation
"""
# Peek at the values
#display(df_train)
#display(df_test)
# Display information about dtypes and missing values
#display(df_train.info())
#display(df_test.info())
"""
Explanation: Uncomment and run this cell if you'd like to see what they contain. Notice that df_test is
missing values for SalePrice. (NAs were filled with 0's in the imputation step.)
End of explanation
"""
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
#
# Label encoding is good for XGBoost and RandomForest, but one-hot
# would be better for models like Lasso or Ridge. The `cat.codes`
# attribute holds the category levels.
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
log_y = np.log(y)
score = cross_val_score(
model, X, log_y, cv=5, scoring="neg_mean_squared_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
"""
Explanation: Establish Baseline
Finally, let's establish a baseline score to judge our feature engineering against.
Here is the function we created in Lesson 1 that will compute the cross-validated RMSLE score for a feature set. We've used XGBoost for our model, but you might want to experiment with other models.
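As a quick sanity check of the metric: the root mean squared error computed on log-transformed targets is exactly the RMSLE of the original targets, which is why the function above scores on log_y. A sketch with made-up sale prices:

```python
import numpy as np

# RMSE of log(y) == RMSLE of y (illustrative values only).
y_true = np.array([100_000.0, 200_000.0, 150_000.0])
y_pred = np.array([110_000.0, 190_000.0, 140_000.0])
rmsle = np.sqrt(np.mean((np.log(y_pred) - np.log(y_true)) ** 2))
```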
End of explanation
"""
X = df_train.copy()
y = X.pop("SalePrice")
baseline_score = score_dataset(X, y)
print(f"Baseline score: {baseline_score:.5f} RMSLE")
"""
Explanation: We can reuse this scoring function anytime we want to try out a new feature set. We'll run it now on the processed data with no additional features and get a baseline score:
End of explanation
"""
def make_mi_scores(X, y):
X = X.copy()
for colname in X.select_dtypes(["object", "category"]):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes
discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
def plot_mi_scores(scores):
scores = scores.sort_values(ascending=True)
width = np.arange(len(scores))
ticks = list(scores.index)
plt.barh(width, scores)
plt.yticks(width, ticks)
plt.title("Mutual Information Scores")
"""
Explanation: This baseline score helps us to know whether some set of features we've assembled has actually led to any improvement or not.
Step 2 - Feature Utility Scores
In Lesson 2 we saw how to use mutual information to compute a utility score for a feature, giving you an indication of how much potential the feature has. This hidden cell defines the two utility functions we used, make_mi_scores and plot_mi_scores:
End of explanation
"""
X = df_train.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
mi_scores
"""
Explanation: Let's look at our feature scores again:
End of explanation
"""
def drop_uninformative(df, mi_scores):
return df.loc[:, mi_scores > 0.0]
"""
Explanation: You can see that we have a number of features that are highly informative and also some that don't seem to be informative at all (at least by themselves). As we talked about in Tutorial 2, the top scoring features will usually pay off the most during feature development, so it could be a good idea to focus your efforts on those. On the other hand, training on uninformative features can lead to overfitting. So, the features with 0.0 scores we'll drop entirely:
End of explanation
"""
X = df_train.copy()
y = X.pop("SalePrice")
X = drop_uninformative(X, mi_scores)
score_dataset(X, y)
"""
Explanation: Removing them does lead to a modest performance gain:
End of explanation
"""
def label_encode(df):
X = df.copy()
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
return X
"""
Explanation: Later, we'll add the drop_uninformative function to our feature-creation pipeline.
Step 3 - Create Features
Now we'll start developing our feature set.
To make our feature engineering workflow more modular, we'll define a function that will take a prepared dataframe and pass it through a pipeline of transformations to get the final feature set. It will look something like this:
def create_features(df):
X = df.copy()
y = X.pop("SalePrice")
X = X.join(create_features_1(X))
X = X.join(create_features_2(X))
X = X.join(create_features_3(X))
# ...
return X
Let's go ahead and define one transformation now, a label encoding for the categorical features:
End of explanation
"""
def mathematical_transforms(df):
X = pd.DataFrame() # dataframe to hold new features
X["LivLotRatio"] = df.GrLivArea / df.LotArea
X["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
# This feature ended up not helping performance
# X["TotalOutsideSF"] = \
# df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
# df.Threeseasonporch + df.ScreenPorch
return X
def interactions(df):
X = pd.get_dummies(df.BldgType, prefix="Bldg")
X = X.mul(df.GrLivArea, axis=0)
return X
def counts(df):
X = pd.DataFrame()
X["PorchTypes"] = df[[
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].gt(0.0).sum(axis=1)
return X
def break_down(df):
X = pd.DataFrame()
X["MSClass"] = df.MSSubClass.str.split("_", n=1, expand=True)[0]
return X
def group_transforms(df):
X = pd.DataFrame()
X["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("median")
return X
"""
Explanation: A label encoding is okay for any kind of categorical feature when you're using a tree-ensemble like XGBoost, even for unordered categories. If you wanted to try a linear regression model (also popular in this competition), you would instead want to use a one-hot encoding, especially for the features with unordered categories.
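A toy illustration of the two encodings (pandas only; made-up categories):

```python
import pandas as pd

s = pd.Series(["1Fam", "Duplex", "1Fam"], dtype="category")
label_encoded = s.cat.codes    # small integers: fine for XGBoost / tree ensembles
one_hot = pd.get_dummies(s)    # one indicator column per category: better for linear models
```

Tree splits can carve up arbitrary integer codes, but a linear model would read the codes as an ordering, which is why one-hot is the safer choice there.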
Create Features with Pandas
This cell reproduces the work you did in Exercise 3, where you applied strategies for creating features in Pandas. Modify or add to these functions to try out other feature combinations.
End of explanation
"""
cluster_features = [
"LotArea",
"TotalBsmtSF",
"FirstFlrSF",
"SecondFlrSF",
"GrLivArea",
]
def cluster_labels(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
kmeans = KMeans(n_clusters=n_clusters, n_init=50, random_state=0)
X_new = pd.DataFrame()
X_new["Cluster"] = kmeans.fit_predict(X_scaled)
return X_new
def cluster_distance(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
    kmeans = KMeans(n_clusters=n_clusters, n_init=50, random_state=0)
X_cd = kmeans.fit_transform(X_scaled)
# Label features and join to dataset
X_cd = pd.DataFrame(
X_cd, columns=[f"Centroid_{i}" for i in range(X_cd.shape[1])]
)
return X_cd
"""
Explanation: Here are some ideas for other transforms you could explore:
- Interactions between the quality Qual and condition Cond features. OverallQual, for instance, was a high-scoring feature. You could try combining it with OverallCond by converting both to integer type and taking a product.
- Square roots of area features. This would convert units of square feet to just feet.
- Logarithms of numeric features. If a feature has a skewed distribution, applying a logarithm can help normalize it.
- Interactions between numeric and categorical features that describe the same thing. You could look at interactions between BsmtQual and TotalBsmtSF, for instance.
- Other group statistics in Neighborhood. We did the median of GrLivArea. Looking at mean, std, or count could be interesting. You could also try combining the group statistics with other features. Maybe the difference of GrLivArea and the median is important?
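Two of these ideas sketched on invented rows (the column names match the Ames data, but the values here are made up for illustration):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"OverallQual": [7, 5], "OverallCond": [5, 6],
                    "GrLivArea": [1710, 1262]})
qual_cond = toy.OverallQual * toy.OverallCond   # quality x condition interaction
log_area = np.log(toy.GrLivArea)                # logarithm of a skewed area feature
```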
k-Means Clustering
The first unsupervised algorithm we used to create features was k-means clustering. We saw that you could either use the cluster labels as a feature (a column with 0, 1, 2, ...) or you could use the distance of the observations to each cluster. We saw how these features can sometimes be effective at untangling complicated spatial relationships.
End of explanation
"""
def apply_pca(X, standardize=True):
# Standardize
if standardize:
X = (X - X.mean(axis=0)) / X.std(axis=0)
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
# Create loadings
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
return pca, X_pca, loadings
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
"""
Explanation: Principal Component Analysis
PCA was the second unsupervised model we used for feature creation. We saw how it could be used to decompose the variational structure in the data. The PCA algorithm gave us loadings which described each component of variation, and also the components which were the transformed datapoints. The loadings can suggest features to create and the components we can use as features directly.
Here are the utility functions from the PCA lesson:
End of explanation
"""
def pca_inspired(df):
X = pd.DataFrame()
X["Feature1"] = df.GrLivArea + df.TotalBsmtSF
X["Feature2"] = df.YearRemodAdd * df.TotalBsmtSF
return X
def pca_components(df, features):
X = df.loc[:, features]
_, X_pca, _ = apply_pca(X)
return X_pca
pca_features = [
"GarageArea",
"YearRemodAdd",
"TotalBsmtSF",
"GrLivArea",
]
"""
Explanation: And here are transforms that produce the features from the Exercise 5. You might want to change these if you came up with a different answer.
End of explanation
"""
def corrplot(df, method="pearson", annot=True, **kwargs):
sns.clustermap(
df.corr(method),
vmin=-1.0,
vmax=1.0,
cmap="icefire",
method="complete",
annot=annot,
**kwargs,
)
corrplot(df_train, annot=None)
"""
Explanation: These are only a couple ways you could use the principal components. You could also try clustering using one or more components. One thing to note is that PCA doesn't change the distance between points -- it's just like a rotation. So clustering with the full set of components is the same as clustering with the original features. Instead, pick some subset of components, maybe those with the most variance or the highest MI scores.
For further analysis, you might want to look at a correlation matrix for the dataset:
End of explanation
"""
def indicate_outliers(df):
X_new = pd.DataFrame()
X_new["Outlier"] = (df.Neighborhood == "Edwards") & (df.SaleCondition == "Partial")
return X_new
"""
Explanation: Groups of highly correlated features often yield interesting loadings.
PCA Application - Indicate Outliers
In Exercise 5, you applied PCA to determine houses that were outliers, that is, houses having values not well represented in the rest of the data. You saw that there was a group of houses in the Edwards neighborhood having a SaleCondition of Partial whose values were especially extreme.
Some models can benefit from having these outliers indicated, which is what this next transform will do.
End of explanation
"""
class CrossFoldEncoder:
def __init__(self, encoder, **kwargs):
self.encoder_ = encoder
self.kwargs_ = kwargs # keyword arguments for the encoder
self.cv_ = KFold(n_splits=5)
# Fit an encoder on one split and transform the feature on the
# other. Iterating over the splits in all folds gives a complete
# transformation. We also now have one trained encoder on each
# fold.
def fit_transform(self, X, y, cols):
self.fitted_encoders_ = []
self.cols_ = cols
X_encoded = []
for idx_encode, idx_train in self.cv_.split(X):
fitted_encoder = self.encoder_(cols=cols, **self.kwargs_)
fitted_encoder.fit(
X.iloc[idx_encode, :], y.iloc[idx_encode],
)
X_encoded.append(fitted_encoder.transform(X.iloc[idx_train, :])[cols])
self.fitted_encoders_.append(fitted_encoder)
X_encoded = pd.concat(X_encoded)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
# To transform the test data, average the encodings learned from
# each fold.
def transform(self, X):
from functools import reduce
X_encoded_list = []
for fitted_encoder in self.fitted_encoders_:
X_encoded = fitted_encoder.transform(X)
X_encoded_list.append(X_encoded[self.cols_])
X_encoded = reduce(
lambda x, y: x.add(y, fill_value=0), X_encoded_list
) / len(X_encoded_list)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
"""
Explanation: You could also consider applying some sort of robust scaler from scikit-learn's sklearn.preprocessing module to the outlying values, especially those in GrLivArea. Here is a tutorial illustrating some of them. Another option could be to create a feature of "outlier scores" using one of scikit-learn's outlier detectors.
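A hand-rolled median/IQR scaling in the spirit of a robust scaler (a sketch, not scikit-learn's RobustScaler, and with invented area values):

```python
import numpy as np

grliv = np.array([1380.0, 1500.0, 1720.0, 1650.0, 5642.0])   # last value: an outlier
iqr = np.percentile(grliv, 75) - np.percentile(grliv, 25)
scaled = (grliv - np.median(grliv)) / iqr
# The outlier still stands out, but on a scale set by the bulk of the data,
# since median and IQR are barely moved by the extreme value.
```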
Target Encoding
Needing a separate holdout set to create a target encoding is rather wasteful of data. In Tutorial 6 we used 25% of our dataset just to encode a single feature, Zipcode. The data from the other features in that 25% we didn't get to use at all.
There is, however, a way you can use target encoding without having to use held-out encoding data. It's basically the same trick used in cross-validation:
1. Split the data into folds, each fold having two splits of the dataset.
2. Train the encoder on one split but transform the values of the other.
3. Repeat for all the splits.
This way, training and transformation always take place on independent sets of data, just like when you use a holdout set but without any data going to waste.
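The m-estimate encoding itself, applied to one toy fold, looks roughly like this (a sketch of the idea, not category_encoders' exact implementation):

```python
import pandas as pd

# Smooth each category's mean target toward the global mean by m pseudo-counts.
y = pd.Series([100.0, 200.0, 300.0, 400.0])
cat = pd.Series(["a", "a", "b", "b"])
m = 1
global_mean = y.mean()   # 250.0
encoding = y.groupby(cat).apply(lambda g: (g.sum() + m * global_mean) / (len(g) + m))
```

Larger m pulls rare categories harder toward the global mean, which is the overfitting guard the cross-fold wrapper below relies on.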
In the next hidden cell is a wrapper you can use with any target encoder:
End of explanation
"""
def create_features(df, df_test=None):
X = df.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
# Combine splits if test data is given
#
# If we're creating features for test set predictions, we should
# use all the data we have available. After creating our features,
# we'll recreate the splits.
if df_test is not None:
X_test = df_test.copy()
X_test.pop("SalePrice")
X = pd.concat([X, X_test])
# Lesson 2 - Mutual Information
X = drop_uninformative(X, mi_scores)
# Lesson 3 - Transformations
X = X.join(mathematical_transforms(X))
X = X.join(interactions(X))
X = X.join(counts(X))
# X = X.join(break_down(X))
X = X.join(group_transforms(X))
# Lesson 4 - Clustering
# X = X.join(cluster_labels(X, cluster_features, n_clusters=20))
# X = X.join(cluster_distance(X, cluster_features, n_clusters=20))
# Lesson 5 - PCA
X = X.join(pca_inspired(X))
# X = X.join(pca_components(X, pca_features))
# X = X.join(indicate_outliers(X))
X = label_encode(X)
# Reform splits
if df_test is not None:
X_test = X.loc[df_test.index, :]
X.drop(df_test.index, inplace=True)
# Lesson 6 - Target Encoder
encoder = CrossFoldEncoder(MEstimateEncoder, m=1)
X = X.join(encoder.fit_transform(X, y, cols=["MSSubClass"]))
if df_test is not None:
X_test = X_test.join(encoder.transform(X_test))
if df_test is not None:
return X, X_test
else:
return X
df_train, df_test = load_data()
X_train = create_features(df_train)
y_train = df_train.loc[:, "SalePrice"]
score_dataset(X_train, y_train)
"""
Explanation: Use it like:
encoder = CrossFoldEncoder(MEstimateEncoder, m=1)
X_encoded = encoder.fit_transform(X, y, cols=["MSSubClass"]))
You can turn any of the encoders from the category_encoders library into a cross-fold encoder. The CatBoostEncoder would be worth trying. It's similar to MEstimateEncoder but uses some tricks to better prevent overfitting. Its smoothing parameter is called a instead of m.
Create Final Feature Set
Now let's combine everything together. Putting the transformations into separate functions makes it easier to experiment with various combinations. The ones I left uncommented I found gave the best results. You should experiment with your own ideas though! Modify any of these transformations or come up with some of your own to add to the pipeline.
End of explanation
"""
X_train = create_features(df_train)
y_train = df_train.loc[:, "SalePrice"]
xgb_params = dict(
max_depth=6, # maximum depth of each tree - try 2 to 10
learning_rate=0.01, # effect of each tree - try 0.0001 to 0.1
n_estimators=1000, # number of trees (that is, boosting rounds) - try 1000 to 8000
min_child_weight=1, # minimum number of houses in a leaf - try 1 to 10
colsample_bytree=0.7, # fraction of features (columns) per tree - try 0.2 to 1.0
subsample=0.7, # fraction of instances (rows) per tree - try 0.2 to 1.0
reg_alpha=0.5, # L1 regularization (like LASSO) - try 0.0 to 10.0
reg_lambda=1.0, # L2 regularization (like Ridge) - try 0.0 to 10.0
num_parallel_tree=1, # set > 1 for boosted random forests
)
xgb = XGBRegressor(**xgb_params)
score_dataset(X_train, y_train, xgb)
"""
Explanation: Step 4 - Hyperparameter Tuning
At this stage, you might like to do some hyperparameter tuning with XGBoost before creating your final submission.
End of explanation
"""
X_train, X_test = create_features(df_train, df_test)
y_train = df_train.loc[:, "SalePrice"]
xgb = XGBRegressor(**xgb_params)
# XGB minimizes MSE, but competition loss is RMSLE
# So, we need to log-transform y to train and exp-transform the predictions
xgb.fit(X_train, np.log(y_train))
predictions = np.exp(xgb.predict(X_test))
output = pd.DataFrame({'Id': X_test.index, 'SalePrice': predictions})
output.to_csv('my_submission.csv', index=False)
print("Your submission was successfully saved!")
"""
Explanation: Just tuning these by hand can give you great results. However, you might like to try using one of scikit-learn's automatic hyperparameter tuners. Or you could explore more advanced tuning libraries like Optuna or scikit-optimize.
Here is how you can use Optuna with XGBoost:
```
import optuna
def objective(trial):
xgb_params = dict(
max_depth=trial.suggest_int("max_depth", 2, 10),
learning_rate=trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True),
n_estimators=trial.suggest_int("n_estimators", 1000, 8000),
min_child_weight=trial.suggest_int("min_child_weight", 1, 10),
colsample_bytree=trial.suggest_float("colsample_bytree", 0.2, 1.0),
subsample=trial.suggest_float("subsample", 0.2, 1.0),
reg_alpha=trial.suggest_float("reg_alpha", 1e-4, 1e2, log=True),
reg_lambda=trial.suggest_float("reg_lambda", 1e-4, 1e2, log=True),
)
xgb = XGBRegressor(**xgb_params)
return score_dataset(X_train, y_train, xgb)
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
xgb_params = study.best_params
```
Copy this into a code cell if you'd like to use it, but be aware that it will take quite a while to run. After it's done, you might enjoy using some of Optuna's visualizations.
Step 5 - Train Model and Create Submissions
Once you're satisfied with everything, it's time to create your final predictions! This cell will:
- create your feature set from the original data
- train XGBoost on the training data
- use the trained model to make predictions from the test set
- save the predictions to a CSV file
End of explanation
"""
|
suriyan/ethnicolr | ethnicolr/examples/ethnicolr_app_contrib20xx.ipynb | mit | import pandas as pd
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', nrows=100)
df.columns
from ethnicolr import census_ln, pred_census_ln
"""
Explanation: Application: 2000/2010 Political Campaign Contributions by Race
Using ethnicolr, we look to answer three basic questions:
<ol>
<li>What proportion of contributions were made by blacks, whites, Hispanics, and Asians?
<li>What proportion of unique contributors were blacks, whites, Hispanics, and Asians?
<li>What proportion of total donations were given by blacks, whites, Hispanics, and Asians?
</ol>
End of explanation
"""
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2000.csv', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2000 = pred_census_ln(sdf, 'contributor_lname', 2000)
rdf2000['year'] = 2000
df = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip', usecols=['amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])
sdf = df[df.contributor_type=='I'].copy()
rdf2010 = pred_census_ln(sdf, 'contributor_lname', 2010)
rdf2010['year'] = 2010
rdf = pd.concat([rdf2000, rdf2010])
rdf.head(20)
"""
Explanation: Load and Subset on Individual Contributors
End of explanation
"""
adf = rdf.groupby(['year', 'race']).agg({'contributor_lname': 'count'})
adf.unstack().apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What proportion of contributions were by blacks, whites, Hispanics, and Asians?
End of explanation
"""
udf = rdf.drop_duplicates(subset=['contributor_name']).copy()
gdf = udf.groupby(['year', 'race']).agg({'contributor_name': 'count'})
gdf.unstack().apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What proportion of the donors were blacks, whites, Hispanics, and Asians?
End of explanation
"""
bdf = rdf.groupby(['year', 'race']).agg({'amount': 'sum'})
bdf.unstack().apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What proportion of the total donation was given by blacks, whites, Hispanics, and Asians?
End of explanation
"""
rdf['white_count'] = rdf.white
rdf['black_count'] = rdf.black
rdf['api_count'] = rdf.api
rdf['hispanic_count'] = rdf.hispanic
gdf = rdf.groupby(['year']).agg({'white_count': 'sum', 'black_count': 'sum', 'api_count': 'sum', 'hispanic_count': 'sum'})
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What if we estimated by using probabilities for race rather than labels?
What proportion of contributions were by blacks, whites, Hispanics, and Asians?
End of explanation
"""
udf['white_count'] = udf.white
udf['black_count'] = udf.black
udf['api_count'] = udf.api
udf['hispanic_count'] = udf.hispanic
gdf = udf.groupby(['year']).agg({'white_count': 'sum', 'black_count': 'sum', 'api_count': 'sum', 'hispanic_count': 'sum'})
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What proportion of the donors were blacks, whites, Hispanics, and Asians?
End of explanation
"""
rdf['white_amount'] = rdf.amount * rdf.white
rdf['black_amount'] = rdf.amount * rdf.black
rdf['api_amount'] = rdf.amount * rdf.api
rdf['hispanic_amount'] = rdf.amount * rdf.hispanic
gdf = rdf.groupby(['year']).agg({'white_amount': 'sum', 'black_amount': 'sum', 'api_amount': 'sum', 'hispanic_amount': 'sum'}) / 10e6
gdf.style.format("{:0.2f}")
gdf.apply(lambda r: r / r.sum(), axis=1).style.format("{:.2%}")
"""
Explanation: What proportion of the total donation was given by blacks, whites, Hispanics, and Asians?
End of explanation
"""
|
jason-neal/eniric | docs/Notebooks/Split_verse_Weighted_masking.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
from eniric.atmosphere import Atmosphere
from eniric.legacy import mask_clumping, RVprec_calc_masked
from scripts.phoenix_precision import convolve_and_resample
from eniric.snr_normalization import snr_constant_band
from eniric.precision import pixel_weights, rv_precision
from eniric.utilities import band_limits, load_aces_spectrum, wav_selector
wav_, flux_ = load_aces_spectrum([3900, 4.5, 0, 0])
# Small section in K bands to experiment with
wav, flux = wav_selector(wav_, flux_, 2.1025, 2.1046)
# Telluric mask
atm_ = Atmosphere.from_band("K", bary=True)
atm = atm_.at(wav)
mask = atm.mask
"""
Explanation: Test masking with clumping versus straight masking
With both the old and the new gradient.
In this notebook we assess the change in handling the weighted mask for Condition #2 of Figueira et al. 2016.
For that paper the spectrum is broken into several small spectra at the regions masked by telluric lines (>2%).
The updated version instead applies a boolean mask after the pixel weights are calculated.
End of explanation
"""
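To make the two strategies concrete, here is a small NumPy sketch on a toy flux and mask (not eniric code; the squared gradient is only a stand-in for the real pixel-weight formula): the old approach splits the spectrum into contiguous unmasked clumps, while the new approach computes weights everywhere and zeroes the masked pixels.

```python
import numpy as np

flux = np.array([1.0, 2.0, 4.0, 3.0, 1.0, 2.0])
mask = np.array([1, 1, 0, 0, 1, 1])  # 0 = telluric-contaminated pixel

# Strategy 1 (old): split the spectrum into contiguous unmasked clumps.
edges = np.flatnonzero(np.diff(np.concatenate(([0], mask, [0]))))
clumps = [flux[a:b] for a, b in zip(edges[::2], edges[1::2])]

# Strategy 2 (new): compute weights for every pixel, then zero the masked ones.
weights = np.gradient(flux) ** 2  # stand-in for the real pixel-weight formula
weights_masked = weights * mask

print([c.tolist() for c in clumps])
print(weights_masked)
```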
# Clumping method
wclump, fclump = mask_clumping(wav, flux, mask)
print("# Number of clumps = ", len(wclump))
# print(wclump, fclump)
print(len(wclump))
wis_0 = pixel_weights(wav, flux, grad=False)
wis_1 = pixel_weights(wav, flux, grad=True)
wis_0 *= mask[:-1]
wis_1 *= mask
wis_0[wis_0 == 0] = np.nan
wis_1[wis_1 == 0] = np.nan
plt_setting = {"figsize": (15, 6)}
plt.figure(**plt_setting)
plt.plot(wav, flux / np.max(flux), label="Star")
plt.plot(atm.wl, atm.transmission, label="Telluric")
plt.plot(atm.wl, atm.mask, "--", label="Mask")
plt.axhline(0.98)
plt.legend()
plt.figure(**plt_setting)
plt.plot(wav[:-1], wis_0, "bs-", label="Mask Grad False")
plt.plot(wav, wis_1, "ko--", label="Mask Grad true")
w, f = (wclump[0], fclump[0])
wis1 = pixel_weights(w, f, grad=True)
wis0 = pixel_weights(w, f, grad=False)
plt.plot(w[:-1], wis0 * 1.05, "g+:", label="Clump, Grad False")
plt.plot(w, wis1 * 1.05, "r.-.", label="Clump, Grad True")
plt.legend()
plt.xlim(wclump[0][0] * 0.99999, wclump[0][-1] * 1.00001)
plt.show()
plt.figure(**plt_setting)
plt.plot(wav[:-1], wis_0, "bs-", label="grad False")
plt.plot(wav, wis_1, "ko--", label="grad true")
w, f = (wclump[1], fclump[1])
wis1 = pixel_weights(w, f, grad=True)
wis0 = pixel_weights(w, f, grad=False)
plt.plot(w[:-1], wis0 * 1.05, "g+:", label="Clump grad False")
plt.plot(w, wis1 * 1.05, "r.-.", label="Clump grad True")
plt.legend()
plt.xlim(wclump[-1][0] * 0.999999, wclump[-1][-1] * 1.00001)
plt.show()
"""
Explanation: Visualize the Pixel Weights:
End of explanation
"""
# Old and new indicate the split method.
print("Old with gradient {:0.06f}".format(RVprec_calc_masked(wav, flux, atm.mask, grad=True)))
print("New with gradient {:0.06f}".format(rv_precision(wav, flux, atm.mask, grad=True)))
print("Old with finite diff {:0.06f}".format(RVprec_calc_masked(wav, flux, atm.mask, grad=False)))
print("New with finite diff {:0.06f}".format(rv_precision(wav, flux, atm.mask, grad=False)))
"""
Explanation: From these two examples, calculations using the same gradient method produce the same pixel weights, although the clumped version carries less total weight.
The masked version produces a slightly different value for the last pixel of each clump because of how it is calculated.
With the new gradient all pixels are kept, but in the clumped version the last pixel of a clump sits at an edge rather than in the middle, so it is computed with a backward finite difference instead of a central difference.
Calculations of RV
End of explanation
"""
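The endpoint behaviour discussed above can be seen directly with NumPy: `np.gradient` uses central differences in the interior and one-sided differences at the two ends, while `np.diff` returns plain forward differences and one fewer element.

```python
import numpy as np

f = np.array([1.0, 4.0, 9.0, 16.0])  # f(x) = x**2 sampled at x = 1..4

central = np.gradient(f)  # one-sided at the two ends, central in between
forward = np.diff(f)      # plain forward differences, one element shorter

print(central)  # [3. 4. 6. 7.]
print(forward)  # [3. 5. 7.]
```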
# Explore relative difference of different bands
wav_, flux_ = load_aces_spectrum([3900, 4.5, 0, 0])
wav, flux = wav_selector(wav_, flux_, 0.7, 2.5)
table = []
table.append("Band, Cond#1, Split, Masked, ratio, Cond#1, Split, Masked, ratio")
table.append("Grad, False , True ")
# Get J band SNR normalization value
wav_j, flux_j = convolve_and_resample(
wav, flux, vsini=1, R=100000, band="J", sampling=3
)
snr_norm = snr_constant_band(wav_j, flux_j, snr=100, band="J")
for band in ["Z", "Y", "J", "H", "K"]:
atm = Atmosphere.from_band(band, bary=True)
w, f = convolve_and_resample(wav, flux, vsini=1, R=100000, band=band, sampling=3)
f /= snr_norm
atm = atm.at(w)
a = RVprec_calc_masked(w, f, atm.mask, grad=True)
b = RVprec_calc_masked(w, f, atm.mask, grad=False)
c = rv_precision(w, f, atm.mask, grad=True)
d = rv_precision(w, f, atm.mask, grad=False)
    e = rv_precision(w, f, grad=True)
    prec_f = rv_precision(w, f, grad=False)  # renamed so the flux array `f` is not overwritten
    false_ratio = (d - b) / b
    true_ratio = (c - a) / a
    table.append(
        "{0:5}, {1:4.02f}, {2:6.02f}, {3:6.02f}, {4:5.04f}, {5:6.02f}, {6:6.02f}, {7:6.02f}, {8:5.04f}".format(
            band,
            prec_f.value,
b.value,
d.value,
false_ratio,
e.value,
a.value,
c.value,
true_ratio,
)
)
for line in table:
print(line)
"""
Explanation: Differences between versions with the same gradient are at the 4th significant figure. These are not at the correct scale; this will be addressed in the next section.
Calculating difference ratios.
Assessing the changes to the actual Figueira et al. values between splitting on telluric lines and then calculating the weights, versus not splitting but applying the mask to the weights afterwards.
Convolving the spectra to vsini = 1 and R = 100k for an M0 spectrum in the Z, Y, J, H and K bands, to be consistent with the paper.
Using the old and new gradient methods to assess the difference.
The old gradient drops the last pixel, which discards many pixels when the spectrum is split between telluric lines.
End of explanation
"""
|
miaecle/deepchem | examples/tutorials/07_Uncertainty_In_Deep_Learning.ipynb | mit | %tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
"""
Explanation: Tutorial Part 7: Uncertainty in Deep Learning
A common criticism of deep learning models is that they tend to act as black boxes. A model produces outputs, but doesn't give enough context to interpret them properly. How reliable are the model's predictions? Are some predictions more reliable than others? If a model predicts a value of 5.372 for some quantity, should you assume the true value is between 5.371 and 5.373? Or that it's between 2 and 8? In some fields this situation might be good enough, but not in science. For every value predicted by a model, we also want an estimate of the uncertainty in that value so we can know what conclusions to draw based on it.
DeepChem makes it very easy to estimate the uncertainty of predicted outputs (at least for the models that support it—not all of them do). Let's start by seeing an example of how to generate uncertainty estimates. We load a dataset, create a model, train it on the training set, predict the output on the test set, and then derive some uncertainty estimates.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
"""
import deepchem as dc
import numpy as np
import matplotlib.pyplot as plot
tasks, datasets, transformers = dc.molnet.load_sampl(reload=False)
train_dataset, valid_dataset, test_dataset = datasets
model = dc.models.MultitaskRegressor(len(tasks), 1024, uncertainty=True)
model.fit(train_dataset, nb_epoch=200)
y_pred, y_std = model.predict_uncertainty(test_dataset)
"""
Explanation: We'll use the SAMPL dataset from the MoleculeNet suite to run our experiments in this tutorial. Let's load up our dataset for our experiments, and then make some uncertainty predictions.
End of explanation
"""
# Generate some fake data and plot a regression line.
x = np.linspace(0, 5, 10)
y = 0.15*x + np.random.random(10)
plot.scatter(x, y)
fit = np.polyfit(x, y, 1)
line_x = np.linspace(-1, 6, 2)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: All of this looks exactly like any other example, with just two differences. First, we add the option uncertainty=True when creating the model. This instructs it to add features to the model that are needed for estimating uncertainty. Second, we call predict_uncertainty() instead of predict() to produce the output. y_pred is the predicted outputs. y_std is another array of the same shape, where each element is an estimate of the uncertainty (standard deviation) of the corresponding element in y_pred. And that's all there is to it! Simple, right?
Of course, it isn't really that simple at all. DeepChem is doing a lot of work to come up with those uncertainties. So now let's pull back the curtain and see what is really happening. (For the full mathematical details of calculating uncertainty, see https://arxiv.org/abs/1703.04977)
To begin with, what does "uncertainty" mean? Intuitively, it is a measure of how much we can trust the predictions. More formally, we expect that the true value of whatever we are trying to predict should usually be within a few standard deviations of the predicted value. But uncertainty comes from many sources, ranging from noisy training data to bad modelling choices, and different sources behave in different ways. It turns out there are two fundamental types of uncertainty we need to take into account.
Aleatoric Uncertainty
Consider the following graph. It shows the best fit linear regression to a set of ten data points.
End of explanation
"""
plot.figure(figsize=(12, 3))
line_x = np.linspace(0, 5, 50)
for i in range(3):
plot.subplot(1, 3, i+1)
plot.scatter(x, y)
fit = np.polyfit(np.concatenate([x, [3]]), np.concatenate([y, [i]]), 10)
plot.plot(line_x, np.poly1d(fit)(line_x))
plot.show()
"""
Explanation: The line clearly does not do a great job of fitting the data. There are many possible reasons for this. Perhaps the measuring device used to capture the data was not very accurate. Perhaps y depends on some other factor in addition to x, and if we knew the value of that factor for each data point we could predict y more accurately. Maybe the relationship between x and y simply isn't linear, and we need a more complicated model to capture it. Regardless of the cause, the model clearly does a poor job of predicting the training data, and we need to keep that in mind. We cannot expect it to be any more accurate on test data than on training data. This is known as aleatoric uncertainty.
How can we estimate the size of this uncertainty? By training a model to do it, of course! At the same time it is learning to predict the outputs, it is also learning to predict how accurately each output matches the training data. For every output of the model, we add a second output that produces the corresponding uncertainty. Then we modify the loss function to make it learn both outputs at the same time.
Epistemic Uncertainty
Now consider these three curves. They are fit to the same data points as before, but this time we are using 10th degree polynomials.
End of explanation
"""
abs_error = np.abs(y_pred.flatten()-test_dataset.y.flatten())
plot.scatter(y_std.flatten(), abs_error)
plot.xlabel('Standard Deviation')
plot.ylabel('Absolute Error')
plot.show()
"""
Explanation: Each of them perfectly interpolates the data points, yet they clearly are different models. (In fact, there are infinitely many 10th degree polynomials that exactly interpolate any ten data points.) They make identical predictions for the data we fit them to, but for any other value of x they produce different predictions. This is called epistemic uncertainty. It means the data does not fully constrain the model. Given the training data, there are many different models we could have found, and those models make different predictions.
The ideal way to measure epistemic uncertainty is to train many different models, each time using a different random seed and possibly varying hyperparameters. Then use all of them for each input and see how much the predictions vary. This is very expensive to do, since it involves repeating the whole training process many times. Fortunately, we can approximate the same effect in a less expensive way: by using dropout.
Recall that when you train a model with dropout, you are effectively training a huge ensemble of different models all at once. Each training sample is evaluated with a different dropout mask, corresponding to a different random subset of the connections in the full model. Usually we only perform dropout during training and use a single averaged mask for prediction. But instead, let's use dropout for prediction too. We can compute the output for lots of different dropout masks, then see how much the predictions vary. This turns out to give a reasonable estimate of the epistemic uncertainty in the outputs.
Uncertain Uncertainty?
Now we can combine the two types of uncertainty to compute an overall estimate of the error in each output:
$$\sigma_\text{total} = \sqrt{\sigma_\text{aleatoric}^2 + \sigma_\text{epistemic}^2}$$
This is the value DeepChem reports. But how much can you trust it? Remember how I started this tutorial: deep learning models should not be used as black boxes. We want to know how reliable the outputs are. Adding uncertainty estimates does not completely eliminate the problem; it just adds a layer of indirection. Now we have estimates of how reliable the outputs are, but no guarantees that those estimates are themselves reliable.
Let's go back to the example we started with. We trained a model on the SAMPL training set, then generated predictions and uncertainties for the test set. Since we know the correct outputs for all the test samples, we can evaluate how well we did. Here is a plot of the absolute error in the predicted output versus the predicted uncertainty.
End of explanation
"""
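A minimal NumPy simulation of the procedure above (a hypothetical toy "network", not DeepChem code): sample many dropout masks, measure the spread of the predictions, and combine it with an assumed aleatoric term using the total-uncertainty formula.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.linspace(0.1, 1.0, 10)  # a toy one-layer "network"

def predict_with_dropout(x, rate=0.5):
    # One stochastic forward pass: drop units at random, rescale the survivors.
    keep = rng.random(weights.shape) >= rate
    return float(np.sum(weights * keep * x) / (1.0 - rate))

samples = np.array([predict_with_dropout(1.0) for _ in range(2000)])

epistemic = samples.std()    # spread across dropout masks
aleatoric = 0.3              # would come from the learned variance output
total = np.sqrt(aleatoric ** 2 + epistemic ** 2)
print(round(samples.mean(), 2), round(total, 2))
```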
plot.hist(abs_error/y_std.flatten(), 20)
plot.show()
"""
Explanation: The first thing we notice is that the axes have similar ranges. The model clearly has learned the overall magnitude of errors in the predictions. There also is clearly a correlation between the axes. Values with larger uncertainties tend on average to have larger errors.
Now let's see how well the values satisfy the expected distribution. If the standard deviations are correct, and if the errors are normally distributed (which is certainly not guaranteed to be true!), we expect 95% of the values to be within two standard deviations, and 99% to be within three standard deviations. Here is a histogram of errors as measured in standard deviations.
End of explanation
"""
|
eshlykov/mipt-day-after-day | labs/term-4/lab-1-1.ipynb | unlicense | import numpy as np
import scipy as ps
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Lab 1.1. Determining the speed of a bullet using a ballistic pendulum
Objective: determine the speed of a bullet by applying conservation laws and using a ballistic pendulum; become familiar with the basic principles of processing experimental data.
Equipment: an air rifle on a mount, a light source, an optical system for measuring the pendulum's deflection, a measuring ruler, bullets and a balance for weighing them, and a ballistic pendulum.
End of explanation
"""
data = pd.read_excel('lab-1-1.xlsx', 'table-1')
data.head(len(data))
"""
Explanation: Положим $L=3.2$ м — длина нити, $g=9.8$ м/c — величина ускорения свободного падения, $M=3$ кг — масса балистического маятника.
Через $\Delta x$ обозначим отклонение маятника, через $m$ — массу пули, через $u$ — скорость пули при вылете, через $\sigma_t$ — абсолютную ошибку величины $t$, а через $\varepsilon_t$ — относительную.
По формуле скорость пули есть $u = \frac{M}{m} \sqrt{\frac{g}{L}} \Delta x$, откуда относительная ошибка скорости пули составляет $\varepsilon_u=\sqrt{\left(\frac{\sigma_M}{M}\right)^2+\left(\frac{\sigma_m}{m}\right)^2+\left(\frac{\sigma_L}{2L}\right)^2+\left(\frac{\sigma_{\Delta x}}{\Delta x}\right)^2}$.
Ясно, что слагаемое $\left(\frac{\sigma_{\Delta x}}{\Delta x}\right)^2$ существенно превышает все остальные, так что можно считать $\sigma_u \approx \frac{\sigma_{\Delta x}}{\Delta x}$.
Погрешность измерения $x$ есть половина цены деления, то есть $0.000125$ м. Таким образом, $\sigma_x \approx 0.000176$ м.
End of explanation
"""
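A quick numeric check of the formulas above, with hypothetical values for the bullet mass and deflection (the real numbers come from the spreadsheet):

```python
import math

M, m = 3.0, 0.5e-3      # pendulum and bullet mass, kg (bullet mass is hypothetical)
L, g = 3.2, 9.8         # thread length, m; gravitational acceleration, m/s^2
dx = 0.011              # pendulum deflection, m (hypothetical)
sigma_dx = 0.000176     # absolute error of the deflection, m

u = (M / m) * math.sqrt(g / L) * dx  # bullet speed from the formula above
eps_u = sigma_dx / dx                # dominant relative error
print(round(u, 1), "m/s, +/-", round(100 * eps_u, 1), "%")
```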
u = data.values[:, 2]
print(u.mean())
print(u.std())
"""
Explanation: Let us compute the mean speed and its spread.
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/1_machine_learning_foundations/assignment/week6/Deep Features for Image Classification.ipynb | mit | import graphlab
"""
Explanation: Using deep features to build an image classifier
Fire up GraphLab Create
End of explanation
"""
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
"""
Explanation: Load a common image analysis dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
image_train['image'].show()
"""
Explanation: Exploring the image data
End of explanation
"""
raw_pixel_model = graphlab.logistic_classifier.create(image_train,target='label',
features=['image_array'])
"""
Explanation: Train a classifier on the raw image pixels
We first start by training a classifier on just the raw pixels of the image.
End of explanation
"""
image_test[0:3]['image'].show()
image_test[0:3]['label']
raw_pixel_model.predict(image_test[0:3])
"""
Explanation: Make a prediction with the simple model based on raw pixels
End of explanation
"""
raw_pixel_model.evaluate(image_test)
"""
Explanation: The model makes wrong predictions for all three images.
Evaluating raw pixel model on test data
End of explanation
"""
len(image_train)
"""
Explanation: This model performs poorly, achieving only about 46% accuracy.
Can we improve the model using deep features?
We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning: using deep features trained on the full ImageNet dataset, we will train a simple model on this small dataset.
End of explanation
"""
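The transfer-learning idea can be sketched without GraphLab: treat some fixed feature vectors as "deep features" and fit a very simple model on top of them. Everything below (features, classes) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "deep features": each class forms a cluster in feature space.
centers = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
train = [(label, c + 0.1 * rng.standard_normal(2))
         for label, c in centers.items() for _ in range(20)]

# A simple model on top of fixed features: nearest class centroid.
centroids = {lab: np.mean([f for l, f in train if l == lab], axis=0)
             for lab in centers}

def predict(features):
    return min(centroids, key=lambda lab: np.linalg.norm(features - centroids[lab]))

print(predict(np.array([0.9, 0.1])))
```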
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
"""
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
"""
image_train.head()
"""
Explanation: As we can see, the column deep_features already contains the pre-computed deep features for this data.
End of explanation
"""
deep_features_model = graphlab.logistic_classifier.create(image_train,
features=['deep_features'],
target='label')
"""
Explanation: Given the deep features, let's train a classifier
End of explanation
"""
image_test[0:3]['image'].show()
deep_features_model.predict(image_test[0:3])
"""
Explanation: Apply the deep features model to first few images of test set
End of explanation
"""
deep_features_model.evaluate(image_test)
"""
Explanation: The classifier with deep features gets all of these images right!
Compute test_data accuracy of deep_features_model
As we can see, deep features provide us with significantly better accuracy (about 78%)
End of explanation
"""
|
JuBra/cobrapy | documentation_builder/qp.ipynb | lgpl-2.1 | %matplotlib inline
import plot_helper
plot_helper.plot_qp1()
"""
Explanation: Quadratic Programming
Suppose we want to minimize the Euclidean distance of the solution to the origin while subject to linear constraints. This will require a quadratic objective function. Consider this example problem:
min $\frac{1}{2}\left(x^2 + y^2 \right)$
subject to
$x + y = 2$
$x \ge 0$
$y \ge 0$
This problem can be visualized graphically:
End of explanation
"""
import scipy
from cobra import Reaction, Metabolite, Model, solvers
"""
Explanation: The objective can be rewritten as $\frac{1}{2} v^T \cdot \mathbf Q \cdot v$, where
$v = \left(\begin{matrix} x \\ y\end{matrix} \right)$ and
$\mathbf Q = \left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)$
The matrix $\mathbf Q$ can be passed into a cobra model as the quadratic objective.
End of explanation
"""
Q = scipy.sparse.eye(2).todok()
Q
"""
Explanation: The quadratic objective $\mathbf Q$ should be formatted as a scipy sparse matrix.
End of explanation
"""
Q.todense()
"""
Explanation: In this case, the quadratic objective is simply the identity matrix
End of explanation
"""
print(solvers.get_solver_name(qp=True))
c = Metabolite("c")
c._bound = 2
x = Reaction("x")
y = Reaction("y")
x.add_metabolites({c: 1})
y.add_metabolites({c: 1})
m = Model()
m.add_reactions([x, y])
sol = m.optimize(quadratic_component=Q, objective_sense="minimize")
sol.x_dict
"""
Explanation: We need to use a solver that supports quadratic programming, such as gurobi or cplex. If a solver which supports quadratic programming is installed, this function will return its name.
End of explanation
"""
plot_helper.plot_qp2()
"""
Explanation: Suppose we change the problem to have a mixed linear and quadratic objective.
min $\frac{1}{2}\left(x^2 + y^2 \right) - y$
subject to
$x + y = 2$
$x \ge 0$
$y \ge 0$
Graphically, this would be
End of explanation
"""
y.objective_coefficient = -1
sol = m.optimize(quadratic_component=Q, objective_sense="minimize")
sol.x_dict
"""
Explanation: QP solvers in cobrapy will combine linear and quadratic coefficients. The linear portion will be obtained from the same objective_coefficient attribute used with LP's.
End of explanation
"""
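Again the expected optimum can be verified with SciPy (a sketch): the Lagrange conditions for the mixed objective give x = 0.5, y = 1.5.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize 0.5*(x^2 + y^2) - y subject to x + y = 2, x >= 0, y >= 0.
res = minimize(
    lambda v: 0.5 * np.dot(v, v) - v[1],
    x0=[1.0, 1.0],
    constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 2.0}],
    bounds=[(0, None), (0, None)],
)
print(res.x)  # ~ [0.5 1.5]
```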
|
tensorflow/docs-l10n | site/ja/agents/tutorials/2_environments_tutorial.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
"""
!pip install "gym>=0.21.0"
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import tensorflow as tf
import numpy as np
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.environments import utils
from tf_agents.specs import array_spec
from tf_agents.environments import wrappers
from tf_agents.environments import suite_gym
from tf_agents.trajectories import time_step as ts
"""
Explanation: Environments
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/2_environments_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/2_environments_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/2_environments_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/2_environments_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment, and the environment returns a reward and a new observation. The agent trains a policy to choose actions that maximize the sum of rewards, also known as the return.
In TF-Agents, environments can be implemented either in Python or in TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use a wrapper to automatically convert it into TensorFlow.
Let's look at Python environments first. TensorFlow environments follow a very similar API.
Setup
If you haven't installed TF-Agents or Gym yet, run:
End of explanation
"""
class PyEnvironment(object):
def reset(self):
"""Return initial_time_step."""
self._current_time_step = self._reset()
return self._current_time_step
def step(self, action):
"""Apply action and return new time_step."""
if self._current_time_step is None:
return self.reset()
self._current_time_step = self._step(action)
return self._current_time_step
def current_time_step(self):
return self._current_time_step
def time_step_spec(self):
"""Return time_step_spec."""
@abc.abstractmethod
def observation_spec(self):
"""Return observation_spec."""
@abc.abstractmethod
def action_spec(self):
"""Return action_spec."""
@abc.abstractmethod
def _reset(self):
"""Return initial_time_step."""
@abc.abstractmethod
def _step(self, action):
"""Apply action and return new time_step."""
"""
Explanation: Python Environments
Python environments have a step(action) -> next_time_step method that applies an action to the environment and returns the following information about the next step:
observation: This is the part of the environment state that the agent can observe to choose its action at the next step.
reward: The agent learns to maximize the sum of these rewards across multiple steps.
step_type: Interactions with the environment are usually part of a sequence/episode (e.g. multiple moves in a game of chess). step_type can be FIRST, MID or LAST to indicate whether this time step is the first, an intermediate, or the last step in a sequence.
discount: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.
These are grouped into a named tuple TimeStep(step_type, reward, discount, observation).
The interface that all Python environments must implement is environments/py_environment.PyEnvironment. The main methods are:
End of explanation
"""
environment = suite_gym.load('CartPole-v0')
print('action_spec:', environment.action_spec())
print('time_step_spec.observation:', environment.time_step_spec().observation)
print('time_step_spec.step_type:', environment.time_step_spec().step_type)
print('time_step_spec.discount:', environment.time_step_spec().discount)
print('time_step_spec.reward:', environment.time_step_spec().reward)
"""
Explanation: In addition to the step() method, environments also provide a reset() method that starts a new sequence and returns an initial TimeStep. It is not necessary to call the reset method explicitly; environments are assumed to reset automatically, either at the end of an episode or when step() is called for the first time.
Note that subclasses do not implement step() or reset() directly. Instead, they override the _step() and _reset() methods. The time steps returned from these methods are cached and exposed through current_time_step().
The observation_spec and action_spec methods return a nest of (Bounded)ArraySpecs that describe the name, shape, datatype and range of the observations and actions, respectively.
In TF-Agents we repeatedly refer to nests, which are defined as any tree-like structure composed of lists, tuples, named tuples, or dictionaries. These can be arbitrarily composed to maintain the structure of observations and actions, which is very useful in more complex environments with many observations and actions.
Using Standard Environments
TF-Agents has built-in wrappers for many standard environments such as OpenAI Gym, DeepMind-control and Atari, so that they follow the py_environment.PyEnvironment interface. These wrapped environments can be easily loaded using the environment suites. Let's load the CartPole environment from OpenAI Gym and look at its action spec and time_step_spec.
End of explanation
"""
action = np.array(1, dtype=np.int32)
time_step = environment.reset()
print(time_step)
while not time_step.is_last():
time_step = environment.step(action)
print(time_step)
"""
Explanation: We can see that the environment expects actions of type int64 in [0, 1] and returns TimeSteps where the observations are a float32 vector of length 4 and the discount factor is a float32 in [0.0, 1.0]. Now let's take the fixed action (1,) for a whole episode.
End of explanation
"""
class CardGameEnv(py_environment.PyEnvironment):
def __init__(self):
self._action_spec = array_spec.BoundedArraySpec(
shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
self._observation_spec = array_spec.BoundedArraySpec(
shape=(1,), dtype=np.int32, minimum=0, name='observation')
self._state = 0
self._episode_ended = False
def action_spec(self):
return self._action_spec
def observation_spec(self):
return self._observation_spec
def _reset(self):
self._state = 0
self._episode_ended = False
return ts.restart(np.array([self._state], dtype=np.int32))
def _step(self, action):
if self._episode_ended:
# The last action ended the episode. Ignore the current action and start
# a new episode.
return self.reset()
# Make sure episodes don't go on forever.
if action == 1:
self._episode_ended = True
elif action == 0:
new_card = np.random.randint(1, 11)
self._state += new_card
else:
raise ValueError('`action` should be 0 or 1.')
if self._episode_ended or self._state >= 21:
reward = self._state - 21 if self._state <= 21 else -21
return ts.termination(np.array([self._state], dtype=np.int32), reward)
else:
return ts.transition(
np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
"""
Explanation: Creating Your Own Python Environment
A common use case is to apply one of the standard agents in TF-Agents (see agents/) to your problem. To do this, you have to frame your problem as an environment. So let's look at how to implement an environment in Python.
Say we want to train an agent to play the following (Blackjack-inspired) card game:
The game is played using an infinite deck of cards numbered 1 to 10.
At every turn the agent can do two things: draw a new random card, or stop the current round.
The goal is to get the sum of your cards as close to 21 as possible at the end of the round.
An environment that represents this game could look like this:
Actions: We have two actions (Action 0: draw a new card; Action 1: terminate the current round).
Observations: The sum of the cards in the current round.
Reward: Since the objective is to get as close to 21 as possible, we use the following reward at the end of the round: sum_of_cards - 21 if sum_of_cards <= 21, else -21
End of explanation
"""
environment = CardGameEnv()
utils.validate_py_environment(environment, episodes=5)
"""
Explanation: Let's make sure we defined the environment above correctly. When creating your own environment, you must make sure the generated observations and time_steps follow the shapes and types defined in your specs. These are used to generate the TensorFlow graph, so getting them wrong can create hard-to-debug problems.
To validate our environment, we use a random policy to generate actions and iterate over 5 episodes to make sure things work as intended. An error is raised if we receive a time_step that does not follow the environment specs.
End of explanation
"""
get_new_card_action = np.array(0, dtype=np.int32)
end_round_action = np.array(1, dtype=np.int32)
environment = CardGameEnv()
time_step = environment.reset()
print(time_step)
cumulative_reward = time_step.reward
for _ in range(3):
time_step = environment.step(get_new_card_action)
print(time_step)
cumulative_reward += time_step.reward
time_step = environment.step(end_round_action)
print(time_step)
cumulative_reward += time_step.reward
print('Final Reward = ', cumulative_reward)
"""
Explanation: Now that we know the environment works as intended, let's run it with a fixed policy: ask for 3 cards and then end the round.
End of explanation
"""
env = suite_gym.load('Pendulum-v1')
print('Action Spec:', env.action_spec())
discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)
print('Discretized Action Spec:', discrete_action_env.action_spec())
"""
Explanation: Environment Wrappers
An environment wrapper takes a Python environment and returns a modified version of it. Both the original and the modified environment are instances of py_environment.PyEnvironment, and multiple wrappers can be chained together.
Some common wrappers can be found in environments/wrappers.py. For example:
ActionDiscretizeWrapper: Converts a continuous action space to a discrete one.
RunStats: Captures run statistics of the environment, such as the number of steps taken and the number of episodes completed.
TimeLimit: Terminates the episode after a fixed number of steps.
Example 1: Action Discretize Wrapper
InvertedPendulum is a PyBullet environment that accepts continuous actions in the range [-2, 2]. If we want to train a discrete-action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what ActionDiscretizeWrapper does. Compare the action_spec before and after wrapping:
End of explanation
"""
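To show the wrapper pattern itself without any TF-Agents dependency, here is a plain-Python sketch of a TimeLimit-style wrapper around a toy environment (both classes are hypothetical, kept to the bare minimum):

```python
class CountingEnv:
    """A toy environment whose observation is just the step count."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 0.0, False  # obs, reward, done

class TimeLimit:
    """Ends the episode after max_steps, like wrappers.TimeLimit does."""
    def __init__(self, env, max_steps):
        self.env, self.max_steps = env, max_steps
    def reset(self):
        self.steps = 0
        return self.env.reset()
    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.steps += 1
        return obs, reward, done or self.steps >= self.max_steps

env = TimeLimit(CountingEnv(), max_steps=3)
obs = env.reset()
done = False
while not done:
    obs, reward, done = env.step(0)
print(obs)  # 3
```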
class TFEnvironment(object):
def time_step_spec(self):
"""Describes the `TimeStep` tensors returned by `step()`."""
def observation_spec(self):
"""Defines the `TensorSpec` of observations provided by the environment."""
def action_spec(self):
"""Describes the TensorSpecs of the action expected by `step(action)`."""
def reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
return self._reset()
def current_time_step(self):
"""Returns the current `TimeStep`."""
return self._current_time_step()
def step(self, action):
"""Applies the action and returns the new `TimeStep`."""
return self._step(action)
@abc.abstractmethod
def _reset(self):
"""Returns the current `TimeStep` after resetting the Environment."""
@abc.abstractmethod
def _current_time_step(self):
"""Returns the current `TimeStep`."""
@abc.abstractmethod
def _step(self, action):
"""Applies the action and returns the new `TimeStep`."""
"""
Explanation: The wrapped discrete_action_env is an instance of py_environment.PyEnvironment and can be treated like a regular Python environment.
TensorFlow Environments
The interface for TF environments is defined in environments/tf_environment.TFEnvironment and looks very similar to the Python environments. TF environments differ from Python environments in a couple of ways:
They generate tensor objects instead of arrays
TF environments add a batch dimension to the generated tensors when compared to the specs
Converting Python environments into TF environments allows TensorFlow to parallelize operations. For example, one could define a collect_experience_op that collects data from the environment and adds it to a replay_buffer, and a train_op that reads from the replay_buffer and trains the agent, and run them naturally in parallel in TensorFlow.
End of explanation
"""
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
print(isinstance(tf_env, tf_environment.TFEnvironment))
print("TimeStep Specs:", tf_env.time_step_spec())
print("Action Specs:", tf_env.action_spec())
"""
Explanation: The current_time_step() method returns the current time_step and initializes the environment if needed.
The reset() method forces a reset of the environment and returns the current_step.
If the action does not depend on the previous time_step, a tf.control_dependency is needed in Graph mode.
Now, let us look at how TFEnvironments are created.
Creating Your Own TensorFlow Environment
This is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available here. The more common use case is to implement your environment in Python and wrap it in TensorFlow using the TFPyEnvironment wrapper (see below).
Wrapping a Python Environment in TensorFlow
We can easily wrap any Python environment into a TensorFlow environment using the TFPyEnvironment wrapper.
End of explanation
"""
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
# reset() creates the initial time_step after resetting the environment.
time_step = tf_env.reset()
num_steps = 3
transitions = []
reward = 0
for i in range(num_steps):
action = tf.constant([i % 2])
# applies the action and returns the new TimeStep.
next_time_step = tf_env.step(action)
transitions.append([time_step, action, next_time_step])
reward += next_time_step.reward
time_step = next_time_step
np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)
print('\n'.join(map(str, np_transitions)))
print('Total reward:', reward.numpy())
"""
Explanation: Note how the specs are now of type (Bounded)TensorSpec.
Usage Examples
Simple Example
End of explanation
"""
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
time_step = tf_env.reset()
rewards = []
steps = []
num_episodes = 5
for _ in range(num_episodes):
episode_reward = 0
episode_steps = 0
while not time_step.is_last():
action = tf.random.uniform([1], 0, 2, dtype=tf.int32)
time_step = tf_env.step(action)
episode_steps += 1
episode_reward += time_step.reward.numpy()
rewards.append(episode_reward)
steps.append(episode_steps)
time_step = tf_env.reset()
num_steps = np.sum(steps)
avg_length = np.mean(steps)
avg_reward = np.mean(rewards)
print('num_episodes:', num_episodes, 'num_steps:', num_steps)
print('avg_length', avg_length, 'avg_reward:', avg_reward)
"""
Explanation: Whole Episodes
End of explanation
"""
|
Soil-Carbon-Coalition/atlasdata | Combining rows w groupby, transform, or multiIndex.ipynb | mit | %matplotlib inline
import sys
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
from io import StringIO
print(sys.version)
print("Pandas:", pd.__version__)
df = pd.read_csv('C:/Users/Peter/Documents/atlas/atlasdata/obs_types/transect.csv', parse_dates=['date'])
df = df.astype(dtype='str')# we don't need numbers in this dataset.
df=df.replace('nan','')
#this turns dates into strings with the proper format for JSON:
#df['date'] = df['date'].dt.strftime('%Y-%m-%d')
df.type = df.type.str.replace('\*remonitoring notes','transect')
df.type = df.type.str.replace('\*plot summary','transect')
"""
Explanation: Transect
Groupby and transform allow me to combine rows into a single 'transect' row.
Or, use a multiIndex, a hierarchical index, so I can target specific cells using id and type. The index item for a multiIndex is a TUPLE.
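As a quick sketch of the MultiIndex idea on toy data (the values below are illustrative, not from the atlas dataset): after set_index(['id','type']), a tuple addresses one (id, type) row, and .loc can both read and write that cell.

```python
import pandas as pd

toy = pd.DataFrame({
    'id':   ['a', 'a', 'b', 'b'],
    'type': ['map', 'transect', 'map', 'transect'],
    'url':  ['m1', '', 'm2', ''],
})
toy = toy.set_index(['id', 'type'])

# The index item for a MultiIndex is a tuple:
print(toy.loc[('a', 'map'), 'url'])   # m1

# Assignment targets a specific cell the same way:
toy.loc[('b', 'transect'), 'url'] = toy.loc[('b', 'map'), 'url']
print(toy.loc[('b', 'transect'), 'url'])  # m2
```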
End of explanation
"""
df.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column
df.loc[df.type.str.contains('lineminus'),['miscPhoto']]=df['url']
df.loc[df.type.str.contains('lineplus'),['miscPhoto']]=df['url']
df.loc[df.type.str.contains('misc'),['miscPhoto']]=df['url']
#now to deal with type='photo'
photos = df[df.type=='photo']
nonphotos = df[df.type != 'photo'] #we can concatenate these later
grouped = photos.groupby(['id','date'])
photos.shape
values=grouped.groups.values()
for value in values:
photos.loc[value[2],['type']] = 'misc'
#photos.loc[value[1],['type']] = 'linephoto2'
photos.loc[photos.type=='linephoto1']
for name, group in grouped:
print(name)
print(group)
photos = df[df.type == 'photo']
photos.set_index(['id','date'],inplace=True)
photos.index[1]
photos=df[df.type=='photo']
photos.groupby(['id','date']).count()
photos.loc[photos.index[25],['type','note']]
#combine photo captions
df['caption']=''
df.loc[(df.type.str.contains('lineminus'))|(df.type.str.contains('lineplus')),['caption']]=df['type'] + ' | ' + df['note']
df.loc[df.type.str.contains('lineplus'),['caption']]=df['url']
df.loc[df.type.str.contains('misc'),['caption']]=df['url']
df['mystart'] = 'Baseline summary:'
df.loc[df.type =='transect',['site_description']]= df[['mystart','label1','value1','label2','value2','label3','value3','note']].apply(' | '.join, axis=1)
df.loc[df.type.str.contains('line-'),['linephoto1']]=df['url']
df.loc[df.type.str.contains('line\+'),['linephoto2']]=df['url']#be sure to escape the +
df.loc[df.type.str.contains('linephoto1'),['linephoto1']]=df['url']
df.loc[df.type.str.contains('linephoto2'),['linephoto2']]=df['url']
df.loc[df.type == 'plants',['general_observations']]=df['note']
"""
Explanation: shift data to correct column
using loc for assignment: df.loc[destination condition, column] = df.loc[source]
End of explanation
"""
#since we're using string methods, NaNs won't work
mycols =['general_observations','mapPhoto','linephoto1','linephoto2','miscPhoto','site_description']
for item in mycols:
df[item] = df[item].fillna('')
df.mapPhoto = df.groupby('id')['mapPhoto'].transform(lambda x: "%s" % ''.join(x))
df.linephoto1 = df.groupby(['id','date'])['linephoto1'].transform(lambda x: "%s" % ''.join(x))
df.linephoto2 = df.groupby(['id','date'])['linephoto2'].transform(lambda x: "%s" % ''.join(x))
df.miscPhoto = df.groupby(['id','date'])['miscPhoto'].transform(lambda x: "%s" % ''.join(x))
df['site_description'] = df['site_description'].str.strip()
df.to_csv('test.csv')
#done to here. Next, figure out what to do with linephotos, unclassified photos, and their notes.
#make column for photocaptions. When adding linephoto1, add 'note' and 'type' fields to caption column. E.g. 'linephoto1: 100line- | view east along transect.' Then join the rows in the groupby transform and add to site_description field.
df.shape
df[(df.type.str.contains('line\+'))&(df.linephoto2.str.len()<50)]
maps.str.len().sort_values()
"""
Explanation: use groupby and transform to fill the row
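The groupby/transform trick can be sketched on toy data (column names here are illustrative): transform returns a result aligned to the original rows, so joining the strings within each group broadcasts the combined value back onto every row of that group.

```python
import pandas as pd

toy = pd.DataFrame({
    'id':    ['a', 'a', 'b'],
    'photo': ['p1', 'p2', ''],
})

# ''.join over each group, broadcast back to every row in the group
toy['photo'] = toy.groupby('id')['photo'].transform(lambda x: ''.join(x))
print(toy)  # both 'a' rows now hold 'p1p2'
```

Unlike apply with a reset_index, transform keeps the original shape, which is why the result can be assigned straight back to the column.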
End of explanation
"""
ids = list(df['id'])#make a list of ids to iterate over, before the hierarchical index
#df.type = df.type.map({'\*plot summary':'transect','\*remonitoring notes':'transect'})
df.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column
df.set_index(['id','type'],inplace=True) # hierarchical index so we can call locations
#a hierarchical index uses a tuple. You can set values using loc.
#this format: df.loc[destination] = df.loc[source].values[0]
for item in ids:
df.loc[(item,'*plot summary'),'mapPhoto'] = df.loc[(item,'map'),'mapPhoto'].values[0]
#generates a pink warning about performance, but oh well.
#here we are using an expression in parens to test for a condition
(df['type'].str.contains('\s') & df['note'].notnull()).value_counts()
df.url = df.url.str.replace(' ','_');df.url
df.url.head()
df['newurl'] = df.url.str.replace(' ','_')
df.newurl.head()
#for combining rows try something like this:
print(df.groupby('somecolumn')['temp variable'].apply(' '.join).reset_index())
"""
Explanation: shift data to correct row using a multi-Index
End of explanation
"""
|
dataplumber/nexus | esip-workshop/student-material/workshop1/3 - Python Basics.ipynb | apache-2.0 | 1+2
1+1
1+2
"""
Explanation: Tutorial Brief
This tutorial is an introduction to Python 3. This should give you the set of pythonic skills that you will need to proceed with this tutorial series.
If you don't have Jupyter installed, shame on you. No, just kidding: you can follow this tutorial using an online Jupyter service:
https://try.jupyter.org/
Cell Input and Output
End of explanation
"""
print(1+2)
"""
Explanation: The print function
Notice: print is a function in Python 3. You should use parentheses around your parameter.
End of explanation
"""
a = 4
b = 1.5
c = 121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212121212
d = 1j
e = 1/3
f = True
a+b
a*c
(b+d)*a
a+f
type(1.5)
"""
Explanation: Variables
There are many variable types in Python 3. Here is a list of the most common types:
Numerical Types:
bool (Boolean)
int (Integer/Long)
float
complex
Notice: In Python 3, int covers both the old int and long types. Because there is no longer a separate long data type, you will not see an L at the end of long integers.
End of explanation
"""
my_name = "Roshan"
print(my_name)
my_list = [1,2,3,4,5]
my_list
my_list + [6]
my_list
my_list += [6,7,8]
my_list
my_list.append(9)
my_list
my_tuple = (1,2,3)
my_tuple
my_tuple + (4,5,6)
my_dict = {"name":"Roshan", "credit":100}
my_dict
my_dict["name"]
my_dict["level"] = 4
my_dict
my_dict.values()
my_dict.keys()
"""
Explanation: Other Value Types:
str (String)
list (Ordered Array)
tuple (Ordered Immutable Array)
dict (Unordered list of keys and values)
End of explanation
"""
len(my_list)
"""
Explanation: Selecting / Slicing
Use len function to measure the length of a list.
End of explanation
"""
my_list[0]
"""
Explanation: To access a single value in a list use this syntax:
python
list_name[index]
End of explanation
"""
my_list[1:2]
my_list[:3]
my_list[3:]
"""
Explanation: To select multiple value from a list use this syntax:
python
index[start:end:step]
End of explanation
"""
my_list[-1]
my_list[-2]
"""
Explanation: Notice: negative index selected from the end of the list
End of explanation
"""
my_list[-2:]
my_list[:-2]
my_list[3:-1]
"""
Explanation: You can use negative indexing in selecting multiple values.
End of explanation
"""
my_list[::2]
my_list[3::2]
my_list[::-1]
"""
Explanation: The third location in the index is the step. If the step is negative the the list is returned in descending order.
End of explanation
"""
my_name
my_name[0]
"""
Explanation: Working with Strings
You can select from a string like a list using this syntax:
python
my_string[start:end:step]
End of explanation
"""
my_name[:2]
"""
Explanation: Notice: You can also use negative indexing.
End of explanation
"""
# Sorted by most spoken languages in order
divide_by_zero = {"zho":"你不能除以零",
"eng":"You cannot divide by zero",
"esp":"No se puede dividir por cero",
"hin":"आप शून्य से विभाजित नहीं किया जा सकता \u2248",
"arb":"لا يمكن القسمة على صفر"}
print(divide_by_zero["hin"])
type(divide_by_zero["hin"])
"""
Explanation: Unicode
Notice: You can use unicode inside your string variables. Unlike Python 2, no need to use u"" to use unicode.
End of explanation
"""
first_name = "Roshan"
last_name = "Rush"
formatted_name = "%s, %s." % (last_name, first_name[0])
print(formatted_name)
"""
Explanation: String Formatting
You can use this syntax to format a string:
python
some_variable = 50
x = "Value: %s" % some_variable
print(x) # Value: 50
End of explanation
"""
print("π ≈ %.2f" % 3.14159)
"""
Explanation: Other formatters could be used to format numbers:
End of explanation
"""
homeworks = 15.75
midterm = 22
final = 51
total = homeworks + midterm + final
print("Homeworks: %.2f\nMid-term: %.2f\nFinal: %.2f\nTotal: %.2f/100" % (homeworks, midterm, final, total))
"""
Explanation: To find unicode symbols:
http://www.fileformat.info/info/unicode/char/search.htm
End of explanation
"""
url = "http://{language}.wikipedia.org/"
url = url.format(language="en")
url
"""
Explanation: Using format(*args, **kwargs) function
End of explanation
"""
1+1
4-5
"""
Explanation: Mathematics
End of explanation
"""
14/5
14//5
2*5
"""
Explanation: Notice: The default behavior of division in Python 3 is float division. To use integer division like Python 2, use //
End of explanation
"""
2**3
"""
Explanation: To raise a number to a power, use the double asterisk **. To represent $a^{n}$:
python
a**n
End of explanation
"""
10 % 3
"""
Explanation: To calculate the remainder (modulo operator) use %. To represent $a \mod b = r$:
python
a % b # Returns r
End of explanation
"""
import math
n=52
k=1
math.factorial(n) / (math.factorial(k) * math.factorial(n-k))
"""
Explanation: You can use the math library to access a variety of tools for algebra and geometry. To import a library, you can use one of these syntaxes:
python
import library_name
import library_name as alias
from module_name import some_class
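For instance, all three forms give access to the same functionality; only the name you use to reach it changes:

```python
import math                      # qualified access: math.factorial
import math as m                 # aliased access: m.factorial
from math import factorial       # direct access: factorial

print(math.factorial(5))  # 120
print(m.factorial(5))     # 120
print(factorial(5))       # 120
```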
End of explanation
"""
for counter in [1,2,3,4]:
print(counter)
"""
Explanation: Loops
End of explanation
"""
for counter in range(5):
print(counter)
"""
Explanation: range
In Python 3 range is a data type that generates a list of numbers.
python
range(stop)
range(start,stop[ ,step])
Notice: In Python 2 range is a function that returns a list. In Python 3, range returns an iterable of type range. If you need to get a list you can use the list() function:
python
list(range(start,stop[, step]))
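A few quick examples of the start, stop, and step parameters:

```python
print(list(range(5)))         # [0, 1, 2, 3, 4]
print(list(range(2, 5)))      # [2, 3, 4]
print(list(range(0, 10, 3)))  # [0, 3, 6, 9]
print(list(range(5, 0, -1)))  # [5, 4, 3, 2, 1]
```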
End of explanation
"""
list(range(1,10)) == list(range(1,5)) + list(range(5,10))
"""
Explanation: Notice: The list doesn't reach the stop value and stops one step before. The reason behind that is to make this syntax possible:
End of explanation
"""
for counter in range(1,5):
print(counter)
for counter in range(2,10,2):
print(counter)
"""
Explanation: Notice: In Python 3 use use == to check if two values are equal. To check if two values are not equal use != and don't use <> from Python 2 because it is not supported any more in Python 3.
End of explanation
"""
counter =1
while counter < 5:
print(counter)
counter += 1
"""
Explanation: While Loop
End of explanation
"""
if math.pi == 3.2:
print("Edward J. Goodwin was right!")
else:
print("π is irrational")
if math.sqrt(2) == (10/7):
print("Edward J. Goodwin was right!")
elif math.sqrt(2) != (10/7):
print("Square root of 2 is irrational")
"""
Explanation: If .. Else
End of explanation
"""
probability = 0.3
if probability >= 0.75:
print("Sure thing")
elif probability >= 0.5:
print("Maybe")
elif probability >= 0.25:
print("Unusual")
else:
print("No way")
"""
Explanation: If you like Math:
Fun story about pi where it was almost set by law to be equal to 3.2!
If you don't know what the "pi bill" is, you can read about it here:
http://en.wikipedia.org/wiki/Indiana_Pi_Bill
Or watch Numberphile video about it:
https://www.youtube.com/watch?v=bFNjA9LOPsg
End of explanation
"""
def get_circumference(r):
return math.pi * r * 2
get_circumference(5)
def binomial_coef(n,k):
"""
This function returns the binomial coefficient
Parameters:
===========
n, k int
return n!/(k!*(n-k)!)
"""
value = math.factorial(n)/(math.factorial(k)*math.factorial(n-k))
return value
binomial_coef(52,2)
"""
Explanation: Functions
Functions are defined in Python using def keyword.
End of explanation
"""
|
ICL-SML/Doubly-Stochastic-DGP | demos/demo_mnist.ipynb | apache-2.0 | import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import tensorflow as tf
from gpflow.likelihoods import MultiClass
from gpflow.kernels import RBF, White
from gpflow.models.svgp import SVGP
from gpflow.training import AdamOptimizer
from scipy.stats import mode
from scipy.cluster.vq import kmeans2
from doubly_stochastic_dgp.dgp import DGP
import time
def get_mnist_data(data_path='/data'):
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets(data_path+'/MNIST_data/', one_hot=False)
X, Y = mnist.train.next_batch(mnist.train.num_examples)
Xval, Yval = mnist.validation.next_batch(mnist.validation.num_examples)
Xtest, Ytest = mnist.test.next_batch(mnist.test.num_examples)
Y, Yval, Ytest = [np.array(y, dtype=float)[:, None] for y in [Y, Yval, Ytest]]
X = np.concatenate([X, Xval], 0)
Y = np.concatenate([Y, Yval], 0)
return X.astype(float), Y.astype(float), Xtest.astype(float), Ytest.astype(float)
X, Y, Xs, Ys = get_mnist_data()
"""
Explanation: MNIST classification
End of explanation
"""
M = 100
Z = kmeans2(X, M, minit='points')[0]
"""
Explanation: We'll use 100 inducing points
End of explanation
"""
m_sgp = SVGP(X, Y, RBF(784, lengthscales=2., variance=2.), MultiClass(10),
Z=Z, num_latent=10, minibatch_size=1000, whiten=True)
def make_dgp(L):
kernels = [RBF(784, lengthscales=2., variance=2.)]
for l in range(L-1):
kernels.append(RBF(30, lengthscales=2., variance=2.))
model = DGP(X, Y, Z, kernels, MultiClass(10),
minibatch_size=1000,
num_outputs=10)
# start things deterministic
for layer in model.layers[:-1]:
layer.q_sqrt = layer.q_sqrt.value * 1e-5
return model
m_dgp2 = make_dgp(2)
m_dgp3 = make_dgp(3)
"""
Explanation: We'll compare three models: an ordinary sparse GP and DGPs with 2 and 3 layers.
We'll use a batch size of 1000 for all models
End of explanation
"""
def assess_model_sgp(model, X_batch, Y_batch):
m, v = model.predict_y(X_batch)
l = model.predict_density(X_batch, Y_batch)
a = (np.argmax(m, 1).reshape(Y_batch.shape).astype(int)==Y_batch.astype(int))
return l, a
"""
Explanation: For the SGP model we'll calculate accuracy by simply taking the max mean prediction:
End of explanation
"""
S = 100
def assess_model_dgp(model, X_batch, Y_batch):
m, v = model.predict_y(X_batch, S)
l = model.predict_density(X_batch, Y_batch, S)
a = (mode(np.argmax(m, 2), 0)[0].reshape(Y_batch.shape).astype(int)==Y_batch.astype(int))
return l, a
"""
Explanation: For the DGP models we have stochastic predictions. We need a single prediction for each datum, so to do this we take $S$ samples for the one-hot predictions ($(S, N, 10)$ matrices for mean and var), then we take the max over the class means (to give a $(S, N)$ matrix), and finally we take the modal class over the samples (to give a vector of length $N$):
We'll use 100 samples
End of explanation
"""
def batch_assess(model, assess_model, X, Y):
n_batches = max(int(len(X)/1000), 1)
lik, acc = [], []
for X_batch, Y_batch in zip(np.split(X, n_batches), np.split(Y, n_batches)):
l, a = assess_model(model, X_batch, Y_batch)
lik.append(l)
acc.append(a)
lik = np.concatenate(lik, 0)
acc = np.array(np.concatenate(acc, 0), dtype=float)
return np.average(lik), np.average(acc)
"""
Explanation: We need batch predictions (we might run out of memory otherwise)
End of explanation
"""
iterations = 20000
AdamOptimizer(0.01).minimize(m_sgp, maxiter=iterations)
l, a = batch_assess(m_sgp, assess_model_sgp, Xs, Ys)
print('sgp test lik: {:.4f}, test acc {:.4f}'.format(l, a))
"""
Explanation: Now we're ready to go
The sparse GP:
End of explanation
"""
AdamOptimizer(0.01).minimize(m_dgp2, maxiter=iterations)
l, a = batch_assess(m_dgp2, assess_model_dgp, Xs, Ys)
print('dgp2 test lik: {:.4f}, test acc {:.4f}'.format(l, a))
"""
Explanation: Using more inducing points improves things, but at the expense of very slow computation (500 inducing points takes about a day)
The two layer DGP:
End of explanation
"""
AdamOptimizer(0.01).minimize(m_dgp3, maxiter=iterations)
l, a = batch_assess(m_dgp3, assess_model_dgp, Xs, Ys)
print('dgp3 test lik: {:.4f}, test acc {:.4f}'.format(l, a))
"""
Explanation: And the three layer:
End of explanation
"""
|
mathias-gibson/ps239t-final-project | 01_collect-data.ipynb | mit | # Import required libraries
import requests
import urllib
import json
from __future__ import division
import math
import time
"""
Explanation: To collect my data, I used GET requests to retrieve information from two ProPublica APIs in .json format, and exported the data into two separate .csv files.
End of explanation
"""
# set key
key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# set base url
base_url="https://api.propublica.org/campaign-finance/v1/"
# set headers
headers = {'X-API-Key': key}
# set url parameters
cycle = "2014/"
method = "candidates/"
file_format = ".json"
# create a list of FEC IDs from http://www.fec.gov/data/DataCatalog.do to run the API request on more than one ID
fec_id_list = []
with open('fecid2014.txt') as file:
for line in file:
fec_id_list.append(line.strip())
# make request, build list of results for each FEC ID
data = []
for fec_id in fec_id_list:
r = requests.get(base_url+cycle+method+fec_id+file_format, headers=headers)
candidate = r.json()['results']
data.append(candidate)
time.sleep(3)
print(data)
# format data for export
data = [v for sublist in data for v in sublist]
data_keys = data[0].keys()
# export to csv
import csv
with open('ppcampfin.csv', 'w') as file:
dict_writer = csv.DictWriter(file, data_keys)
dict_writer.writeheader()
dict_writer.writerows(data)
"""
Explanation: ProPublica Campaign Finance API
https://propublica.github.io/campaign-finance-api-docs/#candidates
End of explanation
"""
# set key
key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# set base url
base_url="https://api.propublica.org/congress/v1/"
# set url parameters
congress = "114/" #102-114 for House, 80-114 for Senate
chamber = "senate" #house or senate
method="/members"
file_format = ".json"
#set headers
headers = {'X-API-Key': key}
# make request
r = requests.get(base_url+congress+chamber+method+file_format, headers=headers)
# parse data for component nested dictionaries
data=(r.json())
bio_keys = data['results'][0]['members'][0]
bio_list = data['results'][0]['members']
# export to csv
import csv
with open('bio.csv', 'w') as file:
dict_writer = csv.DictWriter(file, bio_keys)
dict_writer.writeheader()
dict_writer.writerows(bio_list)
"""
Explanation: ProPublica Congress API - list of all members
https://propublica.github.io/congress-api-docs/?shell#lists-of-members
End of explanation
"""
|
khalido/nd101 | siraj math for deep learning.ipynb | gpl-3.0 | X = np.array([[0,0,1],
[0,1,1],
[1,0,1],
[1,1,1]])
Y = np.array([[0],
[1],
[1],
[0]])
X
Y
"""
Explanation: Step 1: Collect Data
End of explanation
"""
num_epochs = 60000
#initialize weights
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
syn0
syn1
def nonlin(x,deriv=False):
if(deriv==True):
return x*(1-x)
return 1/(1+np.exp(-x))
"""
Explanation: Step 2: build model
End of explanation
"""
for epoch in range(num_epochs):
#feed forward through layers 1, 2 & 3
l0 = X # the input layer
l1 = nonlin(np.dot(l0, syn0)) # hidden layer
l2 = nonlin(np.dot(l1, syn1)) # final, hence output layer
# by how much did we miss the target value?
l2_error = Y - l2
l2_delta = l2_error * nonlin(l2, deriv=True)
# propgating to the next layer
l1_error = np.dot(l2_delta, syn1.T)
l1_delta = l1_error * nonlin(l1, deriv=True)
# updating weights
syn1 += l1.T.dot(l2_delta)
syn0 += l0.T.dot(l1_delta)
if (epoch% 10000) == 0:
print("Error:", (np.mean(np.abs(l2_error))))
#print(l1_error)
print('============')
"""
Explanation: Step 3: train model
End of explanation
"""
|
tensorflow/docs-l10n | site/ja/guide/keras/save_and_serialize.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
import numpy as np
import tensorflow as tf
from tensorflow import keras
"""
Explanation: Save and load Keras models
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/save_and_serialize"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/save_and_serialize.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/save_and_serialize.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/save_and_serialize.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
A Keras model consists of multiple components:
The architecture, or configuration, which specifies what layers the model contains and how they are connected.
A set of weight values (the "state of the model").
An optimizer (defined by compiling the model).
A set of losses and metrics (defined by compiling the model, or by calling add_loss() or add_metric()).
The Keras API lets you save all of these pieces to disk at once, or save only some of them selectively:
Saving everything into a single archive in the TensorFlow SavedModel format (or in the older Keras H5 format). This is the standard practice.
Saving the architecture/configuration only, typically as a JSON file.
Saving the weight values only. This is generally used when training the model.
Let's take a look at each of these options, when you would use them, and how they work.
The short answer to saving & loading
If you only have 10 seconds to read this guide, here's what you need to know.
Saving a Keras model:
python
model = ... # Get model (Sequential, Functional Model, or Model subclass) model.save('path/to/location')
Loading the model back:
python
from tensorflow import keras model = keras.models.load_model('path/to/location')
Now, let's look at the details.
Setup
End of explanation
"""
def get_model():
# Create a simple model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mean_squared_error")
return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
"""
Explanation: Whole-model saving & loading
You can save an entire model as a single artifact. It will include:
The model's architecture/config
The model's weight values (which were learned during training)
The model's compilation information (if compile() was called)
The optimizer and its state, if any (this enables you to restart training where you left off)
APIs
model.save() or tf.keras.models.save_model()
tf.keras.models.load_model()
There are two formats you can use to save an entire model to disk: the TensorFlow SavedModel format and the older Keras H5 format. The recommended format is SavedModel, and it is the default when you use model.save().
You can switch to the H5 format by:
Passing save_format='h5' to save().
Passing a filename that ends in .h5 or .keras to save().
SavedModel format
Example:
End of explanation
"""
!ls my_model
"""
Explanation: What the SavedModel contains
Calling model.save('my_model') creates a folder named my_model containing the following:
End of explanation
"""
class CustomModel(keras.Model):
def __init__(self, hidden_units):
super(CustomModel, self).__init__()
self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]
def call(self, inputs):
x = inputs
for layer in self.dense_layers:
x = layer(x)
return x
model = CustomModel([16, 16, 10])
# Build the model by calling it
input_arr = tf.random.uniform((1, 5))
outputs = model(input_arr)
model.save("my_model")
# Delete the custom-defined model class to ensure that the loader does not have
# access to it.
del CustomModel
loaded = keras.models.load_model("my_model")
np.testing.assert_allclose(loaded(input_arr), outputs)
print("Original model:", model)
print("Loaded model:", loaded)
"""
Explanation: The model architecture and training configuration (including the optimizer, losses, and metrics) are stored in saved_model.pb. The weights are saved in the variables/ directory.
For detailed information on the SavedModel format, see the SavedModel guide (The SavedModel format on disk).
How SavedModel handles custom objects
When saving a model and its layers, the SavedModel format stores the class name, call function, losses, and weights (and the config, if implemented). The call function defines the computation graph of the model/layer.
In the absence of the model/layer config, the call function is used to create a model that works like the original model, which can be trained, evaluated, and used for inference.
Nevertheless, when writing a custom model or layer class, it is always a good practice to define the get_config and from_config methods. This lets you easily update the computation later if needed. See the Custom objects section for more information.
Below is an example of loading a custom layer from the SavedModel format without overwriting the config methods.
End of explanation
"""
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.
model.save("my_h5_model.h5")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_h5_model.h5")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
"""
Explanation: As seen in the example above, the loader dynamically creates a new model class that acts like the original model.
Keras H5 format
Keras also supports saving a single HDF5 file containing the model's architecture, weight values, and compile() information. It is a lightweight alternative to SavedModel.
Example:
End of explanation
"""
layer = keras.layers.Dense(3, activation="relu")
layer_config = layer.get_config()
new_layer = keras.layers.Dense.from_config(layer_config)
"""
Explanation: Limitations
Compared to the SavedModel format, there are two things that do not get included in the H5 file:
Unlike SavedModel, external losses and metrics added via model.add_loss() and model.add_metric() are not saved. If you have such losses and metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created inside layers via self.add_loss() and self.add_metric(): as long as the layer gets loaded, these losses and metrics are kept, since they are part of the layer's call method.
The computation graph of custom objects, such as custom layers, is not included in the saved file. At loading time, Keras needs access to the Python classes/functions of these objects in order to reconstruct the model. See Custom objects for details.
Saving the architecture
The model's configuration (architecture) specifies what layers the model contains and how these layers are connected*. If you have the configuration of a model, the model can be created with freshly initialized weights and no compilation information.
*Note that this only applies to models defined using the Functional or Sequential APIs, not to subclassed models.
Configuration of a Sequential model or Functional API model
These types of models are explicit graphs of layers: their configuration is always available in a structured form.
APIs
get_config() and from_config()
tf.keras.models.model_to_json() and tf.keras.models.model_from_json()
get_config() and from_config()
Calling config = model.get_config() returns a Python dict containing the configuration of the model. The same model can then be reconstructed via Sequential.from_config(config) (for a Sequential model) or Model.from_config(config) (for a Functional API model).
The same workflow also works for any serializable layer.
Layer example:
End of explanation
"""
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
config = model.get_config()
new_model = keras.Sequential.from_config(config)
"""
Explanation: Sequential model example:
End of explanation
"""
inputs = keras.Input((32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
"""
Explanation: Functional model example:
End of explanation
"""
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
json_config = model.to_json()
new_model = keras.models.model_from_json(json_config)
"""
Explanation: to_json() and tf.keras.models.model_from_json()
This is similar to get_config / from_config, except that it turns the model into a JSON string, which can then be loaded without the original model class. It is also specific to models; it is not meant for layers.
Example:
End of explanation
"""
model.save("my_model")
tensorflow_graph = tf.saved_model.load("my_model")
x = np.random.uniform(size=(4, 32)).astype(np.float32)
predicted = tensorflow_graph(x).numpy()
"""
Explanation: Custom objects
Models and layers
The architecture of subclassed models and layers is defined in the methods __init__ and call. They are considered Python bytecode, which cannot be serialized into a JSON-compatible config. You could try serializing the bytecode (e.g., via pickle), but this is unsafe and means the model cannot be loaded on a different system.
To save/load a model with custom-defined layers, or a subclassed model, you should overwrite the get_config method and, optionally, the from_config method. Additionally, you should register the custom object so that Keras is aware of it.
Custom functions
Custom-defined functions (e.g., activation, loss, or initialization) do not need a get_config method. The function name is sufficient for loading, as long as it is registered as a custom object.
Loading the TensorFlow graph only
It is possible to load the TensorFlow graph generated by Keras, as shown below. If you do so, you do not need to provide any custom_objects.
End of explanation
"""
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
def call(self, inputs, training=False):
if training:
return inputs * self.var
else:
return inputs
def get_config(self):
return {"a": self.var.numpy()}
# There's actually no need to define `from_config` here, since returning
# `cls(**config)` is the default behavior.
@classmethod
def from_config(cls, config):
return cls(**config)
layer = CustomLayer(5)
layer.var.assign(2)
serialized_layer = keras.layers.serialize(layer)
new_layer = keras.layers.deserialize(
serialized_layer, custom_objects={"CustomLayer": CustomLayer}
)
"""
Explanation: Note that this method has several drawbacks:
For traceability reasons, you should always have access to the custom objects that were used; you don't want to roll out to production a model that you cannot re-create.
The object returned by tf.saved_model.load isn't a Keras model, so it isn't as easy to use. For example, you won't have access to .predict() or .fit().
Even though its use is discouraged, it can help you in a tight spot, for example if you lost the code of your custom objects or have issues loading the model with tf.keras.models.load_model().
You can find out more on the page about tf.saved_model.load.
Defining the config methods
Specifications:
get_config should return a JSON-serializable dictionary in order to be compatible with the Keras architecture- and model-saving APIs.
from_config(config) (classmethod) should return a new layer or model object that is created from the config. The default implementation returns cls(**config).
Example:
End of explanation
"""
class CustomLayer(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(CustomLayer, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(CustomLayer, self).get_config()
config.update({"units": self.units})
return config
def custom_activation(x):
return tf.nn.tanh(x) ** 2
# Make a model with the CustomLayer and custom_activation
inputs = keras.Input((32,))
x = CustomLayer(32)(inputs)
outputs = keras.layers.Activation(custom_activation)(x)
model = keras.Model(inputs, outputs)
# Retrieve the config
config = model.get_config()
# At loading time, register the custom objects with a `custom_object_scope`:
custom_objects = {"CustomLayer": CustomLayer, "custom_activation": custom_activation}
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.Model.from_config(config)
"""
Explanation: Registering the custom object
Keras keeps a note of which class generated the config. From the example above, tf.keras.layers.serialize generates a serialized form of the custom layer:
{'class_name': 'CustomLayer', 'config': {'a': 2}}
Keras keeps a master list of all built-in layer, model, optimizer, and metric classes, which is used to find the correct class to call from_config. If the class cannot be found, an error is raised (ValueError: Unknown layer). There are a few ways to register custom classes to this list:
Setting the custom_objects argument in the loading function (see the example in the section "Defining the config methods" above).
tf.keras.utils.custom_object_scope or tf.keras.utils.CustomObjectScope
tf.keras.utils.register_keras_serializable
Custom layer and function example
End of explanation
"""
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.models.clone_model(model)
"""
Explanation: In-memory model cloning
You can also do in-memory cloning of a model via tf.keras.models.clone_model(). This is equivalent to getting the config, then recreating the model from its config (so it does not preserve compilation information or layer weight values).
Example:
End of explanation
"""
def create_layer():
layer = keras.layers.Dense(64, activation="relu", name="dense_2")
layer.build((None, 784))
return layer
layer_1 = create_layer()
layer_2 = create_layer()
# Copy weights from layer 2 to layer 1
layer_2.set_weights(layer_1.get_weights())
"""
Explanation: Saving & loading only the model's weight values
You can choose to save and load only a model's weights. This can be useful if:
You only need the model for inference: in this case you won't need to restart training, so you don't need the compilation information or the optimizer state.
You are doing transfer learning: in this case you will be training a new model reusing the state of a prior model, so you don't need the compilation information of the prior model.
APIs for in-memory weight transfer
Weights can be copied between different objects by using get_weights and set_weights:
tf.keras.layers.Layer.get_weights(): returns a list of numpy arrays.
tf.keras.layers.Layer.set_weights(): sets the model weights to the values in the weights argument.
Examples below.
Transferring weights from one layer to another, in memory
End of explanation
"""
# Create a simple functional model
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Define a subclassed model with the same architecture
class SubclassedModel(keras.Model):
def __init__(self, output_dim, name=None):
super(SubclassedModel, self).__init__(name=name)
self.output_dim = output_dim
self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1")
self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2")
self.dense_3 = keras.layers.Dense(output_dim, name="predictions")
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
x = self.dense_3(x)
return x
def get_config(self):
return {"output_dim": self.output_dim, "name": self.name}
subclassed_model = SubclassedModel(10)
# Call the subclassed model once to create the weights.
subclassed_model(tf.ones((1, 784)))
# Copy weights from functional_model to subclassed_model.
subclassed_model.set_weights(functional_model.get_weights())
assert len(functional_model.weights) == len(subclassed_model.weights)
for a, b in zip(functional_model.weights, subclassed_model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
"""
Explanation: Transferring weights from one model to another model with a compatible architecture, in memory
End of explanation
"""
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
# Add a dropout layer, which does not contain any weights.
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model_with_dropout = keras.Model(
inputs=inputs, outputs=outputs, name="3_layer_mlp"
)
functional_model_with_dropout.set_weights(functional_model.get_weights())
"""
Explanation: The case of stateless layers
Because stateless layers do not change the order or number of weights, models can have compatible architectures even if there are extra or missing stateless layers.
End of explanation
"""
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("ckpt")
load_status = sequential_model.load_weights("ckpt")
# `assert_consumed` can be used as validation that all variable values have been
# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other
# methods in the Status object.
load_status.assert_consumed()
"""
Explanation: APIs for saving weights to disk & loading them back
Weights can be saved to disk by calling model.save_weights in the following formats:
TensorFlow Checkpoint
HDF5
The default format for model.save_weights is TensorFlow Checkpoint. There are two ways to specify the save format:
save_format argument: set the value to save_format="tf" or save_format="h5".
path argument: if the path ends in .h5 or .hdf5, the HDF5 format is used. Other suffixes result in a TensorFlow Checkpoint unless save_format is set.
There is also the option of retrieving weights as in-memory numpy arrays. Each API has its pros and cons, detailed below.
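For instance, a short sketch of both ways to pick the format (the tiny model and the file names are assumptions, and this uses the TF 2.x signature of save_weights):

```python
from tensorflow import keras

# Minimal throwaway model, just to have some weights to save.
model = keras.Sequential([keras.Input((8,)), keras.layers.Dense(1)])

model.save_weights("my_weights.h5")              # HDF5, inferred from the suffix
model.save_weights("my_ckpt", save_format="tf")  # TensorFlow Checkpoint
```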
The TF Checkpoint format
Example:
End of explanation
"""
class CustomLayer(keras.layers.Layer):
    def __init__(self, a):
        super(CustomLayer, self).__init__()
        self.var = tf.Variable(a, name="var_a")
layer = CustomLayer(5)
layer_ckpt = tf.train.Checkpoint(layer=layer).save("custom_layer")
ckpt_reader = tf.train.load_checkpoint(layer_ckpt)
ckpt_reader.get_variable_to_dtype_map()
"""
Explanation: Format details
The TensorFlow Checkpoint format saves and restores the weights using object attribute names. For instance, consider the tf.keras.layers.Dense layer. It contains two weights: dense.kernel and dense.bias. When the layer is saved in the tf format, the resulting checkpoint contains the keys "kernel" and "bias", together with their corresponding weight values. For more information, see "Loading mechanics" in the TF Checkpoint guide.
Note that the attribute/graph edge is named after the name used in the parent object, not the name of the variable. Consider the CustomLayer in the example below: the variable CustomLayer.var is saved with "var" as part of the key, not "var_a".
End of explanation
"""
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Extract a portion of the functional model defined in the Setup section.
# The following lines produce a new model that excludes the final output
# layer of the functional model.
pretrained = keras.Model(
functional_model.inputs, functional_model.layers[-1].input, name="pretrained_model"
)
# Randomly assign "trained" weights.
for w in pretrained.weights:
w.assign(tf.random.normal(w.shape))
pretrained.save_weights("pretrained_ckpt")
pretrained.summary()
# Assume this is a separate program where only 'pretrained_ckpt' exists.
# Create a new functional model with a different output dimension.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(5, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="new_model")
# Load the weights from pretrained_ckpt into model.
model.load_weights("pretrained_ckpt")
# Check that all of the pretrained weights have been loaded.
for a, b in zip(pretrained.weights, model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
print("\n", "-" * 50)
model.summary()
# Example 2: Sequential model
# Recreate the pretrained model, and load the saved weights.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
pretrained_model = keras.Model(inputs=inputs, outputs=x, name="pretrained")
# Sequential example:
model = keras.Sequential([pretrained_model, keras.layers.Dense(5, name="predictions")])
model.summary()
pretrained_model.load_weights("pretrained_ckpt")
# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,
# but will *not* work as expected. If you inspect the weights, you'll see that
# none of the weights will have loaded. `pretrained_model.load_weights()` is the
# correct method to call.
"""
Explanation: Transfer learning example
Essentially, as long as two models share the same architecture, they can share the same checkpoint.
Example:
End of explanation
"""
# Create a subclassed model that essentially uses functional_model's first
# and last layers.
# First, save the weights of functional_model's first and last dense layers.
first_dense = functional_model.layers[1]
last_dense = functional_model.layers[-1]
ckpt_path = tf.train.Checkpoint(
dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias
).save("ckpt")
# Define the subclassed model.
class ContrivedModel(keras.Model):
def __init__(self):
super(ContrivedModel, self).__init__()
self.first_dense = keras.layers.Dense(64)
self.kernel = self.add_variable("kernel", shape=(64, 10))
self.bias = self.add_variable("bias", shape=(10,))
def call(self, inputs):
x = self.first_dense(inputs)
return tf.matmul(x, self.kernel) + self.bias
model = ContrivedModel()
# Call model on inputs to create the variables of the dense layer.
_ = model(tf.ones((1, 784)))
# Create a Checkpoint with the same structure as before, and load the weights.
tf.train.Checkpoint(
dense=model.first_dense, kernel=model.kernel, bias=model.bias
).restore(ckpt_path).assert_consumed()
"""
Explanation: It is generally recommended to stick to the same API for building models. If you switch between Sequential and Functional, or Functional and subclassed, etc., then always rebuild the pre-trained model and load the pre-trained weights into that model.
The next question is: how can weights be saved to and loaded into models whose architectures are quite different? The answer is that tf.train.Checkpoint lets you save and restore the exact layers/variables you choose.
Example:
End of explanation
"""
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("weights.h5")
sequential_model.load_weights("weights.h5")
"""
Explanation: HDF5 format
The HDF5 format contains weights grouped by layer names. The weights are lists ordered by concatenating the list of trainable weights with the list of non-trainable weights (the same order as layer.weights). Thus, a model can use an HDF5 checkpoint if it has the same layers and trainable statuses as saved in the checkpoint.
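That ordering can be inspected directly on a layer that has both kinds of weights (a sketch; BatchNormalization is just a convenient example because it mixes trainable and non-trainable weights):

```python
from tensorflow import keras

layer = keras.layers.BatchNormalization()
layer.build((None, 4))

# layer.weights is the trainable list (gamma, beta) followed by the
# non-trainable moving statistics, matching the ordering described above.
names = [w.name for w in layer.weights]
print(names)
```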
Example:
End of explanation
"""
class NestedDenseLayer(keras.layers.Layer):
def __init__(self, units, name=None):
super(NestedDenseLayer, self).__init__(name=name)
self.dense_1 = keras.layers.Dense(units, name="dense_1")
self.dense_2 = keras.layers.Dense(units, name="dense_2")
def call(self, inputs):
return self.dense_2(self.dense_1(inputs))
nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))
print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)
"""
Explanation: Note that changing layer.trainable may result in a different layer.weights ordering when the model contains nested layers.
End of explanation
"""
def create_functional_model():
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
functional_model = create_functional_model()
functional_model.save_weights("pretrained_weights.h5")
# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained_weights.h5")
# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()
"""
Explanation: Transfer learning example
When loading pretrained weights from HDF5, it is recommended to load the weights into the original checkpointed model first, and then extract the desired weights/layers into a new model.
Example:
End of explanation
"""
|
QuantScientist/Deep-Learning-Boot-Camp | day03/additional materials/1.1.1 Perceptron and Adaline.ipynb | mit | # Display plots in notebook
%matplotlib inline
# Define plot's default figure size
import matplotlib
"""
Explanation: (exceprt from Python Machine Learning Essentials, Supplementary Materials)
Sections
Implementing a perceptron learning algorithm in Python
Training a perceptron model on the Iris dataset
Adaptive linear neurons and the convergence of learning
Implementing an adaptive linear neuron in Python
End of explanation
"""
import numpy as np
class Perceptron(object):
"""Perceptron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
errors_ : list
Number of misclassifications in every epoch.
"""
def __init__(self, eta=0.01, n_iter=10):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.errors_ = []
for _ in range(self.n_iter):
errors = 0
for xi, target in zip(X, y):
update = self.eta * (target - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update != 0.0)
self.errors_.append(errors)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.net_input(X) >= 0.0, 1, -1)
"""
Explanation: Implementing a perceptron learning algorithm in Python
[back to top]
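Before applying the class to real data, the core update rule can be sanity-checked on a tiny hand-made dataset (the data here is made up purely for illustration):

```python
import numpy as np

# Toy, linearly separable data: the class is the sign of the first feature.
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]])
y = np.array([1, 1, -1, -1])

eta = 0.1
w = np.zeros(1 + X.shape[1])  # w[0] plays the role of the bias unit
for _ in range(10):
    for xi, target in zip(X, y):
        pred = 1 if np.dot(xi, w[1:]) + w[0] >= 0.0 else -1
        update = eta * (target - pred)
        w[1:] += update * xi
        w[0] += update

preds = np.where(np.dot(X, w[1:]) + w[0] >= 0.0, 1, -1)
print(preds)  # all four samples end up correctly classified
```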
End of explanation
"""
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
data = np.hstack((X, y[:, np.newaxis]))
labels = iris.target_names
features = iris.feature_names
df = pd.DataFrame(data, columns=iris.feature_names+['label'])
df.label = df.label.map({k:v for k,v in enumerate(labels)})
df.tail()
"""
Explanation: <br>
<br>
Training a perceptron model on the Iris dataset
[back to top]
Reading-in the Iris data
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'setosa', -1, 1)
# extract sepal length and petal length
X = df.iloc[0:100, [0, 2]].values
# plot data
plt.scatter(X[:50, 0], X[:50, 1],
color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 1],
color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.show()
"""
Explanation: <br>
<br>
Plotting the Iris data
End of explanation
"""
ppn = Perceptron(eta=0.1, n_iter=10)
ppn.fit(X, y)
plt.plot(range(1, len(ppn.errors_) + 1), ppn.errors_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Number of misclassifications')
plt.tight_layout()
plt.show()
"""
Explanation: <br>
<br>
Training the perceptron model
End of explanation
"""
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
plot_decision_regions(X, y, classifier=ppn)
plt.xlabel('sepal length [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
"""
Explanation: <br>
<br>
A function for plotting decision regions
End of explanation
"""
class AdalineGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
    cost_ : list
      Sum-of-squares cost function value in each epoch.
"""
def __init__(self, eta=0.01, n_iter=50):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
ada1 = AdalineGD(n_iter=10, eta=0.01).fit(X, y)
ax[0].plot(range(1, len(ada1.cost_) + 1), np.log10(ada1.cost_), marker='o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(Sum-squared-error)')
ax[0].set_title('Adaline - Learning rate 0.01')
ada2 = AdalineGD(n_iter=10, eta=0.0001).fit(X, y)
ax[1].plot(range(1, len(ada2.cost_) + 1), ada2.cost_, marker='o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Sum-squared-error')
ax[1].set_title('Adaline - Learning rate 0.0001')
plt.tight_layout()
plt.show()
"""
Explanation: <br>
<br>
Adaptive linear neurons and the convergence of learning
[back to top]
Implementing an adaptive linear neuron in Python
End of explanation
"""
# standardize features
X_std = np.copy(X)
X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
ada = AdalineGD(n_iter=15, eta=0.01)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.tight_layout()
plt.show()
"""
Explanation: <br>
<br>
Standardizing features and re-training adaline
End of explanation
"""
from numpy.random import seed
class AdalineSGD(object):
"""ADAptive LInear NEuron classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
    cost_ : list
      Sum-of-squares cost function value averaged over all
      training samples in each epoch.
shuffle : bool (default: True)
Shuffles training data every epoch if True to prevent cycles.
random_state : int (default: None)
Set random state for shuffling and initializing the weights.
"""
def __init__(self, eta=0.01, n_iter=10, shuffle=True, random_state=None):
self.eta = eta
self.n_iter = n_iter
self.w_initialized = False
self.shuffle = shuffle
if random_state:
seed(random_state)
def fit(self, X, y):
""" Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self._initialize_weights(X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
if self.shuffle:
X, y = self._shuffle(X, y)
cost = []
for xi, target in zip(X, y):
cost.append(self._update_weights(xi, target))
avg_cost = sum(cost)/len(y)
self.cost_.append(avg_cost)
return self
def partial_fit(self, X, y):
"""Fit training data without reinitializing the weights"""
if not self.w_initialized:
self._initialize_weights(X.shape[1])
if y.ravel().shape[0] > 1:
for xi, target in zip(X, y):
self._update_weights(xi, target)
else:
self._update_weights(X, y)
return self
def _shuffle(self, X, y):
"""Shuffle training data"""
r = np.random.permutation(len(y))
return X[r], y[r]
def _initialize_weights(self, m):
"""Initialize weights to zeros"""
self.w_ = np.zeros(1 + m)
self.w_initialized = True
def _update_weights(self, xi, target):
"""Apply Adaline learning rule to update the weights"""
output = self.net_input(xi)
error = (target - output)
self.w_[1:] += self.eta * xi.dot(error)
self.w_[0] += self.eta * error
cost = 0.5 * error**2
return cost
def net_input(self, X):
"""Calculate net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, X):
"""Compute linear activation"""
return self.net_input(X)
def predict(self, X):
"""Return class label after unit step"""
return np.where(self.activation(X) >= 0.0, 1, -1)
ada = AdalineSGD(n_iter=15, eta=0.01, random_state=1)
ada.fit(X_std, y)
plot_decision_regions(X_std, y, classifier=ada)
plt.title('Adaline - Stochastic Gradient Descent')
plt.xlabel('sepal length [standardized]')
plt.ylabel('petal length [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Average Cost')
plt.tight_layout()
plt.show()
ada.partial_fit(X_std[0, :], y[0])
"""
Explanation: <br>
<br>
Large scale machine learning and stochastic gradient descent
[back to top]
End of explanation
"""
|
rubensfernando/mba-analytics-big-data | Python/2016-07-22/aula2-parte1-funcoes.ipynb | mit | def maximo(x, y):
if x > y:
z = x
else:
z = y
"""
Explanation: Funções
Até agora, vimos diversos tipos de dados, atribuições, comparações e estruturas de controle.
A ideia da função é dividir para conquistar, onde:
Um problema é dividido em diversos subproblemas
As soluções dos subproblemas são combinadas numa solução do problema maior.
Esses subproblemas têm o nome de funções.
Funções possibilitam abstrair, ou seja permite capturar a computação realiza e tratá-la como primitiva.
Suponha que queremos que a variável z seja o máximo de dois números (x e y).
Um programa simples seria:
if x > y:
z = x
else:
z = y
A ideia é encapsular essa computação dentro de um escopo que pode ser tratado como primitiva.
É utilizado simplesmente chamando o nome e fornecendo uma entrada.
Os detalhes internos sendo escondidos dos usuários.
Uma função tem 3 partes importantes:
def <nome> ( <parametros> ):
<corpo da função>
def é uma palavra chave
<nome> é qualquer nome aceito pelo Python
<parametros> é a quantidade de parâmetros que será passado para a função (pode ser nenhum).
<corpo da função> contém o código da função.
Voltando ao exemplo:
End of explanation
"""
def maximo(x, y):
if x > y:
return x
else:
return y
"""
Explanation: Ótimo temos uma função e podemos reaproveita-la. Porém, para de fato reaproveita-la temos que utilizar o comando return.
End of explanation
"""
z = maximo(3, 4)
"""
Explanation: Pronto agora sim! Já podemos reaproveitar nossa função!
E como fazer isso?
End of explanation
"""
print(z)
"""
Explanation: Quando chamamos a função maximo(3, 4) estamos definindo que x = 3 e y = 4. Após, as expressões são avaliadas até que não se tenha mais expressões, e nesse caso é retornado None. Ou até que encontre a palavra especial return, retornando como valor da chamada da função.
End of explanation
"""
def economias (dinheiro, conta, gastos):
total = (dinheiro + conta) - gastos
return (total)
eco = economias(10, 20, 10)
print(eco)
"""
Explanation: Já entendemos o que é e como criar funções.
Para testar vamos criar uma função que irá realizar uma conta.
End of explanation
"""
def economias(dinheiro, conta, gastos=150):
total = (dinheiro + conta) - gastos
return(total)
print(economias(100, 60))
print(economias(100, 60, 10))
"""
Explanation: Também podemos definir um valor padrão para um ou mais argumentos
Vamos reescrever a função economias para que os gastos sejam fixados em 150, caso não seja passado nenhum valor por padrão.
End of explanation
"""
print(dinheiro)
"""
Explanation: É importante notar que uma variável que está dentro de uma função, não pode ser utilizada novamente enquanto a função não terminar de ser executada.
No mundo da programação, isso é chamado de escopo. Vamos tentar imprimir o valor da variável dinheiro.
End of explanation
"""
def economias(dinheiro, conta, gastos=150):
total = (dinheiro + conta) - gastos
total = total + eco
return(total)
print(economias(100,60))
"""
Explanation: <span style="color:blue;">Por que isso aconteceu?</span>
Esse erro acontece pois a variável dinheiro somente existe dentro da função economias, ou seja, ela existe apenas no contexto local dessa função.
Vamos modificar novamente a função economias:
End of explanation
"""
def conta(valor, multa=7):
# Seu código aqui
"""
Explanation: <span style="color:blue;">Por que não deu problema?</span>
Quando utilizamos uma variável que está fora da função dentro de uma função estamos utilizando a ideia de variáveis globais, onde dentro do contexto geral essa variável existe e pode ser utilizada dentro da função.
<span style="color:red;">Isso não é recomendado! O correto seria ter um novo argumento!</span>
Exercício de Funções
Crie uma função que receba dois argumentos.
* O primeiro argumento é o valor de um determinado serviço
* O segundo é a porcentagem da multa por atraso do pagamento. O valor padrão da porcentagem, se não passado, é de 7%. A função deve retornar o valor final da conta com o juros. Lembre-se de converter 7%.
End of explanation
"""
idade = input('Digite sua idade:')
print(idade)
nome = input('Digite seu nome:')
print(nome)
print(type(idade))
print(type(nome))
"""
Explanation: Funções embutidas
Python tem um número de funções embutidas que sempre estão presentes. Uma lista completa pode ser encontrada em https://docs.python.org/3/library/functions.html.
<span style="color:blue;">Já utilizamos algumas delas! Quais?</span>
input
Uma outra função que é bem interessante, é a input. Essa função permite que o usuário digite uma entrada, por exemplo:
End of explanation
"""
idade = int(input("Digite sua idade:"))
print(type(idade))
"""
Explanation: Note que ambas as variáveis são strings. Portanto precisamos converter para inteiro a idade.
End of explanation
"""
import os
os.remove("arquivo.txt")
arq = open("arquivo.txt", "w")
for i in range(1, 5):
arq.write('{}. Escrevendo em arquivo\n'.format(i))
arq.close()
"""
Explanation: open
A função open, permite abrir um arquivo para leitura e escrita.
open(nome_do_arquivo, modo)
Modos:
* r - abre o arquivo para leitura.
* w - abre o arquivo para escrita.
* a - abre o arquivo para escrita acrescentando os dados no final do arquivo.
* + - pode ser lido e escrito simultaneamente.
End of explanation
"""
f = open("arquivo.txt", "r")
print(f, '\n')
texto = f.read()
print(texto)
f.close()
f = open("arquivo.txt", "r")
texto = f.readlines()
print(texto)
f.close()
#help(f.readlines)
"""
Explanation: Métodos
read() - retorna uma string única com todo o conteúdo do arquivo.
readlines() - todo o conteúdo do arquivo é salvo em uma lista, onde cada linha do arquivo será um elemento da lista.
End of explanation
"""
f = open("arquivo.txt", "r")
texto = f.read().splitlines()
print(texto)
f.close()
"""
Explanation: Para remover o \n podemos utilizar o método read que irá gerar uma única string e depois aplicamos o método splitlines.
End of explanation
"""
|
lily-tian/fanfictionstatistics | jupyter_notebooks/.ipynb_checkpoints/story_analysis-checkpoint.ipynb | mit | # examines state of stories
state = df['state'].value_counts()
# plots chart
(state/np.sum(state)).plot.bar()
plt.xticks(rotation=0)
plt.show()
"""
Explanation: Story Analysis
In this section, we take a sample of ~5000 stories from fanfiction.net and break down some of their characteristics.
Activity and volume
Let's begin by examining the current state of stories: online, deleted, or missing. Missing stories are stories whose URL has moved due to shifts in the fanfiction achiving system.
End of explanation
"""
# examines when stories first created
df_online['pub_year'] = [int(row[2]) for row in df_online['published']]
entry = df_online['pub_year'].value_counts().sort_index()
# plots chart
(entry/np.sum(entry)).plot()
plt.xlim([np.min(entry.index.values), cyear-1])
plt.show()
"""
Explanation: Surprisingly, it appears only about ~60% of stories that were once published still remain on the site! This is in stark contrast to user profiles, where less than 0.1% are deleted.
From this, we can only guess that authors actively take stories down, presumably to hide earlier works as their writing abilities improve or to replace them with rewrites. Authors who delete their profiles and stories that were deleted for fanfiction policy violations would also contribute to these figures.
Now let's examine the volume of stories published across time (that have survived on the site).
End of explanation
"""
# examines top genres individually
genres_indiv = [item for sublist in df_online['genre'] for item in sublist]
genres_indiv = pd.Series(genres_indiv).value_counts()
# plots chart
(genres_indiv/np.sum(genres_indiv)).plot.bar()
plt.xticks(rotation=90)
plt.show()
"""
Explanation: We see a large jump starting in the 2010s, peaking around 2013, then a steady decline afterward. Unlike with profiles, you do not see the dips matching the Great Fanfiction Purge of 2002 and 2012.
The decline could be from a variety of factors. One could be competing fanfiction sites. Most notably, the nonprofit site, Archive of Our Own (AO3), started gaining traction due to its greater inclusivity of works and its tagging system that helps users to filter and search for works.
Another question to ask is whether the increasing popularity of fanfiction is fueled by particular fandoms. It is well known in the fanfiction community that fandoms like Star Trek paved the way. Harry Potter and Naruto also held a dominating presence in the 2000s. Later on, we will try to quantify how much each of these fandoms contributed to the volume of fanfiction produced.
Genres
Now let's look at the distribution of genres across the stories. Note that "General" includes stories that do not have a genre label.
End of explanation
"""
# creates contingency table
gen_pairs = df_online.loc[[len(row) > 1 for row in df_online.genre], 'genre']
gen1 = pd.Series([row[0][:3] for row in gen_pairs] + [row[1][:3] for row in gen_pairs])
gen2 = pd.Series([row[1][:3] for row in gen_pairs] + [row[0][:3] for row in gen_pairs])
cross = pd.crosstab(index=gen1, columns=gen2, colnames=[''])
del cross.index.name
# finds relative frequency
for col in cross.columns.values:
cross[col] = cross[col]/np.sum(cross[col])
# plots heatmap
f, ax = plt.subplots(figsize=(6,6))
cm = sns.color_palette("Blues")
ax = sns.heatmap(cross, cmap=cm, cbar=False, robust=True,
square=True, linewidths=0.1, linecolor='white')
plt.show()
"""
Explanation: Romance takes the lead! In fact, ~30% of the genre labels used are "Romance". In second and third place are Humor and Drama respectively.
The least popular genres appear to be Crime, Horror, and Mystery.
So far, nothing here deviates much from intuition. We'd expect derivative works to focus more on existing character relationships and/or the canonical world, and less on stand-alone plots and twists.
What about how the genres combine?
End of explanation
"""
# examines distribution of ratings
rated = df_online['rated'].value_counts()
rated.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
"""
Explanation: In terms of how genres cross, romance appears to pair with almost everything. Romance is particularly common with drama (the romantic drama) and humor (the rom-com). The only genre that shies away from romance is parody, which goes in hand with humor instead.
The second most crossed genre is adventure, which is often combined with fantasy, sci-fi, mystery, or suspense.
The third genre to note is angst, which is often combined with horror, poetry, or tragedy.
Ratings
The breakdown of how stories are rated is given below.
End of explanation
"""
# examines distribution of languages
top_language = df_online['language'].copy()
top_language[top_language != 'English'] = 'Non-English'
top_language = top_language.value_counts()
top_language.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
"""
Explanation: ~40% of stories are rated T, ~40% rated K or K+, and ~20% are rated M.
Language
Explanation: Stories on the site are written predominantly in English. The next most common languages are Spanish, French, Indonesian, and Portuguese. However, their shares are all small: Spanish accounts for only 5%, and the other languages for 3% or less.
End of explanation
"""
# examines distribution of media
media = df_online['media'].value_counts()
(media/np.sum(media)).plot.bar()
plt.xticks(rotation=90)
plt.show()
"""
Explanation: Media and fandoms
As of 2017, fanfiction.net has nine different media categories, plus crossovers. The breakdown of these is given below:
End of explanation
"""
# examines distribution of fandoms
fandom = df_online['fandom'].value_counts()
(fandom[:10]/np.sum(fandom)).plot.bar()
plt.xticks(rotation=90)
plt.show()
"""
Explanation: Anime/Manga is the most popular medium, taking up ~30% of all works. TV Shows and Books contribute ~20% each.
What about by fandom?
End of explanation
"""
df_online['top_fandom'] = df_online['fandom']
nottop = [row not in fandom[:10].index.values for row in df_online['fandom']]
df_online.loc[nottop, 'top_fandom'] = 'Other'
entry_fandom = pd.crosstab(df_online.pub_year, df_online.top_fandom)
entry_fandom = entry_fandom[np.append(fandom[:5].index.values, ['Other'])][:-1]
# plots chart
(entry_fandom/np.sum(entry)).plot.bar(stacked=True)
plt.axes().get_xaxis().set_label_text('')
plt.legend(title=None, frameon=False)
plt.show()
"""
Explanation: The most popular fandom is, unsurprisingly, Harry Potter. However, it still constitutes a much smaller portion of the fanfiction base than initially assumed, at only ~10%.
One question we asked earlier is what fandoms contributed to the increases in stories over time.
End of explanation
"""
# examines distribution of number of words
df_online['words1k'] = df_online['words']/1000
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
sns.kdeplot(df_online['words1k'], shade=True, bw=.5, legend=False, ax=ax1)
sns.kdeplot(df_online['words1k'], shade=True, bw=.5, legend=False, ax=ax2)
plt.xlim(0,100)
plt.show()
"""
Explanation: It would appear that back in the year 2000, Harry Potter constituted nearly half of the fanfiction published. However, the overall growth in fanfiction is due to many other fandoms jumping in, with no one particular fandom holding sway.
Of the top 5 fandoms, Harry Potter and Naruto prove to be the most persistent in holding their volumes per year. Twilight saw a giant spike in popularity in 2009 and 2010 but has faded since.
Word count, chapter length and completion status
Let's take a look at the distribution of word and chapter lengths.
End of explanation
"""
# examines distribution of number of chapters
df_online['chapters'] = df_online['chapters'].fillna(1)
df_online['chapters'].plot.hist(normed=True,
bins=np.arange(1, max(df_online.chapters)+1, 1))
plt.show()
"""
Explanation: The bulk of stories appear to be less than 50 thousand words, with a high proportion between 0-20 thousand words. In other words, we have a significant proportion of short stories and novelettes, and some novellas. Novels become rarer. Finally, there are a few "epics", ranging from 200 thousand to 600 thousand words.
The number of chapters per story, unsurprisingly, follows a similarly skewed distribution.
End of explanation
"""
# examines distribution of story status
status = df_online['status'].value_counts()
status.plot.pie(autopct='%.f', figsize=(5,5))
plt.show()
"""
Explanation: Stories with over 20 chapters become exceedingly rare.
How often are stories completed?
End of explanation
"""
complete = df_online.loc[df_online.status == 'Complete', 'chapters']
incomplete = df_online.loc[df_online.status == 'Incomplete', 'chapters']
plt.hist([complete, incomplete], normed=True, range=[1,10])
plt.show()
"""
Explanation: This is unexpected. It looks to be about an even split between complete and incomplete stories.
Let's see what types of stories are the completed ones.
End of explanation
"""
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
months = ['NA', 'January', 'February', 'March', 'April', 'May', 'June', 'July',
          'August', 'September', 'October', 'November', 'December']
# examines when stories first created
df_online['pub_month'] = [months[int(row[0])] for row in df_online['published']]
month = df_online['pub_month'].value_counts()
(month/np.sum(month)).plot.bar()
plt.xticks(rotation=90)
plt.axhline(y=0.0833, color='orange')
plt.show()
"""
Explanation: Oneshots explain the large proportion of completed stories.
Publication timing
Do authors publish more frequently on certain months or days?
End of explanation
"""
month_xs = chisquare(month)
"""
Explanation: It appears some months are more popular than others. September, October, and November are the least popular months. Given that the majority of the user base is from the United States, and presumably children and young adults, this is perhaps due to the timing of the academic calendar: school begins in the fall. Similarly, the three most popular months (December, July, and April) coincide with winter vacation, summer vacation, and spring break, respectively. This is all purely speculative.
End of explanation
"""
# examines when stories first created
dayofweek = [days[datetime.date(int(row[2]), int(row[0]), int(row[1])).weekday()]
for row in df_online['published']]
dayofweek = pd.Series(dayofweek).value_counts()
(dayofweek/np.sum(dayofweek)).plot.bar()
plt.xticks(rotation=90)
plt.axhline(y=0.143, color='orange')
plt.show()
"""
Explanation: One thing we can test is how likely it is that these differences are happenstance. After all, if you draw a bunch of stories at random, you might by chance get more stories published on certain months than others.
Using a chi-squared test, we found that, assuming the month really doesn't matter for publication, the probability of getting the distribution we are seeing is ~3%.
That's fairly low. So we have some evidence against the idea that the month doesn't matter for volume of publication.
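To make the test concrete, the chi-squared statistic itself is simple to compute by hand. Below is a minimal, dependency-free sketch against a uniform null hypothesis; the counts are made up for illustration, while the actual analysis applies scipy.stats.chisquare to the real monthly counts.

```python
# Chi-squared goodness-of-fit statistic against a uniform null hypothesis.
# The toy counts below are illustrative only, not the real monthly data.
def chi_squared_stat(observed):
    expected = sum(observed) / len(observed)  # uniform expectation per category
    return sum((o - expected) ** 2 / expected for o in observed)

counts = [30, 20, 10]                 # three toy categories, 60 observations
print(chi_squared_stat(counts))       # (100 + 0 + 100) / 20 = 10.0
```

scipy.stats.chisquare returns this same statistic together with a p-value from the chi-squared distribution with len(observed) - 1 degrees of freedom.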
End of explanation
"""
dayofweek_xs = chisquare(dayofweek)
"""
Explanation: As for days of the week, publications are least likely to happen on a Friday.
End of explanation
"""
# examines word/character count of titles
title_cc = [len(row) for row in df_online['title']]
title_wc = [len(row.split()) for row in df_online['title']]
pd.Series(title_cc).plot.hist(normed=True, bins=np.arange(0, max(title_cc), 1))
plt.show()
pd.Series(title_wc).plot.hist(normed=True, bins=np.arange(0, max(title_wc), 1))
plt.show()
"""
Explanation: And how likely is it that this discrepancy across days is random? Well, the probability that we would get the distribution we are seeing, assuming that the day of the week doesn't matter for publication, is only ~0.07%.
That's only a fraction of a percent. Once again, we have evidence against the idea that the day of the week doesn't matter for volume of publication.
Friday comes at the end of a long week; maybe people are eager to go out and hang out with friends, while reading and writing are reserved for the quieter, lazier Sundays. At least, that is this author's anecdotal take.
Titles and summaries
How long are titles and summaries? Is there a systematic way authors write them? Do some words appear more often than others? Here we explore some of those questions.
Let's start by examining character and word count, respectively, for titles.
End of explanation
"""
# examines word/character count of summaries
summary_cc = [len(row) for row in df_online['summary']]
summary_wc = [len(row.split()) for row in df_online['summary']]
pd.Series(summary_cc).plot.hist(normed=True, bins=np.arange(0, max(summary_cc), 1))
plt.show()
pd.Series(summary_wc).plot.hist(normed=True, bins=np.arange(0, max(summary_wc), 1))
plt.show()
"""
Explanation: The two distributions are almost identical in shape. It would appear titles typically have 2-3 words, or 15-20 characters.
Now let's look at summaries.
End of explanation
"""
title_eng_wf = [row.lower().translate(str.maketrans('', '', string.punctuation)).split()
for row in df_online.loc[df_online.language == 'English', 'title']]
title_eng_wf = [item for sublist in title_eng_wf for item in sublist]
title_eng_wf = pd.Series(title_eng_wf).value_counts()
print((title_eng_wf.loc[[row not in stop_word_list
for row in title_eng_wf.index.values]][:10]/np.sum(title_eng_wf)).to_string())
"""
Explanation: Again, similar shapes. We can see the vestige of the original 255-character limit for summaries. Overall, it would appear summary lengths are pretty well dispersed.
Examining English stories only, let's look at the top 10 words most commonly used in titles, excluding stop words such as "the", "is", or "are".
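The counting pipeline used here (lowercase, strip punctuation, split, drop stop words) can be sketched with the standard library alone. The stop-word list and titles below are tiny illustrative stand-ins for the full stop_word_list and title data used in the actual analysis.

```python
import string
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are"}  # tiny illustrative list

def top_words(titles, n=3):
    """Count words across titles, ignoring case, punctuation and stop words."""
    counts = Counter()
    for title in titles:
        cleaned = title.lower().translate(str.maketrans("", "", string.punctuation))
        counts.update(w for w in cleaned.split() if w not in STOP_WORDS)
    return counts.most_common(n)

titles = ["The Love of a Hero", "Love and War", "A New Day!"]
print(top_words(titles))  # 'love' comes out on top with a count of 2
```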
End of explanation
"""
summary_eng_wf = [row.lower().translate(str.maketrans('', '', string.punctuation)).split()
for row in df_online.loc[df_online.language == 'English', 'summary']]
summary_eng_wf = [item for sublist in summary_eng_wf for item in sublist]
summary_eng_wf = pd.Series(summary_eng_wf).value_counts()
print((summary_eng_wf.loc[[row not in stop_word_list
for row in summary_eng_wf.index.values]][:10]/np.sum(summary_eng_wf)).to_string())
"""
Explanation: It appears that "love" is the most popular word in story titles. In fact, it accounts for about 1% of all words! Then there are time indicator words like "time", "day", and "night". Interesting!
What about story summaries?
End of explanation
"""
title = 'New Love'
summary = '''Remember the first time in your life that you wrote a story based off some character you really liked?
Thinking no one would find your little oneshot? Well, guess who now knows about it... AU.'''
attributes = 'Not Harry Potter - Rated: T - English - Romance/Humor - Words: 8,290 - Published: Dec 22, 2013'
display(HTML('<a href>' + title + '</a><br>' + summary + '<br><span style="color:grey">' + attributes + '</body>'))
"""
Explanation: Once again, "love" is a popular word. Also recurring are "one", "new", "life", and "story".
You also start seeing tags like "oneshot". Other tags like "first story", "read and review", and "don't like, don't read" may also be contributing to some of the other words we see on the list. To test this thoroughly, we would need to do more natural language processing with n-grams, which we will reserve for another exercise.
Thoughts and conclusions
Here is a summary of what we have discovered thus far:
Only ~60% of stories survive on the site.
The volume of stories on the site increased at a steady pace until 2013 and has been in decline since.
Romance is by far the most popular genre, at a whopping ~30%.
The share of Mature-rated stories is smaller than expected, at only ~20%.
Most stories on the site are written in English, at ~90%.
The Harry Potter fandom still claims plurality.
The Twilight fandom had a large spike then went out just as quickly.
About half of stories are completed, but they are mostly oneshots!
Fewer stories are published in autumn.
Fewer stories are published on Fridays.
255 characters is not enough space for a summary for many authors!
And finally, for the heck of it, introducing our new fanfiction, about fanfiction:
End of explanation
"""
|
google/jax-md | notebooks/implicit_differentiation.ipynb | apache-2.0 | #@title Import & Util
!pip install -q git+https://www.github.com/google/jax-md
!pip install -q jaxopt
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)
from jax import random, jit, lax
from jax_md import space, energy, minimize, quantity
from jaxopt.implicit_diff import custom_root
f32 = jnp.float32
f64 = jnp.float64
# Energy minimization using a while loop.
def run_minimization_while(
energy_fn, R_init, shift, max_grad_thresh=1e-12, max_steps=1000000, **kwargs
):
init, apply = minimize.fire_descent(
jit(energy_fn), shift, dt_start=0.001, dt_max=0.005, **kwargs
)
apply = jit(apply)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def cond_fn(val):
state, i = val
return (get_maxgrad(state) > max_grad_thresh) & (i < max_steps)
@jit
def body_fn(val):
state, i = val
return apply(state), i + 1
state = init(R_init)
state, num_iterations = lax.while_loop(cond_fn, body_fn, (state, 0))
return state.position, get_maxgrad(state), num_iterations + 1
# Energy minimization using both a while loop and neighbor lists.
# We add the parameter forced_rebuilding to make sure
# that it is possible to construct new neighbor lists during the optimization.
def run_minimization_while_nl(
neighbor_fn,
energy_fn,
R_init,
shift,
forced_rebuilding=True,
max_grad_thresh=1e-12,
max_num_steps=1000000,
**kwargs
):
init_fn, apply_fn = minimize.fire_descent(
energy_fn, shift, dt_start=0.001, dt_max=0.005, **kwargs
)
apply_fn = jit(apply_fn)
nbrs = neighbor_fn.allocate(R_init)
state = init_fn(R_init, neighbor=nbrs)
@jit
def get_maxgrad(state):
return jnp.amax(jnp.abs(state.force))
@jit
def cond_fn(state, i):
return (get_maxgrad(state) > max_grad_thresh) & (i < max_num_steps)
@jit
def update_nbrs(R, nbrs):
return neighbor_fn.update(R, nbrs)
steps = 0
while cond_fn(state, steps):
nbrs = update_nbrs(state.position, nbrs)
new_state = apply_fn(state, neighbor=nbrs)
if forced_rebuilding and steps == 10:
print("Forced rebuilding of neighbor_list.")
nbrs = neighbor_fn.allocate(state.position)
if nbrs.did_buffer_overflow:
print("Rebuilding neighbor_list.")
nbrs = neighbor_fn.allocate(state.position)
else:
state = new_state
steps += 1
return state.position, nbrs, steps + 1
# jax.grad compatible version using a scan.
def run_minimization_scan(force_fn, R_init, shift, steps=5000, **kwargs):
init, apply = minimize.fire_descent(
jit(force_fn), shift, dt_start=0.001, dt_max=0.005, **kwargs
)
apply = jit(apply)
@jit
def scan_fn(state, i):
return apply(state), 0.0
state = init(R_init)
state, _ = lax.scan(scan_fn, state, jnp.arange(steps))
return state.position, jnp.amax(jnp.abs(force_fn(state.position)))
"""
Explanation: Implicit Differentiation
This cookbook was contributed by Maxi Lechner.
End of explanation
"""
N = 7
density = 0.3
dimension = 2
box_size = quantity.box_size_at_number_density(N, density, dimension)
displacement, shift = space.periodic(box_size)
key = random.PRNGKey(5)
key, split = random.split(key)
R_init = random.uniform(key, (N, dimension), maxval=box_size, dtype=f64)
sigma = jnp.full((N, N), 2.0)
alpha = jnp.full((N, N), 2.0)
param_dict = {"sigma": sigma, "alpha": alpha}
"""
Explanation: Meta Optimization
In this notebook we'll have another look at differentiating through an energy minimization routine. But this time we'll look at how a technique called implicit differentiation lets us do so much more efficiently than before. We will use the great jaxopt package for that purpose.
Let us first set up our system and see for ourselves what goes wrong when we aren't using implicit differentiation.
We'll work with a system of N soft spheres. We are interested in first computing the energy minimum of this system and then computing the gradient of the energy with respect to the parameters sigma and/or alpha, which we take to be matrices so that sigma and alpha can take on a different value for every pair of particles.
We'll start of with a very small and rather dilute system.
End of explanation
"""
def explicit_diff(params, R_init, displacement, num_steps):
energy_fn = energy.soft_sphere_pair(displacement, **params)
force_fn = quantity.force(energy_fn)
force_fn = jit(force_fn)
# We need to use a scan instead of a while loop in order to use jax.grad.
solver = lambda f, x: run_minimization_scan(f, x, shift, steps=num_steps)[0]
R_final = solver(force_fn, R_init)
return energy_fn(R_final), jnp.amax(jnp.abs(force_fn(R_final)))
(exp_e, exp_f), exp_g = jax.value_and_grad(explicit_diff, has_aux=True)(
param_dict, R_init, displacement, 19400
)
print("Energy : ", exp_e)
print("Max_grad_force: ", exp_f)
print("Gradient of the energy:")
print(exp_g["sigma"][0])
print(exp_g["alpha"][0])
"""
Explanation: Note that we are using a scan for the explicit function in order to use jax.grad. Here the number of steps is fixed, and we need to select a large enough number of steps to reach the energy minimum!
End of explanation
"""
# Increase the number of particles
N = 55
# and the density.
density = 0.5
dimension = 2
box_size = quantity.box_size_at_number_density(N, density, dimension)
displacement, shift = space.periodic(box_size)
key = random.PRNGKey(5)
key, split = random.split(key)
R_init = random.uniform(key, (N, dimension), maxval=box_size, dtype=f64)
sigma = jnp.full((N, N), 2.0)
alpha = jnp.full((N, N), 2.0)
param_dict = {"sigma": sigma, "alpha": alpha}
"""
Explanation: This being plain jax code we can easily compute gradients with respect to a whole dictionary of parameters in one go.
Now what's the problem here?\
The answer to that is quite simple and easily demonstrated by increasing the system size and/or by working with a system where we need to take more optimization steps to reach the energy minimum.
End of explanation
"""
# Do not run this cell locally! This cell reliably crashes my mac.
(exp_e_large, exp_f_large), exp_g_large = jax.value_and_grad(
explicit_diff, has_aux=True
)(param_dict, R_init, displacement, 163954)
print("Energy : ", exp_e_large)
print("Max_grad_force: ", exp_f_large)
print("Gradient of the energy:")
print(exp_g_large["sigma"][0])
print(exp_g_large["alpha"][0])
"""
Explanation: For this system we now have to take nearly 10 times more steps.
End of explanation
"""
# Let's go back to our small system in order
# to compare explicit and implicit differentiation.
N = 7
density = 0.3
dimension = 2
box_size = quantity.box_size_at_number_density(N, density, dimension)
displacement, shift = space.periodic(box_size)
key = random.PRNGKey(5)
key, split = random.split(key)
R_init = random.uniform(key, (N, dimension), maxval=box_size, dtype=f64)
sigma = jnp.full((N, N), 2.0)
alpha = jnp.full((N, N), 2.0)
param_dict = {"sigma": sigma, "alpha": alpha}
def implicit_diff(params, R_init, displacement):
energy_fn = energy.soft_sphere_pair(displacement, **params)
force_fn = jit(quantity.force(energy_fn))
# Wrap force_fn with a lax.stop_gradient to prevent a CustomVJPException.
no_grad_force_fn = jit(lambda x: lax.stop_gradient(force_fn(x)))
# Make the dependence on the variables we want to differentiate explicit.
explicit_force_fn = jit(lambda R, p: force_fn(R, **p))
def solver(params, x):
# params are unused
del params
# need to use no_grad_force_fn!
return run_minimization_while(no_grad_force_fn, x, shift)[0]
decorated_solver = custom_root(explicit_force_fn)(solver)
R_final = decorated_solver(None, R_init)
# Here we can just use our original energy_fn/force_fn.
return energy_fn(R_final), jnp.amax(jnp.abs(force_fn(R_final)))
(imp_e, imp_f), imp_g = jax.value_and_grad(implicit_diff, has_aux=True)(
param_dict, R_init, displacement
)
print("Energy : ", imp_e)
print("Max_grad_force: ", imp_f)
print("Gradient of the energy:")
print(imp_g["sigma"][0])
print(imp_g["alpha"][0])
# Implicit and explicit differentiation gives the same result.
print(jax.tree_map(jnp.allclose,exp_e,imp_e))
print(jax.tree_map(jnp.allclose,exp_f,imp_f))
print(jax.tree_map(jnp.allclose,exp_g,imp_g))
"""
Explanation: As you can see, our computation fails by running out of memory when trying to compute the gradient. The reason is that for reverse mode differentiation (e.g. jax.grad) the memory consumption grows linearly with the number of optimization steps we have to take, since reverse mode differentiation needs to store the whole forward pass in order to compute the gradient in the backward pass.
We could reduce the memory requirements by using a technique called gradient rematerialization/checkpointing for a corresponding increase in computation time. While this strategy works, there exists a better solution for our problem called implicit differentiation.
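To see why memory grows with the step count, here is a toy "tape" illustrating what reverse mode differentiation must do: the forward pass records one set of intermediates per step, and the backward pass later consumes them in reverse. This is a conceptual sketch only; it is not how jax actually stores residuals.

```python
def forward_with_tape(x, num_steps, step_fn):
    """Run num_steps of step_fn, storing every intermediate for the backward pass."""
    tape = []
    for _ in range(num_steps):
        tape.append(x)          # residual the backward pass will need later
        x = step_fn(x)
    return x, tape

# Toy "optimization step": gradient descent on f(x) = x**2.
step = lambda x: x - 0.1 * (2.0 * x)
x_final, tape = forward_with_tape(5.0, 1000, step)
print(len(tape))  # 1000 -- one stored intermediate per optimization step
```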
Implicit Differentiation
Implicit differentiation gets its name from the implicit function theorem. This theorem roughly states that when we want to differentiate through a root finding procedure, e.g. find $z$ such that $f(a,z) = 0$, it does not matter how we computed the solution. Instead, it is possible to directly differentiate through the solution $z^*$ of our root finding problem using the following formula. Here $\partial_i$ means we differentiate with respect to the $i$'th argument.
$$
\partial z^*(a) = -[\partial_1 f(a_0, z_0)]^{-1} \, \partial_0 f(a_0, z_0)
$$
This expression can be efficiently solved. If you are interested in a derivation and how one could implement this in jax then I can highly recommend chapter $2$ of the NeurIPS 2020 Deep Implicit Layers tutorial.
Alternatively, we can also phrase this as a fixed point equation $g(a,z) = z$. Since the force is $0$ at the minimum, it is a fixed point of gradient descent: after having reached the energy minimum $z^*$, any additional gradient descent step will always just return $z^*$.
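As a minimal numerical sketch of the formula above (independent of jax-md), take $f(a, z) = z^2 - a$, whose root is $z^*(a) = \sqrt{a}$. We can solve for the root with Newton's method and then apply the implicit function theorem by hand; here $\partial_0 f = -1$ and $\partial_1 f = 2z$.

```python
def f(a, z):
    return z * z - a          # root: z* = sqrt(a)

def newton_solve(a, z=1.0, steps=50):
    """Find z with f(a, z) = 0; how we got here is irrelevant for the gradient."""
    for _ in range(steps):
        z = z - f(a, z) / (2.0 * z)   # df/dz = 2z
    return z

a = 2.0
z_star = newton_solve(a)                    # ~ sqrt(2)
# Implicit function theorem: dz*/da = -(df/dz)^{-1} (df/da) = -(2 z*)^{-1} (-1)
dz_da = 1.0 / (2.0 * z_star)                # matches d/da sqrt(a) = 1/(2 sqrt(a))
print(z_star, dz_da)
```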
In order to use implicit differentiation we first need to rewrite our optimization problem as a root finding procedure. This is easily done since the force is $0$ at the energy minimum.
I'll first define a new function that uses implicit differentiation and then I'll highlight the differences.
End of explanation
"""
def implicit_diff(params, R_init, displacement):
energy_fn = energy.soft_sphere_pair(displacement, **params)
force_fn = jit(quantity.force(energy_fn))
# Wrap force_fn with a lax.stop_gradient to prevent a CustomVJPException.
no_grad_force_fn = jit(lambda x: lax.stop_gradient(force_fn(x)))
def solver(params, x):
# params are unused
del params
# need to use no_grad_force_fn!
return run_minimization_while(no_grad_force_fn, x, shift)[0]
decorated_solver = custom_root(force_fn)(solver)
R_final = decorated_solver(None, R_init)
# Here we can just use our original energy_fn/force_fn.
return energy_fn(R_final), jnp.amax(jnp.abs(force_fn(R_final)))
(imp_e, imp_f), imp_g = jax.value_and_grad(implicit_diff, has_aux=True)(
param_dict, R_init, displacement
)
print("Energy : ", imp_e)
print("Max_grad_force: ", imp_f)
print("Gradient of the energy:")
print(imp_g["sigma"][0])
print(imp_g["alpha"][0])
# Implicit and explicit differentiation gives the same result.
print(jax.tree_map(jnp.allclose,exp_e,imp_e))
print(jax.tree_map(jnp.allclose,exp_f,imp_f))
print(jax.tree_map(jnp.allclose,exp_g,imp_g))
"""
Explanation: Let us now look at the changes we had to make in order to use implicit differentiation.
We use jaxopt to define implicit gradients for our energy minimization routine. To this end we need to define two new force functions from our original force_fn. This is necessary for two separate reasons.
For one, jaxopt.implicit_diff.custom_root requires a function that takes the parameter that we want to differentiate as an explicit input. To this end we define a new explicit_force_fn as
python
explicit_force_fn = jit(lambda R, p: force_fn(R,**p))
Furthermore we cannot pass our original force_fn to our solver as this will cause a CustomVJPException. We can just wrap the output of our force_fn with lax.stop_gradient in order to fix this issue.
python
no_grad_force_fn = jit(lambda x: lax.stop_gradient(force_fn(x)))
Having done that, we can define our solver, which takes two parameters as its input: a dummy variable params, which we can just delete as we do not need it, and a variable x, which is our set of initial positions. It is necessary that we pass our newly defined no_grad_force_fn to our solver.
Now we can put it all together and define a decorated_solver using jaxopt in order to be able to efficiently differentiate through it:
python
decorated_solver = custom_root(explicit_force_fn)(solver)
Here we have to use our explicit_force_fn. It is also possible to use a different linear solver for the vjp computation using the solve argument of implicit_diff.custom_root. Jax comes with a handful of sparse linear solvers in jax.scipy.sparse.linalg.
A small simplification
We can slightly simplify our code, as constructing the explicit_force_fn is in fact unnecessary! While jax requires this explicit version, jaxopt is in fact able to construct it automatically. This should work for all energy functions in jax-md as long as they and the solver do not take a catch-all **kwargs parameter.
End of explanation
"""
def implicit_diff_nl(params, R_init, box_size):
neighbor_fn, energy_fn = energy.soft_sphere_neighbor_list(
displacement, box_size, **params
)
force_fn = jit(quantity.force(energy_fn))
# wrap force_fn with a lax.stop_gradient to prevent a memory leak
no_grad_force_fn = jit(lambda x, neighbor: lax.stop_gradient(force_fn(x, neighbor)))
def solver(params, x):
# params are unused
del params
# need to use no_grad_force_fn!
return run_minimization_while_nl(neighbor_fn, no_grad_force_fn, x, shift)
    # We need to use has_aux=True.
decorated_solver = custom_root(force_fn, has_aux=True)(solver)
R_final, nbrs, num_steps = decorated_solver(None, R_init)
# Here we can just use our original energy_fn/force_fn.
return (
energy_fn(R_final, neighbor=nbrs),
jnp.amax(jnp.abs(force_fn(R_final, neighbor=nbrs))),
)
out_nl = jax.value_and_grad(implicit_diff_nl, has_aux=True)(
param_dict, R_init, box_size
)
(imp_nl_e, imp_nl_f), imp_nl_g = out_nl
print("Energy : ", imp_nl_e)
print("Max_grad_force: ", imp_nl_f)
print("Gradient of the energy:")
print(imp_nl_g["sigma"][0])
print(imp_nl_g["alpha"][0])
# Using neighbor lists also gives the same results.
print(jax.tree_map(jnp.allclose,exp_e,imp_nl_e))
print(jax.tree_map(jnp.allclose,exp_f,imp_nl_f))
# We cannot directly compare the gradients because by using neighbor lists we
# silently assume that the input matrix is symmetric. Thus we compare the upper
# triangular part of exp_g to imp_nl_g. Due to this symmetry we also have to
# divide exp_g by 2 in order to prevent double counting.
exp_g_triu = jax.tree_map(jnp.triu,exp_g)
imp_nl_g_2 = jax.tree_map(lambda x: x/2, imp_nl_g)
print(jax.tree_map(jnp.allclose,exp_g_triu, imp_nl_g_2))
"""
Explanation: Adding neighbor lists and other auxiliary output to our solver
Until now we have only worked with solvers that return a single value, R_final. Now we'll see how we have to change our implicit_diff function so that our solver can also return a neighbor list nbrs and the number of steps num_steps needed to reach the minimum.
There is only one thing we have to change beyond the usual changes for neighbor lists: we simply add the has_aux=True keyword to implicit_diff.custom_root.
There's one more funny thing to note about the neighbor list version. Instead of getting a CustomVJPException when we forget to wrap our force_fn with lax.stop_gradient, we get no explicit error message; we get a memory leak instead, which drastically slows down the computation. So pay attention that you are passing the correct version of your force_fn to your solver!
End of explanation
"""
def run_implicit(D, key, N=128):
box_size = 4.5 * D
# box_size = lax.stop_gradient(box_size)
displacement, shift = space.periodic(box_size)
R_init = random.uniform(key, (N, 2), minval=0.0, maxval=box_size, dtype=f64)
energy_fn = jit(energy.soft_sphere_pair(displacement, sigma=D))
force_fn = jit(quantity.force(energy_fn))
# wrap force_fn with a lax.stop_gradient to prevent a CustomVJPException
no_grad_force_fn = jit(lambda x: lax.stop_gradient(force_fn(x)))
def solver(params, x):
# params are unused
del params
# need to use no_grad_force_fn!
return run_minimization_while(no_grad_force_fn, x, shift)[0]
decorated_solver = custom_root(force_fn)(solver)
R_final = decorated_solver(None, R_init)
# Here we can just use our original energy_fn/force_fn
return energy_fn(R_final)
key = random.PRNGKey(5)
jax.grad(run_implicit)(1.3, key, N = 32)
"""
Explanation: A cautionary tale
Jaxopt uses the custom derivatives machinery of jax to implement implicit differentiation, which I've found to be rather brittle compared to the rest of jax.
In order to demonstrate this, let us now make a seemingly trivial change to our code. We'll move the setup of our system into our run_implicit function and make the box_size of our system depend on the particle diameter D. When we now try to call grad on run_implicit, we get a NotImplementedError: Differentiation rule for 'custom_lin' not implemented.
End of explanation
"""
|
leriomaggio/python-in-a-notebook | 05 While Loops and User input.ipynb | mit | # Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
"""
Explanation: Loops, Iteration Schemas and Input
While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users.
<a name="top"></a>Contents
What is a while loop?
General syntax
Example
Exercises
Accepting user input
General syntax
Example
Exercises
Using while loops to keep your programs running
Exercises
Using while loops to make menus
Using while loops to process items in a list
Accidental Infinite loops
Exercises
Overall Challenges
The FOR (iteration) loop
The for loop statement is the most widely used iteration mechanism in Python.
Almost every structure in Python can be iterated (element by element) by a for loop
a list, a tuple, a dictionary, $\ldots$ (more details will follow)
Python also supports while loops, but for is the one you will see (and use) most of the time!
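As a tiny preview, here is a for loop that visits each element of a list in turn (the list of names is just a made-up example):

```python
names = ['guido', 'tim', 'jesse']
# The loop variable `name` takes each value in the list, one per pass.
for name in names:
    print(name.title())
```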
<a name='what'></a>What is a while loop?
A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing.
<a name='general_syntax'></a>General syntax
End of explanation
"""
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
"""
Explanation: Every while loop needs an initial condition that starts out true.
The while statement includes a condition to test.
All of the code in the loop will run as long as the condition remains true.
As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.
Any code that is defined after the loop will run at this point.
<a name='example'></a>Example
Here is a simple example, showing how a game will stay active as long as the player has enough power.
End of explanation
"""
# Ex 6.1 : Growing Strength
# put your code here
"""
Explanation: top
<a name='exercises_while'></a>Exercises
Growing Strength
Make a variable called strength, and set its initial value to 5.
Print a message reporting the player's strength.
Set up a while loop that runs until the player's strength increases to a value such as 10.
Inside the while loop, print a message that reports the player's current strength.
Inside the while loop, write a statement that increases the player's strength.
Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.
Bonus: Play around with different cutoff levels for the value of strength, and play around with different ways to increase the strength value within the while loop.
End of explanation
"""
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
"""
Explanation: top
<a name='input'></a>Accepting user input
Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the input() function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable.
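One detail worth knowing: in Python 3, input() always returns a string, so you must convert the value before doing arithmetic with it. A small sketch, with a hard-coded string standing in for a live prompt:

```python
reply = "5"  # stands in for: reply = input("Please enter a number: ")
number = int(reply)  # convert the string before doing any math
print(number + 1)
```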
<a name='general_user_input'></a>General syntax
The general case for accepting input looks something like this:
End of explanation
"""
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user.
<a name='example_user_input'></a>Example
In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
End of explanation
"""
# Ex 6.2 : Game Preferences
# put your code here
"""
Explanation: <a name='exercises_input'></a>Exercises
Game Preferences
Make a list that includes 3 or 4 games that you like to play.
Print a statement that tells the user what games you like.
Ask the user to tell you a game they like, and store the game in a variable such as new_game.
Add the user's game to your list.
Print a new statement that lists all of the games that we like to play (we means you and your user).
End of explanation
"""
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: top
<a name='keep_running'></a>Using while loops to keep your programs running
Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop.
Here is an example of how to let the user enter an arbitrary number of names.
End of explanation
"""
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
"""
Explanation: That worked, except we ended up with the name 'quit' in our list. We can use a simple if test to eliminate this bug:
End of explanation
"""
# Ex 6.3 : Many Games
# put your code here
"""
Explanation: This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working.
<a name='exercises_running_input'></a>Exercises
Many Games
Modify Game Preferences so your user can add as many games as they like.
End of explanation
"""
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
"""
Explanation: top
<a name='menus'></a>Using while loops to make menus
You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit.
Let's look at a simple example, and then analyze the code:
End of explanation
"""
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
"""
Explanation: Our programs are getting rich enough now, that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
End of explanation
"""
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
"""
Explanation: This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action.
top
<a name='processing_list'></a>Using while loops to process items in a list
In the section on Lists, you saw that we can pop() items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need.
Let's look at an example where we process a list of unconfirmed users.
End of explanation
"""
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
"""
Explanation: This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
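The difference between the two versions comes down to which end of the list pop works from. A quick sketch with a made-up list:

```python
waiting = ['ada', 'billy', 'clarence']
last = waiting.pop()    # removes and returns the last item
first = waiting.pop(0)  # removes and returns the first item
print(last, first, waiting)
```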
End of explanation
"""
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
1
1
1
1
1
...
"""
Explanation: This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding one character to our program!
top
<a name='infinite_loops'></a>Accidental Infinite loops
Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.
Take a look at the following example. Can you pick out why this loop will never stop?
End of explanation
"""
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
"""
Explanation: I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:
On most systems, Ctrl-C will interrupt the currently running program.
If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.
The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
End of explanation
"""
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
1
0
-1
-2
-3
...
"""
Explanation: You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made.
Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on.
Here is one more example of an accidental infinite loop:
End of explanation
"""
# Ex 6.4 : Marveling at Infinity
# put your code here
"""
Explanation: In this example, we accidentally started counting down. The value of current_number will always be less than 5, so the loop will run forever.
<a name='exercises_infinite_loops'></a>Exercises
Marveling at Infinity
Use one of the examples of a failed while loop to create an infinite loop.
Interrupt your output.
Marvel at the fact that if you had not interrupted your output, your computer would have kept doing what you told it to until it ran out of power, or memory, or until the universe went cold around it.
End of explanation
"""
# Overall Challenge: Gaussian Addition
# put your code here
"""
Explanation: top
<a name='overall_challenges'></a>Overall Challenges
Gaussian Addition
This challenge is inspired by a story about the mathematician Carl Friedrich Gauss. As the story goes, when young Gauss was in grade school, his teacher got mad at his class one day.
"I'll keep the lot of you busy for a while", the teacher said sternly to the group. "You are to add the numbers from 1 to 100, and you are not to say a word until you are done."
The teacher expected a good period of quiet time, but a moment later our mathematician-to-be raised his hand with the answer. "It's 5050!" Gauss had realized that if you list all the numbers from 1 to 100, you can always match the first and last numbers in the list and get a common answer:
1, 2, 3, ..., 98, 99, 100
1 + 100 = 101
2 + 99 = 101
3 + 98 = 101
Gauss realized there were exactly 50 pairs of numbers in the range 1 to 100, so he did a quick calculation: 50 * 101 = 5050.
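We can check Gauss' arithmetic directly in Python (this is just a sanity check, not a solution to the exercise below):

```python
numbers = list(range(1, 101))
pair_sum = numbers[0] + numbers[-1]  # 1 + 100 = 101
num_pairs = len(numbers) // 2        # 50 pairs
print(num_pairs * pair_sum)          # Gauss' shortcut
print(sum(numbers))                  # brute-force addition agrees
```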
Write a program that passes a list of numbers to a function.
The function should use a while loop to keep popping the first and last numbers from the list and calculate the sum of those two numbers.
The function should print out the current numbers that are being added, and print their partial sum.
The function should keep track of how many partial sums there are.
The function should then print out how many partial sums there were.
The function should perform Gauss' multiplication, and report the final answer.
Prove that your function works, by passing in the range 1-100, and verifying that you get 5050.
gauss_addition(list(range(1,101)))
Your function should work for any set of consecutive numbers, as long as that set has an even length.
Bonus: Modify your function so that it works for any set of consecutive numbers, whether that set has an even or odd length.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/69a53f341b5a9d09407d309924aa4d14/plot_source_power_spectrum.ipynb | bsd-3-clause | # Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
"""
Explanation: ======================================================
Compute source power spectral density (PSD) in a label
======================================================
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
End of explanation
"""
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# pick MEG and EOG channels, excluding bad channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label,
dB=True)
stc.save('psd_dSPM')
"""
Explanation: Set parameters
End of explanation
"""
plt.plot(1e3 * stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
"""
Explanation: View PSD of sources in label
End of explanation
"""
|
jpn--/larch | book/user-guide/data-fundamentals.ipynb | gpl-3.0 | import numpy as np
import pandas as pd
import xarray as xr
import sharrow as sh
import larch.numba as lx
"""
Explanation: (data-fundamentals)=
Data for Discrete Choice
End of explanation
"""
data_co = pd.read_csv("example-data/tiny_idco.csv", index_col="caseid")
data_co
"""
Explanation: Fundamental Data Formats
When working with discrete choice models in Larch, we will generally
receive data to input into the system in one of two basic formats: the case-only ("idco")
format or the case-alternative ("idca") format.
This are sometimes referred to as
IDCase (each record contains all the information for mode choice over
alternatives for a single trip) or IDCase-IDAlt (each record contains all the
information for a single alternative available to each decision maker so there is one
record for each alternative for each choice).
(idco)=
idco Format
In the idco case-only format, each record provides all the relevant information
about an individual choice, including the variables related to the decision maker
or the choice itself, as well as alternative-related variables for all available
alternatives, and a variable indicating which alternative was chosen. This style
of data has a variety of names in the choice modeling literature, including
"IDCase", "case-only", and "wide".
End of explanation
"""
data_ca = pd.read_csv("example-data/tiny_idca.csv")
data_ca
"""
Explanation: (idca)=
idca Format
In the idca case-alternative format, each record can include information on the variables
related to the decision maker or the choice itself, the attributes of that
particular alternative, and a choice variable that indicates whether the
alternative was or was not chosen. This style of data has a variety of names in the
choice modeling literature, including "IDCase-IDAlt", "case-alternative", and "tall".
End of explanation
"""
data_ca.set_index(['caseid', 'altid']).unstack()
"""
Explanation: (idce)=
sparse vs dense
The idca format actually has two technical variations, a sparse version and a
dense version. The table shown above is a sparse version, where any alternative that
is not available is simply missing from the data table. Thus, in caseid 2 above,
there are only 2 rows, not 3. By dropping these rows, this data storage is potentially
more efficient than the dense version. But, in cases where the number of missing alternatives
is manageably small (less than half of all the data, certainly) it can be much more computationally
efficient to simply store and work with the dense array.
In Larch, these two distinct sub-types of idca data are labeled so
that the dense version labeled as idca and the sparse version
labeled as idce.
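A small pandas sketch of the difference, using made-up numbers that mirror the tiny example above (caseid 2 has no Walk row in the sparse version, but gets a missing-value row in the dense version):

```python
import pandas as pd

sparse = pd.DataFrame(
    {"Time": [40, 30, 20, 35, 25]},
    index=pd.MultiIndex.from_tuples(
        [(1, "Bus"), (1, "Car"), (1, "Walk"), (2, "Bus"), (2, "Car")],
        names=["caseid", "altid"],
    ),
)
# The dense version carries a row for every case-alternative pair,
# so the unavailable (2, 'Walk') combination shows up as NaN.
dense = sparse.reindex(
    pd.MultiIndex.from_product(
        [[1, 2], ["Bus", "Car", "Walk"]], names=["caseid", "altid"]
    )
)
print(dense)
```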
Data Conversion
Converting between idca format data and idco format in Python can be super easy if the alternative
id's are stored appropriately in a two-level MultiIndex. In that case, we can simply stack or unstack the DataFrame, and change formats. This is typically more readily available when switching from idca to idco
formats, as the alterative id's typically appear in a column of the DataFrame that can be used for indexing.
End of explanation
"""
forced_ca = data_co.T.set_index(
pd.MultiIndex.from_tuples([
['Car', 'Income'],
['Car','Time'],
['Car','Cost'],
['Bus','Time'],
['Bus','Cost'],
['Walk','Time'],
['Car', 'Chosen'],
], names=('alt','var'))
).T.stack(0)
forced_ca[['Chosen', 'Income']] = forced_ca[['Chosen', 'Income']].groupby("caseid").transform(
lambda x: x.fillna(x.value_counts().index[0])
)
forced_ca['Chosen'] = (
forced_ca['Chosen'] == forced_ca.index.get_level_values('alt')
).astype(float)
forced_ca
"""
Explanation: Getting our original idco data into idca format is not so clean, as there's no analogous
set_columns method in pandas, and even if there were, the alternative codes are not typically
neatly arranged in a row of data. We can force it to work, but it's not pretty.
End of explanation
"""
dataset = lx.merge(
[
data_co[['Income', 'Chosen']].to_xarray(),
data_ca.set_index(['caseid', 'altid'])[['Time', 'Cost']].to_xarray(),
],
caseid='caseid',
alts='altid',
)
dataset
"""
Explanation: Practical Data Formating in Larch
The data formats described above are relevant when storing data in
a tabular (two-dimensional) format. This is quite common and generally
expected, especially for data exchange between most software tools,
but Larch doesn't require you to choose one or the other.
Instead, Larch uses a Dataset structure based
on xarray, to store and use a collection of relevant variables, and
each variable can be stored in either |idco| or |idca| format, as
appropriate.
End of explanation
"""
lx.Dataset.construct.from_idca(
data_ca.set_index(['caseid', 'altid']),
)
# TEST
t = _
from pytest import approx
assert all(t.altid == [1,2,3])
assert all(t.altnames == ['Bus', 'Car', 'Walk'])
assert t.Cost.where(t._avail_, 0).data == approx(np.array([
[ 100, 150, 0],
[ 100, 125, 0],
[ 75, 125, 0],
[ 150, 225, 0],
]))
assert t.Time.where(t._avail_, 0).data == approx(np.array([
[40, 30, 20],
[35, 25, 0],
[50, 40, 30],
[20, 15, 10],
]))
"""
Explanation: As we saw above, it's quite easy to move from idca to idco format,
and Larch can apply those transformations automatically when loading idca
data. In the example below, note that the Income variable has automatically
been collapsed to idco, while the other variables remain as idca.
End of explanation
"""
lx.Dataset.construct.from_idce(
data_ca.set_index(['caseid', 'altid']),
)
# TEST
z = _
assert z.Income.dims == ('caseid',)
assert z.Time.dims == ('_casealt_',)
assert z['_caseptr_'].shape == (5,)
assert all(z['_caseptr_'] == [0,3,5,8,11])
"""
Explanation: Loading data in sparse format is as easy as swapping out
from_idca for
from_idce. The resulting
dataset will have a similar collection of variables, but
each idca variable is stored in a one-dimensional array,
using a variant of the compressed sparse row data format.
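A sketch of how the case pointers recover one case's rows from the flat one-dimensional arrays; the numbers below are illustrative values matching the tiny example:

```python
import numpy as np

caseptr = np.array([0, 3, 5, 8, 11])  # start offset of each case's rows
time = np.array([40, 30, 20, 35, 25, 50, 40, 30, 20, 15, 10])
# The rows belonging to case i are the slice caseptr[i]:caseptr[i + 1];
# note the second case spans only two rows (one alternative is missing).
case0 = time[caseptr[0]:caseptr[1]]
print(case0)
```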
End of explanation
"""
choices = data_co['Chosen'].astype("category")
choices
"""
Explanation: Data Encoding
For the most part, data used in the utility functions of discrete choice models enters into the utility function as part of a linear-in-parameters function. That is, we have some "data" that expresses an attribute of some part of the transportation system as a number, we multiply that by some numerical parameter that will be estimated, and we sum up the total over all the data-times-parameter operations. This kind of structure is known as "linear algebra" and it's something computers can do super fast, as long as all the data and all the parameters are queued up in memory in the right formats. So, typically it is optimal to pre-compute the "data" part of the process into one large contiguous array of floating point values, regardless of whether the values otherwise seem to be binary or integers. Most tools, such as Larch, will do much of this work for you, so you don't need to worry about it too much.
There are two notable exceptions to this guideline:
choices: the data that represents the observed choices, which are inherently categorical
availability: data that represents the availability of each choice, which is inherently boolean
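The linear-in-parameters computation described above boils down to a matrix-vector product. A sketch with made-up attribute values and hypothetical coefficients:

```python
import numpy as np

# Rows are alternatives (Car, Bus); columns are attributes (Time, Cost).
data = np.array([[30.0, 150.0],
                 [40.0, 100.0]])
params = np.array([-0.1, -0.01])  # hypothetical estimated coefficients
utility = data @ params           # one total utility per alternative
print(utility)
```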
Categorical Encoding
When we are looking at discrete choices, it is natural to employ a categorical data type for at least the "choice" data itself, if not for other columns as well. Pandas can convert columns to categorical data simply by assigning the type "category".
End of explanation
"""
choices.cat.codes
"""
Explanation: Once we have categorical data, if we like we can work with the underlying code values instead of the original raw data.
End of explanation
"""
choices.cat.categories
"""
Explanation: The cat.categories attribute contains the array of values matching each of the code.
End of explanation
"""
choices1 = data_co['Chosen'].astype(pd.CategoricalDtype(['_','Car','Bus','Walk']))
choices1
choices1.cat.codes
"""
Explanation: When using astype("category") there's no control over the ordering of the categories. If we want
to control the apparent order (e.g. we already have codes defined elsewhere such that Car is 1, Bus is 2, and walk is 3) then we can explicitly set the category value positions using pd.CategoricalDtype instead of "category".
Note that the cat.codes numbers used internally by categoricals start with zero as standard in Python,
so if you want codes to start with 1 you need to include a dummy placeholder for zero.
End of explanation
"""
pd.CategoricalDtype(['NoCars','1Car','2Cars','3+Cars'], ordered=True)
"""
Explanation: To be clear, by asserting the placement ordering of alternatives like this, we are not simultaneously asserting that the alternatives are ordinal. Put another way, we are forcing Car to be coded as 1 and Bus to be coded as 2, but we are not saying that Car is less than Bus. Pandas categoricals can express ordinal relationships, by adding ordered=True to the CategoricalDtype.
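With ordered=True, comparisons respect the declared category order rather than alphabetical order. A small sketch:

```python
import pandas as pd

cars = pd.Series(['1Car', 'NoCars', '2Cars']).astype(
    pd.CategoricalDtype(['NoCars', '1Car', '2Cars', '3+Cars'], ordered=True)
)
# 'NoCars' is the smallest category, so only that entry fails the test.
print((cars > 'NoCars').tolist())
```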
End of explanation
"""
pd.get_dummies(choices)
"""
Explanation: One Hot Encoding
One-hot encoding, also known as dummy variables, is the creation of a separate binary-valued column for every categorical value. We can convert a categorical data column into a set of one-hot encoded columns using the get_dummies function.
End of explanation
"""
pd.get_dummies(data_co['Chosen'])
"""
Explanation: It's not required to have first converted the data to a categorical data type.
End of explanation
"""
dataset['Chosen_ca'] = lx.DataArray(
pd.get_dummies(data_co['Chosen']).rename_axis(columns="altid")
)
dataset
"""
Explanation: Encoding with xarray
The xarray library doesn't use formal "categorical" datatypes, but we can still use
the get_dummies function to explode choice and availability data as needed.
End of explanation
"""
|
Cyb3rWard0g/ThreatHunter-Playbook | docs/notebooks/windows/03_persistence/WIN-190810170510.ipynb | gpl-3.0 | from openhunt.mordorutils import *
spark = get_spark()
"""
Explanation: WMI Eventing
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI eventing for persistence in my environment.
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.
An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, Microsoft’s implementation of these standards can be summarized as follows:
Managed Components: managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
Offensive Tradecraft
From an offensive perspective WMI has the ability to trigger off nearly any conceivable event, making it a good technique for persistence.
Three requirements
* Filter – An action to trigger off of
* Consumer – An action to take upon triggering the filter
* Binding – Registers a FilterConsumer
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/03_persistence/SDWIN-190518184306.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip |
Analytics
Initialize Analytics Engine
End of explanation
"""
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
"""
Explanation: Download & Process Mordor Dataset
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, EventNamespace, Name, Query
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 19
'''
)
df.show(10,False)
"""
Explanation: Analytic I
Look for WMI event filters registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi filter | 19 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Name, Type, Destination
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 20
'''
)
df.show(10,False)
"""
Explanation: Analytic II
Look for WMI event consumers registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Operation, Consumer, Filter
FROM mordorTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 21
'''
)
df.show(10,False)
"""
Explanation: Analytic III
Look for WMI consumers binding to filters
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi subscription | 21 |
End of explanation
"""
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM mordorTable
WHERE Channel = "Microsoft-Windows-WMI-Activity/Operational"
AND EventID = 5861
'''
)
df.show(10,False)
"""
Explanation: Analytic IV
Look for events related to the registration of FilterToConsumerBinding
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 |
End of explanation
"""
|
Luke035/dlnd-lessons | into-to-tflearn/TFLearn_Sentiment_Analysis_Solution.ipynb | mit |
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
"""
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
"""
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
"""
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
"""
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
"""
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
"""
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
"""
print(vocab[-1], ': ', total_counts[vocab[-1]])
"""
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
"""
word2idx = {word: i for i, word in enumerate(vocab)}
"""
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
"""
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
"""
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since not all words are in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default return value for missing keys. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
"""
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
"""
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
"""
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
"""
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
"""
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9  # note: despite the name, this is the fraction of records used for training
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
"""
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
"""
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated; in this example, categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
"""
model = build_model()
"""
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
"""
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
"""
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
"""
Explanation: Try out your own text!
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | blogs/rl-on-gcp/DQN_Breakout/RL_on_GCP.ipynb | apache-2.0 |
%%bash
# Install packages to test model locally.
apt-get update
apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig libffi-dev
pip install gym
pip install gym[atari]
pip install opencv-python
apt update && apt install -y libsm6 libxext6
apt-get install -y libxrender-dev
pip install keras
"""
Explanation: Reinforcement Learning on Google Cloud
This was run from Datalab. You can do this from the cloud using datalab create ... Or here are the steps to get Datalab running locally:
sudo docker run -it -p "127.0.0.1:8081:8080" -v "${HOME}:/content" -e "PROJECT_ID=[YOUR_PROJECT_ID]" gcr.io/cloud-datalab/datalab:local
Click the user image at the top right of Datalab and log in to your GCP account. This gives you authentication for everything :)
End of explanation
"""
%%bash
rm -r rl_on_gcp/my_model
gcloud ml-engine local train \
--module-name=trainer.trainer \
--package-path=${PWD}/rl_on_gcp/trainer \
--\
--steps=100000\
--start_train=100\
--buffer_size=100\
--save_model=True\
--model_dir='my_model'
"""
Explanation: First run locally to make sure everything is working.
End of explanation
"""
%%bash
JOBNAME=rl_breakout_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1'
BUCKET='dqn-breakout'
gcloud ml-engine jobs submit training $JOBNAME \
--package-path=$PWD/rl_on_gcp/trainer \
--module-name=trainer.trainer \
--region=$REGION \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_TPU\
--runtime-version=1.9 \
--\
--steps=5000000\
--start_train=10000\
--buffer_size=1000000\
--save_model=True\
--model_dir='gs://dqn-breakout/models/'
"""
Explanation: Run on ML-Engine
End of explanation
"""
%%writefile hyperparam.yaml
trainingInput:
scaleTier: BASIC_GPU
hyperparameters:
maxTrials: 40
maxParallelTrials: 5
enableTrialEarlyStopping: False
goal: MAXIMIZE
hyperparameterMetricTag: reward
params:
- parameterName: update_target
type: INTEGER
minValue: 500
maxValue: 5000
scaleType: UNIT_LOG_SCALE
- parameterName: init_eta
type: DOUBLE
minValue: 0.8
maxValue: 0.95
scaleType: UNIT_LOG_SCALE
- parameterName: learning_rate
type: DOUBLE
minValue: 0.00001
maxValue: 0.001
scaleType: UNIT_LOG_SCALE
- parameterName: batch_size
type: DISCRETE
discreteValues:
- 4
- 16
- 32
- 64
- 128
- 256
- 512
%%bash
JOBNAME=rl_breakout_hp_$(date -u +%y%m%d_%H%M%S)
REGION='us-central1'
BUCKET='dqn-breakout'
gcloud ml-engine jobs submit training $JOBNAME \
--package-path=$PWD/rl_on_gcp/trainer \
--module-name=trainer.trainer \
--region=$REGION \
--staging-bucket=gs://$BUCKET \
--config=hyperparam.yaml \
--runtime-version=1.10 \
--\
--steps=100000\
--start_train=10000\
--buffer_size=10000\
--model_dir='gs://dqn-breakout/models/hp/'
"""
Explanation: TODO: why doesn't it use GPU?
From logs using BASIC_GPU:
master-replica-0 | Processes: GPU Memory | I master-replica-0
master-replica-0 | GPU PID Type Process name Usage | I master-replica-0
master-replica-0 |=============================================================================| I master-replica-0
master-replica-0 | No running processes found | I master-replica-0
Hyperparameter tuning
End of explanation
"""
from google.datalab.ml import TensorBoard as tb
tb.start('gs://crawles-sandbox/rl_on_gcp/hp/4/')
!gsutil ls gs://crawles-sandbox/rl_on_gcp/hp/4/
"""
Explanation: Launch tensorboard
Using tensorboard --logdir='gs://dqn-breakout/models/' or within Datalab:
End of explanation
"""
|
opesci/devito | examples/userapi/05_conditional_dimension.ipynb | mit |
from devito import Dimension, Function, Grid
import numpy as np
# We define a 10x10 grid, dimensions are x, y
shape = (10, 10)
grid = Grid(shape = shape)
x, y = grid.dimensions
# Define function 𝑓. We will initialize f's data with ones on its diagonal.
f = Function(name='f', grid=grid)
f.data[:] = np.eye(10)
f.data
"""
Explanation: ConditionalDimension tutorial
This tutorial explains how to create equations that only get executed under given conditions.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
from devito import Eq, Operator
op0 = Operator(Eq(f, f + 1))
op0.apply()
f.data
"""
Explanation: We begin by constructing an Operator that increases the values of f, a Function defined on a two-dimensional Grid, by one.
End of explanation
"""
#print(op0.ccode) # Print the generated code
"""
Explanation: Every value has been updated by one. You should be able to see twos on the diagonal and ones everywhere else.
End of explanation
"""
from devito import ConditionalDimension
print(ConditionalDimension.__doc__)
"""
Explanation: Devito API for relationals
In order to construct a ConditionalDimension, one needs to get familiar with relationals. The Devito API for relationals currently supports the following classes of relation, which inherit from their SymPy counterparts:
Le (less-than or equal)
Lt (less-than)
Ge (greater-than or equal)
Gt (greater-than)
Ne (not equal)
Relationals are used to define a condition and are passed as an argument to ConditionalDimension at construction time.
Devito API for ConditionalDimension
A ConditionalDimension defines a non-convex iteration sub-space derived from a parent Dimension, implemented by the compiler generating conditional "if-then" code within the parent Dimension's iteration space.
A ConditionalDimension is used in two typical cases:
Use case A:
To constrain the execution of loop iterations to make sure certain conditions are honoured
Use case B:
To subsample a Function
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
from devito import Gt
f.data[:] = np.eye(10)
# Define the condition f(x, y) > 0
condition = Gt(f, 0)
# Construct a ConditionalDimension
ci = ConditionalDimension(name='ci', parent=y, condition=condition)
op1 = Operator(Eq(f, f + 1))
op1.apply()
f.data
# print(op1.ccode) # Uncomment to view code
"""
Explanation: Use case A (honour a condition)
Now it's time to show a more descriptive example. We define a conditional that increments by one all values of a Function that are larger than zero. We name our ConditionalDimension ci.
In this example we want to update again by one all the elements in f that are larger than zero. Before updating the elements we reinitialize our data to the eye function.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
f.data[:] = np.eye(10)
op2 = Operator(Eq(f, f + 1, implicit_dims=ci))
print(op2.ccode)
op2.apply()
assert (np.count_nonzero(f.data - np.diag(np.diagonal(f.data)))==0)
assert (np.count_nonzero(f.data) == 10)
f.data
"""
Explanation: We've constructed ci, but it's still unused, so op1 is still identical to op0. We need to pass ci as an "implicit Dimension" to the equation that we want to be conditionally executed as shown in the next cell.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
from sympy import And
from devito import Ne, Lt
f.data[:] = np.eye(10)
g = Function(name='g', grid=grid)
g.data[:] = np.eye(10)
ci = ConditionalDimension(name='ci', parent=y, condition=And(Ne(g, 0), Lt(y, 5)))
op3 = Operator(Eq(f, f + g, implicit_dims=ci))
op3.apply()
print(op3.ccode)
assert (np.count_nonzero(f.data - np.diag(np.diagonal(f.data)))==0)
assert (np.count_nonzero(f.data) == 10)
assert np.all(f.data[np.nonzero(f.data[:5,:5])] == 2)
assert np.all(f.data[5:,5:] == np.eye(5))
f.data
"""
Explanation: The generated code is as expected and only the elements that were greater than zero were incremented by one.
Let's now create a new Function g, initialized with ones along its diagonal.
We want to add f(x,y) to g(x,y) only if (i) g != 0 and (ii) y < 5 (i.e., over the first five columns). To join the two conditions we can use sympy.And.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
h = Function(name='h', shape=grid.shape, dimensions=(x, ci))
op4 = Operator(Eq(h, h + g))
op4.apply()
print(op4.ccode)
assert (np.count_nonzero(h.data) == 5)
h.data
"""
Explanation: You can see that f has been updated only for the first five columns with the f+g expression.
A ConditionalDimension can also be used at Function construction time. Let's use ci from the previous cell to explicitly construct a Function h. You will notice that in this case there is no need to pass implicit_dims to the Eq as the ConditionalDimension ci is now a dimension in h. h still has the size of the full domain, not the size of the domain that satisfies the condition.
End of explanation
"""
#NBVAL_IGNORE_OUTPUT
size, factor = 16, 4
i = Dimension(name='i')
ci = ConditionalDimension(name='ci', parent=i, factor=factor)
g = Function(name='g', shape=(size,), dimensions=(i,))
# Initialize g
g.data[:,]= list(range(size))
f = Function(name='f', shape=(int(size/factor),), dimensions=(ci,))
op5 = Operator([Eq(f, g)])
print(op5.ccode)
op5.apply()
assert np.all(f.data[:] == g.data[0:-1:4])
print("\n Data in g \n", g.data)
print("\n Subsampled data in f \n", f.data)
"""
Explanation: Use case B (Function subsampling)
ConditionalDimensions can also be used to implement Function subsampling. In the following example, an Operator subsamples Function g and saves its content into f every factor=4 iterations.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/inm/cmip6/models/sandbox-1/seaice.ipynb | gpl-3.0 |
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'seaice')
"""
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: INM
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:05
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and any possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
On which grid is sea ice horizontally discretised?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model thermodynamic component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step, in seconds, of the sea ice model dynamic component?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
"""
|
karlnapf/shogun | doc/ipython-notebooks/evaluation/xval_modelselection.ipynb | bsd-3-clause |
%pylab inline
%matplotlib inline
# include all Shogun classes
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from shogun import *
import shogun as sg
# generate some ultra easy training data
gray()
n=20
title('Toy data for binary classification')
X=hstack((randn(2,n), randn(2,n)+1))
Y=hstack((-ones(n), ones(n)))
_=scatter(X[0], X[1], c=Y, s=100)
p1 = Rectangle((0, 0), 1, 1, fc="w")
p2 = Rectangle((0, 0), 1, 1, fc="k")
legend((p1, p2), ["Class 1", "Class 2"], loc=2)
# training data in Shogun representation
feats=features(X)
labels=BinaryLabels(Y)
"""
Explanation: Evaluation, Cross-Validation, and Model Selection
By Heiko Strathmann - heiko.strathmann@gmail.com - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the model selection framework of his Google summer of code 2011 project | Saurabh Mahindre - github.com/Saurabh7 as a part of Google Summer of Code 2014 project mentored by - Heiko Strathmann
This notebook illustrates the evaluation of prediction algorithms in Shogun using <a href="http://en.wikipedia.org/wiki/Cross-validation_(statistics)">cross-validation</a>, and selecting their parameters using <a href="http://en.wikipedia.org/wiki/Hyperparameter_optimization">grid-search</a>. We demonstrate this for a toy example on <a href="http://en.wikipedia.org/wiki/Binary_classification">Binary Classification</a> using <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machines</a> and also a regression problem on a real world dataset.
General Idea
Splitting Strategies
K-fold cross-validation
Stratified cross-validation
Example: Binary classification
Example: Regression
Model Selection: Grid Search
General Idea
Cross validation aims to estimate an algorithm's performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when being applied to new data, that it was not trained on. This is important in order to compare the performance different algorithms on the same target. Most crucial is the point that the data that was used for running/training the algorithm is not used for testing. Different algorithms here also can mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as comparing different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.
Evaluating an algorithm's performance on training data should be avoided since the learner may adjust to very specific random features of the training data which are not very important to the general relation. This is called overfitting. Maximising performance on the training examples usually results in algorithms explaining the noise in data (rather than actual patterns), which leads to bad performance on unseen data. This is one of the reasons behind splitting the data and using different splits for training and testing, which can be done using cross-validation.
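The basic hold-out idea behind this can be sketched in plain NumPy (an illustration of disjoint train/test index sets only, not Shogun's API — the function name and fraction are chosen for this example):

```python
import numpy as np

def holdout_split(n, test_fraction=0.25, seed=0):
    """Split indices 0..n-1 into disjoint train/test sets --
    the basic idea that cross-validation builds upon."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(n)          # shuffle so the split is random
    n_test = int(n * test_fraction)
    return perm[n_test:], perm[:n_test]

train_idx, test_idx = holdout_split(40)
```

An algorithm is then trained only on `train_idx` and evaluated only on `test_idx`, so the performance estimate is not biased by the training samples.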
Let us generate some toy data for binary classification to try cross validation on.
End of explanation
"""
k=5
normal_split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
"""
Explanation: Types of splitting strategies
As said earlier, cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. The base class for them is CSplittingStrategy.
K-fold cross-validation
Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k \leq n$ disjoint partitions $X_i\subseteq X$ such that $X_1 \cup X_2 \cup \dots \cup X_k = X$ and $X_i\cap X_j=\emptyset$ for all $i\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun supports multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See [1] for a nice review on cross-validation using different performance measures.
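The partitioning above can be sketched in plain NumPy (an illustration only — in Shogun, CrossValidationSplitting produces these subsets for you):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Partition indices 0..n-1 into k disjoint folds of (roughly) equal size."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(n)          # random order, so folds are random subsets
    return [perm[i::k] for i in range(k)]   # every k-th index goes to fold i

folds = kfold_indices(40, 5)
# training on folds 1..4 and testing on fold 0, then rotating, gives 5 estimates
```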
End of explanation
"""
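The partitioning above can be sketched in plain Python. This is a minimal illustration of the idea, not Shogun's implementation; `score_fn` is a hypothetical stand-in for training and evaluating a learner:

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_scores(score_fn, n, k):
    """Train on k-1 folds, test on the held-out fold, k times."""
    folds = kfold_indices(n, k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for g, fold in enumerate(folds) if g != i for j in fold]
        scores.append(score_fn(train_idx, test_idx))
    return scores
```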
stratified_split = sg.splitting_strategy('StratifiedCrossValidationSplitting', labels=labels, num_subsets=k)
"""
Explanation: Stratified cross-validation
On classification data, the best choice is stratified cross-validation. This divides the data in such a way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes. In Shogun this is implemented by the CStratifiedCrossValidationSplitting class.
End of explanation
"""
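The idea behind stratification can be sketched as follows, a toy version rather than the Shogun class: distribute each class's indices round-robin over the folds, so every fold keeps roughly the original label fractions.

```python
import random
from collections import defaultdict

def stratified_folds(labels, k, seed=0):
    """Assign sample indices to k folds while preserving label fractions."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    rng = random.Random(seed)
    for idx in by_class.values():
        rng.shuffle(idx)
        for j, i in enumerate(idx):     # round-robin within each class
            folds[j % k].append(i)
    return folds

demo_folds = stratified_folds([0] * 20 + [1] * 10, k=5)  # each fold: 4 zeros, 2 ones
```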
split_strategies=[stratified_split, normal_split]
#code to visualize splitting
def get_folds(split, num):
split.build_subsets()
x=[]
y=[]
lab=[]
for j in range(num):
indices=split.generate_subset_indices(j)
x_=[]
y_=[]
lab_=[]
for i in range(len(indices)):
x_.append(X[0][indices[i]])
y_.append(X[1][indices[i]])
lab_.append(Y[indices[i]])
x.append(x_)
y.append(y_)
lab.append(lab_)
return x, y, lab
def plot_folds(split_strategies, num):
for i in range(len(split_strategies)):
x, y, lab=get_folds(split_strategies[i], num)
figure(figsize=(18,4))
gray()
suptitle(split_strategies[i].get_name(), fontsize=12)
for j in range(0, num):
subplot(1, num, (j+1), title='Fold %s' %(j+1))
scatter(x[j], y[j], c=lab[j], s=100)
_=plot_folds(split_strategies, 4)
"""
Explanation: Leave One Out cross-validation
Leave-one-out cross-validation holds out one sample as the validation set. It is thus a special case of k-fold cross-validation with $k=n$, where $n$ is the number of samples. It is implemented in the LOOCrossValidationSplitting class.
Let us visualize the generated folds on the toy data.
End of explanation
"""
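Leave-one-out splitting is simple enough to write down directly; a sketch of the $k=n$ special case:

```python
def loo_splits(n):
    """Yield (train_indices, test_index) pairs for leave-one-out CV."""
    for i in range(n):
        yield [j for j in range(n) if j != i], i
```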
# define SVM with a small rbf kernel (always normalise the kernel!)
C=1
kernel = sg.kernel("GaussianKernel", log_width=np.log(0.001))
kernel.init(feats, feats)
kernel.set_normalizer(SqrtDiagKernelNormalizer())
classifier = sg.machine('LibSVM', C1=C, C2=C, kernel=kernel, labels=labels)
# train
_=classifier.train()
"""
Explanation: Stratified splitting ensures that each fold has almost the same number of samples from each class. This is not the case with normal splitting, which usually leads to imbalanced folds.
Toy example: Binary Support Vector Classification
Following the example from above, we will tune the performance of an SVM on the binary classification problem. We will
demonstrate how to evaluate a loss function or metric on a given algorithm
then learn how to estimate this metric for the algorithm performing on unseen data
and finally use those techniques to tune the parameters to obtain the best possible results.
The involved methods are
LibSVM as the binary classification algorithms
the area under the ROC curve (AUC) as performance metric
three different kernels to compare
End of explanation
"""
# instanciate a number of Shogun performance measures
metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]
for metric in metrics:
print(metric.get_name(), metric.evaluate(classifier.apply(feats), labels))
"""
Explanation: OK, we have now performed classification on the training data. How well did this work? We can easily compute many different performance measures.
End of explanation
"""
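The most common binary performance measures all reduce to counts of true/false positives and negatives; a small self-contained sketch (not Shogun's evaluation classes):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, error rate, precision and recall from 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    n = len(y_true)
    return {'accuracy': (tp + tn) / n,
            'error_rate': (fp + fn) / n,
            'precision': tp / (tp + fp),
            'recall': tp / (tp + fn)}

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])  # accuracy + error_rate == 1
```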
metric = sg.evaluation('AccuracyMeasure')
cross = sg.machine_evaluation('CrossValidation', machine=classifier, features=feats, labels=labels,
splitting_strategy=stratified_split, evaluation_criterion=metric)
# perform the cross-validation, note that this call involved a lot of computation
result=cross.evaluate()
# this class contains a field "mean" which contain the mean performance metric
print("Testing", metric.get_name(), result.get('mean'))
"""
Explanation: Note how, for example, the error rate is 1 - accuracy. All of these numbers represent the training error, i.e. the ability of the classifier to explain the given data.
Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! Good performance on the training data alone does not mean anything. A simple look-up table can produce zero error on training data. What we want is for our method to generalise from the input data so that it performs well on unseen data. We will now use cross-validation to estimate the performance on such data.
We will use CStratifiedCrossValidationSplitting, which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class CCrossValidation, which performs the estimation using the desired splitting strategy. The latter class can take any algorithm that is implemented against the CMachine interface.
End of explanation
"""
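The look-up-table argument can be made concrete with a toy example (self-contained, not using Shogun): memorising random labels gives zero training error but only chance-level accuracy on fresh data.

```python
import random

_rng = random.Random(0)
# random features, coin-flip labels: there is nothing to learn here
toy_train = [(_rng.random(), _rng.choice([0, 1])) for _ in range(200)]
toy_test = [(_rng.random(), _rng.choice([0, 1])) for _ in range(200)]

lookup = dict(toy_train)                 # the "model": memorise everything
train_ys = [y for _, y in toy_train]
majority = max(set(train_ys), key=train_ys.count)

def lookup_predict(x):
    return lookup.get(x, majority)       # unseen inputs fall back to the majority class

train_acc = sum(lookup_predict(x) == y for x, y in toy_train) / len(toy_train)
test_acc = sum(lookup_predict(x) == y for x, y in toy_test) / len(toy_test)
```

`train_acc` is exactly 1.0, while `test_acc` hovers around 0.5, i.e. random guessing.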
print("Testing", metric.get_name(), [cross.evaluate().get('mean') for _ in range(10)])
"""
Explanation: Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!
Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results.
End of explanation
"""
# 25 runs and 95% confidence intervals
cross.put('num_runs', 25)
# perform x-validation (now even more expensive)
cross.evaluate()
result=cross.evaluate()
print("Testing cross-validation mean %.2f " \
% (result.get('mean')))
"""
Explanation: It is better to average over a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence level.
End of explanation
"""
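Averaging runs and forming a confidence interval can be done directly from the per-run estimates; a sketch with hypothetical accuracy values and a normal approximation:

```python
import statistics

runs = [0.52, 0.55, 0.49, 0.53, 0.51, 0.54, 0.50, 0.52]  # hypothetical per-run accuracies
run_mean = statistics.mean(runs)
sem = statistics.stdev(runs) / len(runs) ** 0.5   # standard error of the mean
ci95 = (run_mean - 1.96 * sem, run_mean + 1.96 * sem)  # ~95% confidence interval
```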
widths=2**linspace(-5,25,10)
results=zeros(len(widths))
for i in range(len(results)):
kernel.put('log_width', np.log(widths[i]))
result= cross.evaluate()
results[i]=result.get('mean')
plot(log2(widths), results, 'blue')
xlabel("log2 Kernel width")
ylabel(metric.get_name())
_=title("Accuracy for different kernel widths")
print("Best Gaussian kernel width %.2f" % widths[results.argmax()], "gives", results.max())
# compare this with a linear kernel
classifier.put('kernel', sg.kernel('LinearKernel'))
lin_k = cross.evaluate()
plot([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.get('mean'),lin_k.get('mean')], 'r')
# please excuse this horrible code :)
print("Linear kernel gives", lin_k.get('mean'))
_=legend(["Gaussian", "Linear"], loc="lower center")
"""
Explanation: Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare different kernels.
End of explanation
"""
feats=features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/fm_housing.dat')))
labels=RegressionLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/housing/housing_label.dat')))
preproc = sg.transformer('RescaleFeatures')
preproc.fit(feats)
feats = preproc.transform(feats)
#Regression models
ls = sg.machine('LeastSquaresRegression', features=feats, labels=labels)
tau=1
rr = sg.machine('LinearRidgeRegression', tau=tau, features=feats, labels=labels)
width=1
tau=1
kernel=sg.kernel("GaussianKernel", log_width=np.log(width))
kernel.set_normalizer(SqrtDiagKernelNormalizer())
krr = sg.machine('KernelRidgeRegression', tau=tau, kernel=kernel, labels=labels)
regression_models=[ls, rr, krr]
"""
Explanation: This gives a brute-force way to select parameters of any algorithm implemented under the CMachine interface. The cool thing about this is that it also makes it possible to compare different model families against each other. Below, we compare a number of regression models in Shogun on the Boston Housing dataset.
Regression problem and cross-validation
Various regression models in Shogun are now used to predict house prices using the Boston Housing dataset. Cross-validation is used to find the best parameters and also to test the performance of the models.
End of explanation
"""
n=30
taus = logspace(-4, 1, n)
#5-fold cross-validation
k=5
split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
metric = sg.evaluation('MeanSquaredError')
cross = sg.machine_evaluation('CrossValidation', machine=rr, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric, autolock=False)
cross.put('num_runs', 50)
errors=[]
for tau in taus:
#set necessary parameter
rr.put('tau', tau)
result=cross.evaluate()
#Enlist mean error for all runs
errors.append(result.get('mean'))
figure(figsize=(20,6))
suptitle("Finding best (tau) parameter using cross-validation", fontsize=12)
p=subplot(121)
title("Ridge Regression")
plot(taus, errors, linewidth=3)
p.set_xscale('log')
p.set_ylim([0, 80])
xlabel("Taus")
ylabel("Mean Squared Error")
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 50)
errors=[]
for tau in taus:
krr.put('tau', tau)
result=cross.evaluate()
#print tau, "error", result.get_mean()
errors.append(result.get('mean'))
p2=subplot(122)
title("Kernel Ridge regression")
plot(taus, errors, linewidth=3)
p2.set_xscale('log')
xlabel("Taus")
_=ylabel("Mean Squared Error")
"""
Explanation: Let us use cross-validation to compare various values of the tau parameter for ridge regression (see the Regression notebook). We will use MeanSquaredError as the performance metric. Note that normal splitting is used, since it might be impossible to generate "good" splits using stratified splitting in the case of regression, where the labels take continuous values.
End of explanation
"""
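What tau does is add a penalty $\tau\|w\|^2$ to the squared error; for linear ridge regression this has the closed-form solution $w = (A^\top A + \tau I)^{-1} A^\top b$. A NumPy sketch of that effect, as an illustration rather than Shogun's solver:

```python
import numpy as np

def ridge_fit(A, b, tau):
    """Closed-form ridge regression: w = (A^T A + tau*I)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + tau * np.eye(A.shape[1]), A.T @ b)

demo_rng = np.random.RandomState(0)
A = demo_rng.randn(100, 3)
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true + 0.01 * demo_rng.randn(100)

w_small = ridge_fit(A, b, tau=1e-6)  # almost ordinary least squares
w_large = ridge_fit(A, b, tau=1e4)   # heavy shrinkage towards zero
```

Larger tau shrinks the weights towards zero, which is exactly the regularisation being tuned above.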
n=50
widths=logspace(-2, 3, n)
krr.put('tau', 0.1)
metric =sg.evaluation('MeanSquaredError')
k=5
split = sg.splitting_strategy('CrossValidationSplitting', labels=labels, num_subsets=k)
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 10)
errors=[]
for width in widths:
kernel.put('log_width', np.log(width))
result=cross.evaluate()
#print width, "error", result.get('mean')
errors.append(result.get('mean'))
figure(figsize=(15,5))
p=subplot(121)
title("Finding best width using cross-validation")
plot(widths, errors, linewidth=3)
p.set_xscale('log')
xlabel("Widths")
_=ylabel("Mean Squared Error")
"""
Explanation: A low error value indicates a good pick for the tau parameter, which should be easy to conclude from the plots. In the case of ridge regression, the value of tau, i.e. the amount of regularisation, doesn't seem to matter much, but it does in the case of kernel ridge regression. One interpretation could be the lack of overfitting in the feature space for ridge regression and the occurrence of overfitting in the kernel space in which kernel ridge regression operates. Next we will compare a range of values for the width of the Gaussian kernel used in kernel ridge regression.
End of explanation
"""
n=40
taus = logspace(-3, 0, n)
widths=logspace(-1, 4, n)
cross = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split, evaluation_criterion=metric)
cross.put('num_runs', 1)
x, y=meshgrid(taus, widths)
grid=array((ravel(x), ravel(y)))
print(grid.shape)
errors=[]
for i in range(0, n*n):
krr.put('tau', grid[:,i][0])
kernel.put('log_width', np.log(grid[:,i][1]))
result=cross.evaluate()
errors.append(result.get('mean'))
errors=array(errors).reshape((n, n))
from mpl_toolkits.mplot3d import Axes3D
#taus = logspace(0.5, 1, n)
jet()
fig=figure(figsize(15,7))
ax=subplot(121)
c=pcolor(x, y, errors)
_=contour(x, y, errors, linewidths=1, colors='black')
_=colorbar(c)
xlabel('Taus')
ylabel('Widths')
ax.set_xscale('log')
ax.set_yscale('log')
ax1=fig.add_subplot(122, projection='3d')
ax1.plot_wireframe(log10(y),log10(x), errors, linewidths=2, alpha=0.6)
ax1.view_init(30,-40)
xlabel('Taus')
ylabel('Widths')
_=ax1.set_zlabel('Error')
"""
Explanation: The values for the kernel parameter and tau may not be independent of each other, so the values we have may not be optimal. A brute-force way around this is to try all pairs of these values, but that is only feasible for a low number of parameters.
End of explanation
"""
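Trying all pairs is just a product over the two candidate lists; a sketch with a toy stand-in for the expensive cross-validation call (the quadratic `toy_cv_error` is purely illustrative):

```python
import itertools

cand_taus = [0.001, 0.01, 0.1, 1.0]
cand_widths = [0.1, 1.0, 10.0]

def toy_cv_error(tau, width):
    # stand-in for running cross-validation with these parameters
    return (tau - 0.01) ** 2 + (width - 1.0) ** 2

best_tau, best_width = min(itertools.product(cand_taus, cand_widths),
                           key=lambda p: toy_cv_error(*p))
```

Note that the cost grows as the product of the list lengths, which is why this only scales to a handful of parameters.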
#use the best parameters
rr.put('tau', 1)
krr.put('tau', 0.05)
kernel.put('log_width', np.log(2))
title_='Performance on Boston Housing dataset'
print("%50s" %title_)
for machine in regression_models:
metric = sg.evaluation('MeanSquaredError')
cross = sg.machine_evaluation('CrossValidation', machine=machine, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric, autolock=False)
cross.put('num_runs', 25)
result=cross.evaluate()
print("-"*80)
print("|", "%30s" % machine.get_name(),"|", "%20s" %metric.get_name(),"|","%20s" %result.get('mean') ,"|" )
print("-"*80)
"""
Explanation: Let us pick approximately good parameters using the plots. Now that we have the best parameters, let us compare the various regression models on the dataset.
End of explanation
"""
#Root
param_tree_root=ModelSelectionParameters()
#Parameter tau
tau=ModelSelectionParameters("tau")
param_tree_root.append_child(tau)
# also R_LINEAR/R_LOG is available as type
min_value=0.01
max_value=1
type_=R_LINEAR
step=0.05
base=2
tau.build_values(min_value, max_value, type_, step, base)
"""
Explanation: Model selection using Grid Search
A standard way of selecting the best parameters of a learning algorithm is grid search. This is done by an exhaustive search over a specified parameter space. CModelSelectionParameters is used to select the various parameters and their ranges to be used for model selection. A tree-like structure is used, where the nodes can be CSGObjects or the parameters of an object. The range of values to be searched for a parameter is set using the build_values() method.
End of explanation
"""
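The effect of build_values() with R_LINEAR is to lay out an evenly spaced grid of candidate values; a minimal sketch of that idea (not the Shogun API):

```python
def linear_values(min_value, max_value, step):
    """Evenly spaced candidate values in [min_value, max_value] (R_LINEAR-style)."""
    values, v = [], min_value
    while v <= max_value + 1e-12:
        values.append(round(v, 10))   # round away accumulated float drift
        v += step
    return values
```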
#kernel object
param_gaussian_kernel=ModelSelectionParameters("kernel", kernel)
gaussian_kernel_width=ModelSelectionParameters("log_width")
gaussian_kernel_width.build_values(0.1, 6.0, R_LINEAR, 0.5, 2.0)
#kernel parameter
param_gaussian_kernel.append_child(gaussian_kernel_width)
param_tree_root.append_child(param_gaussian_kernel)
# cross validation instance used
cross_validation = sg.machine_evaluation('CrossValidation', machine=krr, features=feats, labels=labels, splitting_strategy=split,
evaluation_criterion=metric)
cross_validation.put('num_runs', 1)
# model selection instance
model_selection=GridSearchModelSelection(cross_validation, param_tree_root)
print_state=False
# TODO: enable it once crossval has been fixed
#best_parameters=model_selection.select_model(print_state)
#best_parameters.apply_to_machine(krr)
#best_parameters.print_tree()
result=cross_validation.evaluate()
print('Error with Best parameters:', result.get('mean'))
"""
Explanation: Next we create a CModelSelectionParameters instance with a kernel object, which has to be appended to the root node. The kernel object itself is appended with a kernel width parameter, which is the parameter we wish to search over.
End of explanation
"""
|
jss367/assemble | exploratory_notebooks/comm_detect/parse_py.ipynb | mit | import gensim
import os
import numpy as np
import itertools
import json
import re
import pymoji
import importlib
from nltk.tokenize import TweetTokenizer
from gensim import corpora
import string
from nltk.corpus import stopwords
from six import iteritems
import csv
tokenizer = TweetTokenizer()
def keep_retweets(tweets_objs_arr):
    retweets = [x for x in tweets_objs_arr if x['retweet'] != 'N']
    return ([x["text"] for x in retweets],
            [x["name"] for x in retweets],
            [x["followers"] for x in retweets])
def convert_emojis(tweets_arr):
return [pymoji.replaceEmojiAlt(x, trailingSpaces=1) for x in tweets_arr]
def tokenize_tweets(tweets_arr):
result = []
for x in tweets_arr:
try:
tokenized = tokenizer.tokenize(x)
result.append([x.lower() for x in tokenized if x not in string.punctuation])
except:
pass
# print(x)
return result
class Tweets(object):
def __init__(self, dirname):
self.dirname = dirname
def __iter__(self):
for root, directories, filenames in os.walk(self.dirname):
for filename in filenames:
if(filename.endswith('json')):
print(root + filename)
with open(os.path.join(root,filename), 'r') as f:
data = json.load(f)
data_parsed_step1, user_names, followers = keep_retweets(data)
data_parsed_step2 = convert_emojis(data_parsed_step1)
data_parsed_step3 = tokenize_tweets(data_parsed_step2)
for data, name, follower in zip(data_parsed_step3, user_names, followers):
yield name, data, follower
#model = gensim.models.Word2Vec(sentences, workers=2, window=5, sg = 1, size = 100, max_vocab_size = 2 * 10000000)
#model.save('tweets_word2vec_2017_1_size100_window5')
#print('done')
#print(time.time() - start_time)
"""
Explanation: Parsing and cleaning tweets
This notebook is a slight modification of @wwymak's word2vec notebook, with different tokenization and a way to iterate over tweets linked to their named user
wwymak's iterator and helper functions
End of explanation
"""
# building the dictionary first, from the iterator
sentences = Tweets('/media/henripal/hd1/data/2017/1/') # a memory-friendly iterator
dictionary = corpora.Dictionary((tweet for _, tweet, _ in sentences))
# here we use the downloaded stopwords from nltk and create the list
# of stop ids using the hash defined above
stop = set(stopwords.words('english'))
stop_ids = [dictionary.token2id[stopword] for stopword in stop if stopword in dictionary.token2id]
# and this is the items we don't want - that appear less than 20 times
# hardcoded numbers FTW
low_freq_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq <1500]
# finally we filter the dictionary and compactify
dictionary.filter_tokens(stop_ids + low_freq_ids)
dictionary.compactify() # remove gaps in id sequence after words that were removed
print(dictionary)
# reinitializing the iterator to get more stuff
sentences = Tweets('/media/henripal/hd1/data/2017/1/')
corpus = []
name_to_follower = {}
names = []
for name, tweet, follower in sentences:
corpus.append(tweet)
names.append(name)
name_to_follower[name] = follower
"""
Explanation: My gensim tinkering
Tasks:
- build the gensim dictionary
- build the bow matrix using this dictionary (sparse matrix so memory friendly)
- save the names and the dictionary for later use
End of explanation
"""
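What the dictionary and bag-of-words corpus amount to can be shown with a few lines of standard-library Python; gensim's Dictionary and doc2bow do the same thing at scale. The tiny tweets here are made up:

```python
from collections import Counter

demo_tweets = [['good', 'morning', 'nola'],
               ['good', 'coffee'],
               ['coffee', 'coffee', 'nola']]

# assign each token an integer id in order of first appearance
demo_token2id = {}
for tw in demo_tweets:
    for token in tw:
        demo_token2id.setdefault(token, len(demo_token2id))

def demo_doc2bow(tweet):
    """Sparse (token_id, count) pairs for one tokenized tweet."""
    counts = Counter(demo_token2id[t] for t in tweet if t in demo_token2id)
    return sorted(counts.items())

demo_corpus = [demo_doc2bow(t) for t in demo_tweets]
```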
with open('/media/henripal/hd1/data/name_to_follower.csv', 'w') as csv_file:
writer = csv.writer(csv_file)
for key, value in name_to_follower.items():
writer.writerow([key, value])
with open('/media/henripal/hd1/data/corpus_names.csv', 'w') as csv_file:
writer = csv.writer(csv_file)
writer.writerow(names)
# now we save the sparse bow corpus matrix using matrix market format
corpora.MmCorpus.serialize('/media/henripal/hd1/data/corp.mm', corpus)
# and we save the dictionary as a text file
dictionary.save('/media/henripal/hd1/data/dict')
"""
Explanation: And now we save everything for later analysis
End of explanation
"""
|
rahulremanan/python_tutorial | NLP/04-Character_embedding/src/04_Char_embedding.ipynb | mit | from __future__ import print_function
from keras.models import Model
from keras.layers import Dense, Activation, Embedding
from keras.layers import LSTM, Input
from keras.layers.merge import concatenate
from keras.optimizers import RMSprop, Adam
from keras.utils.data_utils import get_file
from keras.layers.normalization import BatchNormalization
from keras.callbacks import Callback, ModelCheckpoint
from sklearn.decomposition import PCA
from keras.utils import plot_model
import numpy as np
import random
import sys
import csv
import os
import h5py
import time
"""
Explanation: Hello World! Python Workshops @ Think Coffee
3-5pm, 7/30/17
Day 3, Alice NLP generator
@python script author (original content): Rahul
@jupyter notebook converted tutorial author: Nick Giangreco
Notebook version of the Python script in the same directory. Building an RNN based on Lewis Carroll's Alice in Wonderland text.
Importing modules
End of explanation
"""
embeddings_path = "./glove.840B.300d-char.txt" # http://nlp.stanford.edu/data/glove.840B.300d.zip
embedding_dim = 300
batch_size = 32
use_pca = False
lr = 0.001
lr_decay = 1e-4
maxlen = 300
consume_less = 2 # 0 for cpu, 2 for gpu
"""
Explanation: Setting params for model setup and build.
End of explanation
"""
text = open('./Alice.txt').read()
print('corpus length:', len(text))
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
"""
Explanation: Loading and reading the Alice.txt corpus, saving the unique characters in the corpus (letters and punctuation) in an array, and building two dictionaries that associate each character with its position in that array (one mapping character to index, the other index to character).
End of explanation
"""
# cut the text in semi-redundant sequences of maxlen characters
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
"""
Explanation: Cutting the document into semi-redundant sequences: each element of the sentences list holds maxlen consecutive characters and overlaps with the previous element, because we slide through the text with a step size of 3. For each sequence, next_chars stores the single character that immediately follows it.
End of explanation
"""
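The same slicing on a tiny string makes the windowing concrete (here with a maxlen of 4 instead of 300; the `demo_` names avoid clobbering the notebook's own variables):

```python
demo_text = 'hello world'
demo_maxlen, demo_step = 4, 3
demo_sentences, demo_next = [], []
for i in range(0, len(demo_text) - demo_maxlen, demo_step):
    demo_sentences.append(demo_text[i:i + demo_maxlen])  # window of maxlen chars
    demo_next.append(demo_text[i + demo_maxlen])         # the char that follows it
```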
print('Vectorization...')
X = np.zeros((len(sentences), maxlen), dtype=np.int)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t] = char_indices[char]
y[i, char_indices[next_chars[i]]] = 1
"""
Explanation: Making X an integer array with shape (number of sequences, maxlen), and y a boolean (initially all-false) array with shape (number of sequences, number of unique characters).
Then, going through each sentence and each character in it, storing the character's index in X and setting to 1 (true) the entry of y that corresponds to the character following the sequence.
End of explanation
"""
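The one-hot encoding of y can be seen on a tiny alphabet (the `demo_` names are illustrative and separate from the notebook's variables):

```python
import numpy as np

demo_chars = sorted(set('abcab'))                 # ['a', 'b', 'c']
demo_index = {c: i for i, c in enumerate(demo_chars)}

demo_y = np.zeros((2, len(demo_chars)), dtype=bool)
for row, ch in enumerate(['b', 'c']):             # next character for each sequence
    demo_y[row, demo_index[ch]] = 1               # one true entry per row
```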
# test code to sample on 10% for functional model testing
def random_subset(X, y, p=0.1):
idx = np.random.randint(X.shape[0], size=int(X.shape[0] * p))
X = X[idx, :]
y = y[idx]
return (X, y)
# https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
def generate_embedding_matrix(embeddings_path):
print('Processing pretrained character embeds...')
embedding_vectors = {}
with open(embeddings_path, 'r') as f:
for line in f:
line_split = line.strip().split(" ")
vec = np.array(line_split[1:], dtype=float)
char = line_split[0]
embedding_vectors[char] = vec
embedding_matrix = np.zeros((len(chars), 300))
#embedding_matrix = np.random.uniform(-1, 1, (len(chars), 300))
for char, i in char_indices.items():
#print ("{}, {}".format(char, i))
embedding_vector = embedding_vectors.get(char)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
# Use PCA from sklearn to reduce 300D -> 50D
if use_pca:
pca = PCA(n_components=embedding_dim)
pca.fit(embedding_matrix)
embedding_matrix_pca = np.array(pca.transform(embedding_matrix))
embedding_matrix_result = embedding_matrix_pca
print (embedding_matrix_pca)
print (embedding_matrix_pca.shape)
else:
embedding_matrix_result = embedding_matrix
return embedding_matrix_result
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds + 1e-6) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
"""
Explanation: Defining helper functions.
End of explanation
"""
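The temperature in sample() above controls how sharp the sampling distribution is; the reweighting step on its own can be sketched and inspected like this:

```python
import numpy as np

def reweight(preds, temperature):
    """Temperature-scale a probability vector, mirroring sample() above."""
    logp = np.log(np.asarray(preds, dtype='float64') + 1e-6) / temperature
    p = np.exp(logp)
    return p / p.sum()

demo_preds = [0.5, 0.3, 0.2]
sharp = reweight(demo_preds, 0.2)  # low temperature: nearly greedy
flat = reweight(demo_preds, 5.0)   # high temperature: nearly uniform
```

Low temperatures concentrate nearly all mass on the most likely character (conservative, repetitive text); high temperatures flatten the distribution (more diverse, riskier text).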
print('Build model...')
main_input = Input(shape=(maxlen,))
embedding_matrix = generate_embedding_matrix(embeddings_path)
embedding_layer = Embedding(
len(chars), embedding_dim, input_length=maxlen,
weights=[embedding_matrix])
# embedding_layer = Embedding(
# len(chars), embedding_dim, input_length=maxlen)
embedded = embedding_layer(main_input)
# RNN Layer
rnn = LSTM(256, implementation=consume_less)(embedded)
aux_output = Dense(len(chars))(rnn)
aux_output = Activation('softmax', name='aux_out')(aux_output)
# Hidden Layers
hidden_1 = Dense(512, use_bias=False)(rnn)
hidden_1 = BatchNormalization()(hidden_1)
hidden_1 = Activation('relu')(hidden_1)
hidden_2 = Dense(256, use_bias=False)(hidden_1)
hidden_2 = BatchNormalization()(hidden_2)
hidden_2 = Activation('relu')(hidden_2)
main_output = Dense(len(chars))(hidden_2)
main_output = Activation('softmax', name='main_out')(main_output)
model = Model(inputs=main_input, outputs=[main_output, aux_output])
optimizer = Adam(lr=lr, decay=lr_decay)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer, loss_weights=[1., 0.2])
model.summary()
#plot_model(model, to_file='model.png', show_shapes=True)
if not os.path.exists('./output'):
os.makedirs('./output')
f = open('./log.csv', 'w')
log_writer = csv.writer(f)
log_writer.writerow(['iteration', 'batch', 'batch_loss',
'epoch_loss', 'elapsed_time'])
checkpointer = ModelCheckpoint(
"./output/model.hdf5", monitor='main_out_loss', save_best_only=True)
"""
Explanation: Building text embedding matrix and RNN model. This is what differentiates this tutorial from tutorial 03.
Input layer
Embedding layer - with embedding matrix as weights
RNN Layer - LSTM instance with 256 nodes
Dense layer (2 hidden layers)
Activation (softmax) layer for converting to output probability
The full layer table is printed by model.summary() below.
End of explanation
"""
class BatchLossLogger(Callback):
def on_epoch_begin(self, epoch, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('main_out_loss'))
if batch % 50 == 0:
log_writer.writerow([iteration, batch,
logs.get('main_out_loss'),
np.mean(self.losses),
round(time.time() - start_time, 2)])
"""
Explanation: Defining a BatchLossLogger callback that records per-batch losses during each epoch and periodically writes them to the log file.
End of explanation
"""
ep = 1
start_time = time.time()
for iteration in range(1, 20):
print()
print('-' * 50)
print('Iteration', iteration)
logger = BatchLossLogger()
# X_train, y_train = random_subset(X, y)
# history = model.fit(X_train, [y_train, y_train], batch_size=batch_size,
# epochs=1, callbacks=[logger, checkpointer])
history = model.fit(X, [y, y], batch_size=batch_size,
epochs=ep, callbacks=[logger, checkpointer])
loss = str(history.history['main_out_loss'][-1]).replace(".", "_")
f2 = open('./output/iter-{:02}-{:.6}.txt'.format(iteration, loss), 'w')
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
print()
print('----- diversity:', diversity)
f2.write('----- diversity:' + ' ' + str(diversity) + '\n')
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
f2.write('----- Generating with seed: "' + sentence + '"' + '\n---\n')
sys.stdout.write(generated)
for i in range(1200):
x = np.zeros((1, maxlen), dtype=np.int)
for t, char in enumerate(sentence):
x[0, t] = char_indices[char]
preds = model.predict(x, verbose=0)[0][0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
f2.write(generated + '\n')
print()
f2.close()
# Write embeddings for current characters to file
# The second layer has the embeddings.
embedding_weights = model.layers[1].get_weights()[0]
f3 = open('./output/char-embeddings.txt', 'w')
for char in char_indices:
if ord(char) < 128:
embed_vector = embedding_weights[char_indices[char], :]
f3.write(char + " " + " ".join(str(x)
for x in embed_vector) + "\n")
f3.close()
f.close()
"""
Explanation: Model training. Use one epoch instead of ten.
End of explanation
"""
|
turbomanage/training-data-analyst | blogs/bigquery_datascience/bigquery_tensorflow.ipynb | apache-2.0 | %%bash
# create output dataset
bq mk advdata
%%bigquery
CREATE OR REPLACE MODEL advdata.ulb_fraud_detection
TRANSFORM(
* EXCEPT(Amount),
SAFE.LOG(Amount) AS log_amount
)
OPTIONS(
INPUT_LABEL_COLS=['class'],
AUTO_CLASS_WEIGHTS = TRUE,
DATA_SPLIT_METHOD='seq',
DATA_SPLIT_COL='Time',
MODEL_TYPE='logistic_reg'
) AS
SELECT
*
FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection`
%%bigquery
SELECT * FROM ML.EVALUATE(MODEL advdata.ulb_fraud_detection)
%%bigquery
SELECT predicted_class_probs, Class
FROM ML.PREDICT( MODEL advdata.ulb_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
"""
Explanation: How to read BigQuery data from TensorFlow 2.0 efficiently
This notebook accompanies the article
"How to read BigQuery data from TensorFlow 2.0 efficiently"
The example problem is to find credit card fraud from the dataset published in:
<i>
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015
</i>
and available in BigQuery at <pre>bigquery-public-data.ml_datasets.ulb_fraud_detection</pre>
Benchmark Model
In order to compare things, we will do a simple logistic regression in BigQuery ML.
Note that we are using all the columns in the dataset as predictors (except for the Time and Class columns).
The Time column is used to split the dataset 80:20 with the first 80% used for training and the last 20% used for evaluation.
We will also have BigQuery ML automatically balance the weights.
Because the Amount column has a huge range, we take the log of it in preprocessing.
End of explanation
"""
%%bigquery
WITH counts AS (
SELECT
APPROX_QUANTILES(Time, 5)[OFFSET(4)] AS train_cutoff
, COUNTIF(CLASS > 0) AS pos
, COUNTIF(CLASS = 0) AS neg
FROM `bigquery-public-data`.ml_datasets.ulb_fraud_detection
)
SELECT
train_cutoff
, SAFE.LOG(SAFE_DIVIDE(pos,neg)) AS output_bias
, 0.5*SAFE_DIVIDE(pos + neg, pos) AS weight_pos
, 0.5*SAFE_DIVIDE(pos + neg, neg) AS weight_neg
FROM counts
"""
Explanation: Find the breakoff point etc. for Keras
When we do the training in Keras & TensorFlow, we need to find the place to split the dataset and how to weight the imbalanced data.
(BigQuery ML did that for us because we specified 'seq' as the split method and auto_class_weights to be True).
End of explanation
"""
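The weight and bias formulas in the SQL above can be checked in a few lines of Python, assuming the published counts of the ULB dataset (492 frauds among 284,807 transactions):

```python
import math

pos, neg = 492, 284_315            # approximate fraud / normal counts in the ULB data
output_bias = math.log(pos / neg)  # initial bias for the sigmoid output (~ -6.36)
weight_pos = 0.5 * (pos + neg) / pos   # upweight the rare fraud class
weight_neg = 0.5 * (pos + neg) / neg   # downweight the common normal class
```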
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def features_and_labels(features):
label = features.pop('Class') # this is what we will train for
return features, label
def read_dataset(client, row_restriction, batch_size=2048):
GCP_PROJECT_ID='ai-analytics-solutions' # CHANGE
COL_NAMES = ['Time', 'Amount', 'Class'] + ['V{}'.format(i) for i in range(1,29)]
COL_TYPES = [dtypes.float64, dtypes.float64, dtypes.int64] + [dtypes.float64 for i in range(1,29)]
    DATASET_GCP_PROJECT_ID, DATASET_ID, TABLE_ID = 'bigquery-public-data.ml_datasets.ulb_fraud_detection'.split('.')
bqsession = client.read_session(
"projects/" + GCP_PROJECT_ID,
DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
COL_NAMES, COL_TYPES,
requested_streams=2,
row_restriction=row_restriction)
dataset = bqsession.parallel_read_rows()
return dataset.prefetch(1).map(features_and_labels).shuffle(batch_size*10).batch(batch_size)
client = BigQueryClient()
temp_df = read_dataset(client, 'Time <= 144803', 2)
for row in temp_df:
print(row)
break
train_df = read_dataset(client, 'Time <= 144803', 2048)
eval_df = read_dataset(client, 'Time > 144803', 2048)
"""
Explanation: The time cutoff is 144803 and the Keras model's output bias needs to be set at -6.36
The class weights need to be 289.4 and 0.5
Training a TensorFlow/Keras model that reads from BigQuery
Create the dataset from BigQuery
End of explanation
"""
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
# create inputs, and pass them into appropriate types of feature columns (here, everything is numeric)
inputs = {
'V{}'.format(i) : tf.keras.layers.Input(name='V{}'.format(i), shape=(), dtype='float64') for i in range(1, 29)
}
inputs['Amount'] = tf.keras.layers.Input(name='Amount', shape=(), dtype='float64')
input_fc = [tf.feature_column.numeric_column(colname) for colname in inputs.keys()]
# transformations. only the Amount is transformed
transformed = inputs.copy()
transformed['Amount'] = tf.keras.layers.Lambda(
lambda x: tf.math.log(tf.math.maximum(x, 0.01)), name='log_amount')(inputs['Amount'])
input_layer = tf.keras.layers.DenseFeatures(input_fc, name='inputs')(transformed)
# Deep learning model
d1 = tf.keras.layers.Dense(16, activation='relu', name='d1')(input_layer)
d2 = tf.keras.layers.Dropout(0.25, name='d2')(d1)
d3 = tf.keras.layers.Dense(16, activation='relu', name='d3')(d2)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='d4', bias_initializer=tf.keras.initializers.Constant())(d3)
model = tf.keras.Model(inputs, output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=metrics)
tf.keras.utils.plot_model(model, rankdir='LR')
class_weight = {0: 0.5, 1: 289.4}
history = model.fit(train_df, validation_data=eval_df, epochs=20, class_weight=class_weight)
import matplotlib.pyplot as plt
plt.plot(history.history['val_roc_auc']);
plt.xlabel('Epoch');
plt.ylabel('AUC');
"""
Explanation: Create Keras model
End of explanation
"""
BUCKET='ai-analytics-solutions-kfpdemo' # CHANGE TO SOMETHING THAT YOU OWN
model.save('gs://{}/bqexample/export'.format(BUCKET))
%%bigquery
CREATE OR REPLACE MODEL advdata.keras_fraud_detection
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/bqexample/export/*')
"""
Explanation: Load TensorFlow model into BigQuery
Now that we have trained a TensorFlow model off BigQuery data ...
let's load the model into BigQuery and use it for batch prediction!
End of explanation
"""
%%bigquery
SELECT d4, Class
FROM ML.PREDICT( MODEL advdata.keras_fraud_detection,
(SELECT * FROM `bigquery-public-data.ml_datasets.ulb_fraud_detection` WHERE Time = 85285.0)
)
"""
Explanation: Now predict with this model (the output column is named d4 because that is the name of the Keras model's output node).
To get probabilities, etc. we'd have to add the corresponding outputs to the Keras model.
End of explanation
"""
|
landlab/landlab | notebooks/tutorials/hillslope_geomorphology/transport-length_hillslope_diffuser/TLHDiff_tutorial.ipynb | mit | import numpy as np
from matplotlib.pyplot import figure, plot, show, title, xlabel, ylabel
from landlab import RasterModelGrid
from landlab.components import FlowDirectorSteepest, TransportLengthHillslopeDiffuser
from landlab.plot import imshow_grid
# to plot figures in the notebook:
%matplotlib inline
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
The transport-length hillslope diffuser
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This Jupyter notebook illustrates running the transport-length-model hillslope diffusion component in a simple example.
The Basics
This component uses an approach similar to the Davy and Lague (2009) equation for fluvial erosion and transport, and applies it to hillslope diffusion. The formulation and implementation were inspired by Carretier et al. (2016); see this paper and references therein for justification.
Theory
The elevation $z$ of a point of the landscape (such as a grid node) changes according to:
\begin{equation}
\frac{\partial z}{\partial t} = -\epsilon + D + U \tag{1}\label{eq:1},
\end{equation}
and we define:
\begin{equation}
D = \frac{q_s}{L} \tag{2}\label{eq:2},
\end{equation}
where $\epsilon$ is the local erosion rate [L/T], $D$ the local deposition rate [L/T], $U$ the uplift (or subsidence) rate [L/T], $q_s$ the incoming sediment flux per unit width [L$^2$/T] and $L$ is the transport length.
We specify the erosion rate $\epsilon$ and the transport length $L$:
\begin{equation}
\epsilon = \kappa S \tag{3}\label{eq:3}
\end{equation}
\begin{equation}
L = \frac{dx}{1-({S}/{S_c})^2} \tag{4}\label{eq:4}
\end{equation}
where $\kappa$ [L/T] is an erodibility coefficient, $S$ is the local slope [L/L] and $S_c$ is the critical slope [L/L].
Thus, the elevation variation results from the difference between local rates of detachment and deposition.
The detachment rate is proportional to the local gradient. However, the deposition rate ($q_s/L$) depends on the local slope and the critical slope:
- when $S \ll S_c$, most of the sediment entering a node is deposited there, this is the pure diffusion case. In this case, the sediment flux $q_s$ does not include sediment eroded from above and is thus "local".
- when $S \approx S_c$, $L$ becomes infinity and there is no redeposition on the node, the sediments are transferred further downstream. This behaviour corresponds to mass wasting, grains can travel a long distance before being deposited. In that case, the flux $q_s$ is "non-local" as it incorporates sediments that have both been detached locally and transited from upslope.
- for an intermediate $S$, there is a progressive transition between pure creep and "ballistic" transport of the material. This is consistent with experiments (Roering et al., 2001; Gabet and Mendoza, 2012).
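To make the transition concrete, here is a minimal numerical sketch of eq. ($\ref{eq:4}$); the grid spacing $dx$ = 50 m and $S_c$ = 0.6 are the values used in the examples below:

```python
dx, Sc = 50.0, 0.6                      # grid spacing [m] and critical slope [-]
for S in (0.1, 0.3, 0.5, 0.59):
    L = dx / (1.0 - (S / Sc) ** 2)      # transport length, eq. (4)
    print('S = {:4.2f}  ->  L = {:8.1f} m'.format(S, L))
# Gentle slopes give L close to dx (local redeposition, pure diffusion);
# as S approaches Sc, L diverges (non-local transport, mass wasting).
```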
Contrast with the non-linear diffusion model
Previous models typically use a "non-linear" diffusion model proposed by different authors (e.g. Andrews and Hanks, 1985; Hanks, 1999; Roering et al., 1999) and supported by $^{10}$Be-derived erosion rates (e.g. Binnie et al., 2007) or experiments (Roering et al., 2001). It is usually presented in the following form:
$ $
\begin{equation}
\frac{\partial z}{\partial t} = \frac{\partial q_s}{\partial x} \tag{5}\label{eq:5}
\end{equation}
$ $
\begin{equation}
q_s = \frac{\kappa' S}{1-({S}/{S_c})^2} \tag{6}\label{eq:6}
\end{equation}
where $\kappa'$ [L$^2$/T] is a diffusion coefficient.
This description is thus based on the definition of a flux of transported sediment parallel to the slope:
- when the slope is small, this flux refers to diffusion-like processes such as biogenic soil disturbance, rain splash, or diffuse runoff
- when the slope gets closer to the specified critical slope, the flux increases dramatically, simulating on average the cumulative effect of mass wasting events.
Despite these conceptual differences, equations ($\ref{eq:3}$) and ($\ref{eq:4}$) predict similar topographic evolution to the 'non-linear' diffusion equations for $\kappa' = \kappa dx$, as shown in the following example.
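The flux part of that equivalence can be checked directly: with $\epsilon$ from eq. ($\ref{eq:3}$) and $L$ from eq. ($\ref{eq:4}$), the steady sediment flux $\epsilon \cdot L$ equals $q_s$ of eq. ($\ref{eq:6}$) when $\kappa' = \kappa\,dx$. A minimal check (the parameter values are assumptions for illustration, and this verifies only the flux identity, not the full topographic evolution):

```python
kappa, dx, Sc = 0.001, 50.0, 0.6        # erodibility [m/yr], spacing [m], critical slope
for S in (0.1, 0.3, 0.5):
    qs_tl = (kappa * S) * dx / (1.0 - (S / Sc) ** 2)   # epsilon * L, eqs (3)-(4)
    qs_nl = (kappa * dx) * S / (1.0 - (S / Sc) ** 2)   # eq. (6) with kappa' = kappa * dx
    assert abs(qs_tl - qs_nl) < 1e-12
```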
Example 1:
First, we import what we'll need:
End of explanation
"""
mg = RasterModelGrid(
(20, 20), xy_spacing=50.0
) # raster grid with 20 rows, 20 columns and dx=50m
z = np.random.rand(mg.size("node")) # random noise for initial topography
mg.add_field("topographic__elevation", z, at="node")
mg.set_closed_boundaries_at_grid_edges(
False, True, False, True
) # N and S boundaries are closed, E and W are open
"""
Explanation: Make a grid and set boundary conditions:
End of explanation
"""
total_t = 2000000.0 # total run time (yr)
dt = 1000.0 # time step (yr)
nt = int(total_t // dt) # number of time steps
uplift_rate = 0.0001 # uplift rate (m/yr)
kappa = 0.001 # erodibility (m/yr)
Sc = 0.6 # critical slope
"""
Explanation: Set the initial and run conditions:
End of explanation
"""
fdir = FlowDirectorSteepest(mg)
tl_diff = TransportLengthHillslopeDiffuser(mg, erodibility=kappa, slope_crit=Sc)
"""
Explanation: Instantiate the components:
The hillslope diffusion component must be used together with a flow router/director that provides the steepest downstream slope for each node, with a D4 method (creates the field topographic__steepest_slope at nodes).
End of explanation
"""
for t in range(nt):
fdir.run_one_step()
tl_diff.run_one_step(dt)
z[mg.core_nodes] += uplift_rate * dt # add the uplift
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
# plot east-west cross-section of topography:
x_plot = range(0, 1000, 50)
z_plot = z[100:120]
figure("cross-section")
plot(x_plot, z_plot)
figure("cross-section")
title("East-West cross section")
xlabel("x (m)")
ylabel("z (m)")
"""
Explanation: Run the components for 2 Myr and trace an East-West cross-section of the topography every 100 kyr:
End of explanation
"""
figure("final topography")
im = imshow_grid(
mg, "topographic__elevation", grid_units=["m", "m"], var_name="Elevation (m)"
)
"""
Explanation: And plot final topography:
End of explanation
"""
# Create grid and topographic elevation field:
mg2 = RasterModelGrid((20, 20), xy_spacing=50.0)
z = np.zeros(mg2.number_of_nodes)
z[mg2.node_x > 500] = mg2.node_x[mg2.node_x > 500] / 10
mg2.add_field("topographic__elevation", z, at="node")
# Set boundary conditions:
mg2.set_closed_boundaries_at_grid_edges(False, True, False, True)
# Show initial topography:
im = imshow_grid(
mg2, "topographic__elevation", grid_units=["m", "m"], var_name="Elevation (m)"
)
# Plot an east-west cross-section of the initial topography:
z_plot = z[100:120]
x_plot = range(0, 1000, 50)
figure(2)
plot(x_plot, z_plot)
title("East-West cross section")
xlabel("x (m)")
ylabel("z (m)")
"""
Explanation: This behaviour corresponds to the evolution observed using a classical non-linear diffusion model.
Example 2:
In this example, we show that when the slope is steep ($S \ge S_c$), the transport-length hillslope diffusion simulates mass wasting, with long transport distances.
First, we create a grid: the western half of the grid is flat at 0 m of elevation, the eastern half is a 45-degree slope.
End of explanation
"""
total_t = 1000000.0 # total run time (yr)
dt = 1000.0 # time step (yr)
nt = int(total_t // dt) # number of time steps
"""
Explanation: Set the run conditions:
End of explanation
"""
fdir = FlowDirectorSteepest(mg2)
tl_diff = TransportLengthHillslopeDiffuser(mg2, erodibility=0.001, slope_crit=0.6)
"""
Explanation: Instantiate the components:
End of explanation
"""
for t in range(nt):
fdir.run_one_step()
tl_diff.run_one_step(dt)
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z_plot = z[100:120]
figure(2)
plot(x_plot, z_plot)
"""
Explanation: Run for 1 Myr, plotting the cross-section regularly:
End of explanation
"""
# Import Linear diffuser:
from landlab.components import LinearDiffuser
# Create grid and topographic elevation field:
mg3 = RasterModelGrid((20, 20), xy_spacing=50.0)
z = np.ones(mg3.number_of_nodes)
z[mg3.node_x > 500] = mg3.node_x[mg3.node_x > 500] / 10
mg3.add_field("topographic__elevation", z, at="node")
# Set boundary conditions:
mg3.set_closed_boundaries_at_grid_edges(False, True, False, True)
# Instantiate components:
fdir = FlowDirectorSteepest(mg3)
diff = LinearDiffuser(mg3, linear_diffusivity=0.1)
# Set run conditions:
total_t = 1000000.0
dt = 1000.0
nt = int(total_t // dt)
# Run for 1 Myr, plotting east-west cross-section regularly:
for t in range(nt):
fdir.run_one_step()
diff.run_one_step(dt)
# add some output to let us see we aren't hanging:
if t % 100 == 0:
print(t * dt)
z_plot = z[100:120]
figure(2)
plot(x_plot, z_plot)
"""
Explanation: The material is diffused from the top and along the slope and it accumulates at the bottom, where the topography flattens.
As a comparison, the following code uses linear diffusion on the same slope:
End of explanation
"""
|
lucasb-eyer/BiternionNet | Experiments - Tosato.ipynb | mit | import numpy as np
import pickle, gzip
from ipywidgets import IntProgress
from IPython.display import display
%matplotlib inline
# Font which got unicode math stuff.
import matplotlib as mpl
mpl.rcParams['font.family'] = 'DejaVu Sans'
# Much more readable plots
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import DeepFried2 as df
from lbtoolbox.augmentation import AugmentationPipeline, Cropper
from lbtoolbox.thutil import count_params
from lbtoolbox.util import batched
from lbtoolbox.plotting import liveplot, annotline
from lbtoolbox.plotting_sklearn import confuse as confusion
"""
Explanation: Experiments on Tosato's Benchmark Datasets
Important Note: These are not the experiments from the paper, but me re-implementing the same experiments after the fact in order to clean up the code and use the DeepFried2 library. This means numbers may be ever so slightly different.
End of explanation
"""
class Flatten(df.Module):
def symb_forward(self, symb_in):
return symb_in.flatten(2)
def mknet(*head, midshape=(5,5), extra=df.Identity()):
return df.Sequential( # 3@46
df.SpatialConvolutionCUDNN( 3, 24, 3, 3), # -> 24@44
df.BatchNormalization(24),
df.ReLU(),
df.SpatialConvolutionCUDNN(24, 24, 3, 3), # -> 24@42
df.BatchNormalization(24),
df.SpatialMaxPoolingCUDNN(2, 2), # -> 24@21
df.ReLU(),
df.SpatialConvolutionCUDNN(24, 48, 3, 3), # -> 48@19
df.BatchNormalization(48),
df.ReLU(),
df.SpatialConvolutionCUDNN(48, 48, 3, 3), # -> 48@17
df.BatchNormalization(48),
df.SpatialMaxPooling(2, 2), # -> 48@9
df.ReLU(),
df.SpatialConvolutionCUDNN(48, 64, 3, 3), # -> 64@7
df.BatchNormalization(64),
df.ReLU(),
df.SpatialConvolutionCUDNN(64, 64, 3, 3), # -> 64@5
df.BatchNormalization(64),
df.ReLU(),
extra,
df.Dropout(0.2),
Flatten(),
df.Linear(64*np.prod(midshape), 512),
df.ReLU(),
df.Dropout(0.5),
*head
)
def mknet_classification(nclass):
return mknet(
df.Linear(512, nclass, initW=df.init.const(0)),
df.SoftMax()
)
def plotcost(costs):
fig, ax = plt.subplots()
line, = ax.plot(1+np.arange(len(costs)), costs, label='Training cost')
ax.set_xlabel('Epochs')
ax.set_ylabel('Cost')
annotline(ax, line, np.min)
return fig
def dotrain(model, crit, aug, Xtr, ytr, nepochs=50, batchsize=100):
opt = df.AdaDelta(rho=.95, eps=1e-7, lr=1)
progress = IntProgress(value=0, min=0, max=nepochs, description='Training:')
display(progress)
model.training()
costs = []
for e in range(nepochs):
batchcosts = []
for Xb, yb in batched(batchsize, Xtr, ytr, shuf=True):
if aug is not None:
Xb, yb = aug.augbatch_train(Xb, yb)
model.zero_grad_parameters()
cost = model.accumulate_gradients(Xb, yb, crit)
opt.update_parameters(model)
batchcosts.append(cost)
costs.append(np.mean(batchcosts))
progress.value = e+1
liveplot(plotcost, costs)
return costs
def dostats(model, aug, Xtr, batchsize=100):
model.training()
for Xb in batched(batchsize, Xtr):
if aug is None:
model.accumulate_statistics(Xb)
else:
for Xb_aug in aug.augbatch_pred(Xb):
model.accumulate_statistics(Xb_aug)
def dopred(model, aug, X, batchsize=100):
model.evaluate()
y_preds = []
for Xb in batched(batchsize, X):
if aug is None:
p_y = model.forward(Xb)  # forward only the current batch, not the full X
else:
p_y = np.mean([model.forward(Xa) for Xa in aug.augbatch_pred(Xb)], axis=0)
y_preds += list(np.argmax(p_y, axis=1))
return np.array(y_preds)
def dopred(model, aug, X, ensembling, output2preds, batchsize=100):  # note: intentionally redefines dopred above with pluggable ensembling and output handling
model.evaluate()
y_preds = []
for Xb in batched(batchsize, X):
if aug is None:
p_y = model.forward(Xb)  # forward only the current batch, not the full X
else:
p_y = ensembling([model.forward(Xa) for Xa in aug.augbatch_pred(Xb)])
y_preds += list(output2preds(p_y))
return np.array(y_preds)
def dopred_clf(model, aug, X, batchsize=100):
return dopred(model, aug, X,
ensembling=lambda p_y: np.mean(p_y, axis=0),
output2preds=lambda p_y: np.argmax(p_y, axis=1),
batchsize=batchsize
)
"""
Explanation: Utilities
End of explanation
"""
Xtr, Xte, ytr, yte, ntr, nte, le = pickle.load(gzip.open('data/HIIT-wflip.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, ytr, Cropper((46,46)))
nets_hiit = [mknet_classification(len(le.classes_)) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_hiit[0])/1000000))
trains_hiit = [dotrain(net, df.ClassNLLCriterion(), aug, Xtr, ytr.astype(df.floatX)) for net in nets_hiit]
"""
Explanation: HIIT
End of explanation
"""
for model in nets_hiit:
dostats(model, aug, Xtr, batchsize=1000)
y_preds_hiit = [dopred_clf(net, aug, Xte) for net in nets_hiit]
print("Average accuracy: {:.2%}".format(np.mean([np.mean(y_p == yte) for y_p in y_preds_hiit])))
"""
Explanation: The statistics collection pass needs to be done at the end of the training for the batch-normalization to collect its stats.
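The mechanism behind this (a generic sketch of what batch normalization does, not DeepFried2's actual code): during the statistics pass the layer accumulates running estimates of the activation mean and variance, which replace the per-batch statistics at evaluation time. The momentum and batch values below are made up for illustration:

```python
momentum = 0.1
running_mean, running_var = 0.0, 1.0
for batch_mean, batch_var in [(0.5, 2.0), (0.4, 1.8), (0.6, 2.2)]:
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    running_var = (1 - momentum) * running_var + momentum * batch_var
# At evaluation time the layer normalizes with (running_mean, running_var)
# instead of the statistics of the current batch.
```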
End of explanation
"""
fig, ax = confusion(y_pred=np.concatenate(y_preds_hiit), y_true=np.concatenate([yte]*len(y_preds_hiit)), labels=le,
label_order=['left', 'frlf', 'frnt', 'frrg', 'rght', 'rear'])
"""
Explanation: Combined confusion matrix of all the networks. This is not what's shown in the paper's appendix.
End of explanation
"""
Xtr, Xte, ytr, yte, ntr, nte, le = pickle.load(gzip.open('data/HOCoffee-wflip.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, ytr, Cropper((46,46)))
nets_hocoffee = [mknet_classification(len(le.classes_)) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_hocoffee[0])/1000000))
trains_hocoffee = [dotrain(net, df.ClassNLLCriterion(), aug, Xtr, ytr.astype(df.floatX)) for net in nets_hocoffee]
for model in nets_hocoffee:
dostats(model, aug, Xtr, batchsize=1000)
y_preds_hocoffee = [dopred_clf(net, aug, Xte) for net in nets_hocoffee]
print("Average accuracy: {:.2%}".format(np.mean([np.mean(y_p == yte) for y_p in y_preds_hocoffee])))
fig, ax = confusion(y_pred=np.concatenate(y_preds_hocoffee), y_true=np.concatenate([yte]*len(y_preds_hocoffee)), labels=le,
label_order=['left', 'frlf', 'frnt', 'frrg', 'rght', 'rear'])
"""
Explanation: HOCoffee
End of explanation
"""
Xtr, Xte, ytr, yte, ntr, nte, le = pickle.load(gzip.open('data/HOC-wflip.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, ytr, Cropper((123, 54)))
def mknet_hoc(nclass=len(le.classes_)):
return df.Sequential( # 3@123x54
df.SpatialConvolutionCUDNN( 3, 24, 3, 3), # -> 24@121x52
df.BatchNormalization(24),
df.ReLU(),
df.SpatialConvolutionCUDNN(24, 24, 3, 3), # -> 24@119x50
df.BatchNormalization(24),
df.ReLU(),
df.SpatialConvolutionCUDNN(24, 24, 3, 3), # -> 24@117x48
df.BatchNormalization(24),
df.SpatialMaxPoolingCUDNN(3, 2), # -> 24@39x24
df.ReLU(),
df.SpatialConvolutionCUDNN(24, 48, 3, 3), # -> 48@37x22
df.BatchNormalization(48),
df.ReLU(),
df.SpatialConvolutionCUDNN(48, 48, 3, 3), # -> 48@35x20
df.BatchNormalization(48),
df.ReLU(),
df.SpatialConvolutionCUDNN(48, 48, 3, 3), # -> 48@33x18
df.BatchNormalization(48),
df.SpatialMaxPooling(3, 3), # -> 48@11x6
df.ReLU(),
df.SpatialConvolutionCUDNN(48, 64, 3, 3), # -> 64@9x4
df.BatchNormalization(64),
df.ReLU(),
df.SpatialConvolutionCUDNN(64, 64, 3, 3), # -> 64@7x2
df.BatchNormalization(64),
df.ReLU(),
df.Dropout(0.2),
Flatten(),
df.Linear(64*7*2, 512),
df.ReLU(),
df.Dropout(0.5),
df.Linear(512, nclass, initW=df.init.const(0)),
df.SoftMax()
)
nets_hoc = [mknet_hoc(len(le.classes_)) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_hoc[0])/1000000))
trains_hoc = [dotrain(net, df.ClassNLLCriterion(), aug, Xtr, ytr.astype(df.floatX)) for net in nets_hoc]
for model in nets_hoc:
dostats(model, aug, Xtr, batchsize=100)
y_preds_hoc = [dopred_clf(net, aug, Xte) for net in nets_hoc]
print("Average accuracy: {:.2%}".format(np.mean([np.mean(y_p == yte) for y_p in y_preds_hoc])))
fig, ax = confusion(y_pred=np.concatenate(y_preds_hoc), y_true=np.concatenate([yte]*len(y_preds_hoc)), labels=le,
label_order=['left', 'front', 'right', 'back'])
"""
Explanation: HOC
End of explanation
"""
Xtr, Xte, ytr, yte, ntr, nte, le = pickle.load(gzip.open('data/QMULPoseHeads-wflip.pkl.gz', 'rb'))
"""
Explanation: QMUL - No Background
End of explanation
"""
not_bg = np.where(ytr != le.transform('background'))[0]
Xtr = Xtr[not_bg]
ytr = ytr[not_bg]
ntr = [ntr[i] for i in not_bg]
not_bg = np.where(yte != le.transform('background'))[0]
Xte = Xte[not_bg]
yte = yte[not_bg]
nte = [nte[i] for i in not_bg]
"""
Explanation: The following few steps remove the "background" class from the dataset.
First, remove all datapoints of that class.
End of explanation
"""
old_ytr = ytr.copy()
old_yte = yte.copy()
for new_lbl, old_lbl in enumerate([i for i, cls in enumerate(le.classes_) if cls != 'background']):
ytr[old_ytr == old_lbl] = new_lbl
yte[old_yte == old_lbl] = new_lbl
le.classes_ = np.array([c for c in le.classes_ if c != 'background'])
"""
Explanation: Then, shift the labels such that they are continuous again. No hole. Both in the targets as well as in the label encoder.
End of explanation
"""
aug = AugmentationPipeline(Xtr, ytr, Cropper((46,46)))
nets_qmul4 = [mknet_classification(len(le.classes_)) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_qmul4[0])/1000000))
trains_qmul4 = [dotrain(net, df.ClassNLLCriterion(), aug, Xtr, ytr.astype(df.floatX)) for net in nets_qmul4]
for model in nets_qmul4:
dostats(model, aug, Xtr, batchsize=100)
y_preds_qmul4 = [dopred_clf(net, aug, Xte) for net in nets_qmul4]
print("Average accuracy: {:.2%}".format(np.mean([np.mean(y_p == yte) for y_p in y_preds_qmul4])))
fig, ax = confusion(y_pred=np.concatenate(y_preds_qmul4), y_true=np.concatenate([yte]*len(y_preds_qmul4)), labels=le,
label_order=['left', 'front', 'right', 'back'])
"""
Explanation: Done, now go on training as usual.
End of explanation
"""
Xtr, Xte, ytr, yte, ntr, nte, le = pickle.load(gzip.open('data/QMULPoseHeads-wflip.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, ytr, Cropper((46,46)))
nets_qmul = [mknet_classification(len(le.classes_)) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_qmul[0])/1000000))
trains_qmul = [dotrain(net, df.ClassNLLCriterion(), aug, Xtr, ytr.astype(df.floatX)) for net in nets_qmul]
for model in nets_qmul:
dostats(model, aug, Xtr, batchsize=100)
y_preds_qmul = [dopred_clf(net, aug, Xte) for net in nets_qmul]
print("Average accuracy: {:.2%}".format(np.mean([np.mean(y_p == yte) for y_p in y_preds_qmul])))
fig, ax = confusion(y_pred=np.concatenate(y_preds_qmul), y_true=np.concatenate([yte]*len(y_preds_qmul)), labels=le,
label_order=['left', 'front', 'right', 'back', 'background'])
"""
Explanation: QMUL - With Background
End of explanation
"""
(Xtr, Ptr, Ttr, Rtr, names_tr), (Xte, Pte, Tte, Rte, names_te) = pickle.load(open('data/IDIAP.pkl', 'rb'))
ytr, yte = np.array([Ptr, Ttr, Rtr]).T, np.array([Pte, Tte, Rte]).T
aug = AugmentationPipeline(Xtr, None, Cropper((68, 68)))
def mknet_idiap(*head):
return mknet(*head, extra=df.SpatialMaxPoolingCUDNN(2,2))
nets_idiap = [mknet_idiap(df.Linear(512, 3, initW=df.init.const(0))) for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_idiap[0])/1000000))
trains_idiap = [dotrain(net, df.MADCriterion(), aug, Xtr, np.array([Ptr, Ttr, Rtr]).T) for net in nets_idiap]
for model in nets_idiap:
dostats(model, aug, Xtr, batchsize=100)
def ensemble_radians(angles):
return np.arctan2(np.mean(np.sin(angles), axis=0), np.mean(np.cos(angles), axis=0))
def dopred_trilin(model, aug, X, batchsize=100):
return dopred(model, aug, X, ensembling=ensemble_radians, output2preds=lambda x: x, batchsize=batchsize)
y_preds = [dopred_trilin(model, aug, Xte) for model in nets_idiap]
def maad_from_rad_as_deg(preds, reals):
return np.rad2deg(np.abs(np.arctan2(np.sin(reals-preds), np.cos(reals-preds))))
errs = maad_from_rad_as_deg(y_preds, yte)
errs.shape
mean_errs = np.mean(errs, axis=1)
std_errs = np.std(errs, axis=1)
"""
Explanation: IDIAP Regression - Trilinear
End of explanation
"""
avg_mean_err = np.mean(mean_errs, axis=0)
avg_std_err = np.mean(std_errs, axis=0)
print("Pan Error: {:5.2f}°±{:5.2f}°".format(avg_mean_err[0], avg_std_err[0]))
print("Tilt Error: {:5.2f}°±{:5.2f}°".format(avg_mean_err[1], avg_std_err[1]))
print("Roll Error: {:5.2f}°±{:5.2f}°".format(avg_mean_err[2], avg_std_err[2]))
"""
Explanation: Show the average across the five runs of the error and the error standard deviation.
End of explanation
"""
std_mean_err = np.std(mean_errs, axis=0)
std_std_err = np.std(std_errs, axis=0)
print("Pan Error stdev.: {:5.2f}°±{:5.2f}°".format(std_mean_err[0], std_std_err[0]))
print("Tilt Error stdev.: {:5.2f}°±{:5.2f}°".format(std_mean_err[1], std_std_err[1]))
print("Roll Error stdev.: {:5.2f}°±{:5.2f}°".format(std_mean_err[2], std_std_err[2]))
"""
Explanation: And now the standard deviation across the five runs of the above.
End of explanation
"""
def mknet_caviar():
return mknet(df.Linear(512, 1, initW=df.init.const(0)))
def ensemble_degrees(angles):
return np.arctan2(np.mean(np.sin(np.deg2rad(angles)), axis=0), np.mean(np.cos(np.deg2rad(angles)), axis=0))
def dopred_deg(model, aug, X, batchsize=100):
return dopred(model, aug, X, ensembling=ensemble_degrees, output2preds=lambda x: x, batchsize=batchsize)
def maad_from_deg(preds, reals):
return np.rad2deg(np.abs(np.arctan2(np.sin(np.deg2rad(reals-preds)), np.cos(np.deg2rad(reals-preds)))))
"""
Explanation: CAVIAR
End of explanation
"""
(Xtr, Ptr, *_), (Xte, Pte, *_) = pickle.load(gzip.open('data/CAVIAR-c.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, None, Cropper((46,46)))
nets_caviar_c = [mknet_caviar() for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_caviar_c[0])/1000000))
trains_caviar_c = [dotrain(net, df.MADCriterion(), aug, Xtr, Ptr[:,None]) for net in nets_caviar_c]
for model in nets_caviar_c:
dostats(model, aug, Xtr, batchsize=100)
y_preds = [dopred_deg(model, aug, Xte) for model in nets_caviar_c]
errs = maad_from_rad_as_deg(y_preds, np.deg2rad(Pte[:,None]))
errs.shape
mean_errs = np.mean(errs, axis=1)
std_errs = np.std(errs, axis=1)
"""
Explanation: "Clean"
End of explanation
"""
print("Pan Error: {:5.2f}°±{:5.2f}°".format(np.mean(mean_errs), np.mean(std_errs)))
print("Pan Error stdev.: {:5.2f}°±{:5.2f}°".format(np.std(mean_errs), np.std(std_errs)))
"""
Explanation: Same story as above.
End of explanation
"""
(Xtr, Ptr, *_), (Xte, Pte, *_) = pickle.load(gzip.open('data/CAVIAR-o.pkl.gz', 'rb'))
aug = AugmentationPipeline(Xtr, None, Cropper((46,46)))
nets_caviar_o = [mknet_caviar() for _ in range(5)]
print('{:.3f}M params'.format(count_params(nets_caviar_o[0])/1000000))
trains_caviar_o = [dotrain(net, df.MADCriterion(), aug, Xtr, Ptr[:,None]) for net in nets_caviar_o]
for model in nets_caviar_o:
dostats(model, aug, Xtr, batchsize=100)
y_preds = [dopred_deg(model, aug, Xte) for model in nets_caviar_o]
errs = maad_from_rad_as_deg(y_preds, np.deg2rad(Pte[:,None]))
errs.shape
mean_errs = np.mean(errs, axis=1)
std_errs = np.std(errs, axis=1)
print("Pan Error: {:5.2f}°±{:5.2f}°".format(np.mean(mean_errs), np.mean(std_errs)))
print("Pan Error stdev.: {:5.2f}°±{:5.2f}°".format(np.std(mean_errs), np.std(std_errs)))
"""
Explanation: With occlusions
End of explanation
"""
|
Vvkmnn/books | AutomateTheBoringStuffWithPython/lesson38.ipynb | gpl-3.0 | import webbrowser
webbrowser.open('https://automatetheboringstuff.com')
"""
Explanation: Lesson 38:
The Webbrowser Module
The webbrowser module has tools to manage a webbrowser from Python.
webbrowser.open() opens a new browser window at a url:
End of explanation
"""
import webbrowser, sys, pyperclip
sys.argv # Pass system arguments to program; mapit.py '870' 'Valencia' 'St.'
# Check if command line arguments were passed; useful if this existed as a .py in the path (run via mapit 'Some address')
# For Jupyter version, will just pass in arguments earlier in document
if len(sys.argv) > 1:
# Join individual arguments into one string: mapit.py '870' 'Valencia' 'St.' > mapit.py '870 Valencia St.'
address = ' '.join(sys.argv[1:])
# Skip the first argument (mapit.py), but join every slice from [1:] with ' '
else:
# Read the clipboard if no arguments found
address = pyperclip.paste()
# Example Google Map URLs
# Default: https://www.google.com/maps/place/870+Valencia+St,+San+Francisco,+CA+94110/@37.7589845,-122.4237899,17z/data=!3m1!4b1!4m2!3m1!1s0x808f7e3db2792a09:0x4fc69a2eea9fb3d3
# Test: https://www.google.com/maps/place/870+Valencia+St,+San+Francisco,+CA+94110/
# Test: https://www.google.com/maps/place/870 Valencia St
# This works, so just concatenate the default google maps url with a spaced address variable
webbrowser.open('https://www.google.com/maps/place/' + address)  # note the trailing slash before the address
"""
Explanation: This is the primary function of the webbrowser module, but it can be used as part of a script to improve web scraping. selenium is a more full featured web browser module.
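A couple of sibling helpers live in the same standard-library module (shown but not called here, so no windows pop up; whether a new window or tab really opens depends on your OS and browser):

```python
import webbrowser

new_window = webbrowser.open_new      # like open(), but asks for a new browser window
new_tab = webbrowser.open_new_tab     # like open(), but asks for a new tab
# e.g. new_tab('https://automatetheboringstuff.com')
```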
Google Maps Opener:
End of explanation
"""
#!/usr/bin/env bash
#python3 mapit.py "$@"
"""
Explanation: To run this as a script, we would need the full path to Python, followed by the script, followed by the address.
An easy way to skip this process is to create a shell script that holds the script's path and passes the arguments along:
End of explanation
"""
|
Olsthoorn/TransientGroundwaterFlow | Syllabus_in_notebooks/Sec5_5_5_two_sides_equal_sudden_change.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
import scipy.special as sp
"""
Explanation: Section 5.5.5
Superposition in space and time with the erfc function
IHE, Delft, 2010-01-06
@T.N.Olsthoorn
Two sides of strip of land with equal sudden change of surface-water stage on both sides.
See page 63 of the syllabus
Set up a mirror scheme for the case of a strip of land bounded by straigt surface water on either side, where the surface water stage at both sides suddenly changes by the same amount.
End of explanation
"""
def newfig(title='?', xlabel='?', ylabel='?', xlim=None, ylim=None,
xscale='linear', yscale='linear', size_inches=(14, 8)):
'''Setup a new axis for plotting'''
fig, ax = plt.subplots()
fig.set_size_inches(size_inches)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xscale(xscale)
ax.set_yscale(yscale)
if xlim is not None: ax.set_xlim(xlim)
if ylim is not None: ax.set_ylim(ylim)
ax.grid(True)
return ax
# Aquifer and land-strip properties
kD = 400 # m2/d
S = 0.1 # [-]
L = 200 # m (width of land strip)
A = 2 # m (sudden change of river stage)
# Choose points within the cross section to compute the heads
x = np.linspace(-L/2, +L/2, 101)
# Choose visualization times as multiples of the halftime T50
# The halftime would be T50 = 0.24 sqrt(L^2 S / kD), see syllabus.
T50 = 0.24 * np.sqrt(L ** 2 * S / kD)
times = np.arange(T50, 11 * T50, T50)
# Superposition in space using the erfc function, see syllabus page 62
ax = newfig('Superposition in space', 'x [m]', 'head [m]')
for t in times:
s = np.zeros_like(x)
for i in range(1, 11):
ds = A * (-1)**(i-1) * (
sp.erfc((L/2 * (2 * i -1) + x) * np.sqrt(S / (4 * kD * t))) +
sp.erfc((L/2 * (2 * i -1) - x) * np.sqrt(S / (4 * kD * t))))
s = s + ds
plt.plot(x, s, label='t = {:4.2f} d'.format(t))
plt.legend()
plt.ylim(0, A)
plt.show()
"""
Explanation: Convenience function for setting up graphs
End of explanation
"""
# Using the analytical solution on page 63 of the syllabus
ax = newfig('Solution p63 with cos() * exp()', 'x [m]', 's(x, t) [m]', ylim=(0, 2))
b = L/2
for t in times:
s = np.zeros_like(x)
for j in range(1, 11):
ds = A * 4/np.pi * \
(-1)**(j-1) / (2*j - 1) *\
np.cos((2 * j -1) * np.pi/2 * x/b) *\
np.exp(-(2 * j -1)**2 * (np.pi/2)**2 * kD/(b**2 * S) * t)
s = s + ds
ax.plot(x, s, label='t = {:4.2f} d'.format(t))
plt.legend()
"""
Explanation: Compare this result with that of the analytical formula on page 63 of the syllabus, which is a summation of products of a cosine and an exponential. The two expressions are completely different, yet they yield the same result.
There is a difference, in that the solution with the erfc functions represents the head in the strip of land after the stage on both sides suddenly changed by a fixed amount, whereas this one represents a strip in which the groundwater head is initially a uniform distance A above that of the surface water on both sides, after which it drains out. Hence, A minus the first solution equals the second.
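That equivalence can be verified numerically (a quick sketch reusing the parameter values from above; both series are truncated at 50 terms, and t = 1 d, x = 30 m are arbitrary choices):

```python
import math

kD, S, A, L = 400.0, 0.1, 2.0, 200.0      # same aquifer and strip values as above
b, t, x = L / 2, 1.0, 30.0                # arbitrary test time [d] and location [m]

# erfc solution (superposition of mirror images), truncated series
s_erfc = sum(A * (-1) ** (i - 1) * (
    math.erfc((b * (2 * i - 1) + x) * math.sqrt(S / (4 * kD * t))) +
    math.erfc((b * (2 * i - 1) - x) * math.sqrt(S / (4 * kD * t))))
    for i in range(1, 51))

# cosine-exponential series solution from p. 63
s_cos = sum(A * 4 / math.pi * (-1) ** (j - 1) / (2 * j - 1) *
    math.cos((2 * j - 1) * math.pi / 2 * x / b) *
    math.exp(-(2 * j - 1) ** 2 * (math.pi / 2) ** 2 * kD / (b ** 2 * S) * t)
    for j in range(1, 51))

assert abs((A - s_erfc) - s_cos) < 1e-9   # A minus the erfc solution equals the series
```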
End of explanation
"""
|
opengeostat/pygslib | pygslib/Ipython_templates/.ipynb_checkpoints/qqplt_html-checkpoint.ipynb | mit | #general imports
import pygslib
"""
Explanation: PyGSLIB
QQ and PP plots
End of explanation
"""
#get the data in gslib format into a pandas Dataframe
cluster= pygslib.gslib.read_gslib_file('../datasets/cluster.dat')
true= pygslib.gslib.read_gslib_file('../datasets/true.dat')
true['Declustering Weight'] = 1
"""
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
End of explanation
"""
npoints = len(cluster['Primary'])
true['Declustering Weight'] = 1
# using declustering weight
parameters_qpplt = {
# gslib parameters for qq-pp calculation
'qqorpp': 0, # integer (Optional, default 0, Q-Q plot). Q-Q plot (qqorpp=0); P-P plot (qqorpp=1)
    #'npts' : None, # integer (Optional, default min length of va1 and va2). Number of points to use on the Q-Q or P-P plot (should not exceed the smallest number of data in data1 / data2)
'va1' : cluster['Primary'], # rank-1 array('d') with bounds (nd). Variable 1
'wt1' : cluster['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 1.
'va2' : true['Primary'], # rank-1 array('d') with bounds (nd). Variable 2
'wt2' : true['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 2.
# visual parameters for figure (if a new figure is created)
#'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
#'title' : None, # string (Optional, "QQ plot" or "PP plot"). Figure title
#'xlabel' : 'Z1', # string (Optional, default "Z1" or "P1"). X axis label
#'ylabel' : 'Z2', # string (Optional, default "Z2" or "P2"). Y axis label
    #'xlog' : True, # boolean (Optional, default True). If true plot X axis in log scale.
    #'ylog' : True, # boolean (Optional, default True). If true plot Y axis in log scale.
# visual parameter for the probplt
#'style' : None, # string with valid bokeh chart type
'color' : 'black', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered', # string (Optional, default "NA").
#'alpha' : None, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
#'lwidth': None, # float (Optional, default 1). Line width
    # legend
'legendloc': None} # float (Optional, default 'bottom_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left
# Calculate the declustered qq plot (using the declustering weights)
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# Calculate the naive (clustered) qqplot
# a) get array of ones as weights
cluster['naive']= cluster['Declustering Weight'].values*0 +1
# update parameter dic
parameters_qpplt['wt1'] = cluster['naive']
parameters_qpplt['color'] = 'blue'
parameters_qpplt['legend']='Clustered'
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# show the plot
pygslib.plothtml.show(fig)
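Independent of pygslib, the effect of declustering weights on the plotted quantiles can be illustrated with a small weighted-quantile sketch in plain NumPy. The interpolation convention below is one simple choice and is not claimed to match gslib's internals:

```python
import numpy as np

def weighted_quantiles(values, weights, probs):
    """Quantiles of a sample under (declustering) weights - simple sketch."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = (np.cumsum(w) - 0.5 * w) / w.sum()  # weighted plotting positions
    return np.interp(probs, cdf, v)

vals = np.array([1.0, 2.0, 3.0, 4.0])
equal = weighted_quantiles(vals, np.ones(4), [0.5])
skewed = weighted_quantiles(vals, np.array([1.0, 1.0, 1.0, 5.0]), [0.5])
print(equal, skewed)  # the heavy weight on 4.0 pulls the median upward
```

Pairing the weighted quantiles of two variables at the same probabilities is exactly what a declustered Q-Q plot displays.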
"""
Explanation: QQ-Plot
End of explanation
"""
|
mdenker/elephant | doc/tutorials/statistics.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from quantities import ms, s, Hz
from elephant.spike_train_generation import homogeneous_poisson_process, homogeneous_gamma_process
help(homogeneous_poisson_process)
"""
Explanation: Statistics
The executed version of this tutorial is at https://elephant.readthedocs.io/en/latest/tutorials/statistics.html
This notebook provides an overview of the functions provided by the elephant statistics module.
1. Generating homogeneous Poisson and Gamma processes
All measures presented here require one or two spiketrains as input. We start by importing physical quantities (seconds, milliseconds and Hertz) and two generators of spiketrains - homogeneous_poisson_process() and homogeneous_gamma_process() functions from elephant.spike_train_generation module.
Let's explore homogeneous_poisson_process() function in details with Python help() command.
End of explanation
"""
t_start = 275.5 * ms
print(t_start)
"""
Explanation: The function requires four parameters: the firing rate of the Poisson process, the start time and the stop time, and the refractory period (default refractory_period=None means no refractoriness). The first three parameters are specified as Quantity objects: these are essentially arrays or numbers with a unit of measurement attached. To specify t_start to be equal to 275.5 milliseconds, you write
End of explanation
"""
t_start2 = 3. * s
t_start_sum = t_start + t_start2
print(t_start_sum)
"""
Explanation: The nice thing about Quantities is that once the unit is specified you don't need to worry about rescaling the values to a common unit, because Quantities takes care of this for you:
End of explanation
"""
np.random.seed(28) # to make the results reproducible
spiketrain1 = homogeneous_poisson_process(rate=10*Hz, t_start=0.*ms, t_stop=10000.*ms)
spiketrain2 = homogeneous_gamma_process(a=3, b=10*Hz, t_start=0.*ms, t_stop=10000.*ms)
"""
Explanation: For a complete set of operations with quantities refer to its documentation.
Let's get back to spiketrains generation. In this example we'll use one Poisson and one Gamma processes.
End of explanation
"""
print("spiketrain1 type is", type(spiketrain1))
print("spiketrain2 type is", type(spiketrain2))
"""
Explanation: Both spiketrains are instances of neo.core.spiketrain.SpikeTrain class:
End of explanation
"""
print(f"spiketrain2 has {len(spiketrain2)} spikes:")
print(" t_start:", spiketrain2.t_start)
print(" t_stop:", spiketrain2.t_stop)
print(" spike times:", spiketrain2.times)
"""
Explanation: The important properties of a SpikeTrain are:
times stores the spike times in a numpy array with the specified units;
t_start - the beginning of the recording/generation;
t_stop - the end of the recording/generation.
End of explanation
"""
plt.figure(figsize=(8, 3))
plt.eventplot([spiketrain1.magnitude, spiketrain2.magnitude], linelengths=0.75, color='black')
plt.xlabel('Time (ms)', fontsize=16)
plt.yticks([0,1], labels=["spiketrain1", "spiketrain2"], fontsize=16)
plt.title("Figure 1");
"""
Explanation: Before exploring the statistics of spiketrains, let's look at the rasterplot. In the next section we'll numerically compare the difference between the two.
End of explanation
"""
from elephant.statistics import mean_firing_rate
print("The mean firing rate of spiketrain1 is", mean_firing_rate(spiketrain1))
print("The mean firing rate of spiketrain2 is", mean_firing_rate(spiketrain2))
"""
Explanation: 2. Rate estimation
Elephant offers three approaches for estimating the underlying rate of a spike train.
2.1. Mean firing rate
The simplest approach is to assume a stationary firing rate and only use the total number of spikes and the duration of the spike train to calculate the average number of spikes per time unit. This results in a single value for a given spiketrain.
End of explanation
"""
fr1 = len(spiketrain1) / (spiketrain1.t_stop - spiketrain1.t_start)
fr2 = len(spiketrain2) / (spiketrain2.t_stop - spiketrain2.t_start)
print("The mean firing rate of spiketrain1 is", fr1)
print("The mean firing rate of spiketrain2 is", fr2)
"""
Explanation: The mean firing rate of spiketrain1 is higher than that of spiketrain2, as expected from Figure 1.
Let's quickly check the correctness of the mean_firing_rate() function by computing the firing rates manually:
End of explanation
"""
mean_firing_rate(spiketrain1, t_start=0*ms, t_stop=1000*ms)
"""
Explanation: Additionally, the period within the spike train during which to estimate the firing rate can be further limited using the t_start and t_stop keyword arguments. Here, we limit the firing rate estimation to the first second of the spiketrain.
End of explanation
"""
multi_spiketrains = np.array([[1,2,3],[4,5,6],[7,8,9]])*ms
mean_firing_rate(multi_spiketrains, axis=0, t_start=0*ms, t_stop=5*ms)
"""
Explanation: In some (rare) cases multiple spiketrains can be represented in multidimensional arrays when they contain the same number of spikes. In such cases, the mean firing rate can be calculated for multiple spiketrains at once by specifying the axis along which to calculate the firing rate. By default, if no axis is specified, all spiketrains are pooled together before estimating the firing rate.
End of explanation
"""
from elephant.statistics import time_histogram, instantaneous_rate
histogram_count = time_histogram([spiketrain1], 500*ms)
print(type(histogram_count), f"of shape {histogram_count.shape}: {histogram_count.shape[0]} samples, {histogram_count.shape[1]} channel")
print('sampling rate:', histogram_count.sampling_rate)
print('times:', histogram_count.times)
print('counts:', histogram_count.T[0])
"""
Explanation: 2.2. Time histogram
The time histogram is a time-resolved way of estimating the firing rate. Here, the spiketrains are binned, and either the count, the mean count, or the rate of the spiketrains is returned, depending on the output parameter. The result is a count (mean count / rate value) for each of the bins evaluated. This is represented as a neo AnalogSignal object with the corresponding sampling rate and the count (mean count/rate) values as data.
Here, we compute the counts of spikes in bins of 500 millisecond width.
End of explanation
"""
histogram_rate = time_histogram([spiketrain1], 500*ms, output='rate')
print('times:', histogram_rate.times)
print('rate:', histogram_rate.T[0])
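The binning behind time_histogram can be sketched in plain NumPy. This is a simplified stand-in for illustration, not Elephant's implementation:

```python
import numpy as np

def time_histogram_rate(spike_times, t_start, t_stop, binsize):
    """Spike counts per bin, converted to a rate in 1/time-unit (sketch)."""
    edges = np.arange(t_start, t_stop + binsize, binsize)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / binsize

rate = time_histogram_rate(np.array([0.1, 0.2, 0.7, 1.3, 1.9]), 0.0, 2.0, 0.5)
print(rate)  # two spikes in the first 0.5 s bin -> 4 spikes/s, then 2 spikes/s
```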
"""
Explanation: AnalogSignal is a container for analog signals of any type, sampled at a fixed sampling rate.
In our case,
The shape of histogram_count is (20, 1) - 20 samples in total, 1 channel. A "channel" comes from the fact that AnalogSignals are naturally used to represent recordings from Utah arrays with many electrodes (channels) in electrophysiological experiments. In such experiments AnalogSignal stores the voltage through time for each electrode (channel), leading to a two-dimensional array of shape (#samples, #channels). In our example, however, AnalogSignal stores a dimensionless type of data because the counts - the number of spikes per bin - have no physical unit, of course. And a "channel" is introduced to be consistent with the definition of AnalogSignal, which should always be a two-dimensional array.
The sampling rate of histogram_count is 0.002 1/ms or 2 Hz. Thus each second interval contains 2 samples.
.times property is a numpy array with seconds or milliseconds unit.
The data itself, the counts, is dimensionless.
Alternatively, time_histogram can also normalize the resulting array to represent the counts mean or the rate.
End of explanation
"""
inst_rate = instantaneous_rate(spiketrain1, sampling_period=50*ms)
"""
Explanation: Additionally, time_histogram can be limited to a shorter time period by using the keyword arguments t_start and t_stop, as described for mean_firing_rate.
2.3. Instantaneous rate
The instantaneous rate, similar to the time histogram (see above), provides a continuous estimate of the underlying firing rate of a spike train. Here, the firing rate is estimated as a convolution of the spiketrain with a firing rate kernel representing the contribution of a single spike to the firing rate. In contrast to the time histogram, the instantaneous rate provides a smooth firing rate estimate as it does not rely on binning of a spiketrain.
Estimation of the instantaneous rate requires a sampling period on which the firing rate is estimated. Here we use a sampling period of 50 milliseconds.
End of explanation
"""
print(type(inst_rate), f"of shape {inst_rate.shape}: {inst_rate.shape[0]} samples, {inst_rate.shape[1]} channel")
print('sampling rate:', inst_rate.sampling_rate)
print('times (first 10 samples): ', inst_rate.times[:10])
print('instantaneous rate (first 10 samples):', inst_rate.T[0, :10])
"""
Explanation: The resulting rate estimate is again an AnalogSignal with the sampling rate of 1 / (50 ms).
End of explanation
"""
from elephant.kernels import GaussianKernel
instantaneous_rate(spiketrain1, sampling_period=20*ms, kernel=GaussianKernel(200*ms))
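Under the hood, a kernel rate estimate is a convolution of the binned spike train with a normalized kernel. A minimal NumPy sketch (not Elephant's implementation; the bin size and the 4-sigma kernel truncation are arbitrary choices made here):

```python
import numpy as np

def gaussian_rate(spike_times, t_start, t_stop, dt, sigma):
    """Convolve a binned spike train with a normalized Gaussian kernel (sketch)."""
    edges = np.arange(t_start, t_stop + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    k_t = np.arange(-4 * sigma, 4 * sigma + dt, dt)
    kernel = np.exp(-k_t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum() * dt          # kernel integrates to 1 -> rate in spikes/time
    return np.convolve(counts, kernel, mode='same')

rate = gaussian_rate(np.array([1.0, 1.05, 1.1, 3.0]), 0.0, 4.0, 0.01, 0.2)
# away from the edges, the estimate integrates back to the number of spikes
print(rate.sum() * 0.01)
```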
"""
Explanation: Additionally, the convolution kernel type can be specified via the kernel keyword argument. E.g. to use a Gaussian kernel, we do as follows:
End of explanation
"""
plt.figure(dpi=150)
# plotting the original spiketrain
plt.plot(spiketrain1, [0]*len(spiketrain1), 'r', marker=2, ms=25, markeredgewidth=2, lw=0, label='poisson spike times')
# mean firing rate
plt.hlines(mean_firing_rate(spiketrain1), xmin=spiketrain1.t_start, xmax=spiketrain1.t_stop, linestyle='--', label='mean firing rate')
# time histogram
plt.bar(histogram_rate.times, histogram_rate.magnitude.flatten(), width=histogram_rate.sampling_period, align='edge', alpha=0.3, label='time histogram (rate)')
# instantaneous rate
plt.plot(inst_rate.times.rescale(ms), inst_rate.rescale(histogram_rate.dimensionality).magnitude.flatten(), label='instantaneous rate')
# axis labels and legend
plt.xlabel('time [{}]'.format(spiketrain1.times.dimensionality.latex))
plt.ylabel('firing rate [{}]'.format(histogram_rate.dimensionality.latex))
plt.xlim(spiketrain1.t_start, spiketrain1.t_stop)
plt.legend()
plt.show()
"""
Explanation: To compare all three methods of firing rate estimation, we visualize the results of all methods in a common plot.
End of explanation
"""
spiketrain_list = [
homogeneous_poisson_process(rate=10.0*Hz, t_start=0.0*s, t_stop=100.0*s)
for i in range(100)]
"""
Explanation: Coefficient of Variation (CV)
In this section we will numerically verify that the coefficient of variation (CV), a measure of the variability of inter-spike intervals, equals 1 for a spike train that is modeled as a random (stochastic) Poisson process.
Let us generate 100 independent Poisson spike trains for 100 seconds each with a rate of 10 Hz for which we later will calculate the CV. For simplicity, we will store the spike trains in a list.
End of explanation
"""
plt.figure(dpi=150)
plt.eventplot([st.magnitude for st in spiketrain_list], linelengths=0.75, linewidths=0.75, color='black')
plt.xlabel("Time, s")
plt.ylabel("Neuron id")
plt.xlim([0, 1]);
"""
Explanation: Let's look at the rasterplot of the first second of spiketrains.
End of explanation
"""
from elephant.statistics import isi, cv
cv_list = [cv(isi(spiketrain)) for spiketrain in spiketrain_list]
# let's plot the histogram of CVs
plt.figure(dpi=100)
plt.hist(cv_list)
plt.xlabel('CV')
plt.ylabel('count')
plt.title("Coefficient of Variation of homogeneous Poisson process");
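The same claim can be checked without Elephant: a homogeneous Poisson process has exponentially distributed ISIs, whose standard deviation equals their mean, so the CV tends to 1. A plain-NumPy sketch (the rate and sample size are arbitrary choices):

```python
import numpy as np

def cv_of_isis(spike_times):
    """Coefficient of variation of the inter-spike intervals: std / mean."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

rng = np.random.default_rng(42)
isis = rng.exponential(scale=1 / 10.0, size=10_000)  # exponential ISIs, rate 10 Hz
spikes = np.cumsum(isis)
cv_value = cv_of_isis(spikes)
print(cv_value)  # close to 1 for a Poisson process
```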
"""
Explanation: From the plot you can see the random nature of each Poisson spike train. Let us verify it numerically by calculating the distribution of the 100 CVs obtained from inter-spike intervals (ISIs) of these spike trains.
For each spike train in our list, we first call the isi() function which returns an array of all N-1 ISIs for the N spikes in the input spike train. We then feed the list of ISIs into the cv() function, which returns a single value for the coefficient of variation:
End of explanation
"""
|
quoniammm/happy-machine-learning | Udacity-ML/boston_housing-master_2/boston_housing.ipynb | mit | # Import libraries necessary for this project
# 载入此项目所需要的库
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
# Pretty display for notebooks
# 让结果在notebook中显示
%matplotlib inline
# Load the Boston housing dataset
# 载入波士顿房屋的数据集
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
# 完成
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
"""
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation and Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this file, some example code has already been provided for you, but you will also need to implement additional functionality for the project to run successfully. You will not need to modify the included code beyond what is requested. Sections that begin with 'Exercise' indicate that the following content requires functionality you must implement. Each section comes with detailed instructions, and the parts to implement are marked with 'TODO' in the comments. Please read all the hints carefully!
In addition to implementing code, you must answer some questions related to the project and your implementation. Each question you need to answer is preceded by a 'Question X' header. Read each question carefully and provide a complete answer in the 'Answer' text box following the question. Your project will be graded on your answers to the questions and the functionality of your code.
Tip: Code and Markdown cells can be executed with the Shift + Enter shortcut, and a Markdown cell can be edited by double-clicking it.
Getting Started
In this project, you will use data collected from homes in suburbs of Boston, Massachusetts, to train and test a model and evaluate its performance and predictive power. A model trained on this data that is seen as a good fit can then be used to make certain predictions about a home, in particular its monetary value. A model like this would be very valuable for someone like a real estate agent, who could use the information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978, and the 506 data points cover 14 features of homes from various suburbs of Boston. For this project, the following preprocessing steps have been applied to the original dataset:
- 16 data points with a 'MEDV' value of 50.0 have been removed, as they likely contain missing or censored values.
- 1 data point with an 'RM' value of 8.78 has been removed, as it is an outlier.
- For this project, the features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential; the remaining irrelevant features have been removed.
- The 'MEDV' feature has been scaled as necessary to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, together with some of the Python libraries required for this project. You will know the dataset loaded successfully if its size is reported.
End of explanation
"""
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
"""
Explanation: Data Exploration
In the first section of this project, you will make a cursory investigation of the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process will help you better understand and justify your results.
Since the main goal of this project is to construct a working model that can predict the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', is the variable we seek to predict. They are stored in features and prices, respectively.
Exercise: Calculate Statistics
Your first coding exercise is to calculate descriptive statistics about the Boston housing prices. numpy has already been imported for you; use this library to perform the necessary calculations. These statistics will be important later on when analyzing the model's prediction results.
In the code cell below, you will need to:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV' stored in prices;
- Store each calculation in its respective variable.
End of explanation
"""
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
"""
Explanation: Question 1 - Feature Observation
As stated earlier, in this project we focus on three values. For each data point:
- 'RM' is the average number of rooms among homes in the neighborhood;
- 'LSTAT' is the percentage of homeowners in the neighborhood considered to be lower class (working poor);
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in its value would lead to an increase or a decrease of 'MEDV'? Justify each answer.
Hint: Would you expect a home with an 'RM' value of 6 to be worth more or less than a home with an 'RM' value of 7?
Answer:
As 'RM' increases, 'MEDV' increases, because the homes are larger;
As 'LSTAT' increases, 'MEDV' decreases, because more residents are low income;
As 'PTRATIO' increases, 'MEDV' decreases, because a higher student-teacher ratio means fewer educational resources per student.
Developing a Model
In the second section of this project, you will develop the tools and techniques necessary for a model to make predictions. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Exercise: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, for example by calculating some type of error or goodness of fit. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination is a commonly used statistic in regression analysis and is often taken as a measure of how good a model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can also be given a negative R<sup>2</sup>, which indicates that the model's predictions can at times be much worse than simply predicting the mean of the target variable.
In the performance_metric function in the code cell below, you will need to:
- Use r2_score from sklearn.metrics to calculate the R<sup>2</sup> between y_true and y_predict as a measure of performance.
- Assign the performance score to the score variable.
End of explanation
"""
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
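As a sanity check, the same score can be computed by hand from the definition R^2 = 1 - SS_res / SS_tot:

```python
import numpy as np

def r2_manual(y_true, y_pred):
    """Coefficient of determination computed directly from its definition."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

score_by_hand = round(r2_manual([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]), 3)
print(score_by_hand)  # 0.923, matching r2_score
```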
"""
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? If so, explain why; if not, explain why not.
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
"""
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
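The splitting idea, and the k-fold cross-validation used later, can be sketched without sklearn; each of the k folds serves exactly once as the test set:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k shuffled (train, test) pairs (sketch)."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]

splits = kfold_indices(10, 5)
print([len(test) for _, test in splits])  # every sample is tested exactly once
```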
"""
Explanation: Answer: I would consider it successful. The coefficient of determination ranges from 0 to 1; the closer it is to 1, the better the model predicts the target variable. The computed coefficient of determination is 0.923, which indicates that the model captures the variation of the target variable well.
Exercise: Shuffle and Split Data
Next, you will need to split the Boston housing dataset into training and testing subsets. Typically, the data is also shuffled in this process to remove any bias caused by the ordering of the dataset.
In the code cell below, you will need to:
- Use train_test_split from sklearn.model_selection to split both features and prices into training and testing subsets;
- Split 80% of the data for training and 20% for testing;
- Set a value for random_state in train_test_split, which ensures consistent results;
- Assign the resulting subsets to X_train, X_test, y_train, and y_test.
End of explanation
"""
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
"""
Explanation: Question 3 - Training and Testing
What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong if there were no data on which to test the model?
Answer: It allows us to estimate the model's generalization error on the testing subset and thereby assess how good the model is.
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performance on various subsets of data. Additionally, you'll focus on one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how this parameter affects model performance. Graphing your model's performance is very helpful in the analysis process; visualization lets us see behaviors that results alone would not reveal.
Learning Curves
The code cell below produces four graphs of a decision tree model at different maximum depths. Each curve visualizes how the training and testing scores of the model's learning curves change as the amount of training data increases. Note that the shaded region of a curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing portions using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the question.
End of explanation
"""
vs.ModelComplexity(X_train, y_train)
"""
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state its maximum depth. How does the score of the training curve change as more training data is added? What about the testing curve? Would more training data effectively improve the model's performance?
Hint: Do the learning curves eventually converge to particular scores?
Answer: The second one, with a maximum depth of 3. The training curve gradually decreases and the testing curve gradually increases, but both eventually level off, so more training data would not effectively improve the model's performance.
Complexity Curves
The code cell below produces a graph of a decision tree model that has been trained and validated on the training data using different maximum depths. The graph contains two curves, one for training and one for testing. Similar to the learning curves, the shaded regions denote the uncertainty in those curves, and both the training and testing portions are scored using the performance_metric function.
Run the code cell below and use the resulting graph to answer the following two questions.
End of explanation
"""
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
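The mechanics of grid search reduce to scoring every parameter combination and keeping the best. A toy sketch in plain Python (the scoring function here is a made-up stand-in for cross-validated R^2, with its peak placed arbitrarily at max_depth=4):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination; return the best (params, score)."""
    names = list(param_grid)
    best = None
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if best is None or score > best[1]:
            best = (params, score)
    return best

best_params, best_score = grid_search(
    {'max_depth': range(1, 11)},
    lambda p: -(p['max_depth'] - 4) ** 2)   # toy score peaking at depth 4
print(best_params)  # {'max_depth': 4}
```

GridSearchCV does the same, except that each combination is scored by cross-validation rather than a single function call.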
"""
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, do its predictions suffer from high bias or from high variance? What about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know whether a model is suffering from high bias or high variance?
Answer: At a maximum depth of 1, the model suffers from high bias: the scores for both the testing and the training data are low and the gap between them is small, showing that the model cannot predict the data well.
At a maximum depth of 10, the model suffers from high variance: the gap between the testing and training scores is large, which indicates overfitting.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition leads you to this answer?
Answer: 3. At this depth the gap between the testing and training scores is smallest, and the testing score reaches its maximum.
Evaluating Model Performance
In this final section of the project, you will construct a model yourself and, using the optimized fit_model function, predict a home's value based on the client's house features.
Question 7 - Grid Search
What is the grid search technique, and how can it be used to optimize a learning algorithm?
Answer:
It is an algorithm that lays the candidate parameter values out on a grid.
It automatically generates a "grid" of all combinations of the different parameter values:
===================================
('param1', param3) | ('param1', param4)
('param2', param3) | ('param2', param4)
===================================
By trying every parameter combination used in the "grid" and selecting the best-performing combination, it optimizes the learning algorithm.
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model? How does grid search combine with cross-validation to select the best parameter combination?
Hint: Much like the reasoning for needing a testing set, what could go wrong when performing grid search without cross-validation? What does the 'cv_results' attribute of GridSearchCV tell us?
Answer:
K-fold cross-validation splits the training data evenly into k containers; each container in turn serves as the test data while the rest are used for training, and after k rounds the results are averaged to obtain a higher-accuracy estimate.
It gives the grid search's results higher accuracy. Without cross-validation, there is no guarantee that all the data can be used for training, since part of it must be held out as a test set; the model's generalization error would grow, degrading the results of the grid search.
Grid search has the fitting function try every parameter combination, scores each with cross-validation, and returns a suitable estimator automatically tuned to the best parameter combination.
Exercise: Fitting a Model
In this final exercise, you will need to bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to find the best 'max_depth' parameter. You can think of the 'max_depth' parameter as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are a kind of supervised learning algorithm.
In addition, you will find your implementation uses ShuffleSplit(). It is another form of cross-validation (see the variable 'cv_sets'). While it is not the K-Fold cross-validation described in Question 8, this type of cross-validation is just as useful! ShuffleSplit() creates 10 ('n_splits') shuffled sets, and in each set 20% ('test_size') of the data is used as the validation set. As you work on your implementation, think about how this is similar to and different from K-Fold cross-validation.
In the fit_model function below, you will need to:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor;
- Assign this regressor to the 'regressor' variable;
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign it to the 'params' variable;
- Use make_scorer from sklearn.metrics to create a scoring function;
- Pass performance_metric as a parameter to this function;
- Assign the scoring function to the 'scoring_fnc' variable;
- Use GridSearchCV from sklearn.model_selection to create a grid search object;
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to this object;
- Assign the GridSearchCV object to the 'grid' variable.
If you are not familiar with passing multiple parameters to a Python function, you can refer to this MIT course video.
End of explanation
"""
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
"""
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what questions to ask about the input data and can respond with a prediction for the target variable. You can use these predictions to gain information about data whose value of the target variable is unknown, provided that the data was not part of the training set.
Question 9 - Optimal Model
What is the maximum depth of the optimal model? Does this answer match your guess in Question 6?
Run the code cell below to fit the decision tree regressor to the training data and produce an optimized model.
End of explanation
"""
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
data['MEDV'].describe()
"""
Explanation: Answer: 4. This differs from the guess, which was 3.
Question 10 - Predicting Selling Prices
Imagine that you are a real estate agent in the Boston area looking to use this model to help price homes that your clients wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
What price would you recommend each client sell their home at? Do these prices seem reasonable given the values of the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your answer.
Run the code cell below to have your optimized model make predictions for each client's home.
End of explanation
"""
vs.PredictTrials(features, prices, fit_model, client_data)
"""
Explanation: Answer:
Client 1: $403,025.00.
Client 2: $237,478.72.
Client 3: $931,636.36.
These prices are reasonable. Take Client 3 as an example: the home has the most rooms, the lowest neighborhood poverty level, and the most abundant educational resources, and is therefore the most expensive. By the same reasoning, the predictions for Clients 1 and 2 are also sensible. Furthermore, compared with the output of `data['MEDV'].describe()`, all three prices fall within a reasonable range.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes a model is either too complex or too simple to sufficiently generalize to new data; sometimes the learning algorithm used by the model is not appropriate for the structure of the given data; and sometimes the data itself may be too noisy or contain too few samples for the model to accurately predict the target variable. In these cases we say the model is underfitted. Run the code cell below to execute the fit_model function ten times with different training and testing sets. Observe how the prediction for a specific client changes as the training data varies.
End of explanation
"""
### Your code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('bj_housing.csv')
prices = data['Value']
features = data.drop('Value', axis = 1)
print features.head()
print prices.head()
# Success
# print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print "Training and testing split was successful."
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, params, scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
client_data = [[128, 3, 2, 0, 2005, 13], [150, 3, 2, 0, 2005, 13]]
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ¥{:,.2f}".format(i+1, price)
"""
Explanation: Question 11 - Applicability
In a few sentences, discuss whether the model you built can be used in a real-world setting.
Hint: Answer the following questions and give reasons for your conclusions:
- Is data that was collected in 1978 still applicable today?
- Are the features present in the data sufficient to describe a home?
- Is the model robust enough to make consistent predictions?
- Can data collected in a big city like Boston be applied to other, rural areas?
Answer: No. First, these are only Boston house prices, which are not representative of other areas, and the data is quite old. Moreover, a home's price also depends on other characteristics, such as the quality of its renovation.
Optional Question - Predicting Beijing Housing Prices
(The result of this question does not affect whether the project passes.) Through the practice above, you should now have a good grasp of some common machine learning concepts. However, building a model on 1970s Boston housing prices is admittedly of limited real-world relevance for us. Now you can apply what you have learned above to the Beijing housing dataset bj_housing.csv.
Disclaimer: given that Beijing house prices are directly affected by macroeconomics, policy adjustments, and many other factors, the prediction results are for reference only.
The features of this dataset are:
- Area: house area, in square meters
- Room: number of bedrooms
- Living: number of living rooms
- School: whether the home is in a school district, 0 or 1
- Year: year the house was built
- Floor: the floor the house is on
Target variable:
- Value: the selling price of the house, in 10,000 RMB
You can draw on what you learned above and use this dataset to practice splitting and shuffling the data, defining a performance metric, training the model, evaluating model performance, tuning parameters with grid search combined with cross-validation to select the best parameters, comparing the differences, and finally obtaining the best model's prediction score on the validation set.
End of explanation
"""
|
molgor/spystats | notebooks/.ipynb_checkpoints/model_by_chunks-checkpoint.ipynb | bsd-2-clause | # Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
import django
django.setup()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats import tools
%run ../testvariogram.py
section.shape
"""
Explanation: Here I process the entire region by chunks.
End of explanation
"""
minx,maxx,miny,maxy = getExtent(new_data)
maxy
## If prefered a fixed number of chunks
N = 100
xp,dx = np.linspace(minx,maxx,N,retstep=True)
yp,dy = np.linspace(miny,maxy,N,retstep=True)
### Distance interval
print(dx)
print(dy)
## Let's build the partition
## If prefered a fixed size of chunk
ds = 300000 #step size (meters)
xp = np.arange(minx,maxx,step=ds)
yp = np.arange(miny,maxy,step=ds)
dx = ds
dy = ds
N = len(xp)
xx,yy = np.meshgrid(xp,yp)
Nx = xp.size
Ny = yp.size
#coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(N) for j in range(N)]
coordinates_list = [ (xx[i][j],yy[i][j]) for i in range(Ny) for j in range(Nx)]
from functools import partial
tuples = map(lambda (x,y) : partial(getExtentFromPoint,x,y,step_sizex=dx,step_sizey=dy)(),coordinates_list)
chunks = map(lambda (mx,Mx,my,My) : subselectDataFrameByCoordinates(new_data,'newLon','newLat',mx,Mx,my,My),tuples)
## Here we can filter based on a threshold
threshold = 20
chunks_non_empty = filter(lambda df : df.shape[0] > threshold ,chunks)
len(chunks_non_empty)
lengths = pd.Series(map(lambda ch : ch.shape[0],chunks_non_empty))
lengths.plot.hist()
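The same partition-and-filter logic can be condensed with a pandas groupby. A sketch on synthetic points (the grid-cell convention below is an assumption for illustration, not a reimplementation of getExtentFromPoint):

```python
import numpy as np
import pandas as pd

def chunk_by_grid(df, xcol, ycol, step):
    """Assign each point to an (i, j) grid cell of size `step` and split by cell."""
    ix = ((df[xcol] - df[xcol].min()) // step).astype(int)
    iy = ((df[ycol] - df[ycol].min()) // step).astype(int)
    return [group for _, group in df.groupby([ix, iy])]

rng = np.random.default_rng(1)
pts = pd.DataFrame({'x': rng.uniform(0, 10, 200), 'y': rng.uniform(0, 10, 200)})
chunks = chunk_by_grid(pts, 'x', 'y', 5.0)
print(len(chunks), sum(len(c) for c in chunks))  # every point lands in exactly one chunk
```

Thresholding the small chunks then reduces to a list comprehension over `chunks`.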
"""
Explanation: Algorithm for processing Chunks
Make a partition given the extent
Produce a tuple (minx, maxx, miny, maxy) for each element on the partition
Calculate the semivariogram for each chunk and save it in a dataframe
Plot Everything
Do the same with a Matern Kernel
End of explanation
"""
smaller_list = chunks_non_empty[:10]
variograms =map(lambda chunk : tools.Variogram(chunk,'residuals1',using_distance_threshold=200000),smaller_list)
vars = map(lambda v : v.calculateEmpirical(),variograms)
vars = map(lambda v : v.calculateEnvelope(num_iterations=50),variograms)
"""
Explanation: For efficiency purposes we restrict to 10 variograms
End of explanation
"""
envslow = pd.concat(map(lambda df : df[['envlow']],vars),axis=1)
envhigh = pd.concat(map(lambda df : df[['envhigh']],vars),axis=1)
variogram = pd.concat(map(lambda df : df[['variogram']],vars),axis=1)
lags = vars[0][['lags']]
meanlow = list(envslow.apply(lambda row : np.mean(row),axis=1))
meanhigh = list(envhigh.apply(np.mean,axis=1))
meanvariogram = list(variogram.apply(np.mean,axis=1))
results = pd.DataFrame({'meanvariogram':meanvariogram,'meanlow':meanlow,'meanhigh':meanhigh})
result_envelope = pd.concat([lags,results],axis=1)
meanvg = tools.Variogram(section,'residuals1')
meanvg.plot()
meanvg.envelope.columns
result_envelope.columns
result_envelope.columns = ['lags','envhigh','envlow','variogram']
meanvg.envelope = result_envelope
meanvg.plot(refresh=False)
"""
Explanation: Take the average of the empirical variograms, along with the envelope.
We will use the group-by directive on the field lags.
End of explanation
"""
|
metpy/MetPy | v0.10/_downloads/8b48dbfbd7332023b4aeb5274ed5d62e/Point_Interpolation.ipynb | bsd-3-clause | import cartopy.crs as ccrs
import cartopy.feature as cfeature
from matplotlib.colors import BoundaryNorm
import matplotlib.pyplot as plt
import numpy as np
from metpy.cbook import get_test_data
from metpy.interpolate import (interpolate_to_grid, remove_nan_observations,
remove_repeat_coordinates)
from metpy.plots import add_metpy_logo
def basic_map(proj):
"""Make our basic default map for plotting"""
fig = plt.figure(figsize=(15, 10))
add_metpy_logo(fig, 0, 80, size='large')
view = fig.add_axes([0, 0, 1, 1], projection=proj)
view.set_extent([-120, -70, 20, 50])
view.add_feature(cfeature.STATES.with_scale('50m'))
view.add_feature(cfeature.OCEAN)
view.add_feature(cfeature.COASTLINE)
view.add_feature(cfeature.BORDERS, linestyle=':')
return fig, view
def station_test_data(variable_names, proj_from=None, proj_to=None):
with get_test_data('station_data.txt') as f:
all_data = np.loadtxt(f, skiprows=1, delimiter=',',
usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),
dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),
('slp', 'f'), ('air_temperature', 'f'),
('cloud_fraction', 'f'), ('dewpoint', 'f'),
('weather', '16S'),
('wind_dir', 'f'), ('wind_speed', 'f')]))
all_stids = [s.decode('ascii') for s in all_data['stid']]
data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids])
value = data[variable_names]
lon = data['lon']
lat = data['lat']
if proj_from is not None and proj_to is not None:
try:
proj_points = proj_to.transform_points(proj_from, lon, lat)
return proj_points[:, 0], proj_points[:, 1], value
except Exception as e:
print(e)
return None
return lon, lat, value
from_proj = ccrs.Geodetic()
to_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000)
levels = list(range(-20, 20, 1))
cmap = plt.get_cmap('magma')
norm = BoundaryNorm(levels, ncolors=cmap.N, clip=True)
x, y, temp = station_test_data('air_temperature', from_proj, to_proj)
x, y, temp = remove_nan_observations(x, y, temp)
x, y, temp = remove_repeat_coordinates(x, y, temp)
"""
Explanation: Point Interpolation
Compares different point interpolation approaches.
End of explanation
"""
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000)
img = np.ma.masked_where(np.isnan(img), img)
fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
"""
Explanation: Scipy.interpolate linear
End of explanation
"""
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000)
img = np.ma.masked_where(np.isnan(img), img)
fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
"""
Explanation: Natural neighbor interpolation (MetPy implementation)
Reference <https://github.com/Unidata/MetPy/files/138653/cwp-657.pdf>_
End of explanation
"""
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1,
hres=75000, search_radius=100000)
img = np.ma.masked_where(np.isnan(img), img)
fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
"""
Explanation: Cressman interpolation
search_radius = 100 km
grid resolution = 75 km
min_neighbors = 1
End of explanation
"""
gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000,
search_radius=100000)
img1 = np.ma.masked_where(np.isnan(img1), img1)
fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
"""
Explanation: Barnes Interpolation
search_radius = 100km
min_neighbors = 3
End of explanation
"""
gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear',
rbf_smooth=0)
img = np.ma.masked_where(np.isnan(img), img)
fig, view = basic_map(to_proj)
mmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)
fig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)
plt.show()
"""
Explanation: Radial basis function interpolation
linear
End of explanation
"""
|
Bihaqo/t3f | docs/tutorials/riemannian.ipynb | mit | # Import TF 2.
%tensorflow_version 2.x
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Fix seed so that the results are reproducible.
tf.random.set_seed(0)
np.random.seed(0)
try:
import t3f
except ImportError:
# Install T3F if it's not already installed.
!git clone https://github.com/Bihaqo/t3f.git
!cd t3f; pip install .
import t3f
# Initialize A randomly, with large tt-ranks
shape = 10 * [2]
init_A = t3f.random_tensor(shape, tt_rank=16)
A = t3f.get_variable('A', initializer=init_A, trainable=False)
# Create an X variable.
init_X = t3f.random_tensor(shape, tt_rank=2)
X = t3f.get_variable('X', initializer=init_X)
def step():
# Compute the gradient of the functional. Note that it is simply X - A.
gradF = X - A
# Let us compute the projection of the gradient onto the tangent space at X.
riemannian_grad = t3f.riemannian.project(gradF, X)
# Compute the update by subtracting the Riemannian gradient
# and retracting back to the manifold
alpha = 1.0
t3f.assign(X, t3f.round(X - alpha * riemannian_grad, max_tt_rank=2))
# Let us also compute the value of the functional
# to see if it is decreasing.
return 0.5 * t3f.frobenius_norm_squared(X - A)
log = []
for i in range(100):
F = step()
if i % 10 == 0:
print(F)
log.append(F.numpy())
"""
Explanation: Riemannian optimization
Open this page in an interactive mode via Google Colaboratory.
Riemannian optimization is a framework for solving optimization problems with a constraint that the solution belongs to a manifold.
Let us consider the following problem. Given some TT tensor $A$ with large tt-ranks we would like to find a tensor $X$ (with small prescribed tt-ranks $r$) which is closest to $A$ (in the sense of Frobenius norm). Mathematically it can be written as follows:
\begin{equation}
\begin{aligned}
& \underset{X}{\text{minimize}}
& & \frac{1}{2}\|X - A\|_F^2 \\
& \text{subject to}
& & \operatorname{tt\_rank}(X) = r
\end{aligned}
\end{equation}
It is known that the set of TT tensors with elementwise fixed TT ranks forms a manifold. Thus we can solve this problem using the so called Riemannian gradient descent. Given some functional $F$ on a manifold $\mathcal{M}$ it is defined as
$$\hat{x}_{k+1} = x_{k} - \alpha P_{T_{x_k}\mathcal{M}} \nabla F(x_k),$$
$$x_{k+1} = \mathcal{R}(\hat{x}_{k+1})$$
with $P_{T_{x_k}\mathcal{M}}$ being the projection onto the tangent space of $\mathcal{M}$ at the point $x_k$, $\mathcal{R}$ being a retraction - an operation which projects points back onto the manifold - and $\alpha$ being the learning rate.
We can implement this in t3f using the t3f.riemannian module. As a retraction it is convenient to use the rounding method (t3f.round).
End of explanation
"""
quasi_sol = t3f.round(A, max_tt_rank=2)
val = 0.5 * t3f.frobenius_norm_squared(quasi_sol - A)
print(val)
"""
Explanation: It is instructive to compare the obtained result with the quasi-optimum delivered by the TT-round procedure.
End of explanation
"""
plt.semilogy(log, label='Riemannian gradient descent')
plt.axhline(y=val.numpy(), lw=1, ls='--', color='gray', label='TT-round(A)')
plt.xlabel('Iteration')
plt.ylabel('Value of the functional')
plt.legend()
"""
Explanation: We see that the value is slightly bigger than the exact minimum, but TT-round is faster and cheaper to compute, so it is often used in practice.
End of explanation
"""
|
GoogleCloudPlatform/practical-ml-vision-book | 06_preprocessing/06h_tftransform.ipynb | apache-2.0 | import tensorflow as tf
print(tf.version.VERSION)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
"""
Explanation: Avoid training-serving skew using TensorFlow Transform
In this notebook, we show how to use tf.transform to carry out preprocessing efficiently,
and save the preprocessing operations so that they are automatically applied during inference.
Enable GPU and set up helper functions
This notebook and pretty much every other notebook in this repository
will run faster if you are using a GPU.
On Colab:
- Navigate to Edit→Notebook Settings
- Select GPU from the Hardware Accelerator drop-down
On Cloud AI Platform Notebooks:
- Navigate to https://console.cloud.google.com/ai-platform/notebooks
- Create an instance with a GPU or select your instance and add a GPU
Next, we'll confirm that we can connect to the GPU with tensorflow:
End of explanation
"""
!cat run_dataflow.sh
!./run_dataflow.sh > /dev/null 2>&1
!ls -l flower_tftransform/
"""
Explanation: Run Beam pipeline locally
End of explanation
"""
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
IMG_HEIGHT = 448
IMG_WIDTH = 448
IMG_CHANNELS = 3
CLASS_NAMES = 'daisy dandelion roses sunflowers tulips'.split()
ds = tf.data.experimental.make_batched_features_dataset(
'./flower_tftransform/train-00000-of-00016.gz',
batch_size=5,
features = {
'image': tf.io.FixedLenFeature([IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS], tf.float32),
'label': tf.io.FixedLenFeature([], tf.string),
'label_int': tf.io.FixedLenFeature([], tf.int64)
},
reader=lambda filenames: tf.data.TFRecordDataset(filenames, compression_type='GZIP')
)
for feats in ds.take(1):
print(feats['image'].shape)
f, ax = plt.subplots(1, 5, figsize=(15,15))
for feats in ds.take(1):
for idx in range(5): # batchsize
ax[idx].imshow((feats['image'][idx].numpy()));
ax[idx].set_title(feats['label'][idx].numpy())
ax[idx].axis('off')
"""
Explanation: Display preprocessing data
Note that the files contain already preprocessed (scaled, resized images),
so we can simply read the data and display it.
End of explanation
"""
# Helper functions
def training_plot(metrics, history):
f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5))
for idx, metric in enumerate(metrics):
ax[idx].plot(history.history[metric], ls='dashed')
ax[idx].set_xlabel("Epochs")
ax[idx].set_ylabel(metric)
ax[idx].plot(history.history['val_' + metric]);
ax[idx].legend([metric, 'val_' + metric])
import tensorflow_hub as hub
import os
# Load compressed models from tensorflow_hub
os.environ['TFHUB_MODEL_LOAD_FORMAT'] = 'COMPRESSED'
def create_preproc_dataset(pattern, batch_size):
return tf.data.experimental.make_batched_features_dataset(
pattern,
batch_size=batch_size,
features = {
'image': tf.io.FixedLenFeature([IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS], tf.float32),
'label': tf.io.FixedLenFeature([], tf.string),
'label_int': tf.io.FixedLenFeature([], tf.int64)
},
reader=lambda filenames: tf.data.TFRecordDataset(filenames, compression_type='GZIP'),
num_epochs=1
).map(
lambda x: (x['image'], x['label_int'])
)
# parameterize to the values in the previous cell
# WARNING! training on a small subset of the dataset (note top_dir)
def train_and_evaluate(top_dir='./flower_tftransform',
batch_size = 32,
lrate = 0.001,
l1 = 0.,
l2 = 0.,
num_hidden = 16):
regularizer = tf.keras.regularizers.l1_l2(l1, l2)
train_dataset = create_preproc_dataset(os.path.join(top_dir, 'train-*'), batch_size)
eval_dataset = create_preproc_dataset(os.path.join(top_dir, 'valid-*'), batch_size)
layers = [
tf.keras.layers.experimental.preprocessing.CenterCrop(
height=IMG_HEIGHT//2, width=IMG_WIDTH//2,
input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS),
),
hub.KerasLayer(
"https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
trainable=False,
name='mobilenet_embedding'),
tf.keras.layers.Dense(num_hidden,
kernel_regularizer=regularizer,
activation=tf.keras.activations.relu,
name='dense_hidden'),
tf.keras.layers.Dense(len(CLASS_NAMES),
kernel_regularizer=regularizer,
activation='softmax',
name='flower_prob')
]
model = tf.keras.Sequential(layers, name='flower_classification')
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lrate),
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False),
metrics=['accuracy'])
print(model.summary())
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=3)
training_plot(['loss', 'accuracy'], history)
return model
model = train_and_evaluate()
"""
Explanation: Train the model
End of explanation
"""
!saved_model_cli show --all --dir ./flower_tftransform/tft/transform_fn
# get some files to do inference on.
filenames = [
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9818247_e2eac18894.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9853885425_4a82356f1d_m.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/98992760_53ed1d26a9.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9939430464_5f5861ebab.jpg',
'gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/dandelion/9965757055_ff01b5ee6f_n.jpg'
]
img_bytes = [
tf.io.read_file(filename) for filename in filenames
]
label = [
'n/a' for filename in filenames
] # not used in inference
label_int = [
-1 for filename in filenames
] # not used in inference
# by calling the preproc function, we get images of the right size & crop
preproc = tf.keras.models.load_model('./flower_tftransform/tft/transform_fn').signatures['transform_signature']
preprocessed = preproc(img_bytes=tf.convert_to_tensor(img_bytes),
label=tf.convert_to_tensor(label, dtype=tf.string),
label_int=tf.convert_to_tensor(label_int, dtype=tf.int64))
# then we call model.predict() and take the argmx of the result
pred_label_index = tf.math.argmax(model.predict(preprocessed)).numpy()
print(pred_label_index)
"""
Explanation: Predictions
For serving, we will write a serving function that calls the transform model
followed by our actual model.
We will look at serving functions in Chapter 7. For now, we show how to explicitly
call the transform function followed by model.predict()
End of explanation
"""
|
linkmax91/bitquant | web/home/ipython/examples/r.ipynb | apache-2.0 | %pylab inline
"""
Explanation: Rmagic Functions Extension
End of explanation
"""
%load_ext rpy2.ipython
"""
Explanation: Line magics
IPython has an rmagic extension that contains some magic functions for working with R via rpy2. This extension can be loaded using the %load_ext magic as follows:
End of explanation
"""
import numpy as np
import pylab
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
pylab.scatter(X, Y)
"""
Explanation: A typical use case one imagines is having some numpy arrays, wanting to compute some statistics of interest on these
arrays and return the result back to python. Let's suppose we just want to fit a simple linear model to a scatterplot.
End of explanation
"""
%Rpush X Y
%R lm(Y~X)$coef
"""
Explanation: We can accomplish this by first pushing variables to R, fitting a model and returning the results. The line magic %Rpush copies its arguments to variables of the same name in rpy2. The %R line magic evaluates the string in rpy2 and returns the results. In this case, the coefficients of a linear model.
End of explanation
"""
Xr = X - X.mean(); Yr = Y - Y.mean()
slope = (Xr*Yr).sum() / (Xr**2).sum()
intercept = Y.mean() - X.mean() * slope
(intercept, slope)
"""
Explanation: We can check that this is correct fairly easily:
End of explanation
"""
%R resid(lm(Y~X)); coef(lm(X~Y))
"""
Explanation: It is also possible to return more than one value with %R.
End of explanation
"""
b = %R a=resid(lm(Y~X))
%Rpull a
print a
%R -o a
%R d=resid(lm(Y~X)); e=coef(lm(Y~X))
%R -o d -o e
%Rpull e
print d
print e
import numpy as np
np.testing.assert_almost_equal(d, a)
"""
Explanation: One can also easily capture the results of %R into python objects. Like R, the return value of this multiline expression (multiline in the sense that it is separated by ';') is the final value, which is
the coef(lm(X~Y)). To pull other variables from R, there is one more magic.
There are two more line magics, %Rpull and %Rget. Both are useful after some R code has been executed and there are variables
in the rpy2 namespace that one would like to retrieve. The main difference is that one
returns the value (%Rget), while the other pulls it to self.shell.user_ns (%Rpull). Imagine we've stored the results
of some calculation in the variable "a" in rpy2's namespace. By using the %R magic, we can obtain these results and
store them in b. We can also pull them directly to user_ns with %Rpull. They are both views on the same data.
%Rpull is equivalent to calling %R with just -o
End of explanation
"""
A = np.arange(20)
%R -i A
%R mean(A)
"""
Explanation: On the other hand %Rpush is equivalent to calling %R with just -i and no trailing code.
End of explanation
"""
%Rget A
"""
Explanation: The magic %Rget retrieves one variable from R.
End of explanation
"""
v1 = %R plot(X,Y); print(summary(lm(Y~X))); vv=mean(X)*mean(Y)
print 'v1 is:', v1
v2 = %R mean(X)*mean(Y)
print 'v2 is:', v2
"""
Explanation: Plotting and capturing output
R's console (i.e. its stdout() connection) is captured by ipython, as are any plots which are published as PNG files like the notebook with arguments --pylab inline. As a call to %R may produce a return value (see above) we must ask what happens to a magic like the one below. The R code specifies that something is published to the notebook. If anything is published to the notebook, that call to %R returns None.
End of explanation
"""
v = %R plot(X,Y)
assert v == None
"""
Explanation: What value is returned from %R?
Some calls have no particularly interesting return value, the magic %R will not return anything in this case. The return value in rpy2 is actually NULL so %R returns None.
End of explanation
"""
v = %R print(X)
assert v == None
"""
Explanation: Also, if the return value of a call to %R (in line mode) has just been printed to the console, then its value is also not returned.
End of explanation
"""
v = %R print(summary(X)); X
print 'v:', v
"""
Explanation: But, if the last value did not print anything to console, the value is returned:
End of explanation
"""
%R -n X
%R X;
"""
Explanation: The return value can be suppressed by a trailing ';' or an -n argument.
End of explanation
"""
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
"""
Explanation: Cell level magic
Often, we will want to do more than a simple linear regression model. There may be several lines of R code that we want to
use before returning to python. This is the cell-level magic.
For the cell level magic, inputs can be passed via the -i or --inputs argument in the line. These variables are copied
from the shell namespace to R's namespace using rpy2.robjects.r.assign. It would be nice not to have to copy these into R: rnumpy ( http://bitbucket.org/njs/rnumpy/wiki/API ) has done some work to limit or at least make transparent the number of copies of an array. This seems like a natural thing to try to build on. Arrays can be output from R via the -o or --outputs argument in the line. All other arguments are sent to R's png function, which is the graphics device used to create the plots.
We can redo the above calculations in one ipython cell. We might also want to add some output such as a summary
from R or perhaps the standard plotting diagnostics of the lm.
End of explanation
"""
seq1 = np.arange(10)
%%R -i seq1 -o seq2
seq2 = rep(seq1, 2)
print(seq2)
seq2[::2] = 0
seq2
%%R
print(seq2)
"""
Explanation: Passing data back and forth
Currently, data is passed through RMagics.pyconverter when going from python to R and RMagics.Rconverter when
going from R to python. These currently default to numpy.ndarray. Future work will involve writing better converters, most likely involving integration with http://pandas.sourceforge.net.
Passing ndarrays into R seems to require a copy, though once an object is returned to python, this object is NOT copied, and it is possible to change its values.
End of explanation
"""
seq1[0] = 200
%R print(seq1)
"""
Explanation: Once the array data has been passed to R, modifying its contents does not modify R's copy of the data.
End of explanation
"""
print seq1
%R -i seq1 -o seq1
print seq1
seq1[0] = 200
%R print(seq1)
seq1_view = %R seq1
assert(id(seq1_view.data) == id(seq1.data))
"""
Explanation: But, if we pass data as both input and output, then the value of "data" in user_ns will be overwritten and the
new array will be a view of the data in R's copy.
End of explanation
"""
try:
%R -n nosuchvar
except Exception as e:
print e.message
pass
"""
Explanation: Exception handling
Exceptions are handled by passing back rpy2's exception and the line that triggered it.
End of explanation
"""
datapy= np.array([(1, 2.9, 'a'), (2, 3.5, 'b'), (3, 2.1, 'c')],
dtype=[('x', '<i4'), ('y', '<f8'), ('z', '|S1')])
%%R -i datapy -d datar
datar = datapy
datar
%R datar2 = datapy
%Rpull -d datar2
datar2
%Rget -d datar2
"""
Explanation: Structured arrays and data frames
In R, data frames play an important role as they allow array-like objects of mixed type with column names (and row names). In numpy, the closest analogy is a structured array with named fields. In future work, it would be nice to use pandas to return full-fledged DataFrames from rpy2. In the meantime, structured arrays can be passed back and forth with the -d flag to %R, %Rpull, and %Rget.
End of explanation
"""
Z = np.arange(6)
%R -i Z
%Rget -d Z
"""
Explanation: For arrays without names, the -d argument has no effect because the R object has no colnames or names.
End of explanation
"""
%Rget datar2
"""
Explanation: For mixed-type data frames in R, if the -d flag is not used, then an array of a single type is returned and
its value is transposed. This would be nice to fix, but it seems something that should be fixed at the rpy2 level (See: https://bitbucket.org/lgautier/rpy2/issue/44/numpyrecarray-as-dataframe)
End of explanation
"""
|
rsterbentz/phys202-2015-work | assignments/assignment04/MatplotlibEx01.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: Matplotlib Exercise 1
Imports
End of explanation
"""
import os
assert os.path.isfile('yearssn.dat')
"""
Explanation: Line plot of sunspot data
Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
End of explanation
"""
data = np.loadtxt('yearssn.dat')
year = data[:,0]
ssc = data[:,1]
year, ssc
assert len(year)==315
assert year.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
"""
Explanation: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named year and ssc that have the sequence of years and sunspot counts.
End of explanation
"""
max_diff = 0 # This bit of code gives the largest difference of sunspot counts between any one year
for i in range(len(year)-1): # increment. Essentially, the largest slope on the graph.
delta = abs(ssc[i] - ssc[i+1])
if delta > max_diff:
max_diff = delta
max_diff
f =plt.figure(figsize=(20,1.5))
plt.plot(year, ssc, 'g-')
plt.xlabel('Year')
plt.xlim(1700,2015)
plt.ylabel('Sunspot Count')
plt.grid(True)
assert True # leave for grading
"""
Explanation: Make a line plot showing the sunspot count as a function of year.
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
f, ax = plt.subplots(4,1, figsize = (13,5), sharey = True)
plt.sca(ax[0])
plt.plot(year, ssc, 'r')
plt.xlim(1700,1800)
plt.sca(ax[1])
plt.plot(year, ssc, 'r')
plt.xlim(1800,1900)
plt.sca(ax[2])
plt.plot(year, ssc, 'r')
plt.xlim(1900,2000)
plt.sca(ax[3])
plt.plot(year, ssc, 'r')
plt.xlim(2000,2100)
plt.tight_layout()
assert True # leave for grading
"""
Explanation: Describe the choices you have made in building this visualization and how they make it effective.
I used the code above to find the largest slope in the graph. Since this value is more than 100, following the "steepest slope of 1" rule would make the plot aspect ratio 100:1, which makes the plot too short to even read. I tried to make it as visually appealing as I could. A ratio of 20:1.5 seems to work alright. I chose a green line just to be different from the default. I chose to include the grid lines because they don't obscure the data much and are helpful when determining the y-axis values near the right end of the graph.
Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:
Customize your plot to follow Tufte's principles of visualizations.
Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.
Customize the box, grid, spines and ticks to match the requirements of this data.
End of explanation
"""
|
aliojjati/aliojjati.github.io | Other_files/DEMO_tSZXlensing_stripped.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches
import astropy.io.fits
import healpy as hp
import scipy.special
import scipy.interpolate
import os
import collections
pi = np.pi
def create_tSZ_catalog(fields, y_map_file, y_map_mask_file, shear_path, pad, output_path, survey):
y_map = hp.read_map(y_map_file, field=0)
y_map_mask = hp.read_map(y_map_mask_file, field=4)
for field in fields:
shear_filename = shear_path + "{}.fits".format(field)
field_info = np.loadtxt(shear_filename + "_sizeinfo.dat")
print("Field: " + field)
bbox = np.array([[field_info[0,0]-pad, field_info[1,0]-pad],
[field_info[0,1]+pad, field_info[1,0]-pad],
[field_info[0,1]+pad, field_info[1,1]+pad],
[field_info[0,0]-pad, field_info[1,1]+pad]])
tSZ_output = output_path + "{}.fits".format(field)
tpcf_formats.create_standard_catalog_from_healpix_map(y_map, bbox, tSZ_output, y_map_mask, use_fits=True, field=field, survey=survey)
del y_map
del y_map_mask
"""
Explanation: This is a demo, part of the pipeline for computing cross-correlations of the tSZ Comptonization y with the lensing convergence or tangential shear, using CFHTLenS, RCSLenS or KiDS data.
The y maps are either from Planck data or from maps made at UBC.
End of explanation
"""
pad_RCSLenS = 4.0
y_map_file = "/shared_data/Planck_data/COM_Compton_SZMap_R2.00/milca_ymaps.fits"
mask_file = "/shared_data/Planck_data/COM_Compton_SZMap_R2.00/masks.fits"
RCSLenS_fields = ["CDE0047", "CDE0133", "CDE0310", "CDE0357", "CDE1040", "CDE1111", "CDE1303", "CDE1514", "CDE1613", "CDE1645", "CDE2143", "CDE2329", "CDE2338", "CSP0320"]
shear_path = "/data/aha25/shear_catalogs/RCSLenS/mag_18_30/"
output_path = "/data/aha25/tSZ/tSZ_data/RCSLenS/pad{}/".format(pad_RCSLenS)
create_tSZ_catalog(RCSLenS_fields, y_map_file, mask_file, shear_path, pad_RCSLenS, output_path, survey="RCSLenS")
"""
Explanation: Creating tSZ catalogs from maps (fits files):
RCSLenS:
End of explanation
"""
RCSLenS_fields = ["CDE0047", "CDE0133", "CDE0310", "CDE0357", "CDE1040", "CDE1111", "CDE1303", "CDE1514", "CDE1613", "CDE1645", "CDE2143", "CDE2329", "CDE2338", "CSP0320"]
for field in RCSLenS_fields:
lenses_filename = "/home/aha25/y_maps/{}_y_planck_milca_2048_big_new.fits".format(field)
tSZ_map_output = "/data/aha25/tSZ/tSZ_data/RCSLenS_grid/pad2/{}.fits".format(field)
tpcf_formats.create_catalog_from_grids(mode="lenses",
filename=lenses_filename, output_filename=tSZ_map_output,
pixel_size=1.0/60.0, weight_filename=None, remove_mean=True,
survey="RCSLenS", field=field)
"""
Explanation: Grid the data
End of explanation
"""
tpcf_path = "/data/aha25/tpcf/calculate_shear_2pcf"
tpcf_path_debug = "/data/aha25/tpcf/calculate_shear_2pcf_debug"
bootstrap_configs = [#{"n_resample" : 10000, "type" : "field-equal-weight-blocks", "n_x_block" : None, "n_y_block" : None, "n_block_total" : 40, "supersampling" : None, "n_run" : 1},
{"n_resample" : 10000, "type" : "field-equal-weight-blocks", "n_x_block" : None, "n_y_block" : None, "n_block_total" : 80, "supersampling" : None, "n_run" : 1},
{"n_resample" : 10000, "type" : "field-equal-weight-blocks", "n_x_block" : None, "n_y_block" : None, "n_block_total" : 160, "supersampling" : None, "n_run" : 1},
{"n_resample" : 10000, "type" : "field-rectangular-blocks", "n_x_block" : 5, "n_y_block" : 5, "n_block_total" : None, "supersampling" : None, "n_run" : 1},
{"n_resample" : 10000, "type" : "field-rectangular-blocks", "n_x_block" : 10, "n_y_block" : 10, "n_block_total" : None, "supersampling" : None, "n_run" : 1}]
n_threads = 10
"""
Explanation: Calculate 2-point correlation function:
End of explanation
"""
marks_file = "/data/aha25/tpcf_marks/CFHTLenS/CFHTLenS_kappa_scale_{}_theta_{}_{}_nbin_{}_{}_pad{}_marks.fits".format(scale_CFHTLenS, theta_min_CFHTLenS*60.0, theta_max_CFHTLenS*60.0, n_bin_CFHTLenS, bin_type_CFHTLenS, pad_CFHTLenS)
foreground_path = "/data/aha25/tSZ/tSZ_data/CFHTLenS/pad{}/".format(pad_CFHTLenS)
tpcf_output_path = "/data/aha25/tSZ/tpcf/CFHTLenS_kappa/scale{}/".format(scale_CFHTLenS)
tpcf_formats.mkdirs(tpcf_output_path)
tpcf_driver.run_marks_crosscorrelation("kappa", foreground_path=foreground_path, marks_file=marks_file, tpcf_output_path=tpcf_output_path,
fields=CFHTLenS_fields,
n_bin=n_bin_CFHTLenS, theta_min=theta_min_CFHTLenS, theta_max=theta_max_CFHTLenS, logspaced=logspaced_CFHTLenS,
bootstrap_configs=bootstrap_configs,
n_thread=n_threads, tpcf_path=tpcf_path, verbose=True)
tSZ_path = os.path.join("/data/aha25/tSZ/tSZ_data/sims/", cosmology_sims, feedback_model_sims)
shear_path = os.path.join("/data/aha25/shear_catalogs/cosmo-OWLS/", n_z_sims, cosmology_sims, feedback_model_sims)
kappa_path = os.path.join("/data/aha25/massmaps/cosmo-OWLS/", n_z_sims, cosmology_sims, feedback_model_sims)
for i in range(n_LOS):
tpcf_output_base_path = "/data/aha25/tSZ/tpcf/cosmo-OWLS/{}/".format(i)
if not os.access(tpcf_output_base_path, os.F_OK):
os.mkdir(tpcf_output_base_path)
if not os.access(os.path.join(tpcf_output_base_path, n_z_sims), os.F_OK):
os.mkdir(os.path.join(tpcf_output_base_path, n_z_sims))
if not os.access(os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims), os.F_OK):
os.mkdir(os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims))
if not os.access(os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims, feedback_model_sims), os.F_OK):
os.mkdir(os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims, feedback_model_sims))
tpcf_output_path = os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims, feedback_model_sims)
LOS = ["cone_{}".format(i)]
tpcf_driver.run_standard_crosscorrelation(mode="tangential-shear",
foreground_path=tSZ_path, background_path=shear_path, tpcf_output_path=tpcf_output_path,
fields=LOS,
n_bin=n_bin_sims, theta_min=theta_min_sims, theta_max=theta_max_sims, logspaced=logspaced_sims,
bootstrap_configs=None,
spherical_coordinates=False, left_handed_coordinates=True, kdtree=False,
calculate_tpcf=True, n_thread=20)
for i in range(n_LOS):
tpcf_output_base_path = "/data/aha25/tSZ/tpcf/cosmo-OWLS_kappa/{}/".format(i)
    # Create the (possibly nested) output directory in one call
    tpcf_output_path = os.path.join(tpcf_output_base_path, n_z_sims, cosmology_sims, feedback_model_sims)
    os.makedirs(tpcf_output_path, exist_ok=True)
LOS = ["cone_{}".format(i)]
tpcf_driver.run_standard_crosscorrelation(mode="kappa",
foreground_path=tSZ_path, background_path=kappa_path, tpcf_output_path=tpcf_output_path,
fields=LOS,
n_bin=n_bin_sims, theta_min=theta_min_sims, theta_max=theta_max_sims, logspaced=logspaced_sims,
bootstrap_configs=None,
spherical_coordinates=False, left_handed_coordinates=False, kdtree=False,
calculate_tpcf=True, n_thread=20)
"""
Explanation: 2pcf
End of explanation
"""
C_ell_Planck_AGN80_bin2_raw = np.loadtxt("/home/aha25/RCSLenS_new/Ian_sims/FT/AGN_Planck_Cl_kappa_bin2_y_smooth.dat")
C_ell_Planck_AGN80_bin2_raw = C_ell_Planck_AGN80_bin2_raw[1:70]
C_ell_WMAP7_AGN80_bin2_raw = np.loadtxt("/home/aha25/RCSLenS_new/Ian_sims/FT/AGN_WMAP7_Cl_kappa_bin2_y_smooth.dat")
C_ell_WMAP7_AGN80_bin2_raw = C_ell_WMAP7_AGN80_bin2_raw[1:70]
C_ell_WMAP7_AGN80_CFHTLenS_raw = np.loadtxt("/data/aha25/tSZ/Cl_kappa_y_simulation/AGN_WMAP7_Cl_kappa_y.dat")
C_ell_WMAP7_AGN80_CFHTLenS_raw = C_ell_WMAP7_AGN80_CFHTLenS_raw[1:70]
plt.loglog(C_ell_WMAP7_AGN80_bin2_raw[:,0], C_ell_WMAP7_AGN80_bin2_raw[:,0]*C_ell_WMAP7_AGN80_bin2_raw[:,1])
plt.loglog(C_ell_WMAP7_AGN80_CFHTLenS_raw[:,0], C_ell_WMAP7_AGN80_CFHTLenS_raw[:,0]*C_ell_WMAP7_AGN80_CFHTLenS_raw[:,1])
"""
Explanation: Plots
End of explanation
"""
ell_log_C_ell_Planck_AGN80_bin2_intp = scipy.interpolate.InterpolatedUnivariateSpline(np.log(C_ell_Planck_AGN80_bin2_raw[:,0]), np.log(C_ell_Planck_AGN80_bin2_raw[:,0]*C_ell_Planck_AGN80_bin2_raw[:,1]), ext="const")
ell_log_C_ell_WMAP7_AGN80_bin2_intp = scipy.interpolate.InterpolatedUnivariateSpline(np.log(C_ell_WMAP7_AGN80_bin2_raw[:,0]), np.log(C_ell_WMAP7_AGN80_bin2_raw[:,0]*C_ell_WMAP7_AGN80_bin2_raw[:,1]), ext="const")
ell_log_C_ell_WMAP7_AGN80_CFHTLenS_intp = scipy.interpolate.InterpolatedUnivariateSpline(np.log(C_ell_WMAP7_AGN80_CFHTLenS_raw[:,0]), np.log(C_ell_WMAP7_AGN80_CFHTLenS_raw[:,0]*C_ell_WMAP7_AGN80_CFHTLenS_raw[:,1]), ext="const")
C_ell_Planck_AGN80_bin2 = lambda ell: np.exp(ell_log_C_ell_Planck_AGN80_bin2_intp(np.log(ell)))/ell
C_ell_WMAP7_AGN80_bin2 = lambda ell: np.exp(ell_log_C_ell_WMAP7_AGN80_bin2_intp(np.log(ell)))/ell
C_ell_WMAP7_AGN80_CFHTLenS = lambda ell: np.exp(ell_log_C_ell_WMAP7_AGN80_CFHTLenS_intp(np.log(ell)))/ell
ell = np.arange(1, C_ell_Planck_AGN80_bin2_raw[-1,0]+1)
theta_th = np.linspace(0.1, 181.0, 80)  # number of samples must be an integer
xi_th_kappa_Planck_AGN80_bin2 = np.zeros_like(theta_th)
xi_th_g_t_Planck_AGN80_bin2 = np.zeros_like(theta_th)
xi_th_kappa_WMAP7_AGN80_bin2 = np.zeros_like(theta_th)
xi_th_g_t_WMAP7_AGN80_bin2 = np.zeros_like(theta_th)
xi_th_kappa_WMAP7_AGN80_CFHTLenS = np.zeros_like(theta_th)
xi_th_g_t_WMAP7_AGN80_CFHTLenS = np.zeros_like(theta_th)
sigma_Planck = 9.5/60.0/180.0*pi/(2.0*np.sqrt(2.0* np.log(2.0)))
Planck_smoothing = np.exp(-sigma_Planck**2*ell**2/2.0)
sigma_RCSLenS = 2.0/60.0/180.0*pi
RCSLenS_smoothing = np.exp(-sigma_RCSLenS**2*ell**2/2.0)
for i in range(len(theta_th)):
xi_th_kappa_Planck_AGN80_bin2[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_Planck_AGN80_bin2(ell) * ell * scipy.special.jv(0, ell*theta_th[i]/60.0/180.0*pi))
xi_th_g_t_Planck_AGN80_bin2[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_Planck_AGN80_bin2(ell) * ell * scipy.special.jv(2, ell*theta_th[i]/60.0/180.0*pi))
xi_th_kappa_WMAP7_AGN80_bin2[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_bin2(ell) * ell * scipy.special.jv(0, ell*theta_th[i]/60.0/180.0*pi))
xi_th_g_t_WMAP7_AGN80_bin2[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_bin2(ell) * ell * scipy.special.jv(2, ell*theta_th[i]/60.0/180.0*pi))
xi_th_kappa_WMAP7_AGN80_CFHTLenS[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_CFHTLenS(ell)*Planck_smoothing * ell * scipy.special.jv(0, ell*theta_th[i]/60.0/180.0*pi))
xi_th_g_t_WMAP7_AGN80_CFHTLenS[i] = 1.0e-11*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_CFHTLenS(ell)*Planck_smoothing * ell * scipy.special.jv(2, ell*theta_th[i]/60.0/180.0*pi))
ell = C_ell_WMAP7_AGN80_CFHTLenS_raw[:,0]
sigma_Planck = 9.5/60.0/180.0*pi/(2.0*np.sqrt(2.0* np.log(2.0)))
Planck_smoothing = np.exp(-sigma_Planck**2*ell**2/2.0)
sigma_RCSLenS = 2.0/60.0/180.0*pi
RCSLenS_smoothing = np.exp(-sigma_RCSLenS**2*ell**2/2.0)
for i in range(len(theta_th)):
xi_th_kappa_WMAP7_AGN80_CFHTLenS[i] = 1.0e-9*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_CFHTLenS_raw[:,1]*Planck_smoothing*RCSLenS_smoothing * ell * scipy.special.jv(0, ell*theta_th[i]/60.0/180.0*pi))
xi_th_g_t_WMAP7_AGN80_CFHTLenS[i] = 1.0e-9*1.0/(2.0*pi) * np.sum(C_ell_WMAP7_AGN80_CFHTLenS_raw[:,1]*Planck_smoothing*RCSLenS_smoothing * ell * scipy.special.jv(2, ell*theta_th[i]/60.0/180.0*pi))
def load_xi(base_path, covariance, n_bin):
xi = np.loadtxt(base_path + "xi.dat")
xi_E_cov = np.loadtxt(base_path + "{}".format(covariance))[:n_bin,:]
xi_B_cov = np.loadtxt(base_path + "{}".format(covariance))[n_bin:,:]
return {"xi" : xi, "xi_E_cov" : xi_E_cov, "xi_B_cov" : xi_B_cov}
def stack_simulations(base_path, N_sims, filter_type, n_bin):
sims_xi = np.zeros((N_sims, n_bin, 3))
xi = np.zeros((n_bin, 3))
for i in range(N_sims):
tmp = np.loadtxt(base_path + "{}/{}/xi.dat".format(i, filter_type))
sims_xi[i] = tmp[:,1:4]
xi[:,0] = tmp[:,0]
xi[:,1] = np.sum(sims_xi[:,:,0]*sims_xi[:,:,2], axis=0)/np.sum(sims_xi[:,:,2], axis=0)
xi[:,2] = np.sum(sims_xi[:,:,1]*sims_xi[:,:,2], axis=0)/np.sum(sims_xi[:,:,2], axis=0)
xi_E_cov = np.cov(sims_xi[:,:,0].T, ddof=1)
xi_B_cov = np.cov(sims_xi[:,:,1].T, ddof=1)
return {"xi" : xi, "xi_E_cov" : xi_E_cov, "xi_B_cov" : xi_B_cov}
xi_kappa = collections.OrderedDict()
xi_shear = collections.OrderedDict()
"""
Explanation: Computing the real-space correlation function from the Fourier-space power spectrum:
End of explanation
"""
|
maojrs/riemann_book | Pressureless_flow.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from exact_solvers import shallow_water
from collections import namedtuple
from utils import riemann_tools
from ipywidgets import interact
from ipywidgets import widgets, Checkbox, IntSlider, FloatSlider
State = namedtuple('State', shallow_water.conserved_variables)
Primitive_State = namedtuple('PrimState', shallow_water.primitive_variables)
plt.style.use('seaborn-talk')
def connect_states(h_l=1.,u_l=-1.,h_r=1.,u_r=1.,logg=0.,plot_unphysical=True):
g = 10.**logg
q_l = np.array([h_l,h_l*u_l])
q_r = np.array([h_r,h_r*u_r])
fig, ax = plt.subplots(1,2,figsize=(14,6))
shallow_water.phase_plane_curves(q_l[0], q_l[1], 'qleft', g,
wave_family=1, ax=ax[0],
plot_unphysical=plot_unphysical)
shallow_water.phase_plane_curves(q_r[0], q_r[1], 'qright', g,
wave_family=2, ax=ax[0],
plot_unphysical=plot_unphysical)
shallow_water.phase_plane_curves(q_l[0], q_l[1], 'qleft', g,
wave_family=1, y_axis='hu',ax=ax[1],
plot_unphysical=plot_unphysical)
shallow_water.phase_plane_curves(q_r[0], q_r[1], 'qright', g,
wave_family=2, y_axis='hu',ax=ax[1],
plot_unphysical=plot_unphysical)
ax[0].set_title('h-u plane'); ax[1].set_title('h-hu plane');
ax[0].set_xlim(0,3); ax[1].set_xlim(0,3);
ax[0].set_ylim(-10,10); ax[1].set_ylim(-10,10);
plt.tight_layout(); plt.show()
interact(connect_states,
h_l=widgets.FloatSlider(min=0.001,max=2,value=1),
u_l=widgets.FloatSlider(min=-5,max=5,value=-1),
h_r=widgets.FloatSlider(min=0.001,max=2,value=1),
u_r=widgets.FloatSlider(min=-5,max=5,value=1),
logg=widgets.FloatSlider(value=0,min=-5,max=2,
description='$\log_{10}(g)$'));
def c1(q, xi, g=1.):
"Characteristic speed for shallow water 1-waves."
h = q[0]
if h > 0:
u = q[1]/q[0]
return u - np.sqrt(g*h)
else:
return 0
def c2(q, xi, g=1.):
"Characteristic speed for shallow water 2-waves."
h = q[0]
if h > 0:
u = q[1]/q[0]
return u + np.sqrt(g*h)
else:
return 0
def make_plot_function(q_l,q_r,force_waves=None,extra_lines=None):
def plot_function(t,logg,plot_1_chars=False,plot_2_chars=False):
varnames = shallow_water.primitive_variables
prim = shallow_water.cons_to_prim
g = 10**logg
states, speeds, reval, wave_types = \
shallow_water.exact_riemann_solution(q_l,q_r,g,
force_waves=force_waves)
ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,t=t,
t_pointer=0,extra_axes=True,
variable_names=varnames,fill=[0],
derived_variables=prim);
if plot_1_chars:
riemann_tools.plot_characteristics(reval,c1,(g,g),axes=ax[0],
extra_lines=extra_lines)
if plot_2_chars:
riemann_tools.plot_characteristics(reval,c2,(g,g),axes=ax[0],
extra_lines=extra_lines)
shallow_water.phase_plane_plot(q_l,q_r,g,ax=ax[3],
force_waves=force_waves,y_axis='u')
plt.show()
return plot_function
def plot_riemann_SW(q_l,q_r,force_waves=None,extra_lines=None):
plot_function = make_plot_function(q_l,q_r,force_waves,extra_lines)
interact(plot_function, t=FloatSlider(value=0.1,min=0,max=.9),
plot_1_chars=Checkbox(description='1-characteristics',value=False),
plot_2_chars=Checkbox(description='2-characteristics'),
logg=IntSlider(value=0,min=-10,max=2,description='$\log_{10}(g)$'))
"""
Explanation: Pressureless flow
In this chapter we consider flow in the absence of pressure.
Shallow water flow in low gravity
Recall the shallow water equations:
\begin{align}
h_t + (hu)_x & = 0 \\
(hu)_t + \left(hu^2 + \frac{1}{2}gh^2\right)_x & = 0.
\end{align}
These are very similar to the isothermal flow equations of gas dynamics:
\begin{align}
\rho_t + (\rho u)_x & = 0 \\
(\rho u)_t + \left(\rho u^2 + a^2 \rho \right)_x & = 0
\end{align}
Indeed, if we identify the shallow water depth $h$ with the isothermal gas density $\rho$, then these systems differ only in the second term of the momentum flux. In both systems this term represents pressure; in the first system the hydrostatic pressure $\frac{1}{2}gh^2$ creates a net force from regions of greater depth to lower depth, while in the second system the pressure $a^2 \rho$ increases linearly with density.
In this chapter we investigate what happens when the pressure tends to zero; this corresponds to the limit $g\to 0$ for shallow water and the limit $a \to 0$ for isothermal flow.
First, let's see what happens to the integral curves and Hugoniot loci as $g \to 0$.
End of explanation
"""
q_l = State(Depth = 3.,
Momentum = 1.)
q_r = State(Depth = 1.,
Momentum = 1./3)
plot_riemann_SW(q_l,q_r)
"""
Explanation: Notice the following behaviors as $g \to 0$:
The integral curves and hugoniot loci become parallel.
All four curves become horizontal lines in the $h-u$ plane.
What happens to the hyperbolic structure of this system as $g \to 0$? Recall that the flux jacobian is
\begin{align}
f'(q) & = \begin{pmatrix} 0 & 1 \\ -(q_2/q_1)^2 + g q_1 & 2 q_2/q_1 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -u^2 + g h & 2 u \end{pmatrix},
\end{align}
with eigenvalues
\begin{align}
\lambda_1 & = u - \sqrt{gh} & \lambda_2 & = u + \sqrt{gh},
\end{align}
and corresponding eigenvectors
\begin{align}
r_1 & = \begin{bmatrix} 1 \\ u-\sqrt{gh} \end{bmatrix} &
r_2 & = \begin{bmatrix} 1 \\ u+\sqrt{gh} \end{bmatrix}.
\end{align}
As $g \to 0$, the eigenvalues both approach a single value of $\lambda_0 = u$ and the eigenvectors both approach
\begin{align}
r_0 & = \begin{bmatrix} 1 \\ u \end{bmatrix}.
\end{align}
Thus for $g=0$ there is a single eigenvalue $\lambda_0=u$ with algebraic multiplicity 2 but geometric multiplicity 1. The flux Jacobian is defective and the system is no longer hyperbolic. The integral curves are everywhere parallel to the eigenvector $r_0$, but this eigenvector points in a direction of constant $u$ and depends only on $u$, so the integral curves in the $h-hu$ plane are straight lines of constant $u$, all passing through the origin.
What about the Hugoniot loci? Recall from the shallow water chapter that a shock with a jump in the depth of $\alpha = h-h_*$ must have a jump in the momentum of
\begin{align}
h u & = h_* u_* + \alpha \left[u_* \pm \sqrt{gh_* \left(1+\frac{\alpha}{h_*}\right)\left(1+\frac{\alpha}{2h_*}\right)}\right]
\end{align}
Setting $g=0$ we obtain $hu = h_* u_* + (h-h_*)u_*$, so that $u=u_*$. Thus the Hugoniot loci are the same as the integral curves and are lines of constant $u$.
In the Riemann problem, if the left and right states have the same initial velocity, then we can still connect them as $g \to 0$; in the limit, everything just moves along at that constant velocity. Here's an example.
End of explanation
"""
q_l = State(Depth = 3.,
Momentum = -1.)
q_r = State(Depth = 1.,
Momentum = 2.)
plot_riemann_SW(q_l,q_r)
"""
Explanation: Now suppose we have a Riemann problem whose initial states have different velocities. The only way to connect them is through a pair of rarefactions with a dry state in between. Recall from our earlier treatment of the shallow water equations that a middle dry state occurred if $u_l + 2 \sqrt{gh_l} < u_r-2\sqrt{gh_r}$; for $g=0$ this reduces simply to the condition $u_l < u_r$.
Here's an example with $u_l < u_r$; use the second slider to see what happens as $g \to 0$.
Also, turn on the plots of the two characteristic families and notice how they become parallel as $g \to 0$.
End of explanation
"""
q_l = State(Depth = 3.,
Momentum = 1.)
q_r = State(Depth = 1.,
Momentum = 0.1)
plot_riemann_SW(q_l,q_r)
"""
Explanation: What happens if $u_l \ge u_r$, so that the double-rarefaction solution is unphysical? The answer can be found by considering what happens when $g$ is small but nonzero. Here's an example to play with.
End of explanation
"""
q_l = State(Depth = 1.,
Momentum = 2.)
q_r = State(Depth = 1.,
Momentum = -1.)
plot_riemann_SW(q_l,q_r)
"""
Explanation: Keep a close eye on the scale of the h-axis in the phase plane plot, and notice again that as $g \to 0$ the hugoniot loci become nearly parallel (horizontal in the $h-u$ plane). This means that their intersection occurs at a very large value of $h$, so the middle state depth goes to $\infty$ as $g\to 0$. Meanwhile, the speed of both shocks approaches a single value, so the region occupied by the middle state gets ever narrower. In the limit, the solution for the depth involves a delta function.
To make this even clearer, here's an example with uniform initial depth and both flows directed inward, very similar to the first example we considered in the initial chapter on shallow water.
End of explanation
"""
q_l = State(Depth = 1.,
Momentum = 100.)
q_r = State(Depth = 1.,
Momentum = 0.)
plot_riemann_SW(q_l,q_r)
"""
Explanation: What's going on here, physically, when $g$ is very small? At the interface, water is colliding. With the gravitational pressure term present, this leads to the formation of outgoing shock waves that redistribute water away from the interface. But without that term, there is no force to equilibrate the depth and each parcel of water just flows with its initial velocity, until eventually it collides with water of a different velocity. Water from both sides accumulates at the interface, leading to what is known as a delta shock.
If you drag the time slider for different fixed values of $g$ in the last example, you'll see that since $g$ is still nonzero there are two shock waves and the near-delta-function region expands with time, but the rate of this expansion is smaller for smaller values of $g$. For very small values, the region between the shocks cannot be resolved on the scale of the plots above. Indeed, the width of the line plotted there significantly exaggerates the width of this region when $g$ is very small.
There are two important details we haven't yet explained: first, the delta shock is moving; second, it is growing in time. Movement of the delta shock is necessary in order to conserve momentum; since the water from the left is arriving with a different speed than that from the right, if the delta shock were stationary then momentum would not be conserved. Growth of the delta shock is necessary in order to conserve mass.
Let $h_\delta$ denote the mass of the delta shock. How must $h_\delta$ grow in order to conserve mass? Let $\hat{u}$ denote the velocity of the delta shock, with $u_l \ge \hat{u} \ge u_r$. Then over a unit time interval, a quantity $h_l(u_l-\hat{u})$ of water arrives from the left, while $h_r(\hat{u}-u_r)$ arrives from the right. Thus we must have
$$
h_\delta(t) = t \left( h_l u_l - h_r u_r + \hat{u}(h_r-h_l) \right).
$$
At what speed must the water in the delta shock move in order to conserve momentum? The rate of momentum flowing into the delta shock from the left is $h_l u_l (u_l-\hat{u})$ and the rate of momentum flowing into the delta shock from the right is $h_r u_r (\hat{u}-u_r)$, so we must have
$$
h_\delta(t) \hat{u} = t \left( h_l u_l^2 - h_r u_r^2 + \hat{u}(h_r u_r - h_l u_l) \right).
$$
Combining these two conservation conditions, we find the speed of the delta shock:
$$
\hat{u} = \frac{\sqrt{h_l}\,u_l + \sqrt{h_r}\,u_r}{\sqrt{h_l}+\sqrt{h_r}}
$$
and its rate of growth:
$$
h_\delta(t) = t \sqrt{h_l h_r}(u_l - u_r).
$$
Note that for any non-zero value of $g$, the length of the interval occupied by the middle state grows in time while its amplitude is constant. In the limit $g\to 0$ the interval becomes infinitesimal so the amplitude must change to conserve mass. Notice also that this change in amplitude implies that the Riemann solution is no longer a similarity solution when $g=0$.
High-speed flows
Above we have considered the limit as gravity tends to zero in the shallow water equations (or equivalently as the sound speed goes to zero in the isothermal flow equations). As with any mathematical analysis involving a "small" parameter, one should ask "small relative to what"?
By non-dimensionalizing the shallow water equations, one obtains the system
\begin{align}
h_t + (hu)_x & = 0 \\
(hu)_t + \left(hu^2 + \frac{1}{2F^2}h^2\right)_x & = 0.
\end{align}
where
$$
F = \frac{U}{\sqrt{gH}}
$$
is the ratio of a typical velocity $U$ to the speed of gravity waves with typical depth $H$. From this form we can see that in fact we have studied the large Froude number limit of the shallow water equations, and we should expect to see something similar to a delta shock form whenever the Froude number is large. We can approach this limit in ways other than taking $g \to 0$. For instance, if the velocity is very large and $\sqrt{gh}=O(1)$, then we obtain (approximately) delta-shock solutions like those seen above. Here is an example.
End of explanation
"""
q_l = State(Depth = 3.,
Momentum = 0.)
q_r = State(Depth = 1.,
Momentum = 0.)
plot_riemann_SW(q_l,q_r)
"""
Explanation: Similarly, a non-dimensionalization of the isothermal equations shows that the pressureless flow equations correspond to the large Mach number limit (i.e. when the typical flow velocity is much greater than the sound speed).
References
Delta shocks arising in pressureless flows have been studied by many authors; see Section 16.3 of <cite data-cite="fvmhp"><a href="riemann.html#fvmhp">(LeVeque, 2002)</a></cite> and references therein, as well as <cite data-cite="leveque2004dynamics"><a href="riemann.html#leveque2004dynamics">(LeVeque, 2004)</a></cite>. For an examination of the large Froude number limit in shallow water flow, see <cite data-cite="edwards2008non"><a href="riemann.html#edwards2008non">(Edwards et al., 2008)</a></cite>.
Exercises
(1) In the example below, the initial velocity is zero for both states. Observe that multiplying the value of $g$ by a factor $\alpha$ is equivalent to rescaling all the velocities by $\sqrt{\alpha}$. Prove that this scaling must hold for any Riemann solution of this system when $u_l = u_r = 0$.
End of explanation
"""
|
jserenson/Python_Bootcamp | Sets and Booleans.ipynb | gpl-3.0 | x = set()
# We add to sets with the add() method
x.add(1)
#Show
x
"""
Explanation: Sets and Booleans
There are two other object types in Python that we should quickly cover: Sets and Booleans.
Sets
Sets are an unordered collection of unique elements. We can construct them by using the set() function. Let's go ahead and make a set to see how it works.
End of explanation
"""
# Add a different element
x.add(2)
#Show
x
# Try to add the same element
x.add(1)
#Show
x
"""
Explanation: Note the curly brackets. This does not indicate a dictionary! Although they look similar, you can think of a set as being like a dictionary with only keys.
We know that a set has only unique entries. So what happens when we try to add something that is already in a set?
End of explanation
"""
# Create a list with repeats
l = [1,1,2,2,3,4,5,6,1,1]
# Cast as set to get unique values
set(l)
"""
Explanation: Notice how it won't place another 1 there. That's because a set is only concerned with unique elements! We can cast a list with multiple repeat elements to a set to get the unique elements. For example:
End of explanation
"""
# Set object to be a boolean
a = True
#Show
a
"""
Explanation: Booleans
Python comes with Booleans (with predefined True and False displays that are basically just the integers 1 and 0). It also has a placeholder object called None. Let's walk through a few quick examples of Booleans (we will dive deeper into them later in this course).
End of explanation
"""
# Output is boolean
1 > 2
"""
Explanation: We can also use comparison operators to create booleans. We will go over all the comparison operators later on in the course.
End of explanation
"""
# None placeholder
b = None
"""
Explanation: We can use None as a placeholder for an object that we don't want to reassign yet:
End of explanation
"""
|
timothyb0912/pylogit | examples/notebooks/Mixed Logit Example--mlogit Benchmark--Electricity.ipynb | bsd-3-clause | from collections import OrderedDict # For recording the model specification
import pandas as pd # For file input/output
import numpy as np # For vectorized math operations
import pylogit as pl # For choice model estimation
"""
Explanation: The purpose of this notebook is twofold. First, it demonstrates the basic functionality of PyLogit for estimating Mixed Logit models. Second, it compares the estimation results for a Mixed Logit model from PyLogit and mlogit on a common dataset and model specification. The dataset is the "Electricity" dataset. Both the dataset and the model specification come from Kenneth Train's exercises. See <a href=https://cran.r-project.org/web/packages/mlogit/vignettes/Exercises.pdf>this mlogit pdf</a>.
End of explanation
"""
# Load the raw electricity data
long_electricity = pd.read_csv("../data/electricity_r_data_long.csv")
long_electricity.head().T
"""
Explanation: 1. Load the Electricity Dataset
End of explanation
"""
# Make sure that the choice column contains ones and zeros as opposed
# to true and false
long_electricity["choice"] = long_electricity["choice"].astype(int)
# List the variables that are the index variables
index_var_names = ["pf", "cl", "loc", "wk", "tod", "seas"]
# Transform all of the index variable columns to have float dtypes
for col in index_var_names:
long_electricity[col] = long_electricity[col].astype(float)
"""
Explanation: 2. Clean the dataset
End of explanation
"""
# Create the model's specification dictionary and variable names dictionary
# NOTE: - Keys should be variables within the long format dataframe.
# The sole exception to this is the "intercept" key.
# - For the specification dictionary, the values should be lists
# or lists of lists. Within a list, or within the inner-most
# list should be the alternative ID's of the alternative whose
# utility specification the explanatory variable is entering.
example_specification = OrderedDict()
example_names = OrderedDict()
# Note that the names used below are simply for consistency with
# the coefficient names given in the mlogit vignette.
for col in index_var_names:
example_specification[col] = [[1, 2, 3, 4]]
example_names[col] = [col]
"""
Explanation: 6. Specify and Estimate the Mixed Logit Model
6a. Specify the Model
End of explanation
"""
# Provide the module with the needed input arguments to create
# an instance of the Mixed Logit model class.
# Note that "chid" is used as the obs_id_col because "chid" is
# the choice situation id.
# Currently, the obs_id_col argument name is unfortunate because
# in the most general of senses, it refers to the situation id.
# In panel data settings, the mixing_id_col argument is what one
# would generally think of as a "observation id".
# For mixed logit models, the "mixing_id_col" argument specifies
# the units of observation that the coefficients are randomly
# distributed over.
example_mixed = pl.create_choice_model(data=long_electricity,
alt_id_col="alt",
obs_id_col="chid",
choice_col="choice",
specification=example_specification,
model_type="Mixed Logit",
names=example_names,
mixing_id_col="id",
mixing_vars=index_var_names)
# Note 2 * len(index_var_names) is used because we are estimating
# both the mean and standard deviation of each of the random coefficients
# for the listed index variables.
example_mixed.fit_mle(init_vals=np.zeros(2 * len(index_var_names)),
num_draws=600,
seed=123)
# Look at the estimated results
example_mixed.get_statsmodels_summary()
"""
Explanation: 6b. Estimate the model
Compared to estimating a Multinomial Logit Model, creating Mixed Logit models requires the declaration of more arguments.
In particular, we have to specify the identification column over which the coefficients will be randomly distributed. Usually, this is the "observation id" for our dataset. While it is unfortunately named (for now), the "obs_id_col" argument should be used to specify the id of each choice situation. The "mixing_id_col" should be used to specify the id of each unit of observation over which the coefficients are believed to be randomly distributed.
At the moment, PyLogit does not support estimation of Mixed Logit models where some coefficients are mixed over one unit of observation (such as individuals) and other coefficients are mixed over a different unit of observation (such as households).
Beyond, specification of the mixing_id_col, one must also specify the variables that will be treated as random variables. This is done by passing a list containing the names of the coefficients in the model. Note, the strings in the passed list must be present in one of the lists within "names.values()".
When estimating the mixed logit model, we must specify the number of draws to be taken from the distributions of the random coefficients. This is done through the "num_draws" argument of the "fit_mle()" function. Additionally, we can specify a random seed so the results of our estimation are reproducible. This is done through the "seed" argument of the "fit_mle()" function. Finally, the initial values argument should specify enough initial values for the original index coefficients as well as the standard deviation values of the coefficients that are being treated as random variables.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/test-institute-3/cmip6/models/sandbox-2/ocean.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-2', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: TEST-INSTITUTE-3
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
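For illustration only (models differ in which equation they adopt), the EOS-80 expression of Millero gives the freezing point in deg C as a function of practical salinity S and pressure p in dbar:

```python
def freezing_point_eos80(S, p=0.0):
    """EOS-80 (Millero, 1978) freezing point of seawater in deg C.

    S: practical salinity, p: pressure in dbar. Shown as an illustration
    only; document the equation your model actually uses.
    """
    return (-0.0575 * S
            + 1.710523e-3 * S ** 1.5
            - 2.154996e-4 * S ** 2
            - 7.53e-4 * p)

print(round(freezing_point_eos80(35.0), 3))  # -1.922 deg C at the surface
```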
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean?
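The Valid Choices comments above act as a controlled vocabulary, and for BOOLEAN properties the value is the unquoted True or False (hence DOC.set_value(value), not DOC.set_value("value")). A hypothetical helper that mirrors that check before a value is recorded:

```python
# Hypothetical helper mirroring the "Valid Choices" comments: reject
# values outside the controlled vocabulary before calling DOC.set_value.
def check_choice(value, valid_choices):
    if value not in valid_choices:
        raise ValueError(f"{value!r} is not one of {valid_choices}")
    return value

print(check_choice(True, [True, False]))  # True passes the check
```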
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
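To move between the two common ways of quoting resolution, one degree of latitude spans roughly 111 km; a rough conversion sketch (the spacing in km varies with latitude for longitude, and with the grid):

```python
KM_PER_DEGREE = 111.0  # approximate km per degree of latitude

def degrees_to_km(deg):
    # Rough meridional grid spacing for a resolution quoted in degrees.
    return deg * KM_PER_DEGREE

print(degrees_to_km(0.25))  # 27.75 -> a quarter degree is roughly 28 km
```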
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50 km (Equator) to 100 km, or 0.1-0.5 degrees, etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set True if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
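As a sanity check on a value entered here: a hypothetical 3600 s tracer step implies 24 steps per model day.

```python
# Hypothetical tracer time step in seconds (illustration only,
# not a recommended value for any particular model).
time_step = 3600
SECONDS_PER_DAY = 86400

steps_per_day = SECONDS_PER_DAY // time_step
print(steps_per_day)  # 24
```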
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (e.g. Fox-Kemper) in the lateral physics tracers scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embeded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
|
karthikrangarajan/intro-to-sklearn | 05.Unsupervised Learning.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Learning Algorithms - Unsupervised Learning
Reminder: In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the training set given to the learner is unlabeled, there is no error or reward signal to evaluate a potential solution. Basically, we are just finding a way to represent the data and extract as much information from it as we can.
HEY! Remember PCA from above? PCA is actually considered unsupervised learning. We just put it up there because it's a good way to visualize data at the beginning of the ML process.
Let's revisit it in a little more detail using the iris dataset.
End of explanation
"""
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
iris = load_iris()
# subset data to have only sepal width (cm) and petal length (cm) for simplification
X = iris.data[:, 1:3]
print(iris.feature_names[1:3])
pca = PCA(n_components = 2)
pca.fit(X)
print("% of variance attributed to components: "+ \
', '.join(['%.2f' % (x * 100) for x in pca.explained_variance_ratio_]))
print('\ncomponents and amount of variance explained by each feature:', pca.components_)
print(pca.mean_)
"""
Explanation: PCA revisited
End of explanation
"""
# plot the original data in X (before PCA)
plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5)
# grab the component means to get the center point for plot below
means = pca.mean_
# here we use the direction of the components in pca.components_
# and the magnitude of the variance explained by that component in
# pca.explained_variance_
# we plot the vector (magnitude and direction) of the components
# on top of the original data in X
for length, vector in zip(pca.explained_variance_, pca.components_):
v = vector * 3 * np.sqrt(length)
plt.plot([means[0], v[0]+means[0]], [means[1], v[1]+means[1]], '-k', lw=3)
# axis limits
plt.xlim(0, max(X[:, 0])+3)
plt.ylim(0, max(X[:, 1])+3)
# original feature labels of our data X
plt.xlabel(iris.feature_names[1])
plt.ylabel(iris.feature_names[2])
"""
Explanation: The pca.explained_variance_ is like the magnitude of a component's influence (the amount of variance explained), and the pca.components_ is like the direction of influence for each feature in each component.
<p style="text-align:right"><i>Code in next cell adapted from Jake VanderPlas's code [here](https://github.com/jakevdp/sklearn_pycon2015)</i></p>
End of explanation
"""
# get back to our 4D dataset
X, y = iris.data, iris.target
pca = PCA(n_components = 0.95) # keep 95% of variance
X_trans = pca.___(X) # <- fill in the blank
print(X.shape)
print(X_trans.shape)
plt.scatter(X_trans[:, 0], X_trans[:, 1], c=iris.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('spring', 10))
plt.ylabel('Component 2')
plt.xlabel('Component 1')
"""
Explanation: QUESTION: In which direction in the data is the most variance explained?
Recall, in the ML 101 module: unsupervised models have a fit(), transform() and/or fit_transform() in sklearn.
If you want to both get a fit and new dataset with reduced dimensionality, which would you use below? (Fill in blank in code)
End of explanation
"""
#!pip install ipywidgets
from ipywidgets import interact
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.datasets.samples_generator import make_blobs
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
iris = load_iris()
X, y = iris.data, iris.target
pca = PCA(n_components = 2) # keep 2 components which explain most variance
X = pca.fit_transform(X)
X.shape
# I have to tell KMeans how many cluster centers I want
n_clusters = 3
# for consistent results when running the methods below
random_state = 2
"""
Explanation: Clustering
KMeans finds cluster centers that are the mean of the points within them. Likewise, a point is in a cluster because the cluster center is the closest cluster center for that point.
If you don't have ipywidgets package installed, go ahead and install it now by running the cell below uncommented.
End of explanation
"""
def _kmeans_step(frame=0, n_clusters=n_clusters):
rng = np.random.RandomState(random_state)
labels = np.zeros(X.shape[0])
centers = rng.randn(n_clusters, 2)
nsteps = frame // 3
for i in range(nsteps + 1):
old_centers = centers
if i < nsteps or frame % 3 > 0:
dist = euclidean_distances(X, centers)
labels = dist.argmin(1)
if i < nsteps or frame % 3 > 1:
centers = np.array([X[labels == j].mean(0)
for j in range(n_clusters)])
nans = np.isnan(centers)
centers[nans] = old_centers[nans]
# plot the data and cluster centers
plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='rainbow',
vmin=0, vmax=n_clusters - 1);
plt.scatter(old_centers[:, 0], old_centers[:, 1], marker='o',
c=np.arange(n_clusters),
s=200, cmap='rainbow')
plt.scatter(old_centers[:, 0], old_centers[:, 1], marker='o',
c='black', s=50)
# plot new centers if third frame
if frame % 3 == 2:
for i in range(n_clusters):
plt.annotate('', centers[i], old_centers[i],
arrowprops=dict(arrowstyle='->', linewidth=1))
plt.scatter(centers[:, 0], centers[:, 1], marker='o',
c=np.arange(n_clusters),
s=200, cmap='rainbow')
plt.scatter(centers[:, 0], centers[:, 1], marker='o',
c='black', s=50)
plt.xlim(-4, 5)
plt.ylim(-2, 2)
plt.ylabel('PC 2')
plt.xlabel('PC 1')
if frame % 3 == 1:
plt.text(4.5, 1.7, "1. Reassign points to nearest centroid",
ha='right', va='top', size=8)
elif frame % 3 == 2:
plt.text(4.5, 1.7, "2. Update centroids to cluster means",
ha='right', va='top', size=8)
"""
Explanation: <p style="text-align:right"><i>Code in next cell adapted from Jake VanderPlas's code [here](https://github.com/jakevdp/sklearn_pycon2015)</i></p>
End of explanation
"""
# suppress future warning
import warnings
warnings.filterwarnings('ignore')
min_clusters, max_clusters = 1, 6
interact(_kmeans_step, frame=[0, 20],
n_clusters=[min_clusters, max_clusters])
"""
Explanation: KMeans employs the <i>Expectation-Maximization</i> algorithm, which works as follows:
Guess cluster centers
Assign points to nearest cluster
Set cluster centers to the mean of points
Repeat 1-3 until converged
End of explanation
"""
%matplotlib inline
from matplotlib import rcParams, font_manager
rcParams['figure.figsize'] = (14.0, 7.0)
fprop = font_manager.FontProperties(size=14)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from sklearn import svm
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
xx, yy = np.meshgrid(np.linspace(-2, 9, 500), np.linspace(-2,9, 500))
# Iris data
iris = load_iris()
X, y = iris.data, iris.target
labels = iris.feature_names[1:3]
X = X[:, 1:3]
# split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0)
# make some outliers
X_weird = np.random.uniform(low=-2, high=9, size=(20, 2))
# fit the model
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=1, random_state = 0)
clf.fit(X_train)
# predict labels
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
y_pred_outliers = clf.predict(X_weird)
n_error_train = y_pred_train[y_pred_train == -1].size
n_error_test = y_pred_test[y_pred_test == -1].size
n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size
# plot the line, the points, and the nearest vectors to the plane
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.title("Novelty Detection aka Anomaly Detection")
plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.Blues_r)
a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='red')
plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='orange')
b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c='white')
b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c='green')
c = plt.scatter(X_weird[:, 0], X_weird[:, 1], c='red')
plt.axis('tight')
plt.xlim((-2, 9))
plt.ylim((-2, 9))
plt.ylabel(labels[1], fontsize = 14)
plt.legend([a.collections[0], b1, b2, c],
["learned frontier", "training observations",
"new regular observations", "new abnormal observations"],
loc="best",
prop=fprop)
plt.xlabel(
    "%s\nerrors train: %d/%d ; errors novel regular: %d/%d ; "
    "errors novel abnormal: %d/%d"
    % (labels[0], n_error_train, len(X_train), n_error_test, len(X_test),
       n_error_outliers, len(X_weird)), fontsize = 14)
"""
Explanation: <b>Warning</b>! There is absolutely no guarantee of recovering a ground truth. First, choosing the right number of clusters is hard. Second, the algorithm is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue.<br> --Taken directly from sklearn docs
<img src='imgs/pca1.png' alt="Original PCA with Labels" align="center">
Novelty detection aka anomaly detection
QUICK QUESTION:
What is the difference between outlier detection and anomaly detection?
Below we will use a one-class support vector machine classifier to decide if a point is weird or not given our original data. (The code was adapted from sklearn docs here)
End of explanation
"""
|
ericmjl/Network-Analysis-Made-Simple | archive/7-game-of-thrones-case-study-student.ipynb | mit | import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import community
import numpy as np
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
"""
Explanation: Let's change gears and talk about Game of thrones or shall I say Network of Thrones.
Surprising, right? What is the relationship between a fantasy TV show/novel and network science or Python? (It's not related to a dragon.)
If you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books.
Andrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate, created a network from the book A Storm of Swords by extracting relationships between characters to find out the most important characters in the book (or GoT).
The datasets are publicly available for all 5 books at https://github.com/mathbeveridge/asoiaf. These are interaction networks, created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions.
Credits:
Blog: https://networkofthrones.wordpress.com
Math Horizons Article: https://www.maa.org/sites/default/files/pdf/Mathhorizons/NetworkofThrones%20%281%29.pdf
End of explanation
"""
book1 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book1-edges.csv')
book2 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book2-edges.csv')
book3 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book3-edges.csv')
book4 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book4-edges.csv')
book5 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book5-edges.csv')
"""
Explanation: Let's load in the datasets
End of explanation
"""
book1
"""
Explanation: The resulting DataFrame book1 has 5 columns: Source, Target, Type, weight, and book. Source and target are the two nodes that are linked by an edge. A network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number.
End of explanation
"""
G_book1 = nx.Graph()
G_book2 = nx.Graph()
G_book3 = nx.Graph()
G_book4 = nx.Graph()
G_book5 = nx.Graph()
"""
Explanation: Once we have the data loaded as a pandas DataFrame, it's time to create a network. We create a graph for each book. It's possible to create one MultiGraph instead of 5 graphs, but it is easier to play with different graphs.
End of explanation
"""
for row in book1.iterrows():
G_book1.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book2.iterrows():
G_book2.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book3.iterrows():
G_book3.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book4.iterrows():
G_book4.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
for row in book5.iterrows():
G_book5.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book'])
books = [G_book1, G_book2, G_book3, G_book4, G_book5]
"""
Explanation: Let's populate the graph with edges from the pandas DataFrame.
End of explanation
"""
list(G_book1.edges(data=True))[16]
list(G_book1.edges(data=True))[400]
"""
Explanation: Let's have a look at these edges.
End of explanation
"""
deg_cen_book1 = nx.degree_centrality(books[0])
deg_cen_book5 = nx.degree_centrality(books[4])
sorted(deg_cen_book1.items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(deg_cen_book5.items(), key=lambda x:x[1], reverse=True)[0:10]
# Plot a histogram of degree centrality
plt.hist(list(nx.degree_centrality(G_book4).values()))
plt.show()
"""
Explanation: Finding the most important node i.e character in these networks.
Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no "correct" way of calculating the most important node in a network, every metric has a different meaning.
First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality.
Using this measure, let's extract the top ten important characters from the first book (book[0]) and the fifth book (book[4]).
End of explanation
"""
def weighted_degree(G, weight):
result = dict()
for node in G.nodes():
weight_degree = 0
for n in G.edges([node], data=True):
weight_degree += ____________
result[node] = weight_degree
return result
plt.hist(___________)
plt.show()
sorted(weighted_degree(G_book1, 'weight').items(), key=lambda x:x[1], reverse=True)[0:10]
"""
Explanation: Exercise
Create a new centrality measure, weighted_degree(Graph, weight) which takes in Graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weight of the all edges of a node and find the top five characters according to this measure. [5 mins]
End of explanation
"""
# First check unweighted, just the structure
sorted(nx.betweenness_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10]
# Let's care about interactions now
sorted(nx.betweenness_centrality(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
"""
Explanation: Let's do this for betweenness centrality and check whether it makes any difference
Haha, evil laugh
End of explanation
"""
# by default weight attribute in pagerank is weight, so we use weight=None to find the unweighted results
sorted(nx.pagerank_numpy(G_book1, weight=None).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank_numpy(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
"""
Explanation: PageRank
The billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites.
End of explanation
"""
cor = pd.DataFrame.from_records([______, _______, _______, ______])
cor.T
cor.T.______()
"""
Explanation: Is there a correlation between these techniques?
Exercise
Find the correlation between these four techniques.
pagerank
betweenness_centrality
weighted_degree
degree centrality
End of explanation
"""
evol = [nx.degree_centrality(book) for book in books]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
evol_df[['Eddard-Stark', 'Tyrion-Lannister', 'Jon-Snow']].plot()
set_of_char = set()
for i in range(5):
set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))
set_of_char
"""
Explanation: Evolution of importance of characters over the books
According to degree centrality the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance changes over the course of five books, because you know stuff happens ;)
Let's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, Tyrion which showed up in the top 10 of degree centrality in first book.
We create a dataframe with character columns and index as books where every entry is the degree centrality of the character in that particular book and plot the evolution of degree centrality Eddard Stark, Jon Snow and Tyrion.
We can see that the importance of Eddard Stark in the network dies off, while Jon Snow's dips in the fourth book but rises suddenly in the fifth.
End of explanation
"""
evol_df[__________].plot(figsize=(29,15))
evol = [____________ for graph in books]
evol_df = pd.DataFrame.from_records(evol).fillna(0)
set_of_char = set()
for i in range(5):
set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index))
evol_df[___________].plot(figsize=(19,10))
"""
Explanation: Exercise
Plot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality.
End of explanation
"""
nx.draw(nx.barbell_graph(5, 1), with_labels=True)
sorted(nx.degree_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]
sorted(nx.betweenness_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5]
"""
Explanation: So what's up with Stannis Baratheon?
End of explanation
"""
partition = community.best_partition(G_book1)
size = float(len(set(partition.values())))
pos = nx.spring_layout(G_book1)
count = 0.
for com in set(partition.values()) :
count = count + 1.
list_nodes = [nodes for nodes in partition.keys()
if partition[nodes] == com]
nx.draw_networkx_nodes(G_book1, pos, list_nodes, node_size = 20,
node_color = str(count / size))
nx.draw_networkx_edges(G_book1, pos, alpha=0.5)
plt.show()
d = {}
for character, par in partition.items():
if par in d:
d[par].append(character)
else:
d[par] = [character]
d
nx.draw(nx.subgraph(G_book1, d[3]))
nx.draw(nx.subgraph(G_book1, d[1]))
nx.density(G_book1)
nx.density(nx.subgraph(G_book1, d[4]))
nx.density(nx.subgraph(G_book1, d[4]))/nx.density(G_book1)
"""
Explanation: Community detection in Networks
A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally.
We will use the Louvain community detection algorithm to find the modules in our graph.
End of explanation
"""
max_d = {}
deg_book1 = nx.degree_centrality(G_book1)
for ______ in d:
temp = 0
for _______ in d[group]:
if deg_book1[_______] > temp:
max_d[______] = _______
temp = deg_book1[_______]
max_d
"""
Explanation: Exercise
Find the most important node in the partitions according to degree centrality of the nodes.
End of explanation
"""
G_random = nx.erdos_renyi_graph(100, 0.1)
nx.draw(G_random)
G_ba = nx.barabasi_albert_graph(100, 2)
nx.draw(G_ba)
# Plot a histogram of degree centrality
plt.hist(list(nx.degree_centrality(G_random).values()))
plt.show()
plt.hist(list(nx.degree_centrality(G_ba).values()))
plt.show()
G_random = nx.erdos_renyi_graph(2000, 0.2)
G_ba = nx.barabasi_albert_graph(2000, 20)
d = {}
for i, j in dict(nx.degree(G_random)).items():
if j in d:
d[j] += 1
else:
d[j] = 1
x = np.log2(list((d.keys())))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.9)
plt.show()
d = {}
for i, j in dict(nx.degree(G_ba)).items():
if j in d:
d[j] += 1
else:
d[j] = 1
x = np.log2(list((d.keys())))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.9)
plt.show()
"""
Explanation: A bit about power law in networks
End of explanation
"""
|
Bio204-class/bio204-notebooks | 2016-04-20-MultipleRegression.ipynb | cc0-1.0 | # if we use %matplotlib notebook we get embedded plots
# we can interact with!
%matplotlib notebook
import numpy as np
import scipy.stats as stats
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sbn
"""
Explanation: Bio 204: Multiple Regression
End of explanation
"""
# generate example data
np.random.seed(20140420)
npts = 30
x = np.linspace(0,10,npts) + np.random.normal(0,2,size=npts)
a = 1
b = 2
y = b*x + a + np.random.normal(0,4,size=npts)
# fit regression
regr = stats.linregress(x,y)
regr_line = regr.slope*x + regr.intercept
# draw scatter
plt.plot(x,y, 'o')
# draw regression line
plt.plot(x, regr_line, marker=None, color='firebrick', alpha=0.5)
# draw residuals
plt.vlines(x, y, regr_line, color='gray')
plt.title("Regression of Y on X")
plt.xlabel("X")
plt.ylabel("Y")
pass
"""
Explanation: Review of bivariate regression
Recall the model for bivariate least-squares regression.
When we regress $Y$ on $X$ we're looking for a linear function, $f(X)$, for which the following sum-of-squared deviations is minimized:
$$
\sum_{i=1}^n (y_i - f(x_i))^2
$$
The general form a linear function of one variable is a line,
$$
f(x) = a + bX
$$
where $b$ is the slope of the line and $a$ is the intercept.
In pictures, what we're looking for is the line that minimizes the length of the gray lines in the figure below.
End of explanation
"""
# import the formula based module for statsmodels
import statsmodels.formula.api as smf
"""
Explanation: OLS regression with the StatsModels library
Let's introduce the StatsModels library. StatsModels implements a wide range of statistical methods that complement the functions defined in the scipy.stats module. First we'll revisit simple bivariate regression using StatsModels, and then we'll turn to multiple regression.
End of explanation
"""
# put the x and y data with column names x and y
data = pd.DataFrame(dict(x=x, y=y))
"""
Explanation: We first put the x and y data into a DataFrame, in order to facilitate the use of R-style formulas for specifying statistical models (R is another programming environment that is popular for statistical analysis).
End of explanation
"""
# read this "formula" as specifying y as a function of x
fitmodel = smf.ols('y ~ x', data).fit()
"""
Explanation: Now we specify the model and fit it using the ols() (ordinary least squares) function defined in statsmodels. We can read the formula in the call to ols as specifying that we want to model $y$ as a function of $x$.
End of explanation
"""
type(fitmodel)
"""
Explanation: The ols function returns an object of type RegressionResultsWrapper. This is a Python object that bundles together lots of useful information about the model we just fit.
End of explanation
"""
print(fitmodel.summary())
"""
Explanation: For example, we can print summary tables using the summary() method:
End of explanation
"""
fitmodel.params
fitmodel.rsquared
fitmodel.df_model, fitmodel.df_resid
"""
Explanation: We can get individual properties such as the parameters of the model (i.e. slope and intercept), the coefficient of determination, the degrees of freedom for the model, etc.
End of explanation
"""
# generate a synthetic data set
npts = 30
x1 = np.linspace(1,5,npts) + np.random.normal(0, 2, size=npts)
x2 = np.linspace(5,10,npts) + np.random.normal(0, 2, size=npts)
# "true "model parameters
a = 1
b1, b2 = 2, -1
y = a + b1*x1 + b2*x2 + np.random.normal(0,3, size=npts)
data2 = pd.DataFrame(dict(x1=x1, x2=x2, y=y))
"""
Explanation: For a full list of attributes and functions associated with the RegressionResults object see the StatsModel documentation here.
Multiple regression is conceptually similar to bivariate regression
The idea behind multiple regression is almost exactly the same as bivariate regression, except now we try to fit a linear model for $Y$ using multiple explanatory variables, $X_1, X_2,\ldots, X_m$. That is, we're looking for a linear function, $f(X_1, X_2,\ldots,X_m)$, that minimizes:
$$
\sum_{i=1}^n (y_i - f(x_{i1}, x_{i2},\ldots, x_{im}))^2
$$
A linear function of more than one variable is written as:
$$
f(X_1, X_2,\ldots,X_m) = a + b_1X_1 + b_2X_2 + \cdots + b_mX_m
$$
The values, $b_1, b_2,\ldots,b_m$ are the regression coefficients. Geometrically they have the same interpretation as in the bivariate case -- slopes with respect to the corresponding variable.
Solving the multiple regression problem
Mathematically, we simultaneously find the best fitting regression coefficients, $b_1, b_2,\ldots,b_m$, using linear algebra. However, since we haven't covered linear algebra in this course, I will omit the details.
Lines, planes, and hyperplanes
A linear function of one variable is a line, a linear function of two variables is a plane, and a linear function of more than two variables is generally called a hyperplane.
Example: Modeling $Y$ as a function of $X_1$ and $X_2$
Let's fit a multiple regression to a synthetic data set that involves two explanatory variables, $X_1$ and $X_2$. We'll also take this as an opportunity to introduce 3D plots in matplotlib.
End of explanation
"""
# drawing a 3D scatter plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(data2.x1, data2.x2, data2.y, c='red', s=30)
# this sets the viewpoint in the 3D space from which we are viewing
# the plot
ax.view_init(elev=10., azim=30)
ax.set_xlabel("$X_1$", fontsize=12)
ax.set_ylabel("$X_2$", fontsize=12)
ax.set_zlabel("$Y$", fontsize=12)
fig.tight_layout()
pass
"""
Explanation: 3D Scatter Plot
Matplotlib's 3D plotting functions live in a module called mpl_toolkits.mplot3d. Here's how to create a 3D scatter plot.
End of explanation
"""
# read the model as y is a function of x1 and x2
model2 = smf.ols('y ~ x1 + x2', data2).fit()
print(model2.summary())
"""
Explanation: Fitting a multiple regression using StatsModels
The formula based method for specifying models extends to the multiple regression case, as illustrated below.
End of explanation
"""
# get coefficients of fit
b1 = model2.params.x1
b2 = model2.params.x2
a = model2.params.Intercept
# find appropriate limits of data
x1min, x1max = data2.x1.min(), data2.x1.max()
x2min, x2max = data2.x2.min(), data2.x2.max()
# setup evenly spaced points to specify the regression surface drawing
X1, X2 = np.meshgrid(np.linspace(x1min, x1max, 50),
np.linspace(x2min, x2max, 50))
# Calculates predicted values
Yhat = a + b1*X1 + b2*X2
# create plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.view_init(elev=10., azim=60) # specify view point
# add surface
ax.plot_surface(X1, X2, Yhat,
color='gray',
alpha = 0.2)
# add scatter
ax.scatter3D(data2.x1, data2.x2, data2.y, c='red', s=30)
# setup axis labels
ax.set_xlabel("$X_1$", fontsize=12)
ax.set_ylabel("$X_2$", fontsize=12)
ax.set_zlabel("$Y$", fontsize=12)
fig.tight_layout()
# same plot as before, but including the residuals and using a wireframe surface plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(data2.x1, data2.x2, data2.y, c='red', s=30)
ax.scatter3D(data2.x1, data2.x2, model2.fittedvalues, c='black', s=5)
for i in range(len(data2.x1)):
ax.plot((data2.x1[i],data2.x1[i]),
(data2.x2[i],data2.x2[i]),
(data2.y[i], model2.fittedvalues[i]),
color='black', alpha=0.5,zorder=1)
ax.plot_wireframe(X1, X2, Yhat,
rstride=4,
cstride=4,
color='gray',
alpha = 0.3, zorder=10)
ax.view_init(elev=20, azim=80)
ax.set_xlabel("$X_1$", fontsize=12)
ax.set_ylabel("$X_2$", fontsize=12)
ax.set_zlabel("$Y$", fontsize=12)
fig.tight_layout()
pass
"""
Explanation: As was the case for univariate regression, we're interested in key information such as the regression coefficients, the R-squared value associated with the model, confidence intervals and P-values for the coefficients, etc.
Drawing the regression plane in 3D
Getting a 3D representation of the regression plane requires a fair amount of setup. If you were generating such plots frequently, you'd want to wrap the code in a function.
End of explanation
"""
|
PythonFreeCourse/Notebooks | week06/3_Comprehensions.ipynb | mit | names = ['Yam', 'Gal', 'Orpaz', 'Aviram']
"""
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.">
<span style="text-align: right; direction: rtl; float: right;">Comprehensions</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
מפתחי פייתון אוהבים מאוד קוד קצר ופשוט שמנוסח היטב.<br>
יוצרי השפה מתמקדים פעמים רבות בלאפשר למפתחים בה לכתוב קוד בהיר ותמציתי במהירות.<br>
במחברת זו נלמד איך לעבור על iterable וליצור ממנו מבני נתונים מעניינים בקלות ובמהירות.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">List Comprehension</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">עיבוד רשימות</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נתחיל במשימה פשוטה יחסית:<br>
בהינתן רשימת שמות, אני מעוניין להפוך את כל השמות ברשימה ליווניים.<br>
כידוע, אפשר להפוך כל שם ליווני על ידי הוספת ההברה <em>os</em> בסופו. לדוגמה, השם Yam ביוונית הוא Yamos.
</p>
End of explanation
"""
new_names = []
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
למה אנחנו מחכים? ניצור את הרשימה החדשה:
</p>
End of explanation
"""
for name in names:
new_names.append(name + 'os')
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נעבור על הרשימה הישנה בעזרת לולאת <code>for</code>, נשרשר לכל איבר "<em>os</em>" ונצרף את התוצאה לרשימה החדשה:
</p>
End of explanation
"""
print(new_names)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
כשהלולאה תסיים לרוץ, תהיה בידינו רשימה חדשה של שמות יוונים:
</p>
End of explanation
"""
new_names = map(lambda name: name + 'os', names)
print(list(new_names))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אם נסתכל על הלולאה שיצרנו, נוכל לזהות בה ארבעה מרכיבים עיקריים.
</p>
<table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
<caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">פירוק מרכיבי לולאת for ליצירת רשימה חדשה</caption>
<thead>
<tr>
<th>שם המרכיב</th>
<th>תיאור המרכיב</th>
<th>דוגמה</th>
</tr>
</thead>
<tbody>
<tr>
<td><span style="background: #073b4c; color: white; padding: 0.2em;">ה־iterable הישן</span></td>
<td>אוסף הנתונים המקורי שעליו אנחנו רצים.</td>
<td><var>names</var></td>
</tr>
<tr>
<td><span style="background: #118ab2; color: white; padding: 0.15em;">הערך הישן</span></td>
<td>משתנה הלולאה. הלייזר שמצביע בכל פעם על ערך יחיד מתוך ה־iterable הישן.</td>
<td><var>name</var></td>
</tr>
<tr>
<td><span style="background: #57bbad; color: white; padding: 0.15em;">הערך החדש</span></td>
<td>הערך שנרצה להכניס ל־iterable שאנחנו יוצרים, בדרך כלל מושפע מהערך הישן.</td>
<td><code dir="ltr">name + 'os'</code></td>
</tr>
<tr>
<td><span style="background: #ef476f; color: white; padding: 0.15em;">ה־iterable החדש</span></td>
<td>ה־iterable שאנחנו רוצים ליצור, הערך שיתקבל בסוף הריצה.</td>
<td><var>new_names</var></td>
</tr>
</tbody>
</table>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו ב־<var>map</var> כדי ליצור מ־<var>names</var> רשימת שמות יווניים באותה הצורה.<br>
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
צורת ה־<var>map</var> בפתרון שלכם הייתה אמורה להשתמש בדיוק באותם חלקי הלולאה.<br>
אם עדיין לא ניסיתם לפתור בעצמכם, זה הזמן לכך.<br>
התשובה שלכם אמורה להיראות בערך כך:
</p>
End of explanation
"""
names = ['Yam', 'Gal', 'Orpaz', 'Aviram']
new_names = [name + 'os' for name in names] # list comprehension
print(new_names)
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">הטכניקה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
<dfn>list comprehension</dfn> היא טכניקה שמטרתה לפשט את מלאכת הרכבת הרשימה, כך שתהיה קצרה, מהירה וקריאה.<br>
ניגש לעניינים! אבל ראו הוזהרתם – במבט ראשון list comprehension עשוי להיראות מעט מאיים וקשה להבנה.<br>
הנה זה בא:
</p>
End of explanation
"""
names = ['Johnny Eck', 'David Ben Gurion', 'Elton John']
reversed_names = [name[::-1] for name in names]
print(reversed_names)
repeated_digits = [int(str(number) * 9) for number in range(1, 10)]
print(repeated_digits)
places = (
{'name': 'salar de uyuni', 'location': 'Bolivia'},
{'name': 'northern lake baikal', 'location': 'Russia'},
{'name': 'kuang si falls', 'location': 'Laos'},
)
places_titles = [place['name'].title() for place in places]
print(places_titles)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
הדבר הראשון שמבלבל כשנפגשים לראשונה עם list comprehension הוא סדר הקריאה המשונה:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>list comprehension מתחיל בפתיחת סוגריים מרובעים (ומסתיים בסגירתם), שמציינים שאנחנו מעוניינים ליצור רשימה חדשה.</li>
<li>את מה שבתוך הסוגריים עדיף להתחיל לקרוא מהמילה <code>for</code> – נוכל לראות את הביטוי <code dir="ltr">for name in names</code> שאנחנו כבר מכירים.</li>
<li>מייד לפני המילה <code>for</code>, נכתוב את ערכו של האיבר שאנחנו רוצים לצרף לרשימה החדשה בכל איטרציה של הלולאה.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נביט בהשוואת החלקים של ה־list comprehension לחלקים של לולאת ה־<code>for</code>:
</p>
<figure>
<img src="images/for_vs_listcomp.png?v=2" style="max-width: 500px; margin-right: auto; margin-left: auto; text-align: center;" alt="לולאת ה־for שכתבנו למעלה עם המשתנה names (ה־iterable) מודגש בצבע מספר 1, המשתנה name (הערך הישן) מודגש בצבע מספר 2, הביטוי name + 'os' מודגשים בצבע 3 והמשתנה new_names (הרשימה החדשה) בצבע 4. מתחתיו לביטוי זה יש קו מקווקו, ומתחתיו הביטוי של ה־list comprehension עם אותם חלקים צבועים באותם צבעים."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">השוואה בין יצירת רשימה בעזרת <code>for</code> ובעזרת list comprehension</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
list comprehension מאפשרת לשנות את הערך שנוסף לרשימה בקלות.<br>
מסיבה זו, מתכנתים רבים יעדיפו את הטכניקה הזו על פני שימוש ב־<var>map</var>, שבה נצטרך להשתמש ב־<code>lambda</code> ברוב המקרים.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נתונה הרשימה <code dir="ltr">numbers = [1, 2, 3, 4, 5]</code>.<br>
השתמשו ב־list comprehension כדי ליצור בעזרתה את הרשימה <code dir="ltr">[1, 4, 9, 16, 25]</code>.<br>
האם אפשר להשתמש בפונקציה <var>range</var> במקום ב־<var>numbers</var>?
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
list comprehension הוא מבנה גמיש מאוד!<br>
נוכל לכתוב בערך שאנחנו מצרפים לרשימה כל ביטוי שיתחשק לנו, ואפילו לקרוא לפונקציות.<br>
נראה כמה דוגמאות:
</p>
End of explanation
"""
names = ['Margaret Thatcher', 'Karl Marx', "Ze'ev Jabotinsky", 'Bertrand Russell', 'Fidel Castro']
long_names = []
for name in names:
if len(name) > 12:
long_names.append(name)
long_names
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו ב־list comprehension כדי ליצור את הרשימה הבאה:<br>
<code dir="ltr">[(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]</code>.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תנאים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נציג תבנית נפוצה נוספת הנוגעת לעבודה עם רשימות.<br>
לעיתים קרובות, נרצה להוסיף איבר לרשימה רק אם מתקיים לגביו תנאי מסוים.<br>
לדוגמה, ניקח מרשימת השמות הבאה רק את האנשים ששמם ארוך מתריסר תווים:
</p>
End of explanation
"""
names = ['Margaret Thatcher', 'Karl Marx', "Ze'ev Jabotinsky", 'Bertrand Russell', 'Fidel Castro']
long_names = [name for name in names if len(name) > 12]
print(long_names)
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו ב־<var>filter</var> כדי ליצור מ־<var>names</var> רשימת שמות ארוכים באותה הצורה.<br>
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נפרק את הקוד הקצר שיצרנו למעלה למרכיביו:
</p>
<table style="text-align: right; direction: rtl; clear: both; font-size: 1.3rem">
<caption style="text-align: center; direction: rtl; clear: both; font-size: 2rem; padding-bottom: 2rem;">Breaking down the components of a for loop with a condition that builds a new list</caption>
<thead>
<tr>
<th>Component</th>
<th>Description</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><span style="background: #073b4c; color: white; padding: 0.2em;">Initialization</span></td>
<td>Initializing the list to an empty value.</td>
<td><code dir="ltr">long_names = []</code></td>
</tr>
<tr>
<td><span style="background: #118ab2; color: white; padding: 0.15em;">The loop</span></td>
<td>The part that iterates over every item in the existing iterable and creates a variable we can refer to.</td>
<td><code dir="ltr">for name in names:</code></td>
</tr>
<tr>
<td><span style="background: #57bbad; color: white; padding: 0.15em;">The check</span></td>
<td>A condition that checks whether the value meets a certain criterion.</td>
<td><code dir="ltr">if len(name) > 12:</code></td>
</tr>
<tr>
<td><span style="background: #ef476f; color: white; padding: 0.15em;">The append</span></td>
<td>Adding the item to the new list, if it satisfies the condition set by the check.</td>
<td><code dir="ltr">long_names.append(name)</code></td>
</tr>
</tbody>
</table>
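Put together, the components in the table form the plain-loop version of the same filter:

```python
names = ['Margaret Thatcher', 'Karl Marx', "Ze'ev Jabotinsky", 'Bertrand Russell', 'Fidel Castro']
long_names = []                      # initialization
for name in names:                   # the loop
    if len(name) > 12:               # the check
        long_names.append(name)      # the append
print(long_names)
```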
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Now let's learn how to implement exactly the same idea using a list comprehension:
</p>
End of explanation
"""
files = ['moshe_homepage.html', 'yahoo.html', 'python.html', 'shnitzel.gif']
html_names = [file.split('.')[0] for file in files if file.endswith('.html')]
print(html_names)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's see once more a comparison between a list comprehension and a regular <code>for</code> loop, this time with a condition:
</p>
<figure>
<img src="images/for_vs_listcomp_with_if.png?v=1" style="max-width: 600px; margin-right: auto; margin-left: auto; text-align: center;" alt="Top: the for loop we wrote above, with long_names = [] highlighted in color 1, the loop body in color 2, the check in color 3 and appending the item to the list in color 4. Bottom: long_names = [ in color 1, name in color 2, for name in names in color 3, if len(name) > 12 in color 4, ] in color 1."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">Comparison between creating a list with a <code>for</code> loop and with a list comprehension</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Here, too, the reading order is a little unusual, but the general idea of the list comprehension is preserved:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>A list comprehension starts with an opening square bracket (and ends with a closing one), to indicate that we want to create a new list.</li>
<li>It is best to start reading the contents of the brackets from the word <code>for</code> – there we find the familiar expression <code dir="ltr">for name in names</code>.</li>
<li>Continue on to the condition, if there is one. The item is added to the list only if the condition holds.</li>
<li>Right before the word <code>for</code>, we write the value of the item we want to append to the list in each iteration of the loop.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We can combine these techniques to build complex lists with ease.<br>
Let's find the names of all the files whose extension is "<em dir="ltr">.html</em>":
</p>
End of explanation
"""
import random
import string
CHARACTERS = f'.{string.digits}{string.ascii_letters}'
WEIGHTS = [1] * len(f'.{string.digits}') + [0.05] * len(string.ascii_letters)
def generate_size(length):
return ''.join(random.choices(CHARACTERS, weights=WEIGHTS, k=length))
def generate_closet(closet_size=20, shoe_size=4):
return [generate_size(shoe_size) for _ in range(closet_size)]
generate_closet(5)
"""
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">Interim Exercise: Root Canal</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The closet in Sandal'e's shop is a bit of a mess.<br>
When a customer comes in and asks Sandal'e to try on a particular size, she has to rummage through thousands of products in the closet, and sometimes the sizes she finds there are very strange.<br>
The instructions Sandal'e gave us for organizing her closet are quite simple:<br>
Ignore any size that contains a character which is not a digit or a dot, and take the square root only of the numeric sizes.<br>
Also ignore numbers that contain more than one dot.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example, for the closet <code dir="ltr">['100', '25.0', '12a', 'mEoW', '0']</code>, return <samp dir="ltr">[10.0, 5.0, 0.0]</samp>.<br>
For the closet <code dir="ltr">['Area51', '303', '2038', 'f00b4r', '314.1']</code>, return <samp dir="ltr">[17.4, 45.14, 17.72]</samp>.<br>
(We trimmed some of the digits after the dot for readability.)
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>organize_closet</var> that receives a closet list and organizes it.<br>
You can test yourself with the <var>generate_closet</var> function, which will generate an authentic closet from Sandal'e's shop for you.
</p>
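One possible sketch of <var>organize_closet</var>, assuming a numeric size contains at least one digit:

```python
def organize_closet(closet):
    def is_numeric(size):
        # only digits and dots, with at most one dot and at least one digit
        return (size.count('.') <= 1
                and all(char.isdigit() or char == '.' for char in size)
                and any(char.isdigit() for char in size))
    return [float(size) ** 0.5 for size in closet if is_numeric(size)]

print(organize_closet(['100', '25.0', '12a', 'mEoW', '0']))
```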
End of explanation
"""
powers = {i: i ** 2 for i in range(1, 11)}
print(powers)
"""
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/tip.png" style="height: 50px !important;" alt="Tip">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
In Python, a variable that will not be used later is conventionally named <code>_</code>.<br>
You can see a good example of this in the loop inside <code>generate_closet</code>.
</p>
</div>
</div>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Dictionary Comprehensions and Set Comprehensions</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Besides <strong>list</strong> comprehensions, there are also <strong>set</strong> comprehensions and <strong>dictionary</strong> comprehensions, which work in a similar way.<br>
The basic idea stays the same – using the values of some iterable to create a new data structure in a readable and fast way.<br>
Let's see an example of a dictionary comprehension in which the key is a number and the value is that number squared:
</p>
End of explanation
"""
sentence = "99 percent of all statistics only tell 49 percent of the story."
words = {word for word in sentence.lower().split() if word.isalpha()}
print(words)
print(type(words))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
In the example above we computed the square of each of the first ten positive integers.<br>
The loop variable <var>i</var> went over each of the numbers in the range between 1 and 11 (exclusive), and for each of them created the key <var>i</var> and the value <code>i ** 2</code>.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
See how this powerful Python syntax lets us create complex dictionaries with great ease.<br>
All we have to do is use curly braces instead of square brackets,<br>
and specify, right after the opening brace, the pair we want to add in each iteration – a key and a value, with a colon between them.
</p>
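The condition syntax we saw earlier works in dictionary comprehensions as well – a small sketch:

```python
# keep only the even numbers as keys
even_squares = {i: i ** 2 for i in range(1, 11) if i % 2 == 0}
print(even_squares)
```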
<p style="text-align: right; direction: rtl; float: right; clear: both;">
A set comprehension can be created in a similar way:
</p>
End of explanation
"""
def get_line_lengths(text):
for line in text.splitlines():
if line.strip():  # if the line is not empty
yield len(line)
# לדוגמה
with open('resources/states.txt') as states_file:
states = states_file.read()
print(list(get_line_lengths(states)))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The syntax of a set comprehension is almost identical to that of a list comprehension.<br>
The only difference between the two is that in a set comprehension we use curly braces.<br>
The difference between it and a dictionary comprehension is that we drop the colon and the value, keeping only the key.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Find how many of the numbers below 1,000 are divisible by both 3 and 7 with no remainder.<br>
Use a set comprehension.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
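One possible solution sketch (counting 0 as divisible, since it is also below 1,000):

```python
divisible = {number for number in range(1000) if number % 3 == 0 and number % 7 == 0}
print(len(divisible))
```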
<span style="text-align: right; direction: rtl; float: right; clear: both;">Generator Expression</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Last week we learned about the power of generators.<br>
By keeping only a single value at a time, generators let us write memory-efficient programs.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's write a simple generator that yields the lengths of the lines in a given text:
</p>
End of explanation
"""
with open('resources/states.txt') as states_file:
states = states_file.read()
line_lengths = (len(line) for line in states.splitlines() if line.strip())
print(list(line_lengths))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The sharp-eyed have already spotted the familiar pattern – there is a <code>for</code>, immediately followed by an <code>if</code>, and right after that we create a new item.<br>
So a generator expression is really just a fancy name for what we would have called a generator comprehension.<br>
Let's convert the <var>get_line_lengths</var> function into a generator expression:
</p>
End of explanation
"""
squares = (number ** 2 for number in range(1, 11))
print(list(squares))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's examine the differences between the two approaches:
</p>
<figure>
<img src="images/generator_vs_expression.png" style="max-width: 800px; margin-right: auto; margin-left: auto; text-align: center;" alt="Top: the generator function we wrote above, with the function header highlighted in color 1, the loop body in color 2, the check in color 3 and yielding the item with yield in color 4. Bottom: the generator expression. line_lengths = ( in color 1, for line in states.splitlines() in color 2, if line.strip() in color 3, len(line) in color 4, ) in color 1."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">Comparison between creating a generator with a function and with a generator expression</figcaption>
</figure>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
As mentioned, the idea is very similar to a list comprehension.<br>
The item we yield from the generator each time with <code>yield</code> becomes, in the generator expression, the item that appears before the word <code>for</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Note that the generator expression is equivalent to the value returned when we call the generator function.<br>
This is an important point to stress: a generator expression returns a generator iterator, not a generator function.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Let's look at one more generator expression, which returns the squares of all the numbers from 1 up to 11 (exclusive):
</p>
End of explanation
"""
print(list(squares))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
Just like with a regular generator iterator, once we have consumed an item we cannot get it back:
</p>
End of explanation
"""
next(squares)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And calling <var>next</var> on a generator iterator that has already yielded all of its values raises <var>StopIteration</var>:
</p>
End of explanation
"""
sum(number ** 2 for number in range(1, 11))
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
And for one last trick on this topic –<br>
it is good to know that when a generator expression is passed to a function as its only argument, there is no need to wrap it in an extra pair of parentheses.<br>
For example:
</p>
End of explanation
"""
dice_options = []
for first_die in range(1, 7):
for second_die in range(1, 7):
dice_options.append((first_die, second_die))
print(dice_options)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
In the example above, the generator expression created the squares of all the numbers from 1 up to 11, exclusive.<br>
The <var>sum</var> function consumed all the squares the generator yielded, and summed them.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">Nested Loops</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Sometimes we want to write several loops nested one inside the other.<br>
For example, to generate all the possible outcomes of rolling 2 dice:
</p>
End of explanation
"""
dice_options = [(die1, die2) for die1 in range(1, 7) for die2 in range(1, 7)]
print(dice_options)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
We can turn this structure into a list comprehension as well:
</p>
End of explanation
"""
dice_options = [
(die1, die2, die3)
for die1 in range(1, 7)
for die2 in range(1, 7)
for die3 in range(1, 7)
]
print(dice_options)
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
To understand how this works, it is important to remember how to read a list comprehension:<br>
simply start reading from the first <code>for</code>, and come back to the item we add to the list each time only at the end.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
If, say, in some strange game we need to roll 3 dice and we want to see which outcomes can occur, we can write it like this:
</p>
End of explanation
"""
import string
text = """
You see, wire telegraph is a kind of a very, very long cat.
You pull his tail in New York and his head is meowing in Los Angeles.
Do you understand this?
And radio operates exactly the same way: you send signals here, they receive them there.
The only difference is that there is no cat.
"""
expected_result = {'you': 3, 'see': 3, 'wire': 4, 'telegraph': 9, 'is': 2, 'a': 1, 'kind': 4, 'of': 2, 'very': 4, 'long': 4, 'cat': 3, 'pull': 4, 'his': 3, 'tail': 4, 'in': 2, 'new': 3, 'york': 4, 'and': 3, 'head': 4, 'meowing': 7, 'los': 3, 'angeles': 7, 'do': 2, 'understand': 10, 'this': 4, 'radio': 5, 'operates': 8, 'exactly': 7, 'the': 3, 'same': 4, 'way': 3, 'send': 4, 'signals': 7, 'here': 4, 'they': 4, 'receive': 7, 'them': 4, 'there': 5, 'only': 4, 'difference': 10, 'that': 4, 'no': 2}
"""
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
The line breaks in the cell above were added purely for style.<br>
Technically, it is allowed to write this list comprehension on a single line.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Create a generator function and a generator expression from the last example.<br>
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="Exercise">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The downside of the dice example is that the results include both <code dir="ltr">(1, 1, 6)</code> and <code dir="ltr">(6, 1, 1)</code>.<br>
Can you solve this problem easily?
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>Important!</strong><br>
Solve it before you continue!
</p>
</div>
</div>
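One easy fix, assuming we only care about combinations regardless of order: start each inner loop from the current value of the previous die, so every triple comes out sorted and no reordering of the same dice appears twice:

```python
dice_options = [
    (die1, die2, die3)
    for die1 in range(1, 7)
    for die2 in range(die1, 7)
    for die3 in range(die2, 7)
]
print(len(dice_options))
```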
<span style="text-align: right; direction: rtl; float: right; clear: both;">Good Manners</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The techniques we learned in this notebook place great power in our hands, but as Uncle Ben says, "with great power comes great responsibility".<br>
We must always remember that the ultimate purpose of these techniques is to make the code more readable.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Inexperienced programmers often use the techniques taught in this notebook to build very complex structures.<br>
The result is code that is hard to read and maintain, and such code frequently ends up being replaced with regular loops.<br>
The rule of thumb is that a line should not exceed 99 characters, and that the code should be simple and easy for an outside programmer to read.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The Python community discusses code readability frequently, with recurring references to <a href="https://www.python.org/dev/peps/pep-0008/">PEP8</a>.<br>
In short – PEP8 is a document that standardizes the general guidelines for the desired coding style in Python.<br>
For example, code repositories that follow the document strictly do not allow lines of code longer than 79 characters.<br>
Well-styled writing is a broad topic that we will explore in depth later in the course.
</p>
<span style="align: right; direction: rtl; float: right; clear: both;">Summary</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
In this notebook we learned 4 useful techniques that help us create data structures quickly and readably:
</p>
<ul style="text-align: right; direction: rtl; float: right; clear: both;">
<li>List Comprehensions</li>
<li>Dictionary Comprehensions</li>
<li>Set Comprehensions</li>
<li>Generator Expressions</li>
</ul>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
We learned a bit about how and when to use them, and about their parallels with regular loops and with functions such as <var>map</var> and <var>filter</var>.<br>
We also learned how to use each of them when we have several nested loops.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Python programmers make heavy use of these techniques, and it is important to master them in order to read code and to implement ideas quickly.
</p>
<span style="align: right; direction: rtl; float: right; clear: both;">Exercises</span>
<span style="align: right; direction: rtl; float: right; clear: both;">Farewell</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>words_length</var> that receives a sentence and returns the lengths of its words, in their order in the sentence.<br>
For the purpose of this exercise, assume that punctuation marks count toward the word lengths.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example:<br>
For the sentence: <em dir="ltr">Toto, I've a feeling we're not in Kansas anymore</em><br>
Return the list: <samp dir="ltr">[5, 4, 1, 7, 5, 3, 2, 6, 7]</samp>
</p>
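One possible solution sketch, relying on the fact that <code>split</code> keeps punctuation attached to each word:

```python
def words_length(sentence):
    return [len(word) for word in sentence.split()]

print(words_length("Toto, I've a feeling we're not in Kansas anymore"))
```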
<span style="align: right; direction: rtl; float: right; clear: both;">A Is for Tent, P Is for Python</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>get_letters</var> that returns the list of all the characters between a and z and between A and Z.<br>
Use a list comprehension, <var>ord</var> and <var>chr</var>.<br>
Make sure not to include the numbers 65, 90, 97 or 122 anywhere in your code.
</p>
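One possible solution sketch that avoids the forbidden magic numbers by computing them with <var>ord</var>:

```python
def get_letters():
    lowercase = [chr(number) for number in range(ord('a'), ord('z') + 1)]
    uppercase = [chr(number) for number in range(ord('A'), ord('Z') + 1)]
    return lowercase + uppercase

print(len(get_letters()))
```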
<span style="align: right; direction: rtl; float: right; clear: both;">A Long Cat Is Long</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>count_words</var> that receives a text as a parameter and returns a dictionary of the lengths of the words in it.<br>
Use a comprehension of your choice (or a generator expression) to clean the text of characters that are not letters.<br>
Then use a dictionary comprehension to find the length of each word in the text.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example, for the following text, check that you get back the dictionary that appears right after it.
</p>
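One possible sketch of <var>count_words</var>, shown here on a tiny text of its own:

```python
def count_words(text):
    # keep only letters and whitespace, then map every word to its length
    cleaned = ''.join(char for char in text.lower() if char.isalpha() or char.isspace())
    return {word: len(word) for word in cleaned.split()}

print(count_words('The only difference is that there is no cat.'))
```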
End of explanation
"""
first_names = ['avi', 'moshe', 'yaakov']
last_names = ['cohen', 'levi', 'mizrahi']
# The following conditions should hold
full_names(first_names, last_names, 10) == ['Avi Mizrahi', 'Moshe Cohen', 'Moshe Levi', 'Moshe Mizrahi', 'Yaakov Cohen', 'Yaakov Levi', 'Yaakov Mizrahi']
full_names(first_names, last_names) == ['Avi Cohen', 'Avi Levi', 'Avi Mizrahi', 'Moshe Cohen', 'Moshe Levi', 'Moshe Mizrahi', 'Yaakov Cohen', 'Yaakov Levi', 'Yaakov Mizrahi']
"""
Explanation: <span style="align: right; direction: rtl; float: right; clear: both;">And These Are the Names</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
Write a function named <var>full_names</var> that receives as parameters a list of first names and a list of last names, and builds full names out of them.<br>
To each first name, the function will attach every one of the last names it received.<br>
Make sure the names are returned with the first letter of both the first name and the last name capitalized.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
The function should also accept an optional parameter named <var>min_length</var>.<br>
If this parameter is passed, full names whose character count is smaller than the specified length will not be returned from the function.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
For example:
</p>
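One possible sketch of <var>full_names</var>, using nested comprehension loops (other orderings are possible):

```python
def full_names(first_names, last_names, min_length=0):
    return [
        f'{first.capitalize()} {last.capitalize()}'
        for first in first_names
        for last in last_names
        if len(f'{first} {last}') >= min_length
    ]

print(full_names(['ada'], ['lovelace', 'byron']))
```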
End of explanation
"""
|
wehlutyk/brainscopypaste | notebooks/distance.ipynb | gpl-3.0 | SAVE_FIGURES = False
"""
Explanation: Distance travelled by substitutions
1 Setup
Flags and settings.
End of explanation
"""
from itertools import product
import pandas as pd
import seaborn as sb
import numpy as np
import networkx as nx
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic as wn_ic
%matplotlib inline
import matplotlib.pyplot as plt
from progressbar import ProgressBar
%cd -q ..
from brainscopypaste.conf import settings
%cd -q notebooks
from brainscopypaste.mine import Model, Time, Source, Past, Durl
from brainscopypaste.db import Substitution
from brainscopypaste.utils import init_db, session_scope, memoized
from brainscopypaste.load import FAFeatureLoader
engine = init_db()
"""
Explanation: Imports and database setup.
End of explanation
"""
model = Model(Time.discrete, Source.majority, Past.last_bin, Durl.all, 1)
data = []
with session_scope() as session:
substitutions = session.query(Substitution.id)\
.filter(Substitution.model == model)
print("Got {} substitutions for model {}"
.format(substitutions.count(), model))
substitution_ids = [id for (id,) in substitutions]
for substitution_id in ProgressBar(term_width=80)(substitution_ids):
with session_scope() as session:
substitution = session.query(Substitution).get(substitution_id)
source_token, destination_token = substitution.tokens
source_lemma, destination_lemma = substitution.lemmas
source_pos, destination_pos = substitution.tags
data.append({'cluster_id': substitution.source.cluster.sid,
'destination_id': substitution.destination.sid,
'occurrence': substitution.occurrence,
'source_id': substitution.source.sid,
'source_token': source_token,
'destination_token': destination_token,
'source_pos': source_pos,
'destination_pos': destination_pos,
'source_lemma': source_lemma,
'destination_lemma': destination_lemma})
original_subs = pd.DataFrame(data)
del data
"""
Explanation: Build our data.
End of explanation
"""
distances = original_subs.copy()
divide_weight_sum = lambda x: x / distances.loc[x.index].weight.sum()
# Weight is 1, at first.
distances['weight'] = 1
# Divided by the number of substitutions that share a durl.
distances['weight'] = distances\
.groupby(['destination_id', 'occurrence'])['weight']\
.transform(divide_weight_sum)
# Divided by the number of substitutions that share a cluster.
# (Using divide_weight_sum, where we divide by the sum of weights,
# ensures we count only one for each group of substitutions sharing
# a same durl.)
distances['weight'] = distances\
.groupby('cluster_id')['weight']\
.transform(divide_weight_sum)
"""
Explanation: Assign proper weight to each substitution.
End of explanation
"""
fa_loader = FAFeatureLoader()
avg_weight = np.mean([weight for _, _, weight
in fa_loader._undirected_norms_graph
.edges_iter(data='weight')])
fa_graph = nx.Graph()
fa_graph.add_weighted_edges_from(
[(w1, w2, avg_weight / weight) for w1, w2, weight
in fa_loader._undirected_norms_graph.edges_iter(data='weight')]
)
"""
Explanation: 2 Distances on the FA network
Get the FA norms undirected graph and invert its weights to use them as costs.
End of explanation
"""
fa_distances = distances.copy()
fa_distances['weighted_distance'] = np.nan
fa_distances['distance'] = np.nan
@memoized
def fa_shortest_path(source, destination, weighted):
return nx.shortest_path_length(fa_graph, source, destination,
'weight' if weighted else None)
for i in ProgressBar(term_width=80)(fa_distances.index):
# Use source token, or lemma if not found,
# or skip this substitution if not found
if fa_graph.has_node(fa_distances.loc[i].source_token):
source_word = fa_distances.loc[i].source_token
elif fa_graph.has_node(fa_distances.loc[i].source_lemma):
source_word = fa_distances.loc[i].source_lemma
else:
continue
# Use destination token, or lemma if not found,
# or skip this substitution if not found
if fa_graph.has_node(fa_distances.loc[i].destination_token):
destination_word = fa_distances.loc[i].destination_token
elif fa_graph.has_node(fa_distances.loc[i].destination_lemma):
destination_word = fa_distances.loc[i].destination_lemma
else:
continue
fa_distances.loc[i, 'weighted_distance'] = \
fa_shortest_path(source_word, destination_word, weighted=True)
fa_distances.loc[i, 'distance'] = \
fa_shortest_path(source_word, destination_word, weighted=False)
"""
Explanation: Compute distances on the FA network, using lemmas if tokens are not found.
End of explanation
"""
def plot_metric(data, name, bin_count):
distances = data[name]
if bin_count <= 0:
bin_count = int(distances.max() - distances.min() + 1)
bins = np.arange(distances.min(), distances.max() + 2) - .5
d_bins = pd.cut(distances, bins, right=False, labels=False)
else:
d_bins, bins = pd.cut(distances, bin_count, right=False,
labels=False, retbins=True)
middles = (bins[:-1] + bins[1:]) / 2
width = middles[1] - middles[0]
# Compute bin values.
heights = np.zeros(bin_count)
for i in range(bin_count):
heights[i] = data[d_bins == i].weight.sum()
# Plot.
plt.bar(middles - width / 2, heights, width=width)
"""
Explanation: Plot them.
End of explanation
"""
plot_metric(fa_distances, 'distance', -1)
"""
Explanation: 2.1 Unweighted distances
End of explanation
"""
plot_metric(fa_distances, 'weighted_distance', 20)
"""
Explanation: 2.2 Weighted distances
End of explanation
"""
print('{} substitutions are not taken into account in the graphs above'
' (because they involve words unknown to FA).'
.format(len(fa_distances[np.isnan(fa_distances.distance)])))
"""
Explanation: 2.3 Comments on FA distances
End of explanation
"""
infocontent = wn_ic.ic('ic-brown.dat')
wordnet_poses = {'a', 'n', 'r', 's', 'v'}
def build_wordnet_metric(data, name, tipe, ic_based,
pos_based=False, distance_kws={}):
if tipe not in ['distance', 'similarity']:
raise ValueError
pos_based = ic_based or pos_based
wn_metrics = data.copy()
wn_metrics['metric'] = np.nan
skipped_no_synsets = 0
skipped_unknown_pos = 0
skipped_incongruent_pos = 0
skipped_noic_pos = 0
for i in ProgressBar(term_width=80)(data.index):
if pos_based:
pos = data.loc[i].source_pos[0].lower()
if pos not in wordnet_poses:
skipped_unknown_pos += 1
continue
if data.loc[i].destination_pos[0].lower() != pos:
skipped_incongruent_pos += 1
continue
if ic_based and (pos not in infocontent.keys()):
skipped_noic_pos += 1
continue
else:
pos = None
# Use source token, or lemma if not found
source_synsets = wn.synsets(wn_metrics.loc[i].source_token, pos=pos)
if len(source_synsets) == 0:
source_synsets = wn.synsets(wn_metrics.loc[i].source_lemma,
pos=pos)
# Use destination token, or lemma if not found
destination_synsets = wn.synsets(wn_metrics.loc[i].destination_token,
pos=pos)
if len(destination_synsets) == 0:
destination_synsets = \
wn.synsets(wn_metrics.loc[i].destination_lemma, pos=pos)
# Skip this substitution if no corresponding synsets were found
if len(source_synsets) == 0 or len(destination_synsets) == 0:
skipped_no_synsets += 1
continue
def get_distance(s1, s2):
distance_func = getattr(s1, name)
if ic_based:
return distance_func(s2, infocontent, **distance_kws)
else:
return distance_func(s2, **distance_kws)
distances = [get_distance(s1, s2)
for s1, s2 in product(source_synsets,
destination_synsets)
if get_distance(s1, s2) is not None]
if len(distances) != 0:
if tipe == 'distance':
wn_metrics.loc[i, 'metric'] = np.min(distances)
else:
wn_metrics.loc[i, 'metric'] = np.max(distances)
else:
wn_metrics.loc[i, 'metric'] = np.nan
if pos_based:
print('Skipped {} substitutions because their '
'source was unknown to WordNet'
.format(skipped_unknown_pos))
print('Skipped {} substitutions because their '
'source and destination pos were different'
.format(skipped_incongruent_pos))
if ic_based:
print('Skipped {} substitutions because their '
'source pos was not in {}'
.format(skipped_noic_pos, infocontent.keys()))
print('Skipped {} substitutions because no synsets were found'
.format(skipped_no_synsets))
return wn_metrics
"""
Explanation: We see in the first graph that most substitutions are not between immediate neighbours; on average they sit about 3 hops away.
The second graph shows weighted distances, with weights (costs) scaled so that the average cost of a link is 1. A distance of 2 therefore means travelling the equivalent of 2 average links (be it 1 link twice as expensive, or 4 links twice as cheap). Here too, we see that most substitutions are at distance 2, meaning they reach not the immediate neighbours but rather the words just beyond them.
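The inversion used to build the cost graph (a link of average strength gets cost 1; stronger links become cheaper, weaker ones more expensive) can be illustrated in isolation on a few hypothetical weights:

```python
weights = [0.5, 1.0, 2.0, 0.5]            # hypothetical association strengths
avg_weight = sum(weights) / len(weights)  # 1.0 for these numbers
costs = [avg_weight / weight for weight in weights]
print(costs)
```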
3 Distances with WordNet similarities
End of explanation
"""
jcn_similarities = build_wordnet_metric(distances, 'jcn_similarity',
'similarity', True)
plot_metric(jcn_similarities.loc[jcn_similarities.metric <= 1],
'metric', 20)
"""
Explanation: WordNet defines all sorts of distances/similarities between synsets. We're trying all of them to see what it looks like.
3.1 Jiang-Conrath similarity
End of explanation
"""
lin_similarities = build_wordnet_metric(distances, 'lin_similarity',
'similarity', True)
plot_metric(lin_similarities, 'metric', 20)
"""
Explanation: 3.2 Lin similarity
End of explanation
"""
res_similarities = build_wordnet_metric(distances, 'res_similarity',
'similarity', True)
plot_metric(res_similarities.loc[res_similarities.metric <= 100],
'metric', 20)
"""
Explanation: 3.3 Resnik similarity
End of explanation
"""
lch_similarities = build_wordnet_metric(distances, 'lch_similarity',
'similarity', False, pos_based=True)
plot_metric(lch_similarities, 'metric', 20)
"""
Explanation: 3.4 Leacock-Chodorow similarity
End of explanation
"""
path_similarities = build_wordnet_metric(distances, 'path_similarity',
'similarity', False)
plot_metric(path_similarities, 'metric', 20)
"""
Explanation: 3.5 Path Distance similarity (with hypernym/hyponym relationships)
End of explanation
"""
shortest_path_distances = build_wordnet_metric(
distances, 'shortest_path_distance', 'distance', False, pos_based=True,
distance_kws={'simulate_root': True})
plot_metric(shortest_path_distances, 'metric', 20)
"""
Explanation: 3.6 Shortest Path distance (lower values mean more similar, higher values mean less similar)
End of explanation
"""
wup_similarities = build_wordnet_metric(distances, 'wup_similarity',
'similarity', False)
plot_metric(wup_similarities, 'metric', 20)
"""
Explanation: 3.7 Wu-Palmer similarity
End of explanation
"""
|
bw4sz/DeepMeerkat | training/Detection/object_detection/object_detection_tutorial.ipynb | gpl-3.0 | import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
"""
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
End of explanation
"""
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
"""
Explanation: Env setup
End of explanation
"""
from utils import label_map_util
from utils import visualization_utils as vis_util
"""
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
"""
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
"""
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
"""
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
"""
Explanation: Download Model
End of explanation
"""
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
"""
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
"""
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
"""
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
End of explanation
"""
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
"""
Explanation: Helper code
End of explanation
"""
# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add paths to your images to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ]
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
    # Define input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    # Each score represents the level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
"""
Explanation: Detection
End of explanation
"""
|
rawrgulmuffins/presentation_notes | pycon2016/tutorials/computation_statistics/effect_size_soln.ipynb | mit | %matplotlib inline
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# seed the random number generator so we all get the same results
numpy.random.seed(17)
# some nice colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
"""
Explanation: Effect Size
Examples and exercises for a tutorial on statistical inference.
Copyright 2016 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
"""
mu1, sig1 = 178, 7.7
male_height = scipy.stats.norm(mu1, sig1)
mu2, sig2 = 163, 7.3
female_height = scipy.stats.norm(mu2, sig2)
"""
Explanation: Part One
To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
End of explanation
"""
def eval_pdf(rv, num=4):
mean, std = rv.mean(), rv.std()
xs = numpy.linspace(mean - num*std, mean + num*std, 100)
ys = rv.pdf(xs)
return xs, ys
"""
Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
End of explanation
"""
xs, ys = eval_pdf(male_height)
pyplot.plot(xs, ys, label='male', linewidth=4, color=COLOR2)
xs, ys = eval_pdf(female_height)
pyplot.plot(xs, ys, label='female', linewidth=4, color=COLOR3)
pyplot.xlabel('height (cm)')
None
"""
Explanation: Here's what the two distributions look like.
End of explanation
"""
male_sample = male_height.rvs(1000)
female_sample = female_height.rvs(1000)
"""
Explanation: Let's assume for now that those are the true distributions for the population.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
End of explanation
"""
mean1, std1 = male_sample.mean(), male_sample.std()
mean1, std1
"""
Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
End of explanation
"""
mean2, std2 = female_sample.mean(), female_sample.std()
mean2, std2
"""
Explanation: The sample mean is close to the population mean, but not exact, as expected.
End of explanation
"""
difference_in_means = male_sample.mean() - female_sample.mean()
difference_in_means # in cm
"""
Explanation: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means:
End of explanation
"""
# Solution goes here
relative_difference = difference_in_means / male_sample.mean()
print(relative_difference * 100) # percent
# A problem with relative differences is that you have to choose which mean to express them relative to.
relative_difference = difference_in_means / female_sample.mean()
print(relative_difference * 100) # percent
"""
Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems:
Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not.
The magnitude of the difference depends on the units of measure, making it hard to compare across different studies.
There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean.
Exercise 1: what is the relative difference in means, expressed as a percentage?
End of explanation
"""
simple_thresh = (mean1 + mean2) / 2
simple_thresh
"""
Explanation: STOP HERE: We'll regroup and discuss before you move on.
Part Two
An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means:
End of explanation
"""
thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2)
thresh
"""
Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross.
End of explanation
"""
male_below_thresh = sum(male_sample < thresh)
male_below_thresh
"""
Explanation: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold:
End of explanation
"""
female_above_thresh = sum(female_sample > thresh)
female_above_thresh
"""
Explanation: And how many women are above it:
End of explanation
"""
overlap = male_below_thresh / len(male_sample) + female_above_thresh / len(female_sample)
overlap
"""
Explanation: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
End of explanation
"""
misclassification_rate = overlap / 2
misclassification_rate
"""
Explanation: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex:
End of explanation
"""
# Solution goes here
sum(x > y for x, y in zip(male_sample, female_sample)) / len(male_sample)
"""
Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
Exercise 2: Suppose I choose a man and a woman at random. What is the probability that the man is taller?
End of explanation
"""
def CohenEffectSize(group1, group2):
"""Compute Cohen's d.
group1: Series or NumPy array
group2: Series or NumPy array
returns: float
"""
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
"""
Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties:
As probabilities, they don't depend on units of measure, so they are comparable between studies.
They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes.
Cohen's d
There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's a function that computes it:
End of explanation
"""
CohenEffectSize(male_sample, female_sample)
"""
Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
End of explanation
"""
def overlap_superiority(control, treatment, n=1000):
"""Estimates overlap and superiority based on a sample.
control: scipy.stats rv object
treatment: scipy.stats rv object
n: sample size
"""
control_sample = control.rvs(n)
treatment_sample = treatment.rvs(n)
thresh = (control.mean() + treatment.mean()) / 2
control_above = sum(control_sample > thresh)
treatment_below = sum(treatment_sample < thresh)
overlap = (control_above + treatment_below) / n
superiority = sum(x > y for x, y in zip(treatment_sample, control_sample)) / n
return overlap, superiority
"""
Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
End of explanation
"""
def plot_pdfs(cohen_d=2):
"""Plot PDFs for distributions that differ by some number of stds.
cohen_d: number of standard deviations between the means
"""
control = scipy.stats.norm(0, 1)
treatment = scipy.stats.norm(cohen_d, 1)
xs, ys = eval_pdf(control)
pyplot.fill_between(xs, ys, label='control', color=COLOR3, alpha=0.7)
xs, ys = eval_pdf(treatment)
pyplot.fill_between(xs, ys, label='treatment', color=COLOR2, alpha=0.7)
o, s = overlap_superiority(control, treatment)
print('overlap', o)
print('superiority', s)
"""
Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
End of explanation
"""
plot_pdfs(2)
"""
Explanation: Here's an example that demonstrates the function:
End of explanation
"""
slider = widgets.FloatSlider(min=0, max=4, value=2)
interact(plot_pdfs, cohen_d=slider)
None
"""
Explanation: And an interactive widget you can use to visualize what different values of $d$ mean:
End of explanation
"""
|
hglanz/phys202-2015-work | assignments/assignment10/ODEsEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
"""
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
"""
def lorentz_derivs(yvec, t, sigma, rho, beta):
"""Compute the the derivatives for the Lorentz system at yvec(t)."""
d1 = sigma * (yvec[1] - yvec[0])
d2 = yvec[0] * (rho - yvec[2]) - yvec[1]
d3 = yvec[0] * yvec[1] - beta * yvec[2]
return(d1, d2, d3)
#raise NotImplementedError()
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
"""
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
"""
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
"""
    t = np.linspace(0.0, max_time, int(250 * max_time))
soln = odeint(lorentz_derivs, ic, t, (sigma, rho, beta))
return(soln, t)
#raise NotImplementedError()
res = solve_lorentz((.5, .5, .5))
soln = res[0]
soln[:,0]
assert True # leave this to grade solve_lorenz
"""
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
"""
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
plt.plot?
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
"""
np.random.seed(1)
colors = plt.cm.hot(np.linspace(0, 1, N))
for i in range(N):
xrand = np.random.uniform(-15.0, 15.0)
yrand = np.random.uniform(-15.0, 15.0)
zrand = np.random.uniform(-15.0, 15.0)
res, t = solve_lorentz((xrand, yrand, zrand), max_time, sigma, rho, beta)
plt.plot(res[:,0], res[:,2], color = colors[i])
#raise NotImplementedError()
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
"""
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
"""
interact(plot_lorentz, max_time = (1, 10), N = (1, 50), sigma = (0.0, 50.0), rho = (0.0, 50.0), beta = fixed(8.0/3.0))
#raise NotImplementedError()
"""
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation
"""
|
mgalardini/2017_python_course | notebooks/[6a]-Exercises-solutions.ipynb | gpl-2.0 | from Bio import SeqIO
counter = 0
for seq in SeqIO.parse('../data/proteome.faa', 'fasta'):
counter += 1
counter
"""
Explanation: Useful third-party libraries: exercises
Biopython
Can you count the number of sequences in the data/proteome.faa file?
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
sizes = []
for seq in SeqIO.parse('../data/proteome.faa', 'fasta'):
sizes.append(len(seq))
plt.hist(sizes, bins=100)
plt.xlabel('protein size')
plt.ylabel('count');
"""
Explanation: Can you plot the distribution of protein sizes in the data/proteome.faa file?
End of explanation
"""
counter = 0
for seq in SeqIO.parse('../data/ecoli.gbk', 'genbank'):
for feat in seq.features:
if feat.type == 'CDS':
counter += 1
counter
"""
Explanation: Can you count the number of CDS sequences in the data/ecoli.gbk file?
End of explanation
"""
from Bio import Phylo
tree = Phylo.read('../data/tree.nwk', 'newick')
distances = []
for node in tree.get_terminals():
distances.append(tree.distance(tree.root, node))
sum(distances)/float(len(distances))
"""
Explanation: Can you compute the average root-to-tip distance in the data/tree.nwk file?
End of explanation
"""
import networkx as nx
graph = nx.read_gml('../data/yeast.gml')
plt.hist(list(dict(nx.degree(graph)).values()), bins=20)
plt.xlabel('degree')
plt.ylabel('count');
"""
Explanation: Networkx
Can you read the yeast protein interaction network in data/yeast.gml? Can you plot the degree distribution of the proteins contained in the graph?
End of explanation
"""
|
ernestyalumni/CompPhys | partiald_sympy.ipynb | apache-2.0 | import sympy
x, u = sympy.symbols('x u', real=True)
U = sympy.Function('U')(x,u)
U
"""
Explanation: Partial Derivatives in sympy
End of explanation
"""
x = sympy.Symbol('x',real=True)
y = sympy.Function('y')(x)
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
sympy.pprint(sympy.diff(U,x))
sympy.pprint( sympy.diff(Y,x))
sympy.pprint( sympy.diff(Y,x).args[0] )
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
"""
Explanation: The case of a(n arbitrary) point transformation
cf. Introduction to Differential Invariants, Chapter 2 Lie Transformations pp. 16
End of explanation
"""
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
sympy.pprint( sympy.diff(YprimeX,x).simplify() )
sympy.factor_list( sympy.diff(Y,x)) # EY 20160522 I don't know how to simply obtain the factors of an expression
# EY 20160522 update resolved: look at above and look at this page; it explains all:
# http://docs.sympy.org/dev/tutorial/manipulation.html
t, x, u, u_1, x_t, u_t, u_1t = sympy.symbols('t x u u_1 x_t u_t u_1t', real=True)
X = -u_1
U = u - x*u_1
U_1 = x
"""
Explanation: For $Y''(X)$,
End of explanation
"""
from sympy import Derivative, diff
def difftotal(expr, diffby, diffmap):
"""Take the total derivative with respect to a variable.
Example:
theta, t, theta_dot = symbols("theta t theta_dot")
difftotal(cos(theta), t, {theta: theta_dot})
returns
-theta_dot*sin(theta)
"""
# Replace all symbols in the diffmap by a functional form
fnexpr = expr.subs({s:s(diffby) for s in diffmap})
# Do the differentiation
diffexpr = diff(fnexpr, diffby)
# Replace the Derivatives with the variables in diffmap
derivmap = {Derivative(v(diffby), diffby):dv
                for v, dv in diffmap.items()}
finaldiff = diffexpr.subs(derivmap)
# Replace the functional forms with their original form
return finaldiff.subs({s(diffby):s for s in diffmap})
difftotal( U,t,{x:x_t, u:u_t, u_1:u_1t}) + (-U_1)* (-u_1t)
"""
Explanation: cf. How to do total derivatives: http://robotfantastic.org/total-derivatives-in-sympy.html
End of explanation
"""
x = sympy.Symbol('x',real=True)
u = sympy.Function('u')(x)
U = x
X = u
Y = sympy.Function('Y')(X)
sympy.pprint( sympy.diff(Y,x))
sympy.pprint(sympy.diff(U,x))
"""
Explanation: This transformation is the Legendre transformation
cf. 4. Exercises Chapter 2 Lie Transformations Introduction to Differential Invariants.
Consider transformation $(x,u)=(x,u(x)) \to (X,U)=(X(x,u),U(x,u))=(u,x)$. Let $Y=Y(X)$. $Y(X) \in \Gamma(\mathbb{R}^1 \times \mathbb{R}^1)$, i.e. $Y(X)$ is a section. So $Y(X) = Y(X(x,u)) = U(x,u)$. And so in this case,
$Y(X(x,u))=Y(u)=U(x,u) = x$
End of explanation
"""
sympy.pprint( 1/ sympy.diff(Y,x).args[0])
"""
Explanation: And so $Y'(X)$ is
End of explanation
"""
sympy.pprint( sympy.diff( 1/ sympy.diff(Y,x).args[0], x))
"""
Explanation: And so $Y''(X)$ is
End of explanation
"""
x = sympy.Symbol('x',real=True)
y = sympy.Function('y')(x)
U = sympy.Function('U')(x,y)
X = sympy.Function('X')(x,y)
Y = sympy.Function('Y')(X)
sympy.pprint(sympy.diff(U,x))
sympy.pprint( sympy.diff(Y,x))
sympy.pprint( sympy.diff(Y,x).args[0] )
sympy.pprint( sympy.diff(U,x)/ sympy.diff(Y,x).args[0])
"""
Explanation: cf. (2) from 4. Exercises, Chapter 2 Lie Transformations pp. 20
Recall an arbitrary point transformation:
End of explanation
"""
YprimeX = sympy.diff(U,x)/sympy.diff(Y,x).args[0]
Yprime2X = sympy.diff(YprimeX,x)
sympy.pprint( Yprime2X.simplify() )
"""
Explanation: For $Y''(X)$,
End of explanation
"""
|
quantopian/research_public | notebooks/tutorials/2_pipeline_lesson3/notebook.ipynb | apache-2.0 | from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
# New from the last lesson, import the USEquityPricing dataset.
from quantopian.pipeline.data.builtin import USEquityPricing
# New from the last lesson, import the built-in SimpleMovingAverage factor.
from quantopian.pipeline.factors import SimpleMovingAverage
"""
Explanation: Factors
A factor is a function from an asset and a moment in time to a number.
F(asset, timestamp) -> float
In Pipeline, Factors are the most commonly-used term, representing the result of any computation producing a numerical result. Factors require a column of data and a window length as input.
The simplest factors in Pipeline are built-in Factors. Built-in Factors are pre-built to perform common computations. As a first example, let's make a factor to compute the average close price over the last 10 days. We can use the SimpleMovingAverage built-in factor which computes the average value of the input data (close price) over the specified window length (10 days). To do this, we need to import our built-in SimpleMovingAverage factor and the USEquityPricing dataset.
End of explanation
"""
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
"""
Explanation: Creating a Factor
Let's go back to our make_pipeline function from the previous lesson and instantiate a SimpleMovingAverage factor. To create a SimpleMovingAverage factor, we can call the SimpleMovingAverage constructor with two arguments: inputs, which must be a list of BoundColumn objects, and window_length, which must be an integer indicating how many days' worth of data our moving average calculation should receive. (We'll discuss BoundColumn in more depth later; for now we just need to know that a BoundColumn is an object indicating what kind of data should be passed to our Factor.)
The following line creates a Factor for computing the 10-day mean close price of securities.
End of explanation
"""
def make_pipeline():
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
return Pipeline(
columns={
'10_day_mean_close': mean_close_10
}
)
"""
Explanation: It's important to note that creating the factor does not actually perform a computation. Creating a factor is like defining a function. To perform a computation, we need to add the factor to our pipeline and run it.
Adding a Factor to a Pipeline
Let's update our original empty pipeline to make it compute our new moving average factor. To start, let's move our factor instantiation into make_pipeline. Next, we can tell our pipeline to compute our factor by passing it a columns argument, which should be a dictionary mapping column names to factors, filters, or classifiers. Our updated make_pipeline function should look something like this:
End of explanation
"""
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
"""
Explanation: To see what this looks like, let's make our pipeline, run it, and display the result.
End of explanation
"""
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-07')
result
"""
Explanation: Now we have a column in our pipeline output with the 10-day average close price for all 8000+ securities (display truncated). Note that each row corresponds to the result of our computation for a given security on a given date. The DataFrame has a MultiIndex where the first level is a datetime representing the date of the computation and the second level is an Equity object corresponding to the security. For example, the first row (2015-05-05 00:00:00+00:00, Equity(2 [AA])) will contain the result of our mean_close_10 factor for AA on May 5th, 2015.
If we run our pipeline over more than one day, the output looks like this.
End of explanation
"""
def make_pipeline():
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
latest_close = USEquityPricing.close.latest
return Pipeline(
columns={
'10_day_mean_close': mean_close_10,
'latest_close_price': latest_close
}
)
"""
Explanation: Note: factors can also be added to an existing Pipeline instance using the Pipeline.add method. Using add looks something like this:
>>> my_pipe = Pipeline()
>>> f1 = SomeFactor(...)
>>> my_pipe.add(f1, 'f1')
Latest
The most commonly used built-in Factor is Latest. The Latest factor gets the most recent value of a given data column. This factor is common enough that it is instantiated differently from other factors. The best way to get the latest value of a data column is by getting its .latest attribute. As an example, let's update make_pipeline to create a latest close price factor and add it to our pipeline:
End of explanation
"""
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result.head(5)
"""
Explanation: And now, when we make and run our pipeline again, there are two columns in our output dataframe. One column has the 10-day mean close price of each security, and the other has the latest close price.
End of explanation
"""
from quantopian.pipeline.factors import VWAP
vwap = VWAP(window_length=10)
"""
Explanation: .latest can sometimes return things other than Factors. We'll see examples of other possible return types in later lessons.
Default Inputs
Some factors have default inputs that should never be changed. For example the VWAP built-in factor is always calculated from USEquityPricing.close and USEquityPricing.volume. When a factor is always calculated from the same BoundColumns, we can call the constructor without specifying inputs.
End of explanation
"""
|
pxcandeias/py-notebooks | decades_octaves_fractions.ipynb | mit | # Ipython 'magic' commands
%matplotlib inline
# Python standard library
import sys
# 3rd party modules
import numpy as np
import scipy as sp
import matplotlib as mpl
import pandas as pd
import matplotlib.pyplot as plt
# Computational lab set up
print(sys.version)
for package in (np, sp, mpl, pd):
print('{:.<15} {}'.format(package.__name__, package.__version__))
"""
Explanation: <a id='top'></a>
Decades, octaves and fractions thereof
In this notebook we will explore the relations between decades and octaves, two scales that are commonly used when working with frequency bands. In fact, they correspond to logarithmic scales, to base 10 and base 2 respectively, and, therefore, are related. Furthermore, fractions (1/N, with N integer and larger than one) are also used to refine the frequency bands.
Table of contents
Preamble
Introduction
Conclusions
Odds and ends
Preamble
The computational environment set up for this Python notebook is the following:
End of explanation
"""
f1 = 1 # starting frequency
nf = 3 # number of frequency bands
fn = f1*10**np.arange(nf+1)
print(fn)
"""
Explanation: Back to top
Introduction
In signal processing, decades and octaves are usually applied to frequency bands. A decade is defined as a 10-fold increase in frequency and an octave as a 2-fold increase (doubling) in frequency. In mathematical terms, this can be translated to:
$$ \frac{f_{10}}{f_1} = 10 $$
$$ \frac{f_2}{f_1} = 2 $$
where $f_{{10}}$ is a frequency one decade above $f_1$ and $f_{2}$ is a frequency one octave above $f_1$. Similarly, $f_{1}$ is a frequency one decade below $f_{10}$ and $f_{1}$ is a frequency one octave below $f_2$. These relations can be further expanded to express n decades or octaves:
$$ \frac{f_{10n}}{f_1} = 10^n $$
$$ \frac{f_{2n}}{f_1} = 2^n $$
Please bear in mind that, in the previous expressions, n is an integer value that can be either positive (for decades or octaves above $f_1$) or negative (for decades or octaves below $f_1$).
Each decade, or octave, can be divided into fractions, that is, it can be divided into multiple steps. For that, the exponent in the previous expressions is no longer an integer but is expressed as a fraction with an integer denominator N:
$$ \frac{f_{10/N}}{f_1} = 10^{1/N} $$
$$ \frac{f_{2/N}}{f_1} = 2^{1/N} $$
This means that in between $f_1$ and $f_{10}$ or $f_2$ there will be N frequencies:
$$ {f_1} \times 10^{1/N}\ ,\ {f_1} \times 10^{2/N}\ ,\ \dots\ ,\ {f_1} \times 10^{(N-1)/N} $$
$$ {f_1} \times 2^{1/N}\ ,\ {f_1} \times 2^{2/N}\ ,\ \dots\ ,\ {f_1} \times 2^{(N-1)/N} $$
The denominator N usually assumes the following values: 3, 6, 12, 24. These are closely related with the Renard numbers.
With these relations in mind, we are now ready to start explore further about decades, octaves and fractions thereof.
Back to top
Decades
Let us start with the decades. We will generate a series of frequency bands covering three decades and starting at 1Hz:
End of explanation
"""
fn = f1*np.logspace(0, nf, num=nf+1)
print(fn)
"""
Explanation: These are actually no more than a sequence of frequencies like the one generated by the numpy logspace function:
End of explanation
"""
fig, ax = plt.subplots()
ax.plot(np.arange(nf+1), fn)
"""
Explanation: If we try to plot this sequence, we will see the skeleton of an exponential function:
End of explanation
"""
|
quantumlib/Cirq | docs/interop.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The Cirq Developers
End of explanation
"""
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
"""
Explanation: Import/export circuits
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/interop"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/interop.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/interop.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/interop.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
End of explanation
"""
import cirq
# Example circuit
circuit = cirq.Circuit(cirq.Z(cirq.GridQubit(1,1)))
# Serialize to a JSON string
json_string = cirq.to_json(circuit)
print('JSON string:')
print(json_string)
print()
# Now, read back the string into a cirq object
# cirq.read_json can also read from a file
new_circuit = cirq.read_json(json_text=json_string)
print(f'Deserialized object of type: {type(new_circuit)}:')
print(new_circuit)
"""
Explanation: Cirq has several features that allow the user to import/export from other quantum languages.
Exporting and importing to JSON
For storing circuits or for transferring them between collaborators, JSON can be a good choice. Many objects in cirq can be serialized as JSON and then stored as a text file for transfer, storage, or for posterity.
Any object that can be serialized, which includes circuits, moments, gates, operations, and many other cirq constructs, can be turned into JSON with the protocol cirq.to_json(obj). This will return a string that contains the serialized JSON.
To take JSON and turn it back into a cirq object, the protocol cirq.read_json can be used. This can take a python file or string filename as the first argument (file_or_fn) or can take a named json_text parameter to accept a string input.
The following shows how to serialize and de-serialize a circuit.
End of explanation
"""
!pip install --quiet cirq
!pip install --quiet ply==3.4
"""
Explanation: Advanced: Adding JSON compatibility for custom objects in cirq
Most cirq objects come with serialization functionality added already. If you are adding a custom object (such as a custom gate), you can still serialize the object, but you will need to add the magic functions _json_dict_ and _from_json_dict_ to your object to do so.
When de-serializing, in order to instantiate the correct object, you will also have to pass in a custom resolver. This is a function that will take as input the serialized cirq_type string and output a constructable class. See
cirq.protocols.json_serialization for more details.
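The shape of that pattern can be sketched without cirq at all. The class below is hypothetical and uses only the standard json module to show the round trip; in real cirq, the serialized dictionary records a cirq_type entry and the resolver is passed to cirq.read_json:

```python
import json

class MyGate:
    """Hypothetical object illustrating the _json_dict_/_from_json_dict_ pattern."""
    def __init__(self, exponent):
        self.exponent = exponent

    def _json_dict_(self):
        # A 'cirq_type' key tells the resolver which class to rebuild.
        return {'cirq_type': 'MyGate', 'exponent': self.exponent}

    @classmethod
    def _from_json_dict_(cls, exponent, **kwargs):
        return cls(exponent)

# A resolver maps serialized type names back to constructable classes.
RESOLVER = {'MyGate': MyGate}

data = json.loads(json.dumps(MyGate(0.5)._json_dict_()))
restored = RESOLVER[data.pop('cirq_type')]._from_json_dict_(**data)
print(restored.exponent)  # 0.5
```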
Importing from OpenQASM
The QASM importer is in an experimental state and currently only supports a subset of the full OpenQASM spec. Among other things, classical control, arbitrary gate definitions, and even some gates that don't have a one-to-one representation in Cirq are not yet supported. The functionality should be sufficient to import interesting quantum circuits. Error handling is very simple: on any lexical or syntactical error, a QasmException is raised.
Importing cirq.Circuit from QASM format
Requirements: ply
End of explanation
"""
from cirq.contrib.qasm_import import circuit_from_qasm
circuit = circuit_from_qasm("""
OPENQASM 2.0;
include "qelib1.inc";
qreg q[3];
creg meas[3];
h q;
measure q -> meas;
""")
print(circuit)
"""
Explanation: The following call will create a circuit defined by the input QASM string:
End of explanation
"""
quirk_url = "https://algassert.com/quirk#circuit=%7B%22cols%22:[[%22H%22,%22H%22],[%22%E2%80%A2%22,%22X%22],[%22H%22,%22H%22]]}"
c = cirq.quirk_url_to_circuit(quirk_url)
print(c)
"""
Explanation: Supported control statements
| Statement|Cirq support|Description|Example|
|----| --------| --------| --------|
|OPENQASM 2.0;| supported| Denotes a file in Open QASM format| OPENQASM 2.0;|
|qreg name[size];| supported (see mapping qubits)| Declare a named register of qubits|qreg q[5];|
|creg name[size];|supported (see mapping classical register to measurement keys)| Declare a named register of bits|creg c[5];|
|include "filename";| supported ONLY to include the standard "qelib1.inc" lib for compatibility| Open and parse another source file|include "qelib1.inc";|
|gate name(params) qargs;|NOT supported| Declare a unitary gate||
|opaque name(params) qargs;| NOT supported| Declare an opaque gate||
|// comment text| supported|Comment a line of text|// supported!|
|U(θ,φ,λ) qubit/qreg;| supported| Apply built-in single qubit gate(s)|U(pi/2,2*pi/3,0) q[0];|
|CX qubit/qreg,qubit/qreg;| supported|Apply built-in CNOT gate(s)|CX q[0],q[1];|
|measure qubit/qreg|supported|Make measurements in Z basis||
|reset qubit/qreg;| NOT supported|Prepare qubit(s) in <code>|0></code>|reset q[0];|
|gatename(params) qargs;| supported for ONLY the supported subset of standard gates defined in "qelib1.inc"|Apply a user-defined unitary gate|rz(pi/2) q[0];|
|if(creg==int) qop;| NOT supported| Conditionally apply quantum operation|if(c==5) CX q[0],q[1];|
|barrier qargs;| NOT supported| Prevent transformations across this source line|barrier q[0],q[1];|
Gate conversion rules
Note: The standard Quantum Experience gates (defined in qelib.inc) are
based on the U and CX built-in instructions, and we could generate them dynamically. Instead, we chose to map them to native Cirq gates explicitly, which results in a more user-friendly circuit.
| QE gates| Cirq translation| Notes|
| --------| --------| --------|
|U(θ,φ,λ) |QasmUGate(θ,φ,λ)||
|CX |cirq.CX||
|u3(θ,φ,λ)|QasmUGate(θ,φ,λ)||
|u2(φ,λ) = u3(π/2,φ,λ)|QasmUGate(π/2,φ,λ)||
|u1 (λ) = u3(0,0,λ)| NOT supported ||
|id|cirq.Identity| one single-qubit Identity gate is created for each qubit if applied on a register|
|u0(γ)| NOT supported| this is the "WAIT gate" for length γ in QE|
|x|cirq.X||
|y|cirq.Y||
|z|cirq.Z||
|h|cirq.H||
|s|cirq.S||
|sdg|cirq.S**-1||
|t|cirq.T||
|tdg|cirq.T**-1||
|rx(θ)|cirq.Rx(θ)||
|ry(θ)|cirq.Ry(θ)||
|rz(θ)|cirq.Rz(θ)||
|cx|cirq.CX||
|cy|cirq.ControlledGate(cirq.Y)||
|cz|cirq.CZ||
|ch|cirq.ControlledGate(cirq.H)||
|swap|cirq.SWAP||
|ccx|cirq.CCX||
|cswap|cirq.CSWAP||
|crz| NOT supported ||
|cu1| NOT supported||
|cu3| NOT supported||
|rzz| NOT supported||
Mapping quantum registers to qubits
For a quantum register qreg qfoo[n]; the QASM importer will create cirq.NamedQubits named qfoo_0..qfoo_<n-1>.
Mapping classical registers to measurement keys
For a classical register creg cbar[n]; the QASM importer will create measurement keys named cbar_0..cbar_<n-1>.
Importing from Quirk
Quirk is a drag-and-drop quantum circuit simulator, great for manipulating and exploring small quantum circuits. Quirk's visual style gives a reasonably intuitive feel of what is happening, state displays update in real time as you change the circuit, and the general experience is fast and interactive.
After constructing a circuit in Quirk, you can easily convert it to cirq using the URL generated. Note that not all gates in Quirk are currently convertible.
End of explanation
"""
import json
quirk_str="""{
"cols": [
[
"H",
"H"
],
[
"•",
"X"
],
[
"H",
"H"
]
]
}"""
quirk_json=json.loads(quirk_str)
c = cirq.quirk_json_to_circuit(quirk_json)
print(c)
"""
Explanation: You can also convert the JSON from the "export" button on the top bar of Quirk. Note that you must parse the JSON string into a dictionary before passing it to the function:
End of explanation
"""
|
jungmannlab/picasso | samples/SampleNotebook2.ipynb | mit | from picasso import io
path = 'testdata_locs.hdf5'
locs, info = io.load_locs(path)
print('Loaded {} locs.'.format(len(locs)))
"""
Explanation: Sample Notebook 2 for Picasso
This notebook shows some basic interaction with the picasso library. It assumes a working Picasso installation. To install Jupyter notebooks in a conda Picasso environment, use conda install nb_conda.
The sample data was created using Picasso:Simulate. You can download the files here: http://picasso.jungmannlab.org/testdata.zip
Load Localizations
End of explanation
"""
for i in range(len(info)):
print(info[i]['Generated by'])
# extract width and height:
width, height = info[0]['Width'], info[0]['Height']
print('Image height: {}, width: {}'.format(width, height))
"""
Explanation: Info file
The info file is now a list of dictionaries. Each step in picasso adds an element to the list.
End of explanation
"""
sx_center = 0.82
sy_center = 0.82
radius = 0.04
to_keep = (locs.sx-sx_center)**2 + (locs.sy-sy_center)**2 < radius**2
filtered_locs = locs[to_keep]
print('Length of locs before filtering {}, after filtering {}.'.format(len(locs),len(filtered_locs)))
"""
Explanation: Filter localizations
Filter localizations, e.g., via sx and sy: remove all localizations that are not within a circle around a center position.
End of explanation
"""
import os.path as _ospath
# Create a new dictionary for the new info
new_info = {}
new_info["Generated by"] = "Picasso Jupyter Notebook"
new_info["Filtered"] = 'Circle'
new_info["sx_center"] = sx_center
new_info["sy_center"] = sy_center
new_info["radius"] = radius
info.append(new_info)
base, ext = _ospath.splitext(path)
new_path = base+'_jupyter'+ext
io.save_locs(new_path, filtered_locs, info)
print('{} locs saved to {}.'.format(len(filtered_locs), new_path))
"""
Explanation: Saving localizations
Add new info to the yaml file and save everything.
End of explanation
"""
# Get minimum / maximum localizations to define the ROI to be rendered
import numpy as np
from picasso import render
import matplotlib.pyplot as plt
x_min = np.min(locs.x)
x_max = np.max(locs.x)
y_min = np.min(locs.y)
y_max = np.max(locs.y)
viewport = (y_min, x_min), (y_max, x_max)
oversampling = 10
len_x, image = render.render(locs, viewport = viewport, oversampling=oversampling, blur_method='smooth')
plt.imsave('test.png', image, cmap='hot', vmax=10)
# Custom ROI with higher oversampling
viewport = (5, 5), (10, 10)
oversampling = 20
len_x, image = render.render(locs, viewport = viewport, oversampling=oversampling, blur_method='smooth')
plt.imsave('test_zoom.png', image, cmap='hot', vmax=10)
"""
Explanation: Manually export images
Use the picasso functions to render images.
End of explanation
"""
from picasso import postprocess
# Note: to calculate dark times you need picked localizations of single binding sites
path = 'testdata_locs_picked_single.hdf5'
picked_locs, info = io.load_locs(path)
# Link localizations and calcualte dark times
linked_locs = postprocess.link(picked_locs, info, r_max=0.05, max_dark_time=1)
linked_locs_dark = postprocess.compute_dark_times(linked_locs)
print('Average bright time {:.2f} frames'.format(np.mean(linked_locs_dark.n)))
print('Average dark time {:.2f} frames'.format(np.mean(linked_locs_dark.dark)))
# Compare with simulation settings:
integration_time = info[0]['Camera.Integration Time']
tau_b = info[0]['PAINT.taub']
k_on = info[0]['PAINT.k_on']
imager = info[0]['PAINT.imager']
tau_d = 1/(k_on*imager)*10**9*1000
print('------')
print('ON Measured {:.2f} ms \t Simulated {:.2f} ms'.format(np.mean(linked_locs_dark.n)*integration_time, tau_b))
print('OFF Measured {:.2f} ms \t Simulated {:.2f} ms'.format(np.mean(linked_locs_dark.dark)*integration_time, tau_d))
"""
Explanation: Calculate kinetics
Use the picasso functions to calculate kinetics.
End of explanation
"""
|
eshlykov/mipt-day-after-day | ml/shlykov_596_task8.ipynb | unlicense | # additional packages for this notebook
! pip install faker tqdm babel
"""
Explanation: <h1 align="center">Organization Info</h1>
The deadline is DD MM 2018, 23:59 for all groups.
As your solution, submit a notebook with detailed comments (<span style='color:red'> without a submitted solution, the contest result will not be counted </span>).
<span style='color:red'>Your team name in the contest must follow the template GroupNumber_FirstName_LastName, e.g., 594_Ivan_Ivanov</span>.
Homework submission:
- Send the completed assignment to ml.course.mipt@gmail.com
- Use an email subject of the form ML2018_fall_<group_number>_<last_name>, e.g., ML2018_fall_495_ivanov
- Save the completed homework as <last_name>_<group>_task<number>.ipynb, e.g., ivanov_401_task7.ipynb
Questions:
- Send questions to ml.course.mipt@gmail.com
- Use an email subject of the form ML2018_fall Question <question content>
PS1: Automatic filters are used, and we simply won't find your homework if you label it carelessly.
PS2: A missed deadline lowers the maximum weight of the assignment according to the formula given in the first seminar.
PS3: Changes to the code provided below are allowed if you consider them necessary.
<h1 align="center">Checking Questions</h1>
Question 1: In what ways is an LSTM better or worse than a plain RNN?
The story goes like this: first there were ordinary neural networks. It turned out that if you keep feeding them input, they immediately forget the previous input. So recurrent neural networks were invented. It turned out that they remember everything, and there is no need for that (indeed, if you are reading volume 4 of War and Peace, you can safely drop the little scene in French from the beginning of volume 1). So we do not keep all the old junk in memory, which yields more accurate predictions and greater efficiency.
Question 2: Write out the derivative $\frac{d c_{n+1}}{d c_{k}}$ for an LSTM (http://colah.github.io/posts/2015-08-Understanding-LSTMs/), explain the formula, and say when the derivative vanishes and when it explodes.
The link describes something rather incomprehensible. There is a clear example in the slides, where everything is worked out, but it is for a plain RNN. Isn't that enough?
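(For completeness, an addition of ours rather than part of the original answer: with the usual cell update $c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$, the requested derivative along the cell-state path is)

```latex
\frac{d c_{n+1}}{d c_{k}}
  = \prod_{t=k+1}^{n+1} \frac{\partial c_t}{\partial c_{t-1}}
  \approx \prod_{t=k+1}^{n+1} \operatorname{diag}(f_t)
```

(ignoring the indirect dependence of the gates on $c_{t-1}$ through $h_{t-1}$). Since each forget gate satisfies $f_t \in (0,1)$, the product vanishes when the gates stay well below 1, and it remains near 1, neither vanishing nor exploding, when the gates saturate near 1.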
Question 3: Why is TBPTT needed, and why is BPTT bad?
I think the answer to the first question is contained in the wording of the second, so let us dwell only on the latter. The main reason is efficiency. To compute all the gradients you have to walk through the entire input during backpropagation, which means the gradients can vanish or explode. And in general it takes a long time.
Question 4: How can recurrent and convolutional networks be combined, and, more importantly, why? Give a few examples of real tasks.
Well, convolutional neural networks are for images, and recurrent ones are useful for text. Accordingly, if we combine them, we can come up with a real task. The first thing that comes to mind: let us generate or translate poems. Say we have scanned photos of book pages. Then we can (a) recognize the text and (b) take the position of the lines into account. That gives us much more information for reproducing the rhyme, which already helps recurrent neural networks work better.
Question 5: Can convolutional networks be used for text classification? If not, justify :D; if yes, how? How do you solve the problem of arbitrary input length?
I think yes, since when I typed "cnn for" into Google, the first suggestion was "cnn for text classification". :)
In general, it seems nothing prevents us from simply representing text as an image. We need to think about how to do that. Let one pixel represent one word (or token). Instead of the red-blue-green color channels we could take, for example, styles (formal, colloquial, elevated, and so on), or emotional tone, simply looked up in dictionaries. The words should be laid out sequentially so that the structure can be discerned. For example, we can lay the words out in a zigzag. Or in a spiral. Or in a diagonal zigzag. I think a spiral is best, since words from different lines that end up next to each other will then be more important to each other during recognition. It would be bad if random words got linked together with some period equal to the "image width". The spiral also has a regularity, but not such an obvious one.
Question 6: Attention - what is it, and where and how is it applied? Give an example of its use on some task.
Attention is needed to solve the bottleneck problem: arbitrary amounts of information have to be packed into a single fixed-size vector, so we inevitably lose something (all the information has to squeeze through a small bottleneck). To overcome this problem, attention is used - a mechanism for remembering some information about our input. At each decoder step it focuses on individual input steps, depending on what we are doing. Example tasks: NMT; see also the tasks from this notebook (but not the solutions).
Grading
starting at zero points
+2 for describing your iteration path in a report below (compare models).
+2 for correct check questions
+3 (7 total) for 99% accuracy with simple NMT model on TEST dataset
+3 (10 total) for 99% accuracy with attention NMT model on TEST dataset
tatoeba bonus for accuracy on TEST dataset:
+2 for report
60% (14 total)
65% (16 total)
70% (18 total)
75% (20 total)
Bonus points
Common ways to get bonus points are:
* Get higher score, obviously.
* Anything special about your NN. For example "A super-small/fast NN that gets 99%" gets a bonus.
* Any detailed analysis of the results. (attention maps, whatever)
End of explanation
"""
from keras.layers import Embedding, Bidirectional
from keras.layers.core import *
from keras.layers.recurrent import LSTM
from keras.models import *
from keras.layers.merge import Multiply
from keras.utils import to_categorical
from keras.layers import TimeDistributed
import keras.backend as K
import numpy as np
"""
Explanation: Task - translation
Machine translation is an old and well-known field in natural language processing. Since the 1950s, scientists have tried to create models that automatically translate from, say, French to English. Nowadays this has become possible, and the attention mechanism plays a large part in that. Here is an example image with an attention map for the neural machine translation of a sample phrase:
<p align="center">
<img src="http://d3kbpzbmcynnmx.cloudfront.net/wp-content/uploads/2015/12/Screen-Shot-2015-12-30-at-1.23.48-PM.png" width="400">
</p>
In our lab we will concentrate on a much simpler task: we will translate from a human-readable date to a machine-readable one.
To do this we need one more concept - Sequence-to-Sequence language modeling.
The idea of such an architecture is shown here:
<p align="center">
<img src="./img/simple_nmt.jpg" width="400">
</p>
There is an Embedding layer at the bottom, an RNN in the middle, and a softmax as the output.
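To make the bottom of that stack concrete: an embedding layer is just a trainable lookup table indexed by token ids. A plain-numpy sketch (the sizes and names are made up for illustration, this is not the Keras layer itself):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 40, 8
E = rng.normal(size=(vocab_size, emb_dim))  # the trainable lookup table

token_ids = np.array([3, 17, 5])            # one integer id per input character
embedded = E[token_ids]                     # row lookup, one vector per timestep
print(embedded.shape)                       # (3, 8)
```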
End of explanation
"""
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
import numpy as np
fake = Faker()
FORMATS = ['short',
'medium',
'long',
'full',
'd MMM YYY',
'd MMMM YYY',
'dd MMM YYY',
'd MMM, YYY',
'd MMMM, YYY',
'dd, MMM YYY',
'd MM YY',
'd MMMM YYY',
'MMMM d YYY',
'MMMM d, YYY',
'dd.MM.YY']
# change this if you want it to work with another language
LOCALES = ['en_US']
def create_date():
"""
Creates some fake dates
:returns: tuple containing human readable string, machine readable string, and date object
"""
dt = fake.date_object()
try:
human_readable = format_date(dt, format=random.choice(FORMATS), locale=random.choice(LOCALES))
case_change = random.choice([0,1,2])
if case_change == 1:
human_readable = human_readable.upper()
elif case_change == 2:
human_readable = human_readable.lower()
# if case_change == 0, do nothing
machine_readable = dt.isoformat()
except AttributeError as e:
return None, None, None
return human_readable, machine_readable, dt
def create_dataset(n_examples):
"""
Creates a dataset with n_examples and vocabularies
:n_examples: the number of examples to generate
"""
human_vocab = set()
machine_vocab = set()
dataset = []
for i in tqdm(range(n_examples)):
h, m, _ = create_date()
if h is not None:
dataset.append((h, m))
human_vocab.update(tuple(h))
machine_vocab.update(tuple(m))
human = dict(zip(list(human_vocab) + ['<unk>', '<pad>'],
list(range(len(human_vocab) + 2))))
inv_machine = dict(enumerate(list(machine_vocab) + ['<unk>', '<pad>']))
machine = {v:k for k,v in inv_machine.items()}
return dataset, human, machine, inv_machine
def string_to_int(string, length, vocab):
    if len(string) > length:
        string = string[:length]
    # Map unknown characters to the index of '<unk>', not the literal string
    rep = list(map(lambda x: vocab.get(x, vocab['<unk>']), string))
    if len(string) < length:
        rep += [vocab['<pad>']] * (length - len(string))
    return rep
def int_to_string(ints, inv_vocab):
return [inv_vocab[i] for i in ints]
"""
Explanation: Data
Now we need to generate data. It will be dates in different text formats and in fixed output format.
End of explanation
"""
fake.seed(42)
random.seed(42)
N = int(3e5)
dataset, human_vocab, machine_vocab, inv_machine_vocab = create_dataset(N)
dataset[2]
# Had to move this cell up here so the code would work.
# :good-enough:
ENCODER_UNITS = 32 # change me if u want
DECODER_UNITS = 32 # change me if u want
TIME_STEPS = 20 # change me if u want
inputs, targets = zip(*dataset)
inputs = np.array([string_to_int(i, TIME_STEPS, human_vocab) for i in inputs])
targets = [string_to_int(t, TIME_STEPS, machine_vocab) for t in targets]
targets = np.array(list(map(lambda x: to_categorical(x, num_classes=len(machine_vocab)), targets)))
X_train, y_train, X_valid, y_valid, X_test, y_test = (
inputs[:int(2e5)], targets[:int(2e5)],
inputs[int(2e5):-int(5e4)], targets[int(2e5):-int(5e4)],
inputs[-int(5e4):], targets[-int(5e4):], )
"""
Explanation: Actually generating data:
End of explanation
"""
class NeuroNetwork:
def __init__(self):
self.layers = []
def AddLayer(self, layer):
self.layers += [layer]
return self
def GetOutput(self, inputs):
output = inputs
for layer in self.layers:
output = layer(output)
return output
# input - [bs; in_time_len]
# output - [bs; out_time_len]; out_time_len=10
def model_simple_nmt(in_chars, out_chars):
# RNN encoder -> hidden representation -> RNN decoder
inputs = Input(shape=(TIME_STEPS,))
# your code
output = NeuroNetwork() \
.AddLayer(Embedding(in_chars, 10)) \
.AddLayer(LSTM(ENCODER_UNITS)) \
.AddLayer(RepeatVector(TIME_STEPS)) \
.AddLayer(LSTM(DECODER_UNITS, return_sequences=True)) \
.AddLayer(TimeDistributed(Dense(out_chars, activation='softmax'))) \
.GetOutput(inputs)
# source: https://machinelearningmastery.com/encoder-decoder-attention-sequence-to-sequence-prediction-keras/
model = Model(input=[inputs], output=output)
return model
m = model_simple_nmt(len(human_vocab), len(machine_vocab))
m.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(m.summary())
m.fit(
[X_train], y_train,
validation_data=(X_valid, y_valid),  # Had to add this comma; it didn't work without it
epochs=10, batch_size=64,
validation_split=0.1)
m.evaluate([X_test], y_test)
"""
Explanation: Part 1: Simple NMT
End of explanation
"""
EXAMPLES = ['3 May 1979', '5 Apr 09', '20th February 2016', 'Wed 10 Jul 2007']
def run_example(model, input_vocabulary, inv_output_vocabulary, text):
encoded = string_to_int(text, TIME_STEPS, input_vocabulary)
prediction = model.predict(np.array([encoded]))
prediction = np.argmax(prediction[0], axis=-1)
return int_to_string(prediction, inv_output_vocabulary)
def run_examples(model, input_vocabulary, inv_output_vocabulary, examples=EXAMPLES):
predicted = []
for example in examples:
predicted.append(''.join(run_example(model, input_vocabulary, inv_output_vocabulary, example)))
print('input:', example)
print('output:', predicted[-1])
return predicted
run_examples(m, human_vocab, inv_machine_vocab)
"""
Explanation: Let's check our model:
End of explanation
"""
# :good-enough:
ENCODER_UNITS = 32 # change me if u want
DECODER_UNITS = 32 # change me if u want
TIME_STEPS = 20 # change me if u want
def model_attention_nmt(in_chars, out_chars):
# RNN encoder -> hidden representation -> RNN decoder
inputs = Input(shape=(TIME_STEPS,))
# your code
assert(False)
model = Model(input=[inputs], output=output)
return model
m = model_attention_nmt(len(human_vocab), len(machine_vocab))
m.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(m.summary())
m.fit(
[X_train], y_train,
    validation_data=(X_valid, y_valid),
epochs=10, batch_size=64,
validation_split=0.1)
m.evaluate([X_test], y_test)
"""
Explanation: Part 2: All u need is attention
Here we use a more complex idea than simple seq2seq: we add two explicit parts to our network - an encoder and a decoder (on top of which attention is applied). The explanatory picture for this idea is below:
<p align="center"><img src="https://i.stack.imgur.com/Zwsmz.png"></p>
The lower part of the network encodes the input into some hidden intermediate representation, and the upper part decodes that hidden representation into some readable output.
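The core of the attention step - scoring each encoder state against the current decoder state and taking a softmax-weighted sum - can be sketched in plain numpy. This is an illustrative sketch with made-up shapes, not the Keras layers used in this notebook:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(20, 32))  # one hidden vector per input timestep
dec_state = rng.normal(size=(32,))      # current decoder hidden state

scores = enc_states @ dec_state         # dot-product alignment scores, shape (20,)
weights = softmax(scores)               # attention weights, sum to 1
context = weights @ enc_states          # weighted sum fed to the decoder
print(context.shape)                    # (32,)
```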
End of explanation
"""
# dataset from http://www.manythings.org/anki/
! wget http://www.manythings.org/anki/rus-eng.zip
! unzip ./rus-eng.zip
with open("./rus.txt") as fin:
data = fin.readlines()
data = list(map(lambda x: x.replace("\n", "").lower(), data))
len(data)
data = data[:int(1e5)]
len(data)
data[-1]
"""
Explanation: Report
final architectures
comparison
as well as training method and tricks
So, I looked at the neural network from the seminar, and the code with it did not work because of some dimension mismatch. So I started googling. There was no other choice, since the seminar and the lecture explained nothing at all about what an encoder and a decoder are. In fact, there was almost nothing related to the homework. As always, though.
Anyway, roughly the first to third Google hit for "keras encoder decoder attention" turned out to be quite decent; at least, surprisingly, even the layer names matched, and there were chunks recognizable from the assignment. That is great, and I decided to use it.
I took that architecture, adjusted the parameters to our notation, and started removing the extras. First, some mysterious RepeatVector. I tried removing it - again something did not match. I put it back. There is also a TimeDistributed. Surprisingly, the code worked without it too, but in accordance with the principle of "Occam's razor" (do not multiply entities unless needed), I decided to keep it. Surely it somehow helps increase the score.
I wondered whether to try running without TimeDistributed or not. Simple reasoning led me to an already obvious conclusion: no. No, I am not willing to wait an hour twice to notice a difference of 0.0000001. All the more so because, if the second variant turned out worse, I would have to wait yet another hour while I put everything back.
Then I started wondering what the score would turn out to be. Just in case, I decided to take 20 epochs instead of 10 and launched it. But fortunately, after 8 epochs (that is, about 40 minutes) Colab disconnected and everything was lost. Why fortunately? Because although this caused another storm of negative emotions, I was able to cut the number of epochs to 10. Indeed, after 8 epochs I already had an approximate score, and it was already reaching 99%. But that was on train. On test it could be a bit worse. Still, I decided that 12 epochs for the sake of the test was a luxury. So I went back to 10 epochs. I ran it - and 99% was reached on test as well. Bottom line: if Colab had not crashed, I would have had to wait 12 epochs (an hour) for everything to finish. But it crashed, so I launched it with 10 epochs, and everything finished in 50 minutes. Such miracles do happen.
Having obtained a 99% score, I immediately lost all motivation to do the assignment. To be fair, there was not much of it anyway, except that I do not want a bare pass for the course. As for attention, I could not manage it at all: all the googleable code is too big and complex to dig into without interest, nothing was clear in the lecture, and at the seminar, of course, attention was not even mentioned. Apparently not a very important topic.
I did not do the third part either, since I could not manage attention.
Part 3*: tatoeba - real NMT
Data
End of explanation
"""
source = list(map(lambda x: x.split("\t")[0], data))
target = list(map(lambda x: x.split("\t")[1], data))
source_vocab = set("".join(source).strip())
target_vocab = set("".join(target).strip())
source_vocab = dict(zip(
list(source_vocab) + ['<unk>', '<pad>'],
list(range(len(source_vocab) + 2))))
target_vocab = dict(zip(
list(target_vocab) + ['<unk>', '<pad>'],
list(range(len(target_vocab) + 2))))
inv_target_vocab = dict(enumerate(list(target_vocab) + ['<unk>', '<pad>']))
TIME_STEPS = 32
ENCODER_UNITS = 256
DECODER_UNITS = 256
def model_simple_nmt_tatoeba(in_chars, out_chars):
inputs = Input(shape=(TIME_STEPS,))
# your code
model = Model(input=[inputs], output=output)
return model
m = model_simple_nmt_tatoeba(len(source_vocab), len(target_vocab))
m.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(m.summary())
inputs = np.array([string_to_int(i, TIME_STEPS, source_vocab) for i in source])
targets = [string_to_int(t, TIME_STEPS, target_vocab) for t in target]
targets = np.array(list(map(lambda x: to_categorical(x, num_classes=len(target_vocab)), targets)))
m.fit(
[inputs], targets,
epochs=10, batch_size=64,
validation_split=0.1)
run_example(m, source_vocab, inv_target_vocab, 'hello')
"""
Explanation:
End of explanation
"""
|
pfschus/fission_bicorrelation | methods/implement_sparse_matrix.ipynb | mit | import numpy as np
import scipy.io as sio
import os
import sys
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.colors
import inspect
from tqdm import tqdm
sys.path.append('../scripts/')
import bicorr as bicorr
%load_ext autoreload
%autoreload 2
"""
Explanation: Goal
My bicorr_hist_master matrix is enormous, and it takes up 15 GB of space (with 0.25 ns time binning). Most of it is empty, so I could instead convert it to a sparse matrix and store a much smaller matrix to file. Investigate this.
Start by loading a bicorr_hist_master into memory for this study.
Updates
3/22/2017: First version; Implement sparse matrices
7/5/2017: Implement det_df pandas dataframe for loading detector pair indices, angles
End of explanation
"""
help(bicorr.load_bicorr)
"""
Explanation: Generate bicorr_hist_master from another dataset
Use the function I have already built into my bicorr.py functions. I don't have any bicorr_hist_master files stored to disk because they are so large, so I will load a bicorr text file and build a bicorr_hist_master array.
End of explanation
"""
os.listdir('../datar/1')
"""
Explanation: Where are the files I want to import? Use the same file that I use in my analysis build_bicorr_hist_master.
End of explanation
"""
bicorr_data = bicorr.load_bicorr(bicorr_path = '../datar/1/bicorr1_part')
"""
Explanation: I am going to provide the full bicorr_path as input.
End of explanation
"""
det_df = bicorr.load_det_df()
dt_bin_edges, num_dt_bins = bicorr.build_dt_bin_edges()
bhm = bicorr.alloc_bhm(len(det_df),4,num_dt_bins)
bhm = bicorr.fill_bhm(bhm, bicorr_data, det_df, dt_bin_edges)
bhp = bicorr.build_bhp(bhm,dt_bin_edges)[0]
bicorr.bicorr_plot(bhp, dt_bin_edges, show_flag = True)
"""
Explanation: Now I will build bicorr_hist_master.
End of explanation
"""
np.count_nonzero(bhm)
bhm.size
1-np.count_nonzero(bhm)/bhm.size
"""
Explanation: What is a sparse matrix?
See documentation on Wikipedia: https://en.wikipedia.org/wiki/Sparse_matrix
A sparse matrix has most of its elements equal to zero. The number of zero-valued elements divided by the total number of elements is called its sparsity, and is equal to 1 minus the density of the matrix.
How sparse is my matrix?
How much of my matrix is empty?
End of explanation
"""
from scipy import sparse
"""
Explanation: Wow... My matrix is 98.8% sparse. I should definitely be using a sparse matrix to store the information.
Investigate sparse matrix format with scipy.sparse
SciPy has a 2-D sparse matrix package for numeric data. Documentation is available here: https://docs.scipy.org/doc/scipy/reference/sparse.html#usage-information. How do I use it? It will not be simple since my numpy array has four dimensions.
End of explanation
"""
s_bicorr_hist_master = sparse.csr_matrix(bhm[0,0,:,:])
print(s_bicorr_hist_master)
type(s_bicorr_hist_master)
plt.pcolormesh(dt_bin_edges,dt_bin_edges,bhm[0,0,:,:],norm=matplotlib.colors.LogNorm())
plt.colorbar()
plt.show()
"""
Explanation: I am following this StackOverflow for how to convert the array to a sparse matrix: http://stackoverflow.com/questions/7922487/how-to-transform-numpy-matrix-or-array-to-scipy-sparse-matrix
End of explanation
"""
sparseType = np.dtype([('pair_i', np.uint16), ('type_i', np.uint8), ('det1t_i', np.uint16), ('det2t_i', np.uint16), ('count', np.uint32)])
"""
Explanation: It looks like this scipy sparse matrix function will not suit my needs because it is limited to two-dimensional numpy arrays. I need to be able to store a four-dimensional array as a sparse matrix. I will try to write my own technique in the following section.
Build my own sparse matrix generator
It looks like the scipy sparse matrix functions are limited to two-dimensional numpy arrays. So I will try building my own function.
Set up formatting
I am going to build a numpy array with a specified numpy data type (dType). There will be five pieces of data for each element in the array.
4-dimensional index of element
pair_i: Detector pair, length = 990. Use np.uint16.
type_i: Interaction type, length 4 (0=nn, 1=np, 2=pn, 3=pp). Use np.uint8.
det1t_i: dt bin for detector 1 (up to 1000). Use np.uint16.
det2t_i: dt bin for detector 2 (up to 1000). Use np.uint16.
count: Value of that element. Use np.uint32.
First establish the formatting of each element in the array.
End of explanation
"""
num_nonzero = np.count_nonzero(bhm)
print(num_nonzero)
"""
Explanation: Access nonzero count information
End of explanation
"""
i_nonzero = np.nonzero(bhm)
counts = bhm[i_nonzero]
counts[600:800]
"""
Explanation: I'm going to ask numpy to make an array of the indices for non-zero values with np.nonzero. Numpy will return it as an array of tuples organized somewhat strangely. Explore how the indexing works.
First store the indices as tuples using np.nonzero. The tuples are stored as four massive tuples of 2767589 elements. They are so large that I can't print them to my screen. How do I extract the four arguments for the position of the $i^{th}$ nonzero value?
End of explanation
"""
i = 637
counts[637]
counts[i]
i_nonzero[0][i]
i_nonzero[1][i]
i_nonzero[2][i]
i_nonzero[3][i]
"""
Explanation: How do I find the element at the top there that is equal to three? It is at the 637$^{th}$ index position in i_nonzero and counts. How do I find the corresponding indices in bicorr_hist_master?
End of explanation
"""
i_nonzero[:][i]
bhm[100,1,219,211]
"""
Explanation: Can I call them all at once? It doesn't seem so...
End of explanation
"""
sparse_bhm = np.zeros(num_nonzero,dtype=sparseType)
for i in tqdm(np.arange(0,num_nonzero),ascii=True):
sparse_bhm[i]['pair_i'] = i_nonzero[0][i]
sparse_bhm[i]['type_i'] = i_nonzero[1][i]
sparse_bhm[i]['det1t_i'] = i_nonzero[2][i]
sparse_bhm[i]['det2t_i'] = i_nonzero[3][i]
sparse_bhm[i]['count'] = counts[i]
print(sparse_bhm[0:20])
sparse_bhm[0]
bhm[0,0,338,745]
"""
Explanation: Write nonzero count information to sparse array sparse_bhm
Loop through all nonzero counts and store the information.
End of explanation
"""
print(inspect.getsource(bicorr.generate_sparse_bhm))
"""
Explanation: Functionalize this in my bicorr module.
End of explanation
"""
sparse_bhm = bicorr.generate_sparse_bhm(bhm)
sparse_bhm[0:20]
"""
Explanation: Start over and try generating sparse_bhm from bicorr_hist_master.
End of explanation
"""
np.savez('sparse_bhm', det_df = det_df, dt_bin_edges = dt_bin_edges, sparse_bhm=sparse_bhm)
"""
Explanation: Store sparse_bhm, dt_bin_edges to disk and reload
In order to make this useful, I need a clean and simple way of storing the sparse matrix to disk and reloading it. I used to save the following three variables to disk:
bicorr_hist_master
dict_pair_to_index
dt_bin_edges
Instead I will save the following three variables:
sparse_bhm
det_df
dt_bin_edges
Use the same np.save technique and try it here:
End of explanation
"""
help(bicorr.save_sparse_bhm)
bicorr.save_sparse_bhm(sparse_bhm, dt_bin_edges, '../datar')
"""
Explanation: This went much faster, as the bicorr_hist_master was 15 GB in size and this is only 30 MB in size. What a difference!
Write this into a function, and provide an optional destination folder for saving.
End of explanation
"""
sys.getsizeof(bhm)
sys.getsizeof(sparse_bhm)
sys.getsizeof(sparse_bhm)/sys.getsizeof(bhm)
"""
Explanation: Compare size of sparse_bhm and bicorr_hist_master
End of explanation
"""
os.listdir()
help(bicorr.load_sparse_bhm)
os.listdir('subfolder')
sparse_bhm, det_df, dt_bin_edges = bicorr.load_sparse_bhm(filepath = "subfolder")
who
sparse_bhm.size
print(sparse_bhm[0:10])
"""
Explanation: This is for a partial data set. In a larger data set (tested separately), I reduced my storage needs to 1.5%... hurrah! That is a significant change, and makes it possible to store the data on my local machine.
Load sparse_bhm and populate bicorr_hist_master
Load sparse_bhm
I am going to try loading sparse_bhm from file and producing bicorr_hist_master. Use the functions I just created for loading sparse_bhm. (Restart Python kernel from here to make sure it works).
End of explanation
"""
bicorr_hist_master = bicorr.alloc_bhm(len(det_df),4,len(dt_bin_edges)-1)
"""
Explanation: Populate bicorr_hist_master
The only dimension in the size of bicorr_hist_master that would change would be the number of time bins. Otherwise, the number of detector pairs and the number of interaction types I am recording will stay the same. Use the functions I have already developed to allocate the array for bicorr_hist_master.
End of explanation
"""
sparse_bhm[0]
"""
Explanation: Now I need to loop through sparse_bhm and fill bicorr_hist_master with the count at the corresponding index. What does sparse_bhm look like again?
End of explanation
"""
i = 0
bicorr_hist_master[sparse_bhm[i][0],sparse_bhm[i][1],sparse_bhm[i][2],sparse_bhm[i][3]] = sparse_bhm[i][4]
print(bicorr_hist_master[sparse_bhm[i][0],sparse_bhm[i][1],sparse_bhm[i][2],sparse_bhm[i][3]])
"""
Explanation: Fill one element
End of explanation
"""
for i in tqdm(np.arange(0,sparse_bhm.size)):
bicorr_hist_master[sparse_bhm[i][0],sparse_bhm[i][1],sparse_bhm[i][2],sparse_bhm[i][3]] = sparse_bhm[i][4]
np.max(bicorr_hist_master)
"""
Explanation: Fill all of the elements
End of explanation
"""
bicorr_hist_plot = bicorr.build_bicorr_hist_plot(bicorr_hist_master,dt_bin_edges)[0]
bicorr.bicorr_plot(bicorr_hist_plot, dt_bin_edges, show_flag = True)
"""
Explanation: Plot it to see if that looks correct.
End of explanation
"""
print(inspect.getsource(bicorr.revive_sparse_bhm))
sparse_bhm, det_df, dt_bin_edges = bicorr.load_sparse_bhm(filepath="subfolder")
bicorr_hist_master_sparse = bicorr.revive_sparse_bhm(sparse_bhm, det_df, dt_bin_edges)
np.array_equal(bicorr_hist_master,bicorr_hist_master_sparse)
"""
Explanation: Functionalize this... produce bicorr_hist_master from sparse_bhm
End of explanation
"""
|
quantumlib/ReCirq | docs/qaoa/precomputed_analysis.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 Google
End of explanation
"""
try:
import recirq
except ImportError:
!pip install git+https://github.com/quantumlib/ReCirq
"""
Explanation: Precomputed analysis
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/experiments/qaoa/precomputed_analysis"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/precomputed_analysis.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
Use precomputed optimal angles to measure the expected value of $\langle C \rangle$ across a variety of problem types, sizes, $p$-depth, and random instances.
Setup
Install the ReCirq package:
End of explanation
"""
import recirq
import cirq
import numpy as np
import pandas as pd
"""
Explanation: Now import Cirq, ReCirq and the module dependencies:
End of explanation
"""
from recirq.qaoa.experiments.precomputed_execution_tasks import \
DEFAULT_BASE_DIR, DEFAULT_PROBLEM_GENERATION_BASE_DIR, DEFAULT_PRECOMPUTATION_BASE_DIR
records = []
for record in recirq.iterload_records(dataset_id="2020-03-tutorial", base_dir=DEFAULT_BASE_DIR):
dc_task = record['task']
apre_task = dc_task.precomputation_task
pgen_task = apre_task.generation_task
problem = recirq.load(pgen_task, base_dir=DEFAULT_PROBLEM_GENERATION_BASE_DIR)['problem']
record['problem'] = problem.graph
record['problem_type'] = problem.__class__.__name__
record['optimum'] = recirq.load(apre_task, base_dir=DEFAULT_PRECOMPUTATION_BASE_DIR)['optimum']
record['bitstrings'] = record['bitstrings'].bits
recirq.flatten_dataclass_into_record(record, 'task')
recirq.flatten_dataclass_into_record(record, 'precomputation_task')
recirq.flatten_dataclass_into_record(record, 'generation_task')
recirq.flatten_dataclass_into_record(record, 'optimum')
records.append(record)
df_raw = pd.DataFrame(records)
df_raw['timestamp'] = pd.to_datetime(df_raw['timestamp'])
df_raw.head()
"""
Explanation: Load the raw data
Go through each record, load in supporting objects, flatten everything into records, and put into a massive dataframe.
End of explanation
"""
from recirq.qaoa.simulation import hamiltonian_objectives, hamiltonian_objective_avg_and_err
import cirq_google as cg
def compute_energy_w_err(row):
permutation = []
for i, q in enumerate(row['qubits']):
fi = row['final_qubits'].index(q)
permutation.append(fi)
energy, err = hamiltonian_objective_avg_and_err(row['bitstrings'], row['problem'], permutation)
return pd.Series([energy, err], index=['energy', 'err'])
# Start cleaning up the raw data
df = df_raw.copy()
# Don't need these columns for present analysis
df = df.drop(['gammas', 'betas', 'circuit', 'violation_indices',
'precomputation_task.dataset_id',
'generation_task.dataset_id',
'generation_task.device_name'], axis=1)
# p is specified twice (from a parameter and from optimum)
assert (df['optimum.p'] == df['p']).all()
df = df.drop('optimum.p', axis=1)
# Compute energies
df = df.join(df.apply(compute_energy_w_err, axis=1))
df = df.drop(['bitstrings', 'qubits', 'final_qubits', 'problem'], axis=1)
# Normalize
df['energy_ratio'] = df['energy'] / df['min_c']
df['err_ratio'] = df['err'] * np.abs(1/df['min_c'])
df['f_val_ratio'] = df['f_val'] / df['min_c']
df
"""
Explanation: Narrow down to relevant data
Drop unnecessary metadata and use bitstrings to compute the expected value of the energy. In general, it's better to save the raw data and lots of metadata so we can use it if it becomes necessary in the future.
End of explanation
"""
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.rc('axes', labelsize=16, titlesize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('legend', fontsize=14, title_fontsize=16)
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
QGREEN = '#34a853ff'
QGOLD2 = '#ffca28'
QBLUE2 = '#1e88e5'
C = r'\langle C \rangle'
CMIN = r'C_\mathrm{min}'
COVERCMIN = f'${C}/{CMIN}$'
def percentile(n):
def percentile_(x):
return np.nanpercentile(x, n)
percentile_.__name__ = 'percentile_%s' % n
return percentile_
"""
Explanation: Plots
End of explanation
"""
import numpy as np
from matplotlib import pyplot as plt
pretty_problem = {
'HardwareGridProblem': 'Hardware Grid',
'SKProblem': 'SK Model',
'ThreeRegularProblem': '3-Regular MaxCut'
}
for problem_type in ['HardwareGridProblem', 'SKProblem', 'ThreeRegularProblem']:
df1 = df
df1 = df1[df1['problem_type'] == problem_type]
for p in sorted(df1['p'].unique()):
dfb = df1
dfb = dfb[dfb['p'] == p]
dfb = dfb.sort_values(by='n_qubits')
plt.subplots(figsize=(7,5))
n_instances = dfb.groupby('n_qubits').count()['energy_ratio'].unique()
if len(n_instances) == 1:
n_instances = n_instances[0]
label = f'{n_instances}'
else:
label = f'{min(n_instances)} - {max(n_instances)}'
#sns.boxplot(dfb['n_qubits'], dfb['energy_ratio'], color=QBLUE, saturation=1)
#sns.boxplot(dfb['n_qubits'], dfb['f_val_ratio'], color=QGREEN, saturation=1)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], color=QBLUE)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], color=QGREEN)
plt.axhline(1, color='grey', ls='-')
plt.axhline(0, color='grey', ls='-')
plt.title(f'{pretty_problem[problem_type]}, {label} instances, p={p}')
plt.xlabel('# Qubits')
plt.ylabel(COVERCMIN)
plt.tight_layout()
plt.show()
"""
Explanation: Raw swarm plots of all data
End of explanation
"""
pretty_problem = {
'HardwareGridProblem': 'Hardware Grid',
'SKProblem': 'SK Model',
'ThreeRegularProblem': '3-Regular MaxCut'
}
df1 = df
df1 = df1[
((df1['problem_type'] == 'SKProblem') & (df1['p'] == 3))
| ((df1['problem_type'] == 'HardwareGridProblem') & (df1['p'] == 3))
]
df1 = df1.sort_values(by='n_qubits')
MINQ = 3
df1 = df1[df1['n_qubits'] >= MINQ]
plt.subplots(figsize=(8, 6))
plt.xlim((8, 23))
# SK
dfb = df1
dfb = dfb[dfb['problem_type'] == 'SKProblem']
sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QRED)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QRED,
marker='s')
dfg = dfb.groupby('n_qubits').mean().reset_index()
# --------
# Hardware
dfb = df1
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
sns.swarmplot(x=dfb['n_qubits'], y=dfb['energy_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QBLUE)
sns.swarmplot(x=dfb['n_qubits'], y=dfb['f_val_ratio'], s=5, linewidth=0.5, edgecolor='k', color=QBLUE,
marker='s')
dfg = dfb.groupby('n_qubits').mean().reset_index()
# -------
plt.axhline(1, color='grey', ls='-')
plt.axhline(0, color='grey', ls='-')
plt.xlabel('# Qubits')
plt.ylabel(COVERCMIN)
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
from matplotlib.legend_handler import HandlerTuple
lelements = [
Line2D([0], [0], color=QBLUE, marker='o', ms=7, ls='', ),
Line2D([0], [0], color=QRED, marker='o', ms=7, ls='', ),
Line2D([0], [0], color='k', marker='s', ms=7, ls='', markerfacecolor='none'),
Line2D([0], [0], color='k', marker='o', ms=7, ls='', markerfacecolor='none'),
]
plt.legend(lelements, ['Hardware Grid', 'SK Model', 'Noiseless', 'Experiment', ], loc='best',
title=f'p = 3',
handler_map={tuple: HandlerTuple(ndivide=None)}, framealpha=1.0)
plt.tight_layout()
plt.show()
"""
Explanation: Compare SK and hardware grid vs. n
End of explanation
"""
dfb = df
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
dfb = dfb[['p', 'instance_i', 'n_qubits', 'energy_ratio', 'f_val_ratio']]
P_LIMIT = max(dfb['p'])
def max_over_p(group):
i = group['energy_ratio'].idxmax()
return group.loc[i][['energy_ratio', 'p']]
def count_p(group):
new = {}
for i, c in enumerate(np.bincount(group['p'], minlength=P_LIMIT+1)):
if i == 0:
continue
new[f'p{i}'] = c
return pd.Series(new)
dfgy = dfb.groupby(['n_qubits', 'instance_i']).apply(max_over_p).reset_index()
dfgz = dfgy.groupby(['n_qubits']).apply(count_p).reset_index()
# In the paper, we restrict to n > 10
# dfgz = dfgz[dfgz['n_qubits'] > 10]
dfgz = dfgz.set_index('n_qubits').sum(axis=0)
dfgz /= (dfgz.sum())
dfgz
dfb = df
dfb = dfb[dfb['problem_type'] == 'HardwareGridProblem']
dfb = dfb[['p', 'instance_i', 'n_qubits', 'energy_ratio', 'f_val_ratio']]
# In the paper, we restrict to n > 10
# dfb = dfb[dfb['n_qubits'] > 10]
dfg = dfb.groupby('p').agg(['median', percentile(25), percentile(75), 'mean', 'std']).reset_index()
plt.subplots(figsize=(5.5,4))
plt.errorbar(x=dfg['p'], y=dfg['f_val_ratio', 'mean'],
yerr=(dfg['f_val_ratio', 'std'],
dfg['f_val_ratio', 'std']),
fmt='o-',
capsize=7,
color=QGREEN,
label='Noiseless'
)
plt.errorbar(x=dfg['p'], y=dfg['energy_ratio', 'mean'],
yerr=(dfg['energy_ratio', 'std'],
dfg['energy_ratio', 'std']),
fmt='o-',
capsize=7,
color=QBLUE,
label='Experiment'
)
plt.xlabel('p')
plt.ylabel('Mean ' + COVERCMIN)
plt.ylim((0, 1))
plt.text(0.05, 0.9, r'Hardware Grid', fontsize=16, transform=plt.gca().transAxes, ha='left', va='bottom')
plt.legend(loc='center right')
ax2 = plt.gca().twinx() # instantiate a second axes that shares the same x-axis
dfgz_p = [int(s[1:]) for s in dfgz.index]
dfgz_y = dfgz.values
ax2.bar(dfgz_p, dfgz_y, color=QBLUE, width=0.9, lw=1, ec='k')
ax2.tick_params(axis='y')
ax2.set_ylim((0, 2))
ax2.set_yticks([0, 0.25, 0.50])
ax2.set_yticklabels(['0%', None, '50%'])
ax2.set_ylabel('Fraction best' + ' ' * 41, fontsize=14)
plt.tight_layout()
"""
Explanation: Hardware grid vs. p
End of explanation
"""
|
nordam/PyPPT | Notebooks/Code examples for exam - optimized.ipynb | mit | def grid_of_particles(N, w):
# Create a grid of N evenly spaced particles
# covering a square patch of width and height w
# centered on the region 0 < x < 2, 0 < y < 1
x = np.linspace(1.0-w/2, 1.0+w/2, int(np.sqrt(N)))
y = np.linspace(0.5-w/2, 0.5+w/2, int(np.sqrt(N)))
x, y = np.meshgrid(x, y)
return np.array([x.flatten(), y.flatten()])
X = grid_of_particles(50, 0.1)
# Make a plot to confirm that this works as expected
fig = plt.figure(figsize = (12,6))
plt.scatter(X[0,:], X[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
X
"""
Explanation: Transport a collection of particles
End of explanation
"""
N = 10000
X0 = grid_of_particles(N, w = 0.1)
# Array to hold all grid points after transport
X1 = np.zeros((2, N))
# Transport parameters
tmax = 5.0
dt = 0.5
# Loop over grid and update all positions
# This is where parallelisation would happen, since
# each position is independent of all the others
tic = time()
for i in range(N):
# Keep only the last position, not the entire trajectory
X1[:,i] = trajectory(X0[:,i], tmax, dt, rk4, f)[:,-1]
toc = time()
print('Transport took %.3f seconds' % (toc - tic))
# Make scatter plot to show all grid points
fig = plt.figure(figsize = (12,6))
plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
"""
Explanation: Optimization - Step 1: Naïve loop implementation
End of explanation
"""
# Implementation of Eq. (1) in the exam set
def doublegyre(x, y, t, A, e, w):
a = e * np.sin(w*t)
b = 1 - 2*e*np.sin(w*t)
f = a*x**2 + b*x
return np.array([
-np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y), # x component of velocity
np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b) # y component of velocity
])
# Wrapper function to pass to integrator
# X0 is a two-component vector [x, y]
def f(X, t):
# Parameters of the velocity field
A = 0.1
e = 0.25 # epsilon
w = 1 # omega
return doublegyre(X[0,:], X[1,:], t, A, e, w)
# 4th order Runge-Kutta integrator
# X0 is a two-component vector [x, y]
def rk4(X, t, dt, f):
k1 = f(X, t)
k2 = f(X + k1*dt/2, t + dt/2)
k3 = f(X + k2*dt/2, t + dt/2)
k4 = f(X + k3*dt, t + dt)
return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6
# Function to calculate a trajectory from an
# initial position X0 at t = 0, moving forward
# until t = tmax, using the given timestep and
# integrator
def trajectory(X0, tmax, dt, integrator, f):
t = 0
# Number of timesteps
Nt = int(tmax / dt)
# Array to hold the entire trajectory
PX = np.zeros((*X0.shape, Nt+1))
# Initial position
PX[:,:,0] = X0
# Loop over all timesteps
for i in range(1, Nt+1):
PX[:,:,i] = integrator(PX[:,:,i-1], t, dt, f)
t += dt
# Return entire trajectory
return PX
N = 10000
X0 = grid_of_particles(N, w = 0.1)
# Array to hold all grid points after transport
X1 = np.zeros((2, N))
# Transport parameters
tmax = 5.0
dt = 0.5
# Update all positions at once with array operations.
# This replaces the explicit loop, and is also where
# parallelisation would happen, since each position is
# independent of all the others
tic = time()
# Keep only the last position, not the entire trajectory
X1 = trajectory(X0, tmax, dt, rk4, f)[:,:,-1]
toc = time()
print('Transport took %.3f seconds' % (toc - tic))
# Make scatter plot to show all grid points
fig = plt.figure(figsize = (12,6))
plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
"""
Explanation: Optimization - Step 2: NumPy array operations
End of explanation
"""
# Implementation of Eq. (1) in the exam set
def doublegyre(x, y, t, A, e, w):
a = e * np.sin(w*t)
b = 1 - 2*e*np.sin(w*t)
f = a*x**2 + b*x
return np.array([
-np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y), # x component of velocity
np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b) # y component of velocity
])
# Wrapper function to pass to integrator
# X0 is a two-component vector [x, y]
def f(X, t):
# Parameters of the velocity field
A = 0.1
e = 0.25 # epsilon
w = 1 # omega
return doublegyre(X[0,:], X[1,:], t, A, e, w)
# 4th order Runge-Kutta integrator
# X0 is a two-component vector [x, y]
def rk4(X, t, dt, f):
k1 = f(X, t)
k2 = f(X + k1*dt/2, t + dt/2)
k3 = f(X + k2*dt/2, t + dt/2)
k4 = f(X + k3*dt, t + dt)
return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6
# Function to calculate a trajectory from an
# initial position X0 at t = 0, moving forward
# until t = tmax, using the given timestep and
# integrator
def trajectory(X, tmax, dt, integrator, f):
t = 0
# Number of timesteps
Nt = int(tmax / dt)
# Loop over all timesteps
for i in range(1, Nt+1):
X = integrator(X, t, dt, f)
t += dt
# Return entire trajectory
return X
N = 10000
X0 = grid_of_particles(N, w = 0.1)
# Array to hold all grid points after transport
X1 = np.zeros((2, N))
# Transport parameters
tmax = 5.0
dt = 0.5
# Update all positions at once with array operations.
# This replaces the explicit loop, and is also where
# parallelisation would happen, since each position is
# independent of all the others
tic = time()
# Keep only the last position, not the entire trajectory
X1 = trajectory(X0, tmax, dt, rk4, f)
toc = time()
print('Transport took %.3f seconds' % (toc - tic))
# Make scatter plot to show all grid points
fig = plt.figure(figsize = (12,6))
plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
"""
Explanation: Optimization - Step 3: Rewrite trajectory function to only return final location
End of explanation
"""
# Implementation of Eq. (1) in the exam set
@jit(UniTuple(float64[:], 2)(float64[:], float64[:], float64, float64, float64, float64), nopython = True)
def doublegyre(x, y, t, A, e, w):
a = e * np.sin(w*t)
b = 1 - 2*e*np.sin(w*t)
f = a*x**2 + b*x
return -np.pi*A*np.sin(np.pi*f) * np.cos(np.pi*y), np.pi*A*np.cos(np.pi*f) * np.sin(np.pi*y) * (2*a*x + b)
# Wrapper function to pass to integrator
# X0 is a two-component vector [x, y]
@jit(nopython = True)
def f(X, t):
# Parameters of the velocity field
A = np.float64(0.1)
e = np.float64(0.25) # epsilon
w = np.float64(1.0) # omega
v = np.zeros(X.shape)
v[0,:], v[1,:] = doublegyre(X[0,:], X[1,:], t, A, e, w)
return v
# 4th order Runge-Kutta integrator
# X0 is a two-component vector [x, y]
@jit(nopython = True)
def rk4(X, t, dt):
k1 = f(X, t)
k2 = f(X + k1*dt/2, t + dt/2)
k3 = f(X + k2*dt/2, t + dt/2)
k4 = f(X + k3*dt, t + dt)
return X + dt*(k1 + 2*k2 + 2*k3 + k4) / 6
# Function to calculate a trajectory from an
# initial position X0 at t = 0, moving forward
# until t = tmax, using the given timestep and
# integrator
@jit(nopython = True)
def trajectory(X, tmax, dt):
t = 0
# Number of timesteps
Nt = int(tmax / dt)
# Loop over all timesteps
for i in range(1, Nt+1):
X = rk4(X, t, dt)
t += dt
# Return entire trajectory
return X
N = 10000
X0 = grid_of_particles(N, w = 0.1)
# Array to hold all grid points after transport
X1 = np.zeros((2, N))
# Transport parameters
tmax = 5.0
dt = 0.5
# Update all positions at once with array operations.
# This replaces the explicit loop, and is also where
# parallelisation would happen, since each position is
# independent of all the others
tic = time()
# Keep only the last position, not the entire trajectory
X1 = trajectory(X0, tmax, dt)
toc = time()
print('Transport took %.3f seconds' % (toc - tic))
# Make scatter plot to show all grid points
fig = plt.figure(figsize = (12,6))
plt.scatter(X1[0,:], X1[1,:], lw = 0, marker = '.', s = 1)
plt.xlim(0, 2)
plt.ylim(0, 1)
"""
Explanation: Optimization - Step 4: Just-in-time compilation with Numba
End of explanation
"""
|
leopardbruce/FileFun | Course_2_Part_2_Lesson_3_Notebook.ipynb | mit | !wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
-O /tmp/horse-or-human.zip
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
"""
Explanation: <a href="https://colab.research.google.com/github/leopardbruce/TensorflowforAI-ML-DL/blob/master/Course_2_Part_2_Lesson_3_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
"""
import os
import zipfile
local_zip = '/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/horse-or-human')
local_zip = '/tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation-horse-or-human')
zip_ref.close()
"""
Explanation: The following Python code uses the os library to access the file system, and the zipfile library to unzip the data.
End of explanation
"""
# Directory with our training horse pictures
train_horse_dir = os.path.join('/tmp/horse-or-human/horses')
# Directory with our training human pictures
train_human_dir = os.path.join('/tmp/horse-or-human/humans')
# Directory with our validation horse pictures
validation_horse_dir = os.path.join('/tmp/validation-horse-or-human/horses')
# Directory with our validation human pictures
validation_human_dir = os.path.join('/tmp/validation-horse-or-human/humans')
"""
Explanation: The contents of the .zip are extracted to the base directory /tmp/horse-or-human, which in turn each contain horses and humans subdirectories.
In short: The training set is the data that is used to tell the neural network model that 'this is what a horse looks like', 'this is what a human looks like' etc.
One thing to pay attention to in this sample: We do not explicitly label the images as horses or humans. If you remember with the handwriting example earlier, we had labelled 'this is a 1', 'this is a 7' etc. Later you'll see something called an ImageGenerator being used -- and this is coded to read images from subdirectories, and automatically label them from the name of that subdirectory. So, for example, you will have a 'training' directory containing a 'horses' directory and a 'humans' one. ImageGenerator will label the images appropriately for you, reducing a coding step.
Let's define each of these directories:
End of explanation
"""
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
validation_horse_names = os.listdir(validation_horse_dir)
print(validation_horse_names[:10])
validation_human_names = os.listdir(validation_human_dir)
print(validation_human_names[:10])
"""
Explanation: Now, let's see what the filenames look like in the horses and humans training directories:
End of explanation
"""
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
print('total validation horse images:', len(os.listdir(validation_horse_dir)))
print('total validation human images:', len(os.listdir(validation_human_dir)))
"""
Explanation: Let's find out the total number of horse and human images in the directories:
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4
# Index for iterating over images
pic_index = 0
"""
Explanation: Now let's take a look at a few pictures to get a better sense of what they look like. First, configure the matplotlib parameters:
End of explanation
"""
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix+next_human_pix):
# Set up subplot; subplot indices start at 1
sp = plt.subplot(nrows, ncols, i + 1)
sp.axis('Off') # Don't show axes (or gridlines)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.show()
"""
Explanation: Now, display a batch of 8 horse and 8 human pictures. You can rerun the cell to see a fresh batch each time:
End of explanation
"""
import tensorflow as tf
"""
Explanation: Building a Small Model from Scratch
But before we continue, let's start defining the model:
Step 1 will be to import tensorflow.
End of explanation
"""
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
"""
Explanation: We then add convolutional layers as in the previous example, and flatten the final result to feed into the densely connected layers.
Finally we add the densely connected layers.
Note that because we are facing a two-class classification problem, i.e. a binary classification problem, we will end our network with a sigmoid activation, so that the output of our network will be a single scalar between 0 and 1, encoding the probability that the current image is class 1 (as opposed to class 0).
End of explanation
"""
model.summary()
"""
Explanation: The model.summary() method call prints a summary of the NN
End of explanation
"""
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
"""
Explanation: The "output shape" column shows how the size of your feature map evolves in each successive layer. The convolution layers reduce the size of the feature maps by a bit due to padding, and each pooling layer halves the dimensions.
Next, we'll configure the specifications for model training. We will train our model with the binary_crossentropy loss, because it's a binary classification problem and our final activation is a sigmoid. (For a refresher on loss metrics, see the Machine Learning Crash Course.) We will use the rmsprop optimizer with a learning rate of 0.001. During training, we will want to monitor classification accuracy.
NOTE: In this case, using the RMSprop optimization algorithm is preferable to stochastic gradient descent (SGD), because RMSprop automates learning-rate tuning for us. (Other optimizers, such as Adam and Adagrad, also automatically adapt the learning rate during training, and would work equally well here.)
End of explanation
"""
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tmp/horse-or-human/', # This is the source directory for training images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 32 using validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
        '/tmp/validation-horse-or-human/',  # This is the source directory for validation images
        target_size=(300, 300),  # All images will be resized to 300x300
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
"""
Explanation: Data Preprocessing
Let's set up data generators that will read pictures in our source folders, convert them to float32 tensors, and feed them (with their labels) to our network. We'll have one generator for the training images and one for the validation images. Our generators will yield batches of images of size 300x300 and their labels (binary).
As you may already know, data that goes into neural networks should usually be normalized in some way to make it more amenable to processing by the network. (It is uncommon to feed raw pixels into a convnet.) In our case, we will preprocess our images by normalizing the pixel values to be in the [0, 1] range (originally all values are in the [0, 255] range).
In Keras this can be done via the keras.preprocessing.image.ImageDataGenerator class using the rescale parameter. This ImageDataGenerator class allows you to instantiate generators of augmented image batches (and their labels) via .flow(data, labels) or .flow_from_directory(directory). These generators can then be used with the Keras model methods that accept data generators as inputs: fit_generator, evaluate_generator, and predict_generator.
End of explanation
"""
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
"""
Explanation: Training
Let's train for 15 epochs -- this may take a few minutes to run.
Do note the values per epoch.
The loss and accuracy are great indicators of training progress: the model makes a guess at the classification of the training data and measures it against the known labels to calculate the loss. Accuracy is the fraction of correct guesses.
End of explanation
"""
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(300, 300))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a human")
else:
print(fn + " is a horse")
"""
Explanation: Running the Model
Let's now take a look at actually running a prediction using the model. This code lets you choose one or more files from your file system; it then uploads them and runs them through the model, indicating whether each image is a horse or a human.
End of explanation
"""
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
img = load_img(img_path, target_size=(300, 300)) # this is a PIL image
x = img_to_array(img)  # Numpy array with shape (300, 300, 3)
x = x.reshape((1,) + x.shape)  # Numpy array with shape (1, 300, 300, 3)
# Rescale by 1/255
x /= 255
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers, so we can have them as part of our plot
layer_names = [layer.name for layer in model.layers]
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
if len(feature_map.shape) == 4:
# Just do this for the conv / maxpool layers, not the fully-connected layers
n_features = feature_map.shape[-1] # number of features in feature map
# The feature map has shape (1, size, size, n_features)
size = feature_map.shape[1]
# We will tile our images in this matrix
display_grid = np.zeros((size, size * n_features))
for i in range(n_features):
# Postprocess the feature to make it visually palatable
x = feature_map[0, :, :, i]
x -= x.mean()
x /= x.std()
x *= 64
x += 128
x = np.clip(x, 0, 255).astype('uint8')
# We'll tile each filter into this big horizontal grid
display_grid[:, i * size : (i + 1) * size] = x
# Display the grid
scale = 20. / n_features
plt.figure(figsize=(scale * n_features, scale))
plt.title(layer_name)
plt.grid(False)
plt.imshow(display_grid, aspect='auto', cmap='viridis')
"""
Explanation: Visualizing Intermediate Representations
To get a feel for what kind of features our convnet has learned, one fun thing to do is to visualize how an input gets transformed as it goes through the convnet.
Let's pick a random image from the training set, and then generate a figure where each row is the output of a layer, and each image in the row is a specific filter in that output feature map. Rerun this cell to generate intermediate representations for a variety of training images.
End of explanation
"""
import os, signal
os.kill(os.getpid(), signal.SIGKILL)
"""
Explanation: As you can see, we go from the raw pixels of the images to increasingly abstract and compact representations. The representations downstream start highlighting what the network pays attention to, and they show fewer and fewer features being "activated"; most are set to zero. This is called "sparsity." Representation sparsity is a key feature of deep learning.
These representations carry increasingly less information about the original pixels of the image, but increasingly refined information about the class of the image. You can think of a convnet (or a deep network in general) as an information distillation pipeline.
Clean Up
Before running the next exercise, run the following cell to terminate the kernel and free memory resources:
End of explanation
"""
# Source: folivetti/BIGDATA, Spark/.ipynb_checkpoints/Lab04-Resposta-checkpoint.ipynb (MIT license)
import os
folivetti/BIGDATA | Spark/.ipynb_checkpoints/Lab04-Resposta-checkpoint.ipynb | mit | import os
import numpy as np
def parseRDD(point):
""" Parser for the current dataset. It receives a data point and return
a sentence (third field).
Args:
point (str): input data point
Returns:
str: a string
"""
data = point.split('\t')
return (int(data[0]),data[2])
def notempty(point):
""" Returns whether the point string is not empty
Args:
point (str): input string
Returns:
bool: True if it is not empty
"""
return len(point[1])>0
filename = os.path.join("Data","MovieReviews2.tsv")
rawRDD = sc.textFile(filename,100)
header = rawRDD.take(1)[0]
dataRDD = (rawRDD
#.sample(False, 0.1, seed=42)
.filter(lambda x: x!=header)
.map(parseRDD)
.filter(notempty)
#.sample( False, 0.1, 42 )
)
print ('Read {} lines'.format(dataRDD.count()))
print ('Sample line: {}'.format(dataRDD.takeSample(False, 1)[0]))
"""
Explanation: Lab 5b - k-Means for Feature Quantization
Clustering algorithms, besides being used in exploratory analysis to extract similarity patterns among objects, can also be used to compress the data space.
In this notebook we will use our Sentiment Movie Reviews dataset for the experiments. First we will apply the word2vec technique, which learns a mapping from the tokens of a corpus to a feature vector. Then we will use the k-Means algorithm to compress the information in these features and project each object onto a fixed-size feature space.
The exercise cells start with the comment # EXERCICIO and the code to be completed is marked with the comments <COMPLETAR>.
In this notebook:
Part 1: Word2Vec
Part 2: k-Means for feature quantization
Part 3: Applying a k-NN
Part 0: Preliminaries
For this notebook we will use the Movie Reviews dataset, which will also be used in the second project.
The dataset has fields separated by '\t' and the following format:
"phrase id","sentence id","Phrase","Sentiment"
For this lab we will use only the "Phrase" field.
End of explanation
"""
# EXERCICIO
import re
split_regex = r'\W+'
stopfile = os.path.join("Data","stopwords.txt")
stopwords = set(sc.textFile(stopfile).collect())
def tokenize(string):
""" An implementation of input string tokenization that excludes stopwords
Args:
string (str): input string
Returns:
list: a list of tokens without stopwords
"""
str_list = re.split(split_regex, string)
str_list = filter(lambda w: len(w)>0, map(lambda w: w.lower(), str_list))
return [w for w in str_list if w not in stopwords]
wordsRDD = dataRDD.map(lambda x: tokenize(x[1]))
print (wordsRDD.take(1)[0])
# TEST Tokenize a String (1a)
assert wordsRDD.take(1)[0]==[u'quiet', u'introspective', u'entertaining', u'independent', u'worth', u'seeking'], 'lista incorreta!'
"""
Explanation: Part 1: Word2Vec
The word2vec technique uses a semantic neural network to learn a vector representation of each token in a corpus, such that semantically similar words have similar vector representations.
PySpark includes an implementation of this technique; to apply it, simply pass an RDD in which each object represents a document and each document is represented by a list of tokens in the order in which they originally appear in the corpus. After training, we can use the transform method to convert each token into its vector representation.
At this point, each object in our dataset will be represented by a variable-size matrix.
(1a) Generating an RDD of tokens
Use the tokenize function from Lab4d to generate an RDD wordsRDD containing lists of tokens from our original dataset.
End of explanation
"""
# EXERCICIO
from pyspark.mllib.feature import Word2Vec
model = (Word2Vec()
.setVectorSize(5)
.setSeed(42)
.fit(wordsRDD))
print (model.transform(u'entertaining'))
print (list(model.findSynonyms(u'entertaining', 2)))
dist = np.abs(model.transform(u'entertaining')-np.array([0.0136831374839,0.00371457682922,-0.135785803199,0.047585401684,0.0414853096008])).mean()
assert dist<1e-6, 'valores incorretos'
assert list(model.findSynonyms(u'entertaining', 1))[0][0] == 'god', 'valores incorretos'
"""
Explanation: (1b) Applying the word2vec transformation
Create a word2vec model by applying the fit method to the RDD created in the previous exercise.
To do so, chain the calls into a pipeline: first call Word2Vec(), then setVectorSize() with the desired vector size (use 5), then setSeed() with the random seed for reproducible experiments (we will use 42), and finally fit() with our wordsRDD as the argument.
End of explanation
"""
# EXERCICIO
uniqueWords = (wordsRDD
.flatMap(lambda ws: [(w, 1) for w in ws])
.reduceByKey(lambda x,y: x+y)
.filter(lambda wf: wf[1]>=5)
.map(lambda wf: wf[0])
.collect()
)
print ('{} tokens únicos'.format(len(uniqueWords)))
w2v = {}
for w in uniqueWords:
w2v[w] = model.transform(w)
w2vb = sc.broadcast(w2v)
print ('Vetor entertaining: {}'.format( w2v[u'entertaining']))
vectorsRDD = (wordsRDD
.map(lambda ws: np.array([w2vb.value[w] for w in ws if w in w2vb.value]))
)
recs = vectorsRDD.take(2)
firstRec, secondRec = recs[0], recs[1]
print (firstRec.shape, secondRec.shape)
# TEST Tokenizing the small datasets (1c)
assert len(uniqueWords) == 3388, 'valor incorreto'
assert np.mean(np.abs(w2v[u'entertaining']-[0.0136831374839,0.00371457682922,-0.135785803199,0.047585401684,0.0414853096008]))<1e-6,'valor incorreto'
assert secondRec.shape == (10,5)
"""
Explanation: (1c) Generating an RDD of matrices
As a first step, we need to build a dictionary in which the keys are words and the values are the representative vectors of those words.
To do this, we first generate a list uniqueWords containing the unique words of the wordsRDD, removing those that appear fewer than 5 times $^1$. Next, we create a dictionary w2v in which each key is a token and each value is an np.array with the transformed vector of that token$^2$.
Finally, we create an RDD called vectorsRDD in which each record is represented by a matrix where each row is a transformed word.
1
In PySpark 1.3, the Word2Vec model only uses tokens that appear more than 5 times in the corpus; in version 1.4 this is parameterized.
2
In PySpark 1.4 this can be done with the getVectors() method.
End of explanation
"""
# EXERCICIO
from pyspark.mllib.clustering import KMeans
vectors2RDD = sc.parallelize(np.array(list(w2v.values())),1)
print ('Sample vector: {}'.format(vectors2RDD.take(1)))
modelK = KMeans.train(vectors2RDD, 200, seed=42)
clustersRDD = vectors2RDD.map(lambda x: modelK.predict(x))
print ('10 first clusters allocation: {}'.format(clustersRDD.take(10)))
# TEST Amazon record with the most tokens (1d)
assert clustersRDD.take(10)==[142, 83, 42, 0, 87, 52, 190, 17, 56, 0], 'valor incorreto'
"""
Explanation: Part 2: k-Means for feature quantization
At this point it is easy to see that we cannot apply our supervised learning techniques to this dataset as-is:
Logistic regression requires a fixed-size vector representing each object
k-NN requires a clear way to compare two objects; which similarity metric should we apply?
To resolve this, we will apply a new transformation to our RDD. First, we take advantage of the fact that two tokens with similar meanings are mapped to similar vectors, grouping them into a single feature.
By applying k-Means to this set of vectors, we can create $k$ representative points and, for each document, generate a histogram counting how many tokens fall into each generated cluster.
(2a) Clustering the vectors and creating representative centers
As a first step, we generate an RDD with the values of the w2v dictionary. Then we apply the k-Means algorithm with $k = 200$.
End of explanation
"""
# EXERCICIO
def quantizador(point, model, k, w2v):
key = point[0]
words = tokenize(point[1])
matrix = np.array( [w2v[w] for w in words if w in w2v] )
features = np.zeros(k)
for v in matrix:
c = model.predict(v)
features[c] += 1
return (key, features)
quantRDD = dataRDD.map(lambda x: quantizador(x, modelK, 200, w2v))
print (quantRDD.take(1))
# TEST Implement a TF function (2a)
assert quantRDD.take(1)[0][1].sum() == 5, 'valores incorretos'
"""
Explanation: (2b) Transforming the data matrix into quantized vectors
The next step is to transform our RDD of phrases into an RDD of pairs (id, quantized vector). To do this, we create a function quantizador that receives as parameters the data point, the k-Means model, the value of k, and the word2vec dictionary.
For each point, we separate the id and apply the tokenize function to the string. Then we transform the list of tokens into a word2vec matrix. Finally, we feed each row vector of this matrix to the k-Means model, producing a vector of size $k$ in which each position $i$ counts how many tokens belong to cluster $i$.
End of explanation
"""
# Source: aaronta/illinois, MachineLearning/scikit-learn/sklearn_overview.ipynb (BSD-3-Clause license)
import numpy as np
aaronta/illinois | MachineLearning/scikit-learn/sklearn_overview.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pylab as plt
%matplotlib inline
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
print(boston.DESCR)
print(boston.feature_names)
print(boston.data.dtype)
print(boston.target.dtype)
train = boston.data
test = boston.target
"""
Explanation: Machine Learning: A sklearn overview
<img src="http://blog.bidmotion.com/bidmotion/wp-content/uploads/sites/3/2016/06/supervised-workflow-machine-learning.png">
Continuous Target Data ($y\in \mathbb{R}$)
End of explanation
"""
from sklearn import linear_model
lin_reg = linear_model.LinearRegression()
model = lin_reg.fit(train, test)
print(model.score(train, test))
plt.plot(model.predict(train), test, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: Linear Regression
Linear regression fits the data to the following equation
$\hat{y}_i=\beta_0+\sum_j^p\beta_jx_{i,j}=\beta_0+\mathbf{x}^T\beta$
where $i=1,...,n$ and $p$ is the number of predictors
The error produced by this equation is
$\epsilon_i=\hat{y_i}-y_i$
For linear regression, the goal is to minimize the sum of the squared errors (least squares)
$\min\left|\sum_i^n\epsilon_i^2\right|$
End of explanation
"""
from sklearn import linear_model
ridge_reg = linear_model.Ridge(alpha=1.0, # regularization
normalize=True, # normalize X regressors
solver='auto') # options = ‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag'
model = ridge_reg.fit(train, test)
print(model.score(train, test))
plt.plot(model.predict(train), test, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: Ridge Regression
Sometimes there is an issue where the model over-fits the data. For example, the following figure shows a sine curve fit with a 3 and 9 degree polynomial
<img src="http://i.stack.imgur.com/cwAUJ.png">
When this happens, the solution is not always clear. One option is ridge regression, which can be used to suppress features of the model that cause overfitting. Ridge regression uses the same model as linear regression, but instead of plain least squares it attempts to minimize
$\min\left|\sum_i^n\epsilon_i^2 + \lambda\sum_j^p\beta_j^2\right|$
Clearly, compared to least squares, this penalizes the squared values of the coefficients. When this penalty is used, the previous figure has the following result
<img src="http://i.stack.imgur.com/ik06V.png">
In terms of using this in sklearn the following code can be used.
End of explanation
"""
from sklearn import linear_model
lasso_reg = linear_model.Lasso(alpha=.10, # regularization
normalize=True) # normalize X regressors
model = lasso_reg.fit(train, test)
print(model.score(train, test))
plt.plot(model.predict(train), test, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: The error rate actually decreased, because we built a simpler model.
Lasso Regression
A very similar method to ridge regression is lasso regression. The difference is that the function to minimize is
$\min\left|\sum_i^n\epsilon_i^2 + \lambda\sum_j^p|\beta_j|\right|$
Thus, lasso penalizes the absolute value of the coefficients. These two methods are commonly visualized as
<img src="https://jamesmccammondotcom.files.wordpress.com/2014/04/screen-shot-2014-04-19-at-11-19-00-pm.png?w=1200">
In sklearn this can be used as
End of explanation
"""
elastic_reg = linear_model.ElasticNet(alpha=1.0, # penalty, 0 is OLS
selection='cyclic') # or 'random', which converges faster
model = elastic_reg.fit(train, test)
print(model.score(train, test))
plt.plot(model.predict(train), test, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: Elastic Net
Another method to minimize overfitting is the elastic net method. This is a combination of lasso and ridge. Thus, this minimizes the following
$\min\left|\sum_i^n\epsilon_i^2 + \lambda_1\sum_j^p|\beta_j|+\lambda_2\sum_j^p\beta_j^2\right|$
This is in sklearn as
End of explanation
"""
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
model = make_pipeline(PolynomialFeatures(3), ridge_reg)
model.fit(train, test)
print(model.score(train, test))
plt.plot(model.predict(train), test, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: Polynomial Fits
While linear regression is nice, there is often a need for higher-order terms and cross terms, such as $x^2$ or $x_1x_2$. In this case, the following code can be used to add polynomial terms.
End of explanation
"""
from sklearn import ensemble
rf_reg = ensemble.RandomForestRegressor(n_estimators=10, # number of trees
criterion='mse', # how to measure fit
max_depth=None, # how deep tree nodes can go
min_samples_split=2, # samples needed to split node
min_samples_leaf=1, # samples needed for a leaf
min_weight_fraction_leaf=0.0, # weight of samples needed for a node
max_features='auto', # max feats
max_leaf_nodes=None, # max nodes
n_jobs=1) # how many to run parallel
model = rf_reg.fit(train, test)
print(model.score(train, test))
"""
Explanation: Random Forest
End of explanation
"""
ab_reg = ensemble.AdaBoostRegressor(base_estimator=None, # default is DT
n_estimators=50, # number to try before stopping
learning_rate=1.0, # decrease influence of each additional estimator
loss='linear') # also ‘square’, ‘exponential’
model = ab_reg.fit(train, test)
print(model.score(train, test))
"""
Explanation: Boosting
End of explanation
"""
n_samples = 100
train = np.random.normal(size=n_samples) # make random data set
target = (train > 0).astype(np.float) # make half the values true/1/yes
train[train > 0] *= 2
train += .3 * np.random.normal(size=n_samples) # create abiguity near the transition
train = np.reshape(train, [n_samples, 1])
target = np.reshape(target, [n_samples, 1])
plt.plot(train, target, 'o')
plt.ylim([-.1,1.1])
plt.xlabel('Predictor')
plt.ylabel('Target')
"""
Explanation: Discrete Target Data (classification problem)
Logistic Regression
For this type of problem the target data takes on discrete values; for example, the target may be 'yes' or 'no' for a disease, or equivalently '1' or '0'
End of explanation
"""
lin_reg = linear_model.LinearRegression()
model = lin_reg.fit(train, target)
plt.plot(train, target, 'o')
plt.plot(train, model.coef_*train + model.intercept_, 'r-', linewidth=2)
plt.ylim([-.1,1.1])
plt.xlabel('Predictor')
plt.ylabel('Target')
"""
Explanation: The data above shows that the predictor can estimate the target. However, if we fit the data with a linear fit, the solution is rather ambiguous.
End of explanation
"""
log_reg = linear_model.LogisticRegression()
target = np.reshape(target, [n_samples, ])
logistic_fit = log_reg.fit(train,target)
logistic_line = 1/(1+np.exp(-(logistic_fit.coef_*sorted(train)+logistic_fit.intercept_)))
plt.plot(train, target, 'o')
plt.plot(train, model.coef_*train + model.intercept_, 'r-', linewidth=2)
plt.plot(sorted(train), logistic_line, 'k-', linewidth=2)
plt.ylim([-.1,1.1])
plt.xlabel('Predictor')
plt.ylabel('Target')
plt.legend(('data', 'linear', 'logistic'), loc="lower right")
"""
Explanation: With a linear model, it is hard to decide what value of the predictor results in a '1' or a '0'. It is also hard to make sense of the negative target predictions.
So instead we can use logistic regression, which fits the following equation
$P(x) = \frac{1}{1+\exp(-\mathbf{x}^T\beta)}$
where the coefficients $\beta$ are fit by maximum likelihood. Here P(x) can be understood as the probability of predicting a '1'
End of explanation
"""
import sklearn.datasets
num_classes = 4
train, target = sklearn.datasets.make_classification(n_samples=100, n_features=2,
n_informative=2, n_redundant=0, n_repeated=0,
n_clusters_per_class=1, n_classes=num_classes)
for group in range(num_classes):
plt.plot(train[target==group][:,0],train[target==group][:,1], 'o')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
"""
Explanation: Logistic regression removed negative values and values greater than 1. Thresholding is also more intuitive now that we are predicting a probability: if we were doing disease prediction, we could raise or lower the threshold depending on how many false positives we can tolerate.
However, logistic regression becomes more complicated when there are more than two classes, even just three. One approach is to fit a one-vs-rest logistic regression for each class, but usually it is better to use other methods.
Multi-class dataset
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb_fit = gnb.fit(train, target)
gnb_pred = gnb_fit.predict(train)
print((gnb_pred!=target).sum()/100.)
for group in range(num_classes):
plt.plot(train[target==group][:,0],train[target==group][:,1], 'o')
plt.plot(train[gnb_pred!=target][:,0], train[gnb_pred!=target][:,1], 'kx', markersize = 10)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
side = np.linspace(-3,3,100)
X,Y = np.meshgrid(side,side)
X = np.reshape(X, [-1])
Y = np.reshape(Y, [-1])
points = np.column_stack((X,Y))
pred = gnb_fit.predict(points)
for i in range(num_classes):
plt.plot(points[pred==i][:,0],points[pred==i][:,1],'o')
"""
Explanation: Here I have plotted each of the classes as a different color.
Naive Bayes
The first method to approach this is naive Bayes, which got its name from using Bayes' theorem to predict the class.
The goal of the method is to find the class of a point based on the maximum of the probabilities
$P(y|\mathbf{x})$
which with bayes theorem becomes
$P(y|\mathbf{x}) = \frac{P(y)P(\mathbf{x}|y)}{P(\mathbf{x})}$
if we assume independence between the different features of x, then
$P(\mathbf{x}|y) = \prod_i^nP(x_i|y)$
Thus, we can make our prediction based on
$\hat{y} = \arg\max_y P(y)\prod_i^nP(x_i|y)$
And lastly, we assume a shape for each $P(x_i|y)$, which is often taken to be Gaussian.
End of explanation
"""
from sklearn import neighbors
nbrs = neighbors.KNeighborsClassifier(n_neighbors=1)
nbrs_fit = nbrs.fit(train, target)
nbrs_pred = nbrs_fit.predict(train)
print((nbrs_pred!=target).sum()/100.)
for group in range(num_classes):
plt.plot(train[target==group][:,0],train[target==group][:,1], 'o')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
pred = nbrs_fit.predict(points)
for i in range(num_classes):
plt.plot(points[pred==i][:,0],points[pred==i][:,1],'o')
"""
Explanation: The nice thing about this method is that it is fairly simple to execute and understand. However, it is only accurate when classes are well defined locally, which may not always be the case.
K-nearest neighbors
The next method that could be used is k-nearest neighbors, or knn. This method can also work very well for continuous data, but I think it is easier to visualize and explain as a classification model.
This method works on the idea that if you look at the k training data points nearest to the point x, you can determine its class from the most common class among those neighbors. Euclidean distance is often used for this, but other ways of choosing and weighing neighbors are possible, such as weighting each neighbor's vote by its distance.
If k == 1, this is equivalent to using a step function.
End of explanation
"""
from sklearn import neighbors
nbrs = neighbors.KNeighborsClassifier(n_neighbors=15)
nbrs_fit = nbrs.fit(train, target)
nbrs_pred = nbrs_fit.predict(train)
print((nbrs_pred!=target).sum()/100.)
pred = nbrs_fit.predict(points)
for i in range(num_classes):
plt.plot(points[pred==i][:,0],points[pred==i][:,1],'o')
"""
Explanation: However, using k = 1 overfits the data. We will try using 15 now. This will increase the number of mislabeled training points, but should decrease the test error rate.
End of explanation
"""
from sklearn import svm
svm_method = svm.SVC()
svm_fit = svm_method.fit(train, target)
for group in range(num_classes):
plt.plot(train[target==group][:,0],train[target==group][:,1], 'o')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
pred = svm_fit.predict(points)
for i in range(num_classes):
plt.plot(points[pred==i][:,0],points[pred==i][:,1],'o')
"""
Explanation: While k-nearest neighbors is a very powerful classification algorithm, it suffers from the curse of dimensionality. This method works great for the above example because we only have 2 features, but as the number of features grows large, the volume of the Euclidean space grows rapidly and the neighborhoods become sparse.
SVM (support vector machine)
A method that overcomes the dimensionality issue of knn and the overly simple assumptions of naive Bayes is the SVM, or support vector machine. It is probably one of the most popular machine learning algorithms. It can be used for both classification and continuous target problems. The disadvantage is its computational cost.
SVM can also be used for continuous target problems, but I feel it is easier to understand on a classification problem.
First, the idea behind SVM is to fit a hyperplane that divides the different classes. However, determining the best hyperplane can be complex as shown in the following figure
<img src="https://computersciencesource.files.wordpress.com/2010/01/svmafter.png">
Clearly, the decision is arbitrary. SVM thus tries to maximize the distance of the points to the plane by using the maximal margin classifier. However, this idea will only work when the points are separable by a plane. To correct for this, SVM was generalized with the kernel trick, which allows an arbitrary kernel K to be used. The effects of the kernel choice can be seen in the following image
<img src="http://scikit-learn.org/stable/_images/sphx_glr_plot_iris_0012.png">
This method has had great success. Using it in sklearn is similar to the previous methods. By default, sklearn's SVC uses the RBF kernel.
End of explanation
"""
from sklearn.model_selection import train_test_split
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target,
train_size=0.8, test_size=0.2)
"""
Explanation: This used the Gaussian (RBF) kernel.
Organizing Your Data
Train/test data sets
A crucial part of machine learning is obtaining a training data set to build the model and a testing data set to verify its success. The point of a test set that was not used in building the model is to detect overfitting: if the model is evaluated only against its training data, an overfit model can still look good. For example, if the model fits the training data perfectly, there is a good chance that its predictions on new data are poor.
<img src="https://upload.wikimedia.org/wikipedia/commons/6/68/Overfitted_Data.png">
The above figure shows how a high-order polynomial can overfit a simple line. Thus, it is important to split the data into train and test sets; a strong model will have a low error rate on the test data.
To do this with sklearn, there is a function to split data sets into train and test
End of explanation
"""
from sklearn.model_selection import cross_val_predict
lin_reg = linear_model.LinearRegression()
predicted_values = cross_val_predict(lin_reg, boston.data, boston.target, cv=5)
"""
Explanation: The above function splits the data into 80% train and 20% test data. Generally, a split of around 80:20 is recommended. Now, in all the previous models, the predict function can be used on the X_test data and the error can be measured against y_test. This method helps protect against overfitted models.
Cross-validation
One issue with the above method of splitting the data is that we did it randomly and blindly. This can lead to high variance in the models if the split accidentally places all of one class in the train set and all of a different class in the test set. Another issue is that by splitting the data 80:20 we have reduced the sample size of the data used to build our model. What if we want to keep the same sample size but also have a test set? Similarly, if the dataset is small to begin with, we may not want to waste training data.
One way to overcome both of these issues is to use cross-validation. In cross-validation, we split the data into train data and test data, but we do this multiple times so that every observation is used to build a model. We then obtain a final model by averaging the multiple models.
Several methods for this exist; one such method is called leave-one-out cross-validation. It splits the data into train and test sets where the test set contains only one case, and a model is created for each of the n possible train sets (each of size n-1). This works well until n becomes very large.
Another common validation method is k-fold cross-validation, which splits the data set into k even groups and builds k models, with each group used as the test set exactly once. The final model is then averaged. Using cross-validation helps greatly with the bias-variance trade-off in machine learning.
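The mechanics of the k-fold split can be sketched in plain numpy (an illustrative helper, not part of sklearn): the indices are partitioned into k groups and the held-out group rotates.

```python
import numpy as np

def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

for train_idx, test_idx in kfold_indices(10, 5):
    print(len(train_idx), len(test_idx))  # 8 train and 2 test indices per fold
```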
To do cross-validation in sklearn, the model is first chosen and then passed to the following method
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA(n_components=None) #returns the same number of features as the original set
pca_fit = pca.fit(boston.data)
print(pca.explained_variance_ratio_)
"""
Explanation: The previous code uses 5-fold cross-validation to predict the values for the boston data with a linear model.
PCA
Sometimes there are too many features to describe the target: maybe some features are duplicates or combinations of other features, or maybe some features just don't correlate well with the target. In these cases it is good to reduce the feature space, because the lower the dimension, the easier it is to create a model and the easier it is for us to comprehend with our simple brains.
A common way to reduce the feature space is to use principal component analysis. PCA computes the directions that best describe the data. Here is a 2-D example
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f5/GaussianScatterPCA.svg/720px-GaussianScatterPCA.svg.png">
This shows that PCA changes from x, y to vectors that better describe the data. In sklearn, PCA is computed by SVD and can be used with the following commands
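Under the hood, this amounts to centering the data and taking its singular value decomposition — a rough numpy sketch (the synthetic data below is just for illustration):

```python
import numpy as np

# Synthetic data with one dominant direction
rng = np.random.RandomState(0)
X = rng.randn(100, 3) * np.array([2.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)                         # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance_ratio = S**2 / np.sum(S**2)  # like pca.explained_variance_ratio_
components = Vt                                 # rows are the principal directions
print(explained_variance_ratio)
```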
End of explanation
"""
pca = PCA(n_components=3) #returns 3 componenets
pca_fit = pca.fit(boston.data)
print(pca.explained_variance_ratio_)
"""
Explanation: The printed values show the fraction of the variance explained by each principal component. Thus, we can reduce the feature space by, say, only keeping the principal components that explain more than 90% of the variance, which means we would only keep the first two principal components here.
End of explanation
"""
new_boston = pca.fit_transform(boston.data)
"""
Explanation: Now we can transform the data to use in the models
End of explanation
"""
lin_reg = linear_model.LinearRegression()
model = lin_reg.fit(new_boston, boston.target)
print(model.score(new_boston, boston.target))
plt.plot(model.predict(new_boston), boston.target, 'ro')
plt.plot([0,50],[0,50], 'k', linewidth = 2)
plt.xlabel('Predicted Value')
plt.ylabel('True Value')
plt.axis('square')
plt.ylim([-10, 55])
plt.xlim([-10, 55])
"""
Explanation: Now we can use this in our models. With the reduced feature space, building the models will be far simpler.
End of explanation
"""
|
emptymalei/emptymalei.github.io | _til/programming/assets/programming/python_list_comprehensions.ipynb | mit | list_with_for_loop = [x for x in range(10)]
print list_with_for_loop
"""
Explanation: Python List Comprehensions
Notes for the article Python List Comprehensions: Explained Visually by Trey Hunner
Making a List
Integrated with loops
End of explanation
"""
list_with_for_loop_conditional = [x for x in range(10) if x%2 == 1]
print list_with_for_loop_conditional
"""
Explanation: Even with conditions in the for loop
End of explanation
"""
list_with_nested_loops = [ [x, y] for x in range(3) for y in range(3) ]
print list_with_nested_loops
"""
Explanation: Nested loops in a list
End of explanation
"""
list_with_nested_loops_2 = [ x for x in range(y) for y in range(3)]
print list_with_nested_loops_2
"""
Explanation: Another example of nested loops
End of explanation
"""
matrix = [[11,12],[21,22]]
row = [1,2]
wrong_flatten_of_matrix = [x for x in row for row in matrix]
print "matrix is", matrix
print "flattened matrix is", wrong_flatten_of_matrix
"""
Explanation: The article gives an example of how to flatten a matrix using this trick. Semantically, one might naively write
End of explanation
"""
right_flatten_of_matrix = [x for row in matrix for x in row]
print "matrix is", matrix
print "flattened matrix is", right_flatten_of_matrix
"""
Explanation: which is obviously WRONG. The correct code is given by the author as
End of explanation
"""
right_flatten_of_matrix_line_breaking = [
x
for row in matrix
for x in row
]
print "matrix is", matrix
print "flattened matrix is", right_flatten_of_matrix_line_breaking
"""
Explanation: The key is to write the nested loops in the list in the same order as normal nested loops.
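For reference, the comprehension is equivalent to the explicit loops below, read top to bottom:

```python
matrix = [[11, 12], [21, 22]]

flattened = []
for row in matrix:      # outer loop comes first in the comprehension
    for x in row:       # inner loop comes second
        flattened.append(x)

# flattened is now the same as [x for row in matrix for x in row]
```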
Given this possible confusion, the author proposes a line-breaking style
End of explanation
"""
|
tuanavu/coursera-university-of-washington | machine_learning/4_clustering_and_retrieval/lecture/week3/.ipynb_checkpoints/quiz-Decision Trees-checkpoint.ipynb | mit | import graphlab
graphlab.canvas.set_target('ipynb')
x = graphlab.SFrame({'x1':[1,0,1,0],'x2':['1','1','0','0'],'x3':['1','0','1','1'],'y':['1','-1','-1','1']})
x
features = ['x1','x2','x3']
target = 'y'
decision_tree_model = graphlab.decision_tree_classifier.create(x, validation_set=None,
target = target, features = features)
"""
Explanation: Decision Trees
Question 1
<img src="images/lec3_quiz01.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
We pick feature x3 because it has the lowest classification error.
Only row 3 (x3 = 1, y = -1) is misclassified: 1 error, compared to 2 errors for each of the other features.
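As a plain-Python sketch (independent of graphlab), the classification error of each candidate split on this toy data can be counted by predicting the majority class in each branch:

```python
# The four training rows from the quiz
data = [
    {'x1': 1, 'x2': 1, 'x3': 1, 'y':  1},
    {'x1': 0, 'x2': 1, 'x3': 0, 'y': -1},
    {'x1': 1, 'x2': 0, 'x3': 1, 'y': -1},
    {'x1': 0, 'x2': 0, 'x3': 1, 'y':  1},
]

def split_error(feature):
    """Errors made when splitting on `feature` and predicting
    the majority class within each branch."""
    errors = 0
    for value in (0, 1):
        branch = [row['y'] for row in data if row[feature] == value]
        if branch:
            majority = max(set(branch), key=branch.count)
            errors += sum(1 for y in branch if y != majority)
    return errors

errors = {f: split_error(f) for f in ('x1', 'x2', 'x3')}
print(errors)  # x3 gives the fewest errors
```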
Question 2
<img src="images/lec3_quiz02.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
End of explanation
"""
decision_tree_model.show()
decision_tree_model.show(view="Tree")
"""
Explanation: The best feature to split on first is x3
In the tree below you will see that, starting from x3 = 1, the depth of the tree is 3.
End of explanation
"""
# Accuracy
print decision_tree_model.evaluate(x)['accuracy']
"""
Explanation: Question 3
<img src="images/lec3_quiz03.png">
Screenshot taken from Coursera
<!--TEASER_END-->
End of explanation
"""
|
arnaldog12/Manual-Pratico-Deep-Learning | Neurônio Sigmoid.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
"""
Explanation: Contents
Introduction
Cost Function
Logistic Regression
Exercises
References
Imports and Settings
End of explanation
"""
df = pd.read_csv('data/anuncios.csv')
print(df.shape)
df.head()
x, y = df.idade.values.reshape(-1,1), df.comprou.values.reshape(-1,1)
print(x.shape, y.shape)
plt.scatter(x, y, c=y, cmap='bwr')
plt.xlabel('idade')
plt.ylabel('comprou?')
minmax = MinMaxScaler(feature_range=(-1,1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(), x.max())
"""
Explanation: Introdução
A Regressão Logística, apesar do nome, é uma técnica utilizada para fazer classificação binária. Nesse caso, ao invés de prever um valor contínuo, a nossa saída é composta de apenas dois valores: 0 ou 1, em geral. Para fazer a regressão logística, utilizamos como função de ativação a função conhecida como sigmoid. Tal função, é descrita pela seguinte fórmula:
$$\widehat{y} = \frac{1}{1+e^{-z}} = \frac{e^z}{1+e^z}$$
No caso de redes neurais, em geral consideramos $z(w,b) = xw^T + b$.
Função de Custo
A função de custo da regressão logística é chamada de entropia cruzada (do inglês, cross-entropy) e é definida pela seguinte fórmula:
$$J(z) = -\frac{1}{N}\sum_{i}^N y_i\log(\widehat{y}_i) + (1-y_i)\log(1-\widehat{y}_i)$$
Onde $N$ é quantidade de amostras e $y_i$ representa o valor da $i$-ésima amostra (0 ou 1). Lembrando que $\widehat{y}_i$ é agora calculada agora utilizando a função sigmoid, como mostrado na seção anterior.
Repare também que:
quando $y_i = 0$, o primeiro termo anula-se (pois $y_i = 0$). Logo, vamos considerar os dois casos extremos para $\widehat{y}_i = 0$ no segundo termo da equação ($(1-y_i)\log(1-\widehat{y}_i)$):
quando $\widehat{y}_i = 0$, temos que o $\log(1-\widehat{y}_i) = \log(1) = 0$. Logo, o nosso custo $J = 0$. Repare que isso faz todo sentido, pois $y_i = 0$ e $\widehat{y}_i = 0$.
quando $\widehat{y}_i = 1$, temos que o $\log(1-\widehat{y}_i) = \log(0) = \infty$. Agora, o nosso custo $J = \infty$. Ou seja, quanto mais diferente são $y_i$ e $\widehat{y}_i$, maior o nosso custo.
quando $y_i = 1$, o segundo termo anula-se (pois $(1-y_i) = 0$). Novamente, vamos considerar os dois casos extremos para $\widehat{y}_i = 0$, só que agora no primeiro termo da equação ($y_i\log(\widehat{y}_i)$):
quando $\widehat{y}_i = 0$, temos que o $\log(\widehat{y}_i) = \infty$. Logo, o nosso custo $J = \infty$. Novamente, como $y_i$ e $\widehat{y}_i$ são bem diferentes, o custo tende a aumentar.
quando $\widehat{y}_i = 1$, temos que $\log(\widehat{y}_i) = \log(1) = 0$. Agora, o nosso custo $J = 0$. Novamente, isso faz todo sentido, pois $y_i = 1$ e $\widehat{y}_i = 1$.
Derivada da Cross-Entropy
Para calcular a derivada da nossa função de custo $J(z)$, primeiramente vamos calcular $\log(\widehat{y}_i)$:
$$\log(\widehat{y}_i) = log\frac{1}{1+e^{-z}} = log(1) - log(1+e^{-z}) = -log(1+e^{-z})$$
E $\log(1-\widehat{y}_i)$:
$$\log(1-\widehat{y}_i) = log \left(1-\frac{1}{1+e^{-z}}\right) = log(e^{-z}) - log(1+e^{-z}) = -z -log(1+e^{-z})$$
Substituindo as duas equações anteriores na fórmula da função de custo, temos:
$$J(z) = -\frac{1}{N}\sum_{i}^N \left[-y_i\log(1+e^{-z}) + (1-y_i)(-z -\log(1+e^{-z}))\right]$$
Efetuando as distribuições, podemos simplificar a equação acima para:
$$J(z) = -\frac{1}{N}\sum_{i}^N \left[y_iz -z -\log(1+e^{-z})\right]$$
Uma vez que:
$$-z -\log(1+e^{-z}) = -\left[\log e^{z} + log(1+e^{-z})\right] = -log(1+e^z)$$
Temos:
$$J(z) = -\frac{1}{N}\sum_{i}^N \left[y_iz -\log(1+e^z)\right]$$
Como a derivada da diferença é igual a diferença das derivadas, podemos calcular cada derivada individualmente em relação a $w$:
$$\frac{\partial}{\partial w_i}y_iz = y_ix_i,\quad \frac{\partial}{\partial w_i}\log(1+e^z) = \frac{x_ie^z}{1+e^z} = x_i \widehat{y}_i$$
e em relação à $b$:
$$\frac{\partial}{\partial b}y_iz = y_i,\quad \frac{\partial}{\partial b}\log(1+e^z) = \frac{e^z}{1+e^z} = \widehat{y}_i$$
Assim, a derivada da nossa função de custo $J(z)$ é:
$$\frac{\partial}{\partial w_i}J(z) = \sum_i^N (y_i - \widehat{y}_i)x_i$$
$$\frac{\partial}{\partial b}J(z) = \sum_i^N (y_i - \widehat{y}_i)$$
Por fim, repare que o gradiente de J ($\nabla J$) é exatamente o mesmo que o gradiente da função de custo do Perceptron Linear. Portanto, os pesos serão atualizados da mesma maneira. O que muda é a forma como calculamos $\widehat{y}$ (agora usando a função sigmoid) e a função de custo $J$.
Regressão Logística
End of explanation
"""
clf_sk = LogisticRegression(C=1e15)
clf_sk.fit(x, y.ravel())
print(clf_sk.coef_, clf_sk.intercept_)
print(clf_sk.score(x, y))
x_test = np.linspace(x.min(), x.max(), 100).reshape(-1,1)
y_sk = clf_sk.predict_proba(x_test)
plt.scatter(x, y, c=y, cmap='bwr')
plt.plot(x_test, y_sk[:,1], color='black')
plt.xlabel('idade')
plt.ylabel('comprou?')
# implement the sigmoid function here
"""
Explanation: We will use sklearn as the reference for our implementation. However, since sklearn's Logistic Regression applies L2 regularization automatically, we have to set $C=10^{15}$ to effectively "disable" the regularization. The parameter $C$ is the inverse of the regularization strength (see the documentation). Hence, the smaller $C$ is, the stronger the regularization and the smaller the values of the weights and bias.
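For reference, a minimal sigmoid implementation in numpy (a sketch of one possible answer to the exercise cell above):

```python
import numpy as np

def sigmoid(z):
    # 1 / (1 + e^(-z)), applied element-wise
    return 1.0 / (1.0 + np.exp(-z))
```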
End of explanation
"""
# implement the sigmoid neuron here
x_test = np.linspace(x.min(), x.max(), 100).reshape(-1,1)
y_sk = clf_sk.predict_proba(x_test)
y_pred = sigmoid(np.dot(x_test, w.T) + b)
plt.scatter(x, y, c=y, cmap='bwr')
plt.plot(x_test, y_sk[:,1], color='black', linewidth=7.0)
plt.plot(x_test, y_pred, color='yellow')
plt.xlabel('idade')
plt.ylabel('comprou?')
print('Acurácia pelo Scikit-learn: {:.2f}%'.format(clf_sk.score(x, y)*100))
y_pred = np.round(sigmoid(np.dot(x, w.T) + b))
print('Acurária pela nossa implementação: {:.2f}%'.format(accuracy_score(y, y_pred)*100))
"""
Explanation: Numpy
End of explanation
"""
x, y = df[['idade', 'salario']].values, df.comprou.values.reshape(-1,1)
print(x.shape, y.shape)
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(x[:,0], x[:,1], y, c=y.ravel())
minmax = MinMaxScaler(feature_range=(-1,1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(), x.max())
clf_sk = LogisticRegression(C=1e15)
clf_sk.fit(x, y.ravel())
print(clf_sk.coef_, clf_sk.intercept_)
print(clf_sk.score(x, y))
"""
Explanation: Exercises
End of explanation
"""
D = x.shape[1]
w = 2*np.random.random((1, D))-1 # [1x2]
b = 2*np.random.random()-1 # [1x1]
learning_rate = 1e-2 # <- try tuning the learning rate
for step in range(1001): # <- try tuning the number of epochs
    # compute the output of the sigmoid neuron (one possible solution)
    z = np.dot(x, w.T) + b
    y_pred = sigmoid(z)
    error = y - y_pred # [400x1]
    w = w + learning_rate*np.dot(error.T, x)
    b = b + learning_rate*error.sum()
    if step%100 == 0:
        # cross-entropy (1 line; one possible solution)
        cost = -np.mean(y*np.log(y_pred) + (1-y)*np.log(1-y_pred))
print('step {0}: {1}'.format(step, cost))
print('w: ', w)
print('b: ', b)
x1 = np.linspace(x[:, 0].min(), x[:, 0].max())
x2 = np.linspace(x[:, 1].min(), x[:, 1].max())
x1_mesh, x2_mesh = np.meshgrid(x1, x2)
x1_mesh = x1_mesh.reshape(-1, 1)
x2_mesh = x2_mesh.reshape(-1, 1)
x_mesh = np.hstack((x1_mesh, x2_mesh))
y_pred = sigmoid(np.dot(x_mesh, w.T) + b)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(x[:,0], x[:,1], y, c=y.ravel())
ax.plot_trisurf(x1_mesh.ravel(), x2_mesh.ravel(), y_pred.ravel(), alpha=0.3, shade=False)
print('Acurácia pelo Scikit-learn: {:.2f}%'.format(clf_sk.score(x, y)*100))
y_pred = np.round(sigmoid(np.dot(x, w.T) + b))
print('Acurária pela nossa implementação: {:.2f}%'.format(accuracy_score(y, y_pred)*100))
"""
Explanation: Numpy
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_soft/td1a_unit_test_ci.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
from pyensae.graphhelper import draw_diagram
"""
Explanation: 1A.soft - Tests unitaires, setup et ingéniérie logicielle
On vérifie toujours qu'un code fonctionne quand on l'écrit mais cela ne veut pas dire qu'il continuera à fonctionner à l'avenir. La robustesse d'un code vient de tout ce qu'on fait autour pour s'assurer qu'il continue d'exécuter correctement.
End of explanation
"""
draw_diagram("blockdiag { f0 -> f1 -> f3; f2 -> f3;}")
"""
Explanation: Petite histoire
Supposons que vous ayez implémenté trois fonctions qui dépendent les unes des autres. la fonction f3 utilise les fonctions f1 et f2.
End of explanation
"""
draw_diagram('blockdiag { f0 -> f1 -> f3; f2 -> f3; f2 -> f5 [color="red"]; f4 -> f5 [color="red"]; }')
"""
Explanation: Six mois plus tard, vous créez une fonction f5 qui appelle une fonction f4 et la fonction f2.
End of explanation
"""
def solve_polynom(a, b, c):
# ....
return None
"""
Explanation: Ah au fait, ce faisant, vous modifiez la fonction f2 et vous avez un peu oublié ce que faisait la fonction f3... Bref, vous ne savez pas si la fonction f3 sera impactée par la modification introduite dans la fonction f2 ? C'est ce type de problème qu'on rencontre tous les jours quand on écrit un logiciel à plusieurs et sur une longue durée. Ce notebook présente les briques classiques pour s'assurer de la robustesse d'un logiciel.
les tests unitaires
un logiciel de suivi de source
calcul de couverture
l'intégration continue
écrire un setup
écrire la documentation
publier sur PyPi
Ecrire une fonction
N'importe quel fonction qui fait un calcul, par exemple une fonction qui résoud une équation du second degré.
End of explanation
"""
from IPython.display import Image
try:
im = Image("https://travis-ci.com/sdpython/ensae_teaching_cs.png")
except TimeoutError:
im = None
im
from IPython.display import SVG
try:
im = SVG("https://codecov.io/github/sdpython/ensae_teaching_cs/coverage.svg")
except TimeoutError:
im = None
im
"""
Explanation: Ecrire un test unitaire
Un test unitaire est une fonction qui s'assure qu'une autre fonction retourne bien le résultat souhaité. Le plus simple est d'utiliser le module standard unittest et de quitter les notebooks pour utiliser des fichiers. Parmi les autres alternatives : pytest et nose.
Couverture ou coverage
La couverture de code est l'ensemble des lignes exécutées par les tests unitaires. Cela ne signifie pas toujours qu'elles soient correctes mais seulement qu'elles ont été exécutées une ou plusieurs sans provoquer d'erreur. Le module le plus simple est coverage. Il produit des rapports de ce type : mlstatpy/coverage.
Créer un compte GitHub
GitHub est un site qui contient la majorité des codes des projets open-source. Il faut créer un compte si vous n'en avez pas, c'est gratuit pour les projets open souce, puis créer un projet et enfin y insérer votre projet. Votre ordinateur a besoin de :
git
GitHub destkop
Vous pouvez lire GitHub Pour les Nuls : Pas de Panique, Lancez-Vous ! (Première Partie) et bien sûr faire plein de recherches internet.
Note
Tout ce que vous mettez sur GitHub pour un projet open-source est en accès libre. Veillez à ne rien mettre de personnel. Un compte GitHub fait aussi partie des choses qu'un recruteur ira regarder en premier.
Intégration continue
L'intégration continue a pour objectif de réduire le temps entre une modification et sa mise en production. Typiquement, un développeur fait une modification, une machine exécute tous les tests unitaires. On en déduit que le logiciel fonctionne sous tous les angles, on peut sans crainte le mettre à disposition des utilisateurs. Si je résume, l'intégration continue consiste à lancer une batterie de tests dès qu'une modification est détectée. Si tout fonctionne, le logiciel est construit et prêt à être partagé ou déployé si c'est un site web.
Là encore pour des projets open-source, il est possible de trouver des sites qui offre ce service gratuitement :
travis - Linux
appveyor - Windows - 1 job à la fois, pas plus d'une heure.
circle-ci - Linux et Mac OSX (payant)
GitLab-ci
A part GitLab-ci, ces trois services font tourner les tests unitaires sur des machines hébergés par chacun des sociétés. Il faut s'enregistrer sur le site, définir un fichier .travis.yml, .appveyor.yml ou circle.yml puis activer le projet sur le site correspondant. Quelques exemples sont disponibles à pyquickhelper ou scikit-learn. Le fichier doit être ajouté au projet sur GitHub et activé sur le site d'intégration continue choisi. La moindre modification déclenchera un nouveau build.permet
La plupart des sites permettent l'insertion de badge de façon à signifier que le build fonctionne.
End of explanation
"""
try:
im = SVG("https://badge.fury.io/py/ensae_teaching_cs.svg")
except TimeoutError:
im = None
im
"""
Explanation: Il y a des badges un peu pour tout.
Ecrire un setup
Le fichier setup.py détermin la façon dont le module python doit être installé pour un utilisateur qui ne l'a pas développé. Comment construire un setup : setup.
Ecrire la documentation
L'outil est le plus utilisé est sphinx. Saurez-vous l'utiliser ?
Dernière étape : PyPi
PyPi est un serveur qui permet de mettre un module à la disposition de tout le monde. Il suffit d'uploader le module... Packaging and Distributing Projects ou How to submit a package to PyPI. PyPi permet aussi l'insertion de badge.
End of explanation
"""
|
BjornFJohansson/pydna-examples | notebooks/simple_examples/pGUP1.ipynb | bsd-3-clause | # NBVAL_SKIP
from IPython.display import IFrame
IFrame('https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1474799', width="100%", height=500)
"""
Explanation: Cloning by homologous recombination: construction of pGUP1
The construction of the vector pGUP1 was described in the publication below:
End of explanation
"""
from IPython.core.display import Image
Image('figure2.png', width=600)
"""
Explanation: The cloning is described in the paper on the upper left side of page 2637:
"The expression vectors harboring GUP1 or GUP1H447A were obtained as follows: the open reading frame of GUP1
was amplified by PCR using plasmid pBH2178 (kind gift from Morten Kielland-Brandt) as a template and using
primers and , underlined sequences being homologous to the target vector pGREG505 (Jansen et al., 2005).
The PCR fragment was purified by a PCR purification kit (QIAGEN, Chatsworth, CA) and introduced into
pGREG505 by co transfection into yeast cells thus generating pGUP1 (Jansen et al., 2005)."
The description above was used to create the image below:
End of explanation
"""
Image('pGREG505.png',width=400)
"""
Explanation: Briefly, two primers (GUP1rec1sens and GUP1rec2AS) were used to amplify the GUP1 gene from Saccharomyces cerevisiae chromosomal DNA.
Jansen G, Wu C, Schade B, Thomas DY, Whiteway M. 2005. Drag&Drop cloning in yeast. Gene, 344: 43–51.
Jansen et al describe the pGREG505 vector and that it is digested with SalI before cloning. The SalI digests the vector in two places, so a fragment containing the HIS3 gene is removed.
This is a cloning in three steps (see above figure):
PCR of the GUP1 locus using GUP1rec1sens GUP1rec2AS, resulting in
a linear insert (A). The primer sequences are given in Bosson et al.
The template sequence can be found at Saccharomyces Genome Database (SGD)
Digestion of the plasmid pGREG505 with SalI, This step is not
mentioned explicitly in Bosson et. al, but is evident from Jansen et al. (B).
This digestion removes a DNA fragment containing the HIS3 marker gene
from the final construct. The sequence of the pGREG505 plasmid (as well as
the physical plasmid itself) can be obtained from
EUROSCARF
Recombination between the linear insert and the linear vector (C).
The SalI sites are visible in the plasmid drawing of pGREG505 below:
End of explanation
"""
from pydna.parsers import parse_primers
"""
Explanation: We will now replicate the cloning procedure using pydna.
End of explanation
"""
GUP1rec1sens, GUP1rec2AS = parse_primers('''
>GUP1rec1sens
gaattcgatatcaagcttatcgataccgatgtcgctgatcagcatcctgtc
>GUP1rec2AS
gacataactaattacatgactcgaggtcgactcagcattttaggtaaattccg''')
"""
Explanation: The primer sequences are read into BioPython SeqRecord objects. Primers are single stranded DNA molecules, so
SeqRecords are adequate for describing them.
End of explanation
"""
from pydna.genbank import Genbank
gb = Genbank("bjornjobb@gmail.com")
GUP1_locus = gb.nucleotide("NC_001139 REGION: complement(350616..352298)")
"""
Explanation: The template sequence is the GUP1 locus from SGD. It is read remotely from Genbank into a pydna Dseqrecord object.
End of explanation
"""
from pydna.download import download_text
from pydna.genbankfixer import gbtext_clean
from pydna.readers import read
gbtext=download_text("http://www.euroscarf.de/files/dna/P30350/P30350%20(2013-10-11%2013_49_14).dna")
cleaned_gb_text = gbtext_clean(gbtext)
pGREG505 = read(cleaned_gb_text.gbtext)
pGREG505.cseguid()
"""
Explanation: The pGREG505 vector sequence is downloaded from EUROSCARF. This sequence is unfortunately not correctly formatted according to the Genbank format.
It is unfortunately not uncommon to find "Genbank" files online that are not properly formatted. The BioPython parser, on which the pydna parser is based, is somewhat sensitive, which can be a good or a bad thing depending on the situation.
Pydna provide a Genbank cleaning function that tries to salvage as much as possible of the sequence.
In the cells below
file is dowloaded as text
cleaned using the gbtext_clean function
read into a Dseqrecord object using the read function
The cSEGUID checksum is calculated in order to confirm the integrity of the sequence.
The cSEGUID method provide a checksum for circular sequences. Look at this blog post for further information on the cSEGUID checksum and its definition. The cSEGUID checksum can be caluclated for any sequence with a GUI app called SEGUID calculator. This can be found here. The cSEGUID is the identical with the checksum calulated with the cSEGUID checksum, proving that the file cleanup did not alter the sequence. It is probably best to save a properly formatted sequence in a local file that has been verified.
End of explanation
"""
from pydna.amplify import pcr
insert = pcr(GUP1rec1sens, GUP1rec2AS, GUP1_locus)
insert.name="GUP1"
"""
Explanation: The PCR product sequence is simulated from the primers and the template sequence using the pydna.pcr function. The pydna.Amplicon class offers greater control over the PCR parameters.
End of explanation
"""
insert.figure()
"""
Explanation: We can inspect how the primers anneal by using the .figure() method:
End of explanation
"""
insert.program()
"""
Explanation: A suggested PCR program is also available:
End of explanation
"""
from Bio.Restriction import SalI
"""
Explanation: We need to import the SalI restriction enzyme from Biopython.
End of explanation
"""
linear_vector, his3 = pGREG505.cut(SalI)
"""
Explanation: We cut the vector with SalI using the cut method.
End of explanation
"""
linear_vector
"""
Explanation: The linearized pGREG505 has 8301 bp and is of course linear.
End of explanation
"""
from pydna.assembly import Assembly
asm = Assembly((linear_vector, insert))
"""
Explanation: The pydna Assembly class can simulate homologous recombination as well as fusion PCR and Gibson assembly.
The variable asm, below will contain all possible recombination products between the two linear fragments.
The table below gives an overview of the recombination products as well as parameters used. The two last rows show that one circular [9981] and four linear products [10013] [10011] [32] and [30] were formed.
We are only interested in the circular product, since the pGUP1 plasmid is expected to be circular.
End of explanation
"""
candidate = asm.assemble_circular()[0]
"""
Explanation: The circular product can be accessed via the assemble_circular() method of the asm object.
End of explanation
"""
candidate.figure()
"""
Explanation: We can study this further by accessing the figure method. We can see from the figure below, that the circular product is what we want.
End of explanation
"""
pGUP1 = candidate.synced(pGREG505)
"""
Explanation: The synced method below makes sure that our new vector starts from the same position as the pGREG505 vector did. This makes our recombinant plasmid easier to read and compare in a plasmid editor. Pydna can incorporate the free ApE plasmid editor so that it can be opened on any sequence object directly from a Jupyter notebook or an IPython shell.
End of explanation
"""
pGUP1.write("pGUP1.gb")
"""
Explanation: Finally we write the sequence to a local file.
End of explanation
"""
|
terrydolan/lfcmanagers | lfcmanagers.ipynb | mit | import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sys
import collections
from datetime import datetime
from __future__ import division
# enable inline plotting
%matplotlib inline
"""
Explanation: LFC Data Analysis: LFC Managers
See Terry's blog Being Liverpool Manager for a discussion of the data analysis.
This notebook analyses the performance of Liverpool's most recent managers at 4th October 2015.
The analysis uses IPython Notebook, python, pandas and matplotlib to explore the data.
Set-up
Import the modules needed for the analysis.
End of explanation
"""
print 'python version: {}'.format(sys.version)
print 'pandas version: {}'.format(pd.__version__)
print 'matplotlib version: {}'.format(mpl.__version__)
print 'numpy version: {}'.format(np.__version__)
"""
Explanation: Print version numbers.
End of explanation
"""
from datetime import datetime
DATA_EXTRACT_DATE = '4th Oct 2015'
LFC_GAMES_CSV_FILE = 'data\lfc_games_official_4thOct2015.csv' # input csv file containing official games (excludes friendlies)
dflfc_games = pd.read_csv(LFC_GAMES_CSV_FILE, parse_dates=['Date'],
date_parser=lambda x: datetime.strptime(x, '%d.%m.%Y'))
# show shape
dflfc_games.shape
# check data types
dflfc_games.dtypes
dflfc_games.head()
# check dates, 1st league game vs ManU in 2011-12 is 15th Oct 2011, 2nd is 11th Feb 2012 (not Nov!)
dflfc_games[(dflfc_games.Season == '2011-2012') & (dflfc_games.Opposition == 'Manchester United')]
"""
Explanation: Load the data into a dataframes and munge
Create dataframe of LFC games
Data source: lfchistory.net
End of explanation
"""
df_rodgers_wins = dflfc_games[(dflfc_games.Competition == 'Premier League') & (dflfc_games.Season >= '2012-2013') & (dflfc_games.Result == 'W')][['Season', 'Result']].groupby('Season').count()
df_rodgers_wins.columns = ['Wins']
df_rodgers_wins['TotGames'] = 38
df_rodgers_wins.ix['2015-2016']['TotGames'] = dflfc_games[(dflfc_games.Season == '2015-2016') & (dflfc_games.Competition == 'Premier League')].Result.count()
df_rodgers_wins.ix['Total'] = df_rodgers_wins.sum(axis=0)
df_rodgers_wins['Win%'] = np.round(100*(df_rodgers_wins.Wins/df_rodgers_wins.TotGames), 1)
print 'Rodgers record: Win percentage per season, at {}'.format(DATA_EXTRACT_DATE)
df_rodgers_wins
# produce same table using dates of manager's time in charge
BR_START = pd.datetime(2012, 6, 1)
BR_END = pd.datetime(2015, 10, 4) # stop press!
df_rodgers_wins = dflfc_games[(dflfc_games.Competition == 'Premier League') &
(dflfc_games.Date >= BR_START) & (dflfc_games.Date <= BR_END) &
(dflfc_games.Result == 'W')][['Season', 'Result']].groupby('Season').count()
df_rodgers_wins.columns = ['Wins']
df_rodgers_wins['TotGames'] = 38
df_rodgers_wins.ix['2015-2016']['TotGames'] = dflfc_games[(dflfc_games.Season == '2015-2016') &
(dflfc_games.Competition == 'Premier League')].Result.count()
df_rodgers_wins.ix['Total'] = df_rodgers_wins.sum(axis=0)
df_rodgers_wins['Win%'] = np.round(100*(df_rodgers_wins.Wins/df_rodgers_wins.TotGames), 1)
print 'Rodgers record: Win percentage per season, at {}'.format(DATA_EXTRACT_DATE)
df_rodgers_wins
# define start and end dates for a selection of managers
# only include full seasons before 2015 season
manager_dict = collections.OrderedDict()
manager_dict['Brendan Rodgers'] = [pd.datetime(2012, 6, 1), pd.datetime(2015, 10, 4)]
manager_dict['Kenny Dalglish 2'] = [pd.datetime(2011, 1, 8), pd.datetime(2012, 5, 16)]
manager_dict['Roy Hodgson'] = [pd.datetime(2010, 7, 1), pd.datetime(2011, 1, 8)]
manager_dict['Rafael Benitez'] = [pd.datetime(2004, 6,16), pd.datetime(2010, 6, 3)]
for k,v in manager_dict.items():
print k, v
# show win% (etc) for all recent managers in manager dict
df_mgr_sum = pd.DataFrame() # new dataframe for summary
for manager, (start_date, end_date) in manager_dict.iteritems():
print '\n\nManager: {} \nFrom: {}, To: {}\n'.format(manager, start_date.strftime('%d %b %Y'), end_date.strftime('%d %b %Y'))
df_mgr = dflfc_games[(dflfc_games.Competition == 'Premier League') &
(dflfc_games.Date >= start_date) & (dflfc_games.Date <= end_date)]\
[['Season', 'Result', 'Score']].groupby(['Season', 'Result']).count().unstack()
df_mgr.columns = df_mgr.columns.droplevel()
del df_mgr.columns.name
df_mgr = df_mgr.reindex_axis(['W', 'D', 'L'], axis=1)
df_mgr['TotGames'] = -1 # dummy value for new column
for s in df_mgr.index.values:
        df_mgr.ix[s, 'TotGames'] = dflfc_games[(dflfc_games.Season == s) &
                                               (dflfc_games.Competition == 'Premier League') &
                                               (dflfc_games.Date >= start_date) &
                                               (dflfc_games.Date <= end_date)].Result.count()
df_mgr.ix['Total'] = df_mgr.sum(axis=0)
df_mgr['W%'] = np.round(100*(df_mgr.W/df_mgr.TotGames), 1)
df_mgr['D%'] = np.round(100*(df_mgr.D/df_mgr.TotGames), 1)
df_mgr['L%'] = np.round(100*(df_mgr.L/df_mgr.TotGames), 1)
df_mgr['TotPoints'] = df_mgr.D + df_mgr.W*3
df_mgr['PointsPerGame'] = np.round(df_mgr.TotPoints/df_mgr.TotGames, 1)
df_mgr_sum.set_value(manager, 'W%', df_mgr.ix['Total']['W%'])
df_mgr_sum.set_value(manager, 'PointsPerGame', df_mgr.ix['Total']['PointsPerGame'])
print df_mgr
df_mgr_sum
# plot as bar chart
fig = plt.figure(figsize=(9, 6))
ax = df_mgr_sum['PointsPerGame'].plot(kind='bar', color='r')
ax.set_title("Summary view of each manager's points per game")
ax.set_xlabel('Manager')
ax.set_ylabel('PPG')
ax.text(-0.45, 1.92, 'prepared by: @lfcsorted', bbox=dict(facecolor='none', edgecolor='none', alpha=0.6))
plt.show()
# plot as bar chart, with PPG above the bar
fig = plt.figure(figsize=(9, 6))
x = range(1, len(df_mgr_sum)+1)
y = df_mgr_sum['PointsPerGame'].values
ax = plt.bar(x, y, align='center', color='r', width=0.6)
# plot ppg as text above bar
for xidx, yt in enumerate(y):
xt = xidx + 1
plt.annotate(str(yt), xy=(xt,yt), xytext=(xt, yt), va="bottom", ha="center")
# add labels
plt.xticks(x, df_mgr_sum.index.values, rotation='vertical')
plt.ylim((0, 2.2))
plt.title("Summary of each manager's points per game")
plt.xlabel('Manager')
plt.ylabel('PPG')
plt.text(0.55, 2.1, 'prepared by: @lfcsorted', bbox=dict(facecolor='none', edgecolor='none', alpha=0.6))
# plot
plt.show()
fig.savefig('ManagervsPPG.png', bbox_inches='tight')
"""
Explanation: Analyse the data
Ask a question and find the answer!
Show Liverpool's win % under Rodgers in Premiership, at the DATA_EXTRACT_DATE
End of explanation
"""
|
jasontlam/snorkel | tutorials/workshop/Workshop_6_Advanced_Grid_Search.ipynb | apache-2.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
# Connect to the database backend and initalize a Snorkel session
from lib.init import *
"""
Explanation: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px">
Snorkel Workshop: Extracting Spouse Relations <br> from the News
Advanced Part 6: Hyperparameter Tuning via Grid Search
End of explanation
"""
Spouse = candidate_subclass('Spouse', ['person1', 'person2'])
"""
Explanation: We repeat our definition of the Spouse Candidate subclass, and load the test set:
End of explanation
"""
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator()
F_train = featurizer.load_matrix(session, split=0)
F_dev = featurizer.load_matrix(session, split=1)
F_test = featurizer.load_matrix(session, split=2)
if F_train.size == 0:
%time F_train = featurizer.apply(split=0, parallelism=1)
if F_dev.size == 0:
%time F_dev = featurizer.apply_existing(split=1, parallelism=1)
if F_test.size == 0:
%time F_test = featurizer.apply_existing(split=2, parallelism=1)
print F_train.shape
print F_dev.shape
print F_test.shape
"""
Explanation: I. Training a SparseLogisticRegression Discriminative Model
We use the training marginals to train a discriminative model that classifies each Candidate as a true or false mention. We'll use a random hyperparameter search, evaluated on the development set labels, to find the best hyperparameters for our model. To run a hyperparameter search, we need labels for a development set. If they aren't already available, we can manually create labels using the Viewer.
Feature Extraction
Instead of using a deep learning approach to start, let's look at a standard sparse logistic regression model. First, we need to extract out features. This can take a while, but we only have to do it once!
End of explanation
"""
from snorkel.annotations import load_marginals
train_marginals = load_marginals(session, split=0)
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
"""
Explanation: First, reload the training marginals:
End of explanation
"""
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_dev.shape
"""
Explanation: Load our development data for tuning
End of explanation
"""
from snorkel.learning import RandomSearch
from snorkel.learning import SparseLogisticRegression
seed = 1234
num_model_search = 10
# search over this parameter grid
param_grid = {}
param_grid['batch_size'] = [64, 128]
param_grid['lr'] = [1e-4, 1e-3, 1e-2]
param_grid['l1_penalty'] = [1e-6, 1e-4, 1e-2]
param_grid['l2_penalty'] = [1e-6, 1e-4, 1e-2]
param_grid['rebalance'] = [0.0, 0.5]
model_class_params = {
'seed': seed,
'n_threads':num_procs
}
model_hyperparams = {
'n_epochs': 2000,
'print_freq': 1,
'dev_ckpt_delay': 0.5,
'X_dev': F_dev,
'Y_dev': L_gold_dev
}
searcher = RandomSearch(SparseLogisticRegression, param_grid, F_train, train_marginals,
n=num_model_search, seed=seed,
model_class_params=model_class_params,
model_hyperparams=model_hyperparams)
print "Discriminative Model Parameter Space (seed={}):".format(seed)
for i, params in enumerate(searcher.search_space()):
print i, params
disc_model, run_stats = searcher.fit(X_valid=F_dev, Y_valid=L_gold_dev, n_threads=1)
print run_stats
"""
Explanation: The following code performs model selection by tuning our learning algorithm's hyperparameters.
End of explanation
"""
from lib.scoring import *
print_top_k_features(session, disc_model, F_train, top_k=25)
"""
Explanation: Examining Features
Extracting features allows us to inspect and interpret our learned weights.
End of explanation
"""
from snorkel.annotations import load_gold_labels
L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
_, _, _, _ = disc_model.score(session, F_test, L_gold_test)
"""
Explanation: Evaluate on Test Data
End of explanation
"""
|
Hyperparticle/deep-learning-foundation | lessons/transfer-learning/Transfer_Learning_Solution.ipynb | mit | from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
"""
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Training modern ConvNets on huge datasets like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link.
End of explanation
"""
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
"""
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
"""
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
"""
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
"""
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
images = np.concatenate(batch)
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
"""
Explanation: Below I'm running images through the VGG network in batches.
End of explanation
"""
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
"""
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
"""
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)
"""
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
"""
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels_vecs))
half_val_len = int(len(val_idx)/2)
val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
"""
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
"""
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
"""
def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
"""
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
"""
epochs = 10
iteration = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt")
"""
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
End of explanation
"""
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
"""
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
"""
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
"""
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation
"""
|
majtotim/portfolio | Post_1.ipynb | mit | from fuzzywuzzy import fuzz, StringMatcher
import difflib
import pandas as pd
### loading partially cleaned csvs containing place names and dictionary
kbbi = pd.read_csv("/Users/admin/Desktop/loanwords/clean.kbbi.csv")
places = pd.read_csv("/Users/admin/Desktop/loanwords/concat_places.csv")
kbbi.columns = ['old_index', 'words']
"""
Explanation: A Big Data Search for Prehistoric Placenames
Place names (or 'Toponyms') can reveal a great deal about the history of a geographic region. They stick around even as locals adopt new languages or populations are forced out by famine or conflict. Pick a region anywhere in the world at random and you'll likely find a smattering of place names originating from dead or moribund languages. In the Americas this is likely to be a Na-Dené, Algic, or Uto-Aztecan language. Similarly, Western Europe is full of place names of prehistoric Celtic origin. In some cases, names are the only remaining evidence of a particular speech community having once existed in a certain place, and for this reason, place names can be very useful for understanding the history of regions whose linguistic prehistory is still unclear.
In this post, I'm going to focus on a region of the world whose linguistic prehistory remains murky. Sumatra is a large island located in the far west of the Indonesian Archipelago. Local languages throughout much of the island belong to the Malayic branch of the Austronesian language family, a family of languages spoken across an enormous region spanning from Madagascar off the east coast of Africa to Rapa Nui (Easter Island) off the coast of Chile. The Malayic languages are spoken by over 200 million speakers and include the national languages of Malaysia, Indonesia and Brunei. Sumatra (and more specifically, the Jambi/Palembang regions, which were likely the center of Sriwijaya, a Buddhist Kingdom whose influence and power extended throughout South East Asia) is recognized by Indonesianists as the Malay Homeland--the region where the Malay language first flourished and spread out to become an influential trade language.
Despite its prevalence in Sumatra over the past 1000+ years, the Malay language is a relative newcomer to the island. Scholars estimate that Malay speakers began to settle in Sumatra in relatively large numbers (very) roughly two millennia ago, and that these speakers were preceded by an earlier wave of languages also belonging to the Austronesian language family. However, even by generous estimates of how early the arrival of Austronesian languages to the island took place, it is clear from archeological evidence that human populations inhabited Sumatra for thousands of years before Austronesian languages departed their homeland of Formosa (Taiwan), let alone spread to South East Asia.
In light of this history, we might expect to find place names in Sumatra originating from pre-Malay (or even pre-Austronesian) times. To find out, I have put together an abridged database of Sumatran place names using data from the GEONet Names Server (GNS), a database sanctioned by the U.S. Board on Geographic Names. The technique I employ to find relic place names involves filtering words of known origin from this database, then examining the remaining place names to see if they provide any hints as to the linguistic identity of earlier populations. I filter out place names of known origin through a matching algorithm using a wordlist I compiled based on the KBBI (Kamus Besar Bahasa Indonesia), an authoritative Indonesian dictionary (Indonesian, itself a variety of Malay, will serve as a stand-in for Malay). Since there is considerable variation in the spelling and pronunciation of place names, I will use fuzzy matching algorithms to identify names of known origin. The three packages I will use are fuzzywuzzy, difflib and pandas.
End of explanation
"""
places.head()
"""
Explanation: Databases:
The place names database contains full names of locations, a location index which can be used to locate each place on a map, and a column containing information regarding the type of geographic feature. Notice that many of the place names contain two words. I want to check each of these words individually for matches in the dictionary, therefore I have split rows which contain place names containing more than one word. For example, the place 'Laut Seram' corresponds to two separate rows: one for 'Laut' and the other for 'Seram'. The database contains a total of 438,600 rows.
End of explanation
"""
kbbi.head()
"""
Explanation: The dictionary database comprises a list of words taken from the KBBI along with original dictionary indexes. The database contains 50,000+ entries.
End of explanation
"""
def prepare_string(word):
word = word.lower()
word = word.replace('ng','N')
word = word.replace('ny','Y')
word = word.replace('sy','s')
word = word.replace('kh','k')
word = word.replace('oe','u')
word = word.replace('tj','c')
word = word.replace('dj','j')
return(word)
kbbi['prepared_entries'] = kbbi.words.apply(lambda x: prepare_string(str(x)))
places['prepared_places'] = places.split_word.apply(lambda x: prepare_string(str(x)))
"""
Explanation: Step 1: Removing orthographic irregularities
Unlike English, the Indonesian spelling system is almost perfectly phonemic i.e. in most cases there is a one-to-one relationship between a letter and the sound it represents. This being said, there are a few irregularities in the orthography. To rid the data of as many of these irregularities as possible, I created the following function:
End of explanation
"""
kbbi.head()
places.head()
"""
Explanation: Moreover, I generated new columns in the dictionary and place names database for the cleaned strings.
End of explanation
"""
def find_matches(place, wordlist_series):
    bool_series = wordlist_series.str.contains(place, case=True, regex=False)
    return(bool_series.any())
"""
Explanation: Identifying place names of known origin: perfect matches
The following function takes a place name as input and compares the name to each word in the dictionary wordlist. A boolean is returned indicating whether any matches were found.
End of explanation
"""
sample_df = places.loc[0:1000, :].copy()
sample_df['matches'] = sample_df.prepared_places.apply(lambda x: find_matches(x,kbbi.prepared_entries))
sample_df.head()
"""
Explanation: I use this function in conjunction with the .apply() method in pandas to return a series of booleans as a column in the place name database. The large size of the database means that running this function could take up to an hour. To demonstrate how the function works, I will apply the function to a subset of the data 'sample_df' containing 1000 rows.
End of explanation
"""
sample_df = sample_df[sample_df.matches == False].copy()
sample_df.head()
"""
Explanation: Using the boolean values in the 'matches' column, I will filter out place names with an exact match in the dictionary.
End of explanation
"""
fuzz.ratio('bat','pat')
fuzz.ratio('bat','sat')
"""
Explanation: Identifying place names of known origin: near matches
Although we have gotten rid of all place names with exact matches in the dictionary, there are still plenty of place names which represent variants of items in the dictionary (e.g. words which are historically the same word, but are now spelled and/or pronounced differently). We want to get rid of these words. The method that I am going to use is called 'fuzzy matching'. I will borrow a function from the fuzzywuzzy package which calculates a percentage similarity score, with 100 being a perfect match. This function is a variant of the Levenshtein distance function, a function which quantifies the distance between two words a and b based on the number of deletions, substitutions or insertions it would take to transform a into b.
Before applying this function, however, there is an important problem with the function that needs to be addressed. Levenshtein distance does not take the phonological similarity between two sounds into account. To illustrate this, consider two pairs of words:
Pair 1: bat, pat
Pair 2: bat, sat
End of explanation
"""
def phono_matrix(string):
string_of_matrixes = []
for character in string:
matrix = {}
###populate manner:
### sonorant
if character in ['a','e','i','o','u','y','w','m','N','Y','l','r','h','q']:
matrix['sonorant'] = 'Y'
else:
matrix['sonorant'] = 'N'
###continuant
if character in ['l','r','y','w','a','e','i','o','u','s','z','f']:
matrix['continuant'] = 'Y'
else:
matrix['continuant'] = 'N'
###consonant
if character in ['p','t','k','q','h','c','b','d','g','j','s','z','f','m','n','Y','N','l','r']:
matrix['consonant'] = 'Y'
else:
matrix['consonant'] = 'N'
###syllabic
if character in ['a','e','i','o','u']:
matrix['syllabic'] = 'Y'
else:
matrix['syllabic'] = 'N'
###strident
if character in ['s','j','c']:
matrix['strident'] = 'Y'
else:
matrix['strident'] = 'N'
###populate place: labial, coronal, palatal, velar, glottal
###labial
if character in ['p','m','f','b','w','u','o']:
matrix['labial'] = 'Y'
else:
matrix['labial'] = 'N'
###coronal
if character in ['t','d','n','s','j','c','Y','i','e','r','l']:
matrix['coronal'] = 'Y'
else:
matrix['coronal'] = 'N'
###palatal
if character in ['s','j','c','i','Y','e']:
matrix['palatal'] = 'Y'
else:
matrix['palatal'] = 'N'
###palatal
if character in ['u','k','g','N','o']:
matrix['velar'] = 'Y'
else:
matrix['velar'] = 'N'
###glottal
if character in ['h','q']:
matrix['glottal'] = 'Y'
else:
matrix['glottal'] = 'N'
###nasality
###nasal/oral
if character in ['m','n','Y','N']:
matrix['nasal'] = 'Y'
else:
matrix['nasal'] = 'N'
###populate obstruent voicing
###i assume that [voice] is only phonologically active in obstruents
###voiced/voiceless obstruent
if character in ['b','d','g','j']:
matrix['voice'] = 'Y'
else:
matrix['voice'] = 'N'
### populate lateral/rhotic
###lateral
if character == 'l':
matrix['lateral'] = 'Y'
else:
matrix['lateral'] = 'N'
###rhotic
if character == 'r':
matrix['rhotic'] = 'Y'
else:
matrix['rhotic'] = 'N'
###populate vowel height
###I assume at mid is not an active feature
### high
if character in ['i','u']:
matrix['high'] = 'Y'
else:
matrix['high'] = 'N'
### low
if character == 'a':
matrix['low'] = 'Y'
else:
matrix['low'] = 'N'
string_of_matrixes.append(matrix)
return(string_of_matrixes)
kbbi['matrixes'] = kbbi.prepared_entries.apply(lambda x: phono_matrix(x))
sample_df['matrixes'] = sample_df.prepared_places.apply(lambda x: phono_matrix(x))
kbbi.matrixes.head()
sample_df.matrixes.head()
"""
Explanation: In terms of Levenshtein distance, both pairs of words earn the same score. This is because, in both cases, the first item in the pair can be transformed into the second item by substituting the first letter (b for p in Pair 1 and b for s in Pair 2). In terms of actual phonological distance, the words in Pair 1 are much closer to one another than the words in Pair 2. This is because the sound [b] is nearly identical to [p] in terms of both the physical gestures involved in its pronunciation as well as its acoustic properties. In both of these respects, [b] and [s] are quite different from one another.
Drawing this distinction might seem pedantic, but phonological distance is important within the context of discovering pairs of words which are variants of one another, or related historically. Changes in pronounciation that occur over time are gradual. For example, cases where [p] becomes [b] are extremely common across the world's languages, whereas [b] to [s] is an extremely rare if not unattested change. Since we are trying to establish whether place names which do not have perfect matches in the dictionary are nevertheless etymologically related to words in the dictionary, it is important that we quantify phonological distance. To accomplish this, I have created a function which converts each sound into a matrix dictionary of its component phonological features (i.e. where in the vocal tract it is pronounced, whether it is nasal etc.) (For more information about phonological features see Wikipedia's page: https://en.wikipedia.org/wiki/Distinctive_feature)
End of explanation
"""
def tier_builder(string_of_matrixes):
features = ['sonorant','consonant','continuant','syllabic','strident','labial','coronal','palatal','velar', 'glottal','nasal','voice','lateral','rhotic']
tier_dictionary = {}
for feature in features:
tier_dictionary[feature] = str()
for matrix in string_of_matrixes:
if matrix[feature] is not None:
tier_dictionary[feature] = tier_dictionary[feature] + matrix[feature]
return(tier_dictionary)
kbbi['tiers'] = kbbi.matrixes.apply(lambda x: tier_builder(x))
sample_df['tiers'] = sample_df.matrixes.apply(lambda x: tier_builder(x))
"""
Explanation: Now that each sound is represented by its features, we're almost at the point where we can compare sounds on a feature-by-feature basis to get a much more accurate measure of their phonological distance. To allow this feature-by-feature comparison, I have created the following function, which takes a string of matrixes corresponding to individual sounds and generates multiple tiers, each corresponding to a single binary feature ('Y' for 'has feature' and 'N' for 'lacks feature'). Consider the tier for the feature [consonant] in the case of Pair 1 above: this tier will be identical for the words 'pat' and 'bat', and in both cases it will take the form 'YNY' (i.e. p/b = consonant, a = vowel, t = consonant). The following function translates a string of feature matrixes into tiers, each corresponding to a single feature.
End of explanation
"""
kbbi.prepared_entries[0]
kbbi.tiers[0]
"""
Explanation: Let's take a closer look at the first entry in the dictionary and how it is represented in terms of tiers of phonological features:
End of explanation
"""
def similarity(word1_tiers,word2_tiers):
features = ['sonorant','continuant','syllabic','strident','labial','coronal','palatal','velar', 'glottal','nasal','voice','lateral','rhotic']
tier_similarity = {}
for feature in features:
tier_similarity[feature] = fuzz.ratio(word1_tiers[feature],word2_tiers[feature])
tier_similarity = pd.Series(tier_similarity)
return(tier_similarity.mean())
"""
Explanation: Now that we have complete phonological representations for each place name and dictionary item, we can apply the following function to calculate the phonological distance between each place name and each dictionary item. The function calculates phonological similarity on a feature-by-feature basis, by comparing the tiers we generated above using 'fuzzy matching'.
Similarity values are calculated for each feature tier, and the average across all tiers is returned, giving us a global measure of phonological similarity between two words.
End of explanation
"""
### Building feature matrixes:
pat = phono_matrix('pat')
pat = tier_builder(pat)
bat = phono_matrix('bat')
bat = tier_builder(bat)
sat = phono_matrix('sat')
sat = tier_builder(sat)
similarity(pat,bat)
similarity(bat,sat)
"""
Explanation: Let's test this out with the word pairs 'bat'/'pat' and 'bat'/'sat'. Remember that these pairs received the same score for Levenshtein distance before. If we convert them from strings into phonological matrixes using the functions presented above, we expect that 'pat'/'bat' will get a higher similarity score than 'bat'/'sat'.
End of explanation
"""
def find_partners(wordlist):
ranking_dict = {}
for item in wordlist:
for item2 in wordlist:
ranking_dict[item,item2] = similarity(item,item2)
ranking_dict = pd.Series(ranking_dict)
return(ranking_dict.sort_values(ascending=False))
"""
Explanation: Now that we know that our similarity function works, we can return to the task of identifying place names that have near matches in the dictionary, and are thus likely to be of known etymological origin. I have developed a function which scores the similarity of all words in the dictionary to a single place name; it then returns the dictionary word which is most phonologically similar, along with its similarity score. However, at this point the function takes a very long time to execute for the entire list of place names.
Next Steps:
Once I have similarity values for the entire place name database, I will be able to find those words which are least likely to have originated from Malay/Indonesian. I will then investigate the origins of these names. For any given place name X from an unknown source, there are two basic hypotheses regarding its origins:
Hypothesis 1: X was coined and thus does not originate from another language
Hypothesis 2: X originates from another language
These hypotheses predict different distributional properties: In the case of coinages, we predict that identical coinages will rarely occur independently. On the other hand, we expect that place names related to geographic terms (e.g. English 'river', 'mountain', 'hill', etc.) will occur frequently and in diverse locations.
Bringing it all together, we are looking for words which both 1. have the lowest scores in terms of their nearest match with a dictionary entry (this means that they are highly unlikely to have a Malay/Indonesian etymological source); and 2. have high similarity scores with other words of unknown origin (since this would indicate a distribution in line with Hypothesis 2 above). We can calculate a score for 2 using the following function. (n.b. I haven't tested this function yet.)
End of explanation
"""
|
tsarouch/data_science_references_python | clustering/A story of clustering bananas.ipynb | gpl-2.0 | %matplotlib inline
import matplotlib.pyplot as plt
# Let's assume our dealer lies and the bananas indeed come from elsewhere
from sklearn.datasets import make_blobs
bananas_dimensions = \
    [[10, 3],   # long - thin
     [5, 2],    # short - thin
     [7.5, 5]]  # middle - thick
bananas_dimensions_std = 1.0
n_bananas = 1000
X, banana_labels = make_blobs(n_samples=n_bananas,
                              centers=bananas_dimensions,
                              cluster_std=bananas_dimensions_std)
# Now lets pretend all we have are the n_bananas above all in the same basket
# We know nothing about the origin
# Lets plot what we see in term of thickness / length
plt.scatter(X[:,0], X[:,1])
"""
Explanation: Let's assume we have a shop that sells bananas.
We buy bananas from one dealer who claims that he cultivates them organically
in his own garden.
When we receive our bananas, though, they do not all look the same:
some are small, some thick, some long... we observe variation.
There are 2 scenarios:
1) It is indeed one sort of banana and the variation is natural
2) There is more than one sort of banana and our dealer probably buys them
from third-party dealers.
End of explanation
"""
# At first look something seems suspicious
# Let's perform some clustering to see what we can measure.
# Since we do not know if our bananas come from different places,
# we can try different numbers of clusters and evaluate some metric
from sklearn.cluster import KMeans

def perform_k_means(X, n_clusters):
    return KMeans(n_clusters=n_clusters).fit_predict(X)
# Let's see some plots:
# dummy case: one cluster
y_pred = perform_k_means(X, 1)
plt.subplot(221)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
y_pred = perform_k_means(X, 2)
plt.subplot(222)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
y_pred = perform_k_means(X, 3)
plt.subplot(223)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
y_pred = perform_k_means(X, 4)
plt.subplot(224)
plt.scatter(X[:, 0], X[:, 1], c=y_pred)
"""
Explanation: Perform Clustering (Kmeans)
End of explanation
"""
# At this point we need an evaluation phase:
# we need a metric that can give us hints about which of the above
# clusterings performs better
# ... TODO coming in the next episode :-)
"""
Explanation: Evaluate the clustering
End of explanation
"""
|
tensorflow/docs-l10n | site/en-snapshot/lattice/tutorials/shape_constraints.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
"""
#@test {"skip": true}
!pip install tensorflow-lattice tensorflow_decision_forests
"""
Explanation: Shape Constraints with Tensorflow Lattice
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/lattice/blob/master/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/lattice/docs/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on synthetic datasets, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all required packages installed (as imported in the code cells below).
Setup
Installing TF Lattice package:
End of explanation
"""
import tensorflow as tf
import tensorflow_lattice as tfl
import tensorflow_decision_forests as tfdf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tempfile
logging.disable(sys.maxsize)
"""
Explanation: Importing required packages:
End of explanation
"""
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
"""
Explanation: Default values used in this guide:
End of explanation
"""
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
"""
Explanation: Training Dataset for Ranking Restaurants
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the clickthrough rate (CTR) given input features:
- Average rating (avg_rating): a numeric feature with values in the range [1,5].
- Number of reviews (num_reviews): a numeric feature with values capped at 200, which we use as a measure of trendiness.
- Dollar rating (dollar_rating): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
Here we create a synthetic dataset where the true CTR is given by the formula:
$$
CTR = 1 / (1 + exp{\mbox{b(dollar_rating)}-\mbox{avg_rating}\times log(\mbox{num_reviews}) /4 })
$$
where $b(\cdot)$ translates each dollar_rating to a baseline value:
$$
\mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5.
$$
This formula reflects typical user patterns, e.g. with everything else fixed, users prefer restaurants with higher star ratings, and "\$\$" restaurants will receive more clicks than "\$", followed by "\$\$\$" and "\$\$\$\$".
End of explanation
"""
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
"""Generates contour plots for a list of (name, fn) functions."""
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
"""
Explanation: Let's take a look at the contour plots of this CTR function.
End of explanation
"""
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
"""
Explanation: Preparing Data
We now need to create our synthetic datasets. We start by generating a simulated dataset of restaurants and their features.
End of explanation
"""
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
"""
Explanation: Let's produce the training, validation and testing datasets. When a restaurant is viewed in the search results, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results. This means that users will likely only see restaurants already considered "good" by the current ranking model in use. As a result, "good" restaurants are more frequently impressed and over-represented in the training datasets. When using more features, the training dataset can have large gaps in "bad" parts of the feature space.
When the model is used for ranking, it is often evaluated on all relevant results with a more uniform distribution that is not well-represented by the training dataset. A flexible and complicated model might fail in this case due to overfitting the over-represented data points and thus lack generalizability. We handle this issue by applying domain knowledge to add shape constraints that guide the model to make reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants. The testing dataset has a uniform distribution to simulate the evaluation setting discussed above. Note that such testing dataset will not be available in a real problem setting.
End of explanation
"""
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
"""
Explanation: Defining input_fns used for training and evaluation:
End of explanation
"""
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
if isinstance(estimator, tf.estimator.Estimator):
metric = estimator.evaluate(input_fn=val_input_fn)
else:
metric = estimator.evaluate(
tfdf.keras.pd_dataframe_to_tf_dataset(data_val, label="clicked"),
return_dict=True,
verbose=0)
print("Validation AUC: {}".format(metric["auc"]))
if isinstance(estimator, tf.estimator.Estimator):
metric = estimator.evaluate(input_fn=test_input_fn)
else:
metric = estimator.evaluate(
tfdf.keras.pd_dataframe_to_tf_dataset(data_test, label="clicked"),
return_dict=True,
verbose=0)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
if isinstance(estimator, tf.estimator.Estimator):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
else:
return estimator.predict(
tfdf.keras.pd_dataframe_to_tf_dataset(
pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
})),
verbose=0)
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
"""
Explanation: Fitting Gradient Boosted Trees
Let's start off with only two features: avg_rating and num_reviews.
We create a few auxiliary functions for plotting and calculating validation and test metrics.
End of explanation
"""
gbt_model = tfdf.keras.GradientBoostedTreesModel(
features=[
tfdf.keras.FeatureUsage(name="num_reviews"),
tfdf.keras.FeatureUsage(name="avg_rating")
],
exclude_non_specified_features=True,
num_threads=1,
num_trees=32,
max_depth=6,
min_examples=10,
growing_strategy="BEST_FIRST_GLOBAL",
random_seed=42,
temp_directory=tempfile.mkdtemp(),
)
gbt_model.compile(metrics=[tf.keras.metrics.AUC(name="auc")])
gbt_model.fit(
tfdf.keras.pd_dataframe_to_tf_dataset(data_train, label="clicked"),
validation_data=tfdf.keras.pd_dataframe_to_tf_dataset(
data_val, label="clicked"),
verbose=0)
analyze_two_d_estimator(gbt_model, "GBT")
"""
Explanation: We can fit TensorFlow gradient boosted decision trees on the dataset:
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
"""
Explanation: Even though the model has captured the general shape of the true CTR and has decent validation metrics, it has counter-intuitive behavior in several parts of the input space: the estimated CTR decreases as the average rating or number of reviews increase. This is due to a lack of sample points in areas not well-covered by the training dataset. The model simply has no way to deduce the correct behaviour solely from the data.
To solve this issue, we enforce the shape constraint that the model must output values monotonically increasing with respect to both the average rating and the number of reviews. We will later see how to implement this in TFL.
Fitting a DNN
We can repeat the same steps with a DNN classifier. We can observe a similar pattern: not having enough sample points with small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the testing metric is much worse.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
"""
Explanation: Shape Constraints
TensorFlow Lattice (TFL) is focused on enforcing shape constraints to safeguard model behavior beyond the training data. These shape constraints are applied to TFL Keras layers. Their details can be found in our JMLR paper.
In this tutorial we use TF canned estimators to cover various shape constraints, but note that all these steps can be done with models created from TFL Keras layers.
As with any other TensorFlow estimator, TFL canned estimators use feature columns to define the input format and use a training input_fn to pass in the data.
Using TFL canned estimators also requires:
- a model config: defining the model architecture and per-feature shape constraints and regularizers.
- a feature analysis input_fn: a TF input_fn passing data for TFL initialization.
For a more thorough description, please refer to the canned estimators tutorial or the API docs.
Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to both features.
To instruct TFL to enforce shape constraints, we specify the constraints in the feature configs. The following code shows how we can require the output to be monotonically increasing with respect to both num_reviews and avg_rating by setting monotonicity="increasing".
End of explanation
"""
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Using a CalibratedLatticeConfig creates a canned classifier that first applies a calibrator to each input (a piece-wise linear function for numeric features) followed by a lattice layer to non-linearly fuse the calibrated features. We can use tfl.visualization to visualize the model. In particular, the following plot shows the two trained calibrators included in the canned classifier.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: With the constraints added, the estimated CTR will always increase as the average rating increases or the number of reviews increases. This is done by making sure that the calibrators and the lattice are monotonic.
Diminishing Returns
Diminishing returns means that the marginal gain of increasing a certain feature value will decrease as we increase the value. In our case we expect that the num_reviews feature follows this pattern, so we can configure its calibrator accordingly. Notice that we can decompose diminishing returns into two sufficient conditions:
the calibrator is monotonically increasing, and
the calibrator is concave.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Notice how the testing metric improves by adding the concavity constraint. The prediction plot also better resembles the ground truth.
2D Shape Constraint: Trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the restaurant might not actually be good), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant is likely good in this case). We can see that the number of reviews of a restaurant affects how much trust we place in its average rating.
We can exercise TFL trust constraints to inform the model that the larger (or smaller) value of one feature indicates more reliance or trust of another feature. This is done by setting reflects_trust_in configuration in the feature config.
End of explanation
"""
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
"""
Explanation: The following plot presents the trained lattice function. Due to the trust constraint, we expect that larger values of calibrated num_reviews would force higher slope with respect to calibrated avg_rating, resulting in a more significant move in the lattice output.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: Smoothing Calibrators
Let's now take a look at the calibrator of avg_rating. Though it is monotonically increasing, the changes in its slopes are abrupt and hard to interpret. That suggests we might want to consider smoothing this calibrator using a regularizer setup in the regularizer_configs.
Here we apply a wrinkle regularizer to reduce changes in the curvature. You can also use the laplacian regularizer to flatten the calibrator and the hessian regularizer to make it more linear.
End of explanation
"""
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
"""
Explanation: The calibrators are now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the testing metric and in the contour plots.
Partial Monotonicity for Categorical Calibration
So far we have been using only two of the numeric features in the model. Here we will add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
                # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: To involve the third feature, dollar_rating, we should recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that outputs for "DD" restaurants should be larger than "D" restaurants when all other inputs are fixed. This is done using the monotonicity setting in the feature config.
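The partial monotonicity constraint itself is just a pairwise inequality on the calibrated per-category values; a sketch of the check (the category values below are invented):

```python
def partial_monotonicity_ok(calibrated, pairs):
    """Each (lo, hi) pair requires calibrated[lo] <= calibrated[hi]."""
    return all(calibrated[lo] <= calibrated[hi] for lo, hi in pairs)

# Hypothetical calibrated outputs per dollar_rating category.
values = {"D": 0.3, "DD": 0.8, "DDD": 0.1, "DDDD": 0.0}
assert partial_monotonicity_ok(values, [("D", "DD")])        # the enforced pair holds
assert not partial_monotonicity_ok(values, [("DD", "DDD")])  # unconstrained pairs need not
```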
End of explanation
"""
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
                # `D` restaurants have a smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
"""
Explanation: This categorical calibrator shows the preference of the model output: DD > D > DDD > DDDD, which is consistent with our setup. Notice there is also a column for missing values. Though there is no missing feature in our training and testing data, the model provides us with an imputation for the missing value should it happen during downstream model serving.
Here we also plot the predicted CTR of this model conditioned on dollar_rating. Notice that all the constraints we required are fulfilled in each of the slices.
Output Calibration
For all the TFL models we have trained so far, the lattice layer (indicated as "Lattice" in the model graph) directly outputs the model prediction. Sometimes the lattice output should be rescaled before being emitted as the model prediction, for example when:
- the features are $\log$ counts while the labels are counts, or
- the lattice is configured to have very few vertices but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Here let's add a calibrator layer with 5 keypoints to the model we just built. We also add a regularizer for the output calibrator to keep the function smooth.
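An output calibrator is itself a monotone piecewise-linear function over keypoints, so its effect can be sketched with np.interp (the keypoint values below are invented; in the model they are learned):

```python
import numpy as np

input_keypoints = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
output_keypoints = np.array([0.0, 0.1, 0.5, 0.9, 1.0])  # would be learned in practice

def output_calibrate(lattice_out):
    # Monotone piecewise-linear remapping of the lattice output.
    return np.interp(lattice_out, input_keypoints, output_keypoints)

assert abs(output_calibrate(0.5) - 0.5) < 1e-12     # keypoints map exactly
assert abs(output_calibrate(0.125) - 0.05) < 1e-12  # linear between keypoints
```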
End of explanation
"""
|
rdhyee/CommonCrawlTutorials | Experiments/IPython-notebook-docker/2014_08_Crawl.ipynb | apache-2.0 | def key_for_new_crawl(crawl_name, file_type='all', return_name=False, plural=True):
suffix = "s" if plural else ""
if file_type.upper() == 'WARC':
file_name = "warc.path{suffix}.gz".format(suffix=suffix)
elif file_type.upper() == 'WAT':
file_name = "wat.path{suffix}.gz".format(suffix=suffix)
    elif file_type.upper() == 'WET':
file_name = "wet.path{suffix}.gz".format(suffix=suffix)
else:
file_name = "segment.path{suffix}.gz".format(suffix=suffix)
key_name = "common-crawl/crawl-data/{crawl_name}/{file_name}".format(crawl_name=crawl_name,
file_name=file_name)
    return key_name if return_name else S3Key(key_name)
list(keys_in_bucket("common-crawl/crawl-data/CC-MAIN-2014-35/"))
k = S3Key("common-crawl/crawl-data/CC-MAIN-2014-35/segment.paths.gz")
k
k.size
k = key_for_new_crawl("CC-MAIN-2014-35", "all")
k
"""
Explanation: August 2014 crawl
http://commoncrawl.org/august-2014-crawl-data-available/
all segments (CC-MAIN-2014-35/segment.paths.gz)
all WARC files (CC-MAIN-2014-35/warc.paths.gz)
all WAT files (CC-MAIN-2014-35/wat.paths.gz)
all WET files (CC-MAIN-2014-35/wet.paths.gz)
https://aws-publicdatasets.s3.amazonaws.com/common-crawl/crawl-data/CC-MAIN-2014-35/segment.path.gz
or
https://aws-publicdatasets.s3.amazonaws.com/common-crawl/crawl-data/CC-MAIN-2014-35/segment.paths.gz ?
End of explanation
"""
[key for key in keys_in_bucket("common-crawl/crawl-data/") if key.startswith("CC-MAIN")]
# convert year/week number to datetime
import datetime
def week_in_year(year, week):
return datetime.datetime(year,1,1) + datetime.timedelta((week-1)*7)
week_in_year(2014,35)
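week_in_year above is only an approximation: crawl names like CC-MAIN-2014-35 appear to use ISO year-week numbering, where week 1 is the week containing the first Thursday. On Python 3.8+ the standard library can compute this directly (a sketch; the ISO assumption is mine):

```python
import datetime

def iso_week_start(year, week):
    """Monday of the given ISO week (requires Python 3.8+)."""
    return datetime.date.fromisocalendar(year, week, 1)

assert iso_week_start(2014, 35) == datetime.date(2014, 8, 25)
assert iso_week_start(2014, 35).isoweekday() == 1  # always a Monday
```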
import gzip
import StringIO
# the following functions download a gzipped path listing from S3 and split it into individual segment paths
def gzip_from_key(key_name, bucket_name="aws-publicdatasets"):
k = S3Key(key_name, bucket_name)
f = gzip.GzipFile(fileobj=StringIO.StringIO(k.get_contents_as_string()))
return f
def segments_from_gzip(gz):
s = gz.read()
return filter(None, s.split("\n"))
# let's parse segment.path(s).gz
valid_segments = segments_from_gzip(gzip_from_key(key_for_new_crawl("CC-MAIN-2014-35", "all", return_name=True)))
valid_segments[:5]
# side issue: rewrite to deal with gzip on the fly?
from boto.s3.key import Key
from gzipstream import GzipStreamFile
def gzip_from_key_2(key_name, bucket_name='aws-publicdatasets'):
k = S3Key(key_name, bucket_name)
f = GzipStreamFile(k)
return f
def segments_from_gzip_2(gz):
BUF_SIZE = 1000
s = ""
buffer = gz.read(BUF_SIZE)
while buffer:
s += buffer
buffer = gz.read(BUF_SIZE)
return filter(None, s.split("\n"))
valid_segments_2 = segments_from_gzip_2(gzip_from_key_2("common-crawl/crawl-data/CC-MAIN-2014-15/segment.paths.gz"))
valid_segments_2[:5]
"""
Explanation: Do all of the nutch era crawls follow the same structure?
End of explanation
"""
wat_segments = segments_from_gzip(gzip_from_key(key_for_new_crawl("CC-MAIN-2014-35", "WAT", return_name=True)))
len(wat_segments)
wat_segments[0]
"""
Explanation: WAT: metadata files
all WAT files (CC-MAIN-2014-35/wat.paths.gz)
End of explanation
"""
k = S3Key(wat_segments[0])
k.size
def get_key_sizes(keys, bucket_name="aws-publicdatasets"):
try:
conn = S3Connection(anon=True)
bucket = conn.get_bucket(bucket_name)
sizes = []
for key in keys:
try:
k = bucket.get_key(key)
sizes.append((key, k.size if hasattr(k, 'size') else 0))
except Exception, e:
sizes.append((key, e))
return sizes
except Exception, e:
sizes = []
for key in keys:
sizes.append ((key, e))
return sizes
get_key_sizes(wat_segments[0:20])
# http://stackoverflow.com/questions/2348317/how-to-write-a-pager-for-python-iterators/2350904#2350904
def grouper(iterable, page_size):
page= []
for item in iterable:
page.append( item )
if len(page) == page_size:
yield page
page= []
if len(page) > 0:
yield page
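The same pager can be written with itertools.islice, which pulls each page in a single slice instead of appending item by item:

```python
from itertools import islice

def grouper_islice(iterable, page_size):
    """Same paging behavior as grouper() above, built on islice."""
    it = iter(iterable)
    while True:
        page = list(islice(it, page_size))
        if not page:
            return
        yield page

assert list(grouper_islice(range(5), 2)) == [[0, 1], [2, 3], [4]]
assert list(grouper_islice([], 3)) == []
```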
"""
Explanation: http://commoncrawl.org/navigating-the-warc-file-format/
json file
forgot to check how big the file is before downloading it.
End of explanation
"""
wat_segments[0:2]
"""
Explanation: multiprocessing approach
when to use Processes vs Threads? I think you use threads when there's no danger of memory collision
http://sebastianraschka.com/Articles/2014_multiprocessing_intro.html#Multi-Threading-vs.-Multi-Processing:
If we submit "jobs" to different threads, those jobs can be pictured as "sub-tasks" of a single process and those threads will usually have access to the same memory areas (i.e., shared memory). This approach can easily lead to conflicts in case of improper synchronization, for example, if processes are writing to the same memory location at the same time.
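As a minimal, self-contained illustration of the thread-backed pool used below: multiprocessing.dummy exposes the same Pool API on top of threads, which suits I/O-bound work such as fetching S3 key sizes:

```python
from multiprocessing.dummy import Pool as ThreadPool

def square(x):
    return x * x

pool = ThreadPool(4)
try:
    results = pool.map(square, range(8))  # blocks until all jobs finish
finally:
    pool.close()
    pool.join()

assert results == [0, 1, 4, 9, 16, 25, 36, 49]
```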
End of explanation
"""
# with pool.map
from multiprocessing.dummy import Pool as ThreadPool
from functools import partial
get_key_sizes_for_bucket = partial(get_key_sizes, bucket_name="aws-publicdatasets")
PAGE_SIZE = 1
POOL_SIZE = 100
MAX_SEGMENTS = 100
pool = ThreadPool(POOL_SIZE)
results = pool.map(get_key_sizes_for_bucket, grouper(wat_segments[0:MAX_SEGMENTS], PAGE_SIZE))
pool.close()
pool.join()
results[:5]
# Pool.imap_unordered
from multiprocessing.dummy import Pool as ThreadPool
from functools import partial
get_key_sizes_for_bucket = partial(get_key_sizes, bucket_name="aws-publicdatasets")
PAGE_SIZE = 1
POOL_SIZE = 100
MAX_SEGMENTS = 10 # replace with None for all segments
CHUNK_SIZE = 10
pool = ThreadPool(POOL_SIZE)
results_iter = pool.imap_unordered(get_key_sizes_for_bucket,
grouper(wat_segments[0:MAX_SEGMENTS], PAGE_SIZE),
CHUNK_SIZE)
results = []
for (i, page) in enumerate(results_iter):
print ('\r>> Result %d' % i, end="")
for result in page:
results.append(result)
results[:5]
# Pool.imap_unordered with ProcessPool
# not necessarily that useful since this calculation is I/O bound
from multiprocessing import Pool as ProcessPool
from functools import partial
get_key_sizes_for_bucket = partial(get_key_sizes, bucket_name="aws-publicdatasets")
PAGE_SIZE = 1
POOL_SIZE = 100
MAX_SEGMENTS = 10 # replace with None for all segments
CHUNK_SIZE = 10
pool = ProcessPool(POOL_SIZE)
results_iter = pool.imap_unordered(get_key_sizes_for_bucket,
grouper(wat_segments[0:MAX_SEGMENTS], PAGE_SIZE),
CHUNK_SIZE)
results = []
for (i, page) in enumerate(results_iter):
print ('\r>> Result %d' % i, end="")
for result in page:
results.append(result)
from pandas import Series, DataFrame
df = DataFrame(results, columns=['key', 'size'])
df.head()
df['size'].describe()
%pylab --no-import-all inline
import pylab as P
P.hist(df['size'])
# total byte size of all the wat files
print (format(sum(df['size']),",d"))
# save results
import csv
with open('CC-MAIN-2014-35.wat.csv', 'wb') as csvfile:
wat_writer = csv.writer(csvfile, delimiter=',',
quotechar='|', quoting=csv.QUOTE_MINIMAL)
wat_writer.writerow(['key', 'size'])
for result in results:
wat_writer.writerow(result)
!head CC-MAIN-2014-35.wat.csv
"""
Explanation: multiprocessing async approach
End of explanation
"""
# this won't work for the version of boto I'm currently using -- a bug to be resolved
# key = bucket.get_key(wat_segments[0])
# url = key.generate_url(expires_in=0, query_auth=False)
# url
wat_segments[0]
# looks like within any segment ID, we should expect warc, wat, wet buckets
!s3cmd ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2014-35/segments/1408500800168.29/
#https://github.com/commoncrawl/gzipstream
from gzipstream import GzipStreamFile
import boto
from itertools import islice
from boto.s3.key import Key
from gzipstream import GzipStreamFile
import warc
import json
def test_gzipstream():
output = []
# Let's use a random gzipped web archive (WARC) file from the 2014-15 Common Crawl dataset
## Connect to Amazon S3 using anonymous credentials
conn = boto.connect_s3(anon=True)
pds = conn.get_bucket('aws-publicdatasets')
## Start a connection to one of the WARC files
k = Key(pds)
k.key = 'common-crawl/crawl-data/CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00000-ip-10-147-4-33.ec2.internal.warc.gz'
# The warc library accepts file like objects, so let's use GzipStreamFile
f = warc.WARCFile(fileobj=GzipStreamFile(k))
for num, record in islice(enumerate(f),100):
if record['WARC-Type'] == 'response':
# Imagine we're interested in the URL, the length of content, and any Content-Type strings in there
output.append((record['WARC-Target-URI'], record['Content-Length']))
output.append( '\n'.join(x for x in record.payload.read().replace('\r', '').split('\n\n')[0].split('\n') if 'content-type:' in x.lower()))
output.append( '=-=-' * 10)
return output
#test_gzipstream()
def warc_records(key_name, limit=None):
conn = boto.connect_s3(anon=True)
bucket = conn.get_bucket('aws-publicdatasets')
key = bucket.get_key(key_name)
# The warc library accepts file like objects, so let's use GzipStreamFile
f = warc.WARCFile(fileobj=GzipStreamFile(key))
for record in islice(f, limit):
yield record
# let's compute some stats on the headers
from collections import Counter
c=Counter()
[c.update(record.header.keys()) for record in warc_records(wat_segments[0],1)]
c
# warc-type
# looks like the first record is 'warcinfo'
# and the rest are 'metadata'
c=Counter()
[c.update([record['warc-type']]) for record in warc_records(wat_segments[0],100)]
c
# content-type
c=Counter()
[c.update([record['content-type']]) for record in warc_records(wat_segments[0],100)]
c
# what's in the records
wrecords = warc_records(wat_segments[0])
wrecords
record = wrecords.next()
s = record.payload.read()
if record.header.get('content-type') == 'application/json':
    payload = json.loads(s)
else:
    payload = s
payload
record.header.get('content-type')
record.url
# http://warc.readthedocs.org/en/latest/#working-with-warc-header
record.header.items()
#payload
dir(record.payload)
record.header.get('content-type')
type(payload)
if isinstance(payload, dict):
print (payload['Envelope']['WARC-Header-Metadata'].get('WARC-Target-URI'))
record.header
import urlparse
urlparse.urlparse(payload['Envelope']['WARC-Header-Metadata']['WARC-Target-URI']).netloc
# let's just run through a segment with minimal processing (to test processing speed)
import urlparse
def walk_through_segment(segment_id, limit=None):
for (i,record) in enumerate(warc_records(segment_id,limit)):
print ("\r>> Record: %d" % i, end="")
%time walk_through_segment(wat_segments[0],10)
import urlparse
def netloc_count_for_segment(segment_id, limit=None):
c = Counter()
for (i, record) in enumerate(warc_records(segment_id,limit)):
print ("\r>>Record: %d" % i, end="")
if record.header.get('content-type') == 'application/json':
s = record.payload.read()
payload = json.loads(s)
url = payload['Envelope']['WARC-Header-Metadata'].get('WARC-Target-URI')
if url:
netloc = urlparse.urlparse(url).netloc
c.update([netloc])
else:
c.update([None])
return c
%time netloc_count_for_segment(wat_segments[0],100)
# rewrite dropping Counter to use defaultdict
# http://evanmuehlhausen.com/simple-counters-in-python-with-benchmarks/ suggests Counter is slow
import urlparse
from collections import defaultdict
counter = defaultdict(int)
def netloc_count_for_segment_dd(segment_id, limit=None):
c = defaultdict(int)
for (i, record) in enumerate(warc_records(segment_id,limit)):
print ("\r>>Record: %d" % i, end="")
s = record.payload.read()
if record.header.get('content-type') == 'application/json':
payload = json.loads(s)
url = payload['Envelope']['WARC-Header-Metadata'].get('WARC-Target-URI')
if url:
netloc = urlparse.urlparse(url).netloc
c[netloc] += 1
else:
c[None] += 1
return c
%time netloc_count_for_segment_dd(wat_segments[0],100)
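Whether defaultdict actually beats Counter is easy to measure on synthetic data before committing to the rewrite; a stdlib-only sketch (the timing numbers will vary by interpreter and version):

```python
from collections import Counter, defaultdict
from timeit import timeit

data = ["a", "b", "a", "c", "a", "b"] * 1000

def count_with_counter(items):
    c = Counter()
    for x in items:
        c[x] += 1
    return c

def count_with_defaultdict(items):
    c = defaultdict(int)
    for x in items:
        c[x] += 1
    return c

# Both give identical counts; only the speed differs.
assert count_with_counter(data) == count_with_defaultdict(data)
t_counter = timeit(lambda: count_with_counter(data), number=10)
t_defaultdict = timeit(lambda: count_with_defaultdict(data), number=10)
```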
"""
Explanation: Taking apart a WAT file
End of explanation
"""
# draw from https://github.com/Smerity/cc-mrjob/blob/7ab5a81ee698a2819ae1bc5295ac0de628f1ea6a/mrcc.py
!s3cmd ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2014-35/warc.path.gz
# WARC files from the latest crawl
# s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2014-35/
warc_segments = segments_from_gzip(gzip_from_key(key_for_new_crawl("CC-MAIN-2014-35", "WARC", return_name=True)))
len(warc_segments)
warc_segments[0]
walk_through_segment(warc_segments[0],100)
# code adapted from https://github.com/Smerity/cc-mrjob/blob/7ab5a81ee698a2819ae1bc5295ac0de628f1ea6a/tag_counter.py
import re
from collections import Counter
# Optimization: compile the regular expression once so it's not done each time
# The regular expression looks for (1) a tag name using letters (assumes lowercased input) and numbers
# and (2) allows an optional for a space and then extra parameters, eventually ended by a closing >
HTML_TAG_PATTERN = re.compile('<([a-z0-9]+)[^>]*>')
def get_tag_count(data, ctr=None):
"""Extract the names and total usage count of all the opening HTML tags in the document"""
if ctr is None:
ctr = Counter()
# Convert the document to lower case as HTML tags are case insensitive
ctr.update(HTML_TAG_PATTERN.findall(data.lower()))
return ctr
# Let's check to make sure the tag counter works as expected
assert get_tag_count('<html><a href="..."></a><h1 /><br/><p><p></p></p>') == {'html': 1, 'a': 1, 'p': 2, 'h1': 1, 'br': 1}
def tag_counts_for_segment(segment_id, limit=None, print_record_num=False):
ctr = Counter()
try:
for (i, record) in enumerate(warc_records(segment_id,limit)):
if print_record_num:
print ("\r>>Record: %d" % i, end="")
if record.header.get('content-type') == 'application/http; msgtype=response':
payload = record.payload.read()
# The HTTP response is defined by a specification: first part is headers (metadata)
# and then following two CRLFs (newlines) has the data for the response
headers, body = payload.split('\r\n\r\n', 1)
if 'Content-Type: text/html' in headers:
# We avoid creating a new Counter for each page as that's actually quite slow
tag_count = get_tag_count(body,ctr)
return (segment_id, ctr)
except Exception, e:
return (segment_id, e)
# let's use multiprocessing to try to duplicate the list of segments on the mrjob example
# https://github.com/commoncrawl/cc-mrjob/blob/1914a8923a6c79ffff28d269799a109d2805b04e/input/test-1.warc
import requests
url = "https://raw.githubusercontent.com/commoncrawl/cc-mrjob/1914a8923a6c79ffff28d269799a109d2805b04e/input/test-1.warc"
mrjobs_segs = filter(None,
requests.get(url).content.split("\n"))
mrjobs_segs
from multiprocessing.dummy import Pool as ThreadPool
from multiprocessing import Pool as ProcessPool
from functools import partial
def calc_tag_counts_for_segments(segments, max_segments=None, max_record_per_segment=None, pool_size=4, chunk_size=1,
print_result_count=False, threads=True):
"""
    pool_size: number of workers in the pool
max_segments: number of segments to calculate (None for no limit)
max_record_per_segment: max num of records to calculate per segment (None for no limit)
chunk_size: how to chunk jobs among pool workers
print_result_count: whether to print number of results available
"""
tag_counts_for_segment_limit = partial(tag_counts_for_segment, limit=max_record_per_segment)
if threads:
pool = ThreadPool(pool_size)
else:
pool = ProcessPool(pool_size)
results_iter = pool.imap_unordered(tag_counts_for_segment_limit,
                                       segments[0:max_segments],
chunk_size)
results = []
exceptions = []
net_counts = Counter()
for (i, (seg_id, result)) in enumerate(results_iter):
if print_result_count:
print ('\r>> Result %d' % i, end="")
results.append((seg_id, result))
if isinstance(result, Counter):
net_counts.update(result)
elif isinstance(result,Exception):
exceptions.append((seg_id, result))
return (net_counts, results, exceptions)
%time (tag_counts, results, exceptions) = calc_tag_counts_for_segments(mrjobs_segs, max_segments=None, \
max_record_per_segment=100, \
pool_size=8, \
chunk_size=1, \
print_result_count=True, \
threads=True)
# calculate vector for h1...h6
def htags(tag_counts):
return [tag_counts.get('h{i}'.format(i=i),0) for i in range(1,7)]
def normalize(a):
s = np.sum(a)
if s > 0:
return np.array(a).astype('float')/s
else:
return np.array(a)
k = normalize(htags(tag_counts))
k
from itertools import izip
colors = ['b', 'g', 'r', 'c','m', 'y', 'k', 'w']
def plot_hdist(props, width=0.5):
    c_props = np.cumsum(props)
    for (prop, c_prop, color) in izip(props, c_props, colors):
        plt.bar(left=0, width=width, bottom=c_prop - prop, height=prop, color=color)
    plt.plot()
plot_hdist(normalize(htags(tag_counts)))
exceptions
from IPython.html import widgets
m = widgets.interact(plot_hdist, props=normalize(htags(tag_counts)))
"""
Explanation: Tag Count
End of explanation
"""
import multiprocessing
multiprocessing.cpu_count()
from IPython import parallel
rc = parallel.Client()
rc.block = True
rc.ids
"""
Explanation: Using IPython parallel
You can have multithreading within a given process but to use multiple processes, need to go with IPython parallel -- at least, that's what I've been able to work out
End of explanation
"""
|
srcole/qwm | burrito/.ipynb_checkpoints/Burrito_correlations-checkpoint.ipynb | mit | %config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style("white")
"""
Explanation: San Diego Burrito Analytics: Correlations
Scott Cole
21 May 2016
This notebook investigates the correlations between different burrito measures.
imports
End of explanation
"""
import util
df = util.load_burritos()
N = df.shape[0]
"""
Explanation: Load data
End of explanation
"""
m_corr = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Synergy','Wrap','overall']
M = len(m_corr)
dfcorr = df[m_corr].corr()
from matplotlib import cm
clim1 = (-1,1)
plt.figure(figsize=(12,10))
cax = plt.pcolor(range(M+1), range(M+1), dfcorr, cmap=cm.bwr)
cbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))
cbar.ax.set_ylabel('Pearson correlation (r)', size=30)
plt.clim(clim1)
cbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)
ax = plt.gca()
ax.set_yticks(np.arange(M)+.5)
ax.set_yticklabels(m_corr,size=25)
ax.set_xticks(np.arange(M)+.5)
ax.set_xticklabels(m_corr,size=25)
plt.xticks(rotation='vertical')
plt.tight_layout()
plt.xlim((0,M))
plt.ylim((0,M))
figname = 'metriccorrmat'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
"""
Explanation: Correlation matrix
Note that many dimensions of the burrito are correlated with each other. This lack of independence across dimensions means we should be careful while interpreting the presence of a correlation between two dimensions.
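One way to be careful is to compute a partial correlation: correlate two dimensions after regressing a third one out of both. A numpy sketch on synthetic data (the variables here are simulated, not burrito ratings):

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation of x and y after linearly removing z from both."""
    def residual(v, z):
        slope, intercept = np.polyfit(z, v, 1)
        return v - (slope * z + intercept)
    rx, ry = residual(x, z), residual(y, z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.RandomState(0)
z = rng.randn(500)
x = z + 0.1 * rng.randn(500)  # x and y are driven almost entirely by z
y = z + 0.1 * rng.randn(500)

raw_r = np.corrcoef(x, y)[0, 1]
part_r = partial_corr(x, y, z)
assert raw_r > 0.9       # strong raw correlation...
assert abs(part_r) < 0.2 # ...mostly vanishes once z is controlled for
```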
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Cost',y='Volume',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Cost ($)',size=20)
plt.ylabel('Volume (L)',size=20)
plt.xticks(np.arange(3,11),size=15)
plt.yticks(np.arange(.4,1.4,.1),size=15)
plt.tight_layout()
print df.corr()['Cost']['Volume']
from tools.misc import pearsonp
print pearsonp(df.corr()['Cost']['Volume'],len(df[['Cost','Volume']].dropna()))
figname = 'corr-volume-cost'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
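pearsonp comes from a local tools.misc module that is not shown; a plausible equivalent converts r and n into a t statistic with n - 2 degrees of freedom (a sketch, using scipy):

```python
import numpy as np
from scipy import stats

def pearson_pvalue(r, n):
    """Two-sided p-value for a Pearson r computed from n paired samples."""
    t = r * np.sqrt((n - 2) / (1.0 - r * r))
    return 2 * stats.t.sf(abs(t), df=n - 2)

# Sanity check against scipy's own pearsonr on toy data.
rng = np.random.RandomState(1)
x = rng.randn(40)
y = x + rng.randn(40)
r, p = stats.pearsonr(x, y)
assert np.isclose(pearson_pvalue(r, len(x)), p, rtol=1e-6)
```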
"""
Explanation: Correlation: Cost and volume
End of explanation
"""
# Visualize some correlations
from tools.plt import scatt_corr
scatt_corr(df['overall'].values,df['Meat'].values,
xlabel = 'overall rating', ylabel='meat rating', xlim = (-.5,5.5),ylim = (-.5,5.5),xticks=range(6),yticks=range(6))
#showline = True)
figname = 'corr-meat-overall'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
"""
Explanation: Positive correlation: Meat and overall rating
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
dfMF = df[['Meat','Fillings']].dropna()
print sp.stats.spearmanr(dfMF.Meat,dfMF.Fillings)
figname = 'corr-meat-filling'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
"""
Explanation: Positive correlation: Meat and Non-meat fillings
End of explanation
"""
# Restrict analysis to burritos at the taco stand
restrictCali = False
import re
reTS = re.compile('.*taco stand.*', re.IGNORECASE)
reCali = re.compile('.*cali.*', re.IGNORECASE)
locTS = np.ones(len(df))
for i in range(len(df)):
mat = reTS.match(df['Location'][i])
if mat is None:
locTS[i] = 0
else:
if restrictCali:
mat = reCali.match(df['Burrito'][i])
if mat is None:
locTS[i] = 0
temp = np.arange(len(df))
dfTS = df.loc[temp[locTS==1]]
plt.figure(figsize=(4,4))
ax = plt.gca()
dfTS.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})
plt.xlabel('Meat rating',size=20)
plt.ylabel('Non-meat rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
dfTSMF = dfTS[['Meat','Fillings']].dropna()
print sp.stats.spearmanr(dfTSMF.Meat,dfTSMF.Fillings)
figname = 'corr-meat-filling-TS'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
"""
Explanation: Positive correlation: Meat and non-meat fillings at The Taco Stand
The positive correlation within a single restaurant (The Taco Stand) suggests that the overall meat-fillings correlation is not driven merely by between-restaurant differences, i.e. by restaurants with good meat also tending to have good non-meat fillings.
End of explanation
"""
plt.figure(figsize=(4,4))
ax = plt.gca()
df.plot(kind='scatter',x='Hunger',y='overall',ax=ax,**{'s':40,'color':'k'})
plt.xlabel('Hunger',size=20)
plt.ylabel('Overall rating',size=20)
plt.xticks(np.arange(0,6),size=15)
plt.yticks(np.arange(0,6),size=15)
print df.corr()['Hunger']['overall']
from tools.misc import pearsonp
print pearsonp(df.corr()['Hunger']['overall'],len(df[['Hunger','overall']].dropna()))
figname = 'corr-hunger-overall'
plt.savefig('C:/gh/fig/burrito/'+figname + '.png')
"""
Explanation: Correlation: Hunger level and overall rating
End of explanation
"""
|
kimkipyo/dss_git_kkp | 통계, 머신러닝 복습/160607화_12일차_(확률론적)선형 회귀 분석 Linear Regression Analysis/4.분산 분석 기반의 카테고리 분석.ipynb | mit | from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
x0 = np.random.choice(3, 10)
x0
encoder.fit(x0[:, np.newaxis])
X = encoder.transform(x0[:, np.newaxis]).toarray()
X
dfX = pd.DataFrame(X, columns=encoder.active_features_)
dfX
"""
Explanation: ANOVA-Based Category Analysis
When an independent variable in a regression analysis takes category values, the dependent variable y changes with the category value. In that case, analysis of variance (ANOVA) lets us quantify the effect of the category values. It can also be used for model comparison, since this situation can be viewed as the regression model itself changing with the category value.
Categorical Independent Variables and Dummy Variables
A category value denotes one of several distinct states. For convenience we encode such values as integers like 0 and 1, but note that even when category values appear as numbers such as 1, 2, 3, they are merely labels standing in for "A", "B", "C" and carry no notion of magnitude. A value of 2 does not mean twice as large as 1, and 3 does not mean three times as large as 1.
Therefore, if category values are fed in as plain integers, the regression model risks interpreting them as numbers with magnitude, so they must be converted into dummy variables, for example via one-hot encoding.
A dummy variable is a variable expressed only as 0 or 1, indicating whether some factor is present or absent. It also goes by the following names:
indicator variable
design variable
Boolean indicator
binary variable
treatment
End of explanation
"""
from sklearn.datasets import load_boston
boston = load_boston()
dfX0_boston = pd.DataFrame(boston.data, columns=boston.feature_names)
dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"])
import statsmodels.api as sm
dfX_boston = sm.add_constant(dfX0_boston)
df_boston = pd.concat([dfX_boston, dfy_boston], axis=1)
df_boston.tail()
dfX_boston.CHAS.plot()
dfX_boston.CHAS.unique()
model = sm.OLS(dfy_boston, dfX_boston)
result = model.fit()
print(result.summary())
"""
Explanation: Dummy Variables and Model Comparison
Using dummy variables is, in effect, the same as using several regression models at once.
Dummy variable example 1
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 $$
If $D_2 = 0, D_3 = 0$, then $Y = \alpha_{1} $
If $D_2 = 1, D_3 = 0$, then $Y = \alpha_{1} + \alpha_{2} $
If $D_2 = 0, D_3 = 1$, then $Y = \alpha_{1} + \alpha_{3} $
<img src="https://upload.wikimedia.org/wikipedia/commons/6/61/Anova_graph.jpg" style="width:70%; margin: 0 auto 0 auto;">
Dummy variable example 2
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 + \alpha_{4} X $$
If $D_2 = 0, D_3 = 0$, then $Y = \alpha_{1} + \alpha_{4} X $
If $D_2 = 1, D_3 = 0$, then $Y = \alpha_{1} + \alpha_{2} + \alpha_{4} X $
If $D_2 = 0, D_3 = 1$, then $Y = \alpha_{1} + \alpha_{3} + \alpha_{4} X $
<img src="https://upload.wikimedia.org/wikipedia/commons/2/20/Ancova_graph.jpg" style="width:70%; margin: 0 auto 0 auto;">
Dummy variable example 3
$$ Y = \alpha_{1} + \alpha_{2} D_2 + \alpha_{3} D_3 + \alpha_{4} X + \alpha_{5} D_2 X + \alpha_{6} D_3 X $$
If $D_2 = 0, D_3 = 0$, then $Y = \alpha_{1} + \alpha_{4} X $
If $D_2 = 1, D_3 = 0$, then $Y = \alpha_{1} + \alpha_{2} + (\alpha_{4} + \alpha_{5}) X $
If $D_2 = 0, D_3 = 1$, then $Y = \alpha_{1} + \alpha_{3} + (\alpha_{4} + \alpha_{6}) X $
<img src="https://docs.google.com/drawings/d/1U1ahMIzvOq74T90ZDuX5YOQJ0YnSJmUhgQhjhV4Xj6c/pub?w=1428&h=622" style="width:90%; margin: 0 auto 0 auto;">
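The group-intercept reading of these models can be verified numerically: fit ordinary least squares on a one-hot design and check that the coefficients recover the group means. A self-contained numpy sketch with invented group means:

```python
import numpy as np

rng = np.random.RandomState(0)
groups = rng.randint(0, 3, size=300)
true_means = np.array([1.0, 3.0, 5.0])
y = true_means[groups] + 0.01 * rng.randn(300)

# Design: intercept plus dummies for groups 1 and 2 (group 0 is the baseline).
X = np.column_stack([np.ones(300), groups == 1, groups == 2]).astype(float)
alpha, *_ = np.linalg.lstsq(X, y, rcond=None)

assert abs(alpha[0] - 1.0) < 0.05             # baseline mean: alpha_1
assert abs(alpha[0] + alpha[1] - 3.0) < 0.05  # group-1 mean: alpha_1 + alpha_2
assert abs(alpha[0] + alpha[2] - 5.0) < 0.05  # group-2 mean: alpha_1 + alpha_3
```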
Dummy variable example 4: Boston Dataset
End of explanation
"""
params1 = result.params.drop("CHAS")
params1
params2 = params1.copy()
params2["const"] += result.params["CHAS"]
params2
df_boston.boxplot("MEDV", "CHAS")
plt.show()
sns.stripplot(x="CHAS", y="MEDV", data=df_boston, jitter=True, alpha=.3)
sns.pointplot(x="CHAS", y="MEDV", data=df_boston, dodge=True, color='r')
plt.show()
"""
Explanation: interation(인터렉션)은 더미변수 D와 X가 동시에 들어가는 경우
만약 찰스가 1이라서 조망권이 있을 때 방이 많을수록 가격이 더 많이 띄게 되는 경우가 생긴다. 이럴 때 포뮬라를 쓰면 된다.
df_boston.boxplot("MEDV", "CHAS") => 강변에 있는 게 진짜 프리미엄이 있는가
2가지 방법이 있었다. 싱글 F ~~
anova test. F test. 상관계수가 1개일 경우에는 이걸 쓰면 된다.
End of explanation
"""
import statsmodels.api as sm
model = sm.OLS.from_formula("MEDV ~ C(CHAS)", data=df_boston)
result = model.fit()
table = sm.stats.anova_lm(result)
table
model1 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT", data=df_boston)
model2 = sm.OLS.from_formula("MEDV ~ CRIM + ZN +INDUS + NOX + RM + AGE + DIS + RAD + TAX + PTRATIO + B + LSTAT + C(CHAS)", data=df_boston)
result1 = model1.fit()
result2 = model2.fit()
table = sm.stats.anova_lm(result1, result2)
table
"""
Explanation: Model Comparison Using Analysis of Variance
To assess the effect of a dummy variable taking $K$ category values, we can use analysis of variance (ANOVA), which compares multiple models via an F-test.
In this case, the variances used in the analysis have the following meanings.
BSS: variance of the group means (Between-Group Variance)
$$ \text{BSS} = \sum_{k=1}^K n_k (\bar{x}_k - \bar{x})^2 $$
WSS: sum of the error variances within each group (Within-Group Variance)
$$ \text{WSS} = \sum_{k=1}^K \sum_{i} (x_{i} - \bar{x}_k)^2 $$
TSS: variance of the total error
$$ \text{TSS} = \sum_{i} (x_{i} - \bar{x})^2 $$
| | source | degree of freedom | mean square | F statistic |
|-|-|-|-|-|
| Between | $$\text{BSS}$$ | $$K-1$$ | $$\dfrac{\text{BSS}}{K-1}$$ | $$F$$ |
| Within | $$\text{WSS}$$ | $$N-K$$ | $$\dfrac{\text{WSS}}{N-K}$$ |
| Total | $$\text{TSS}$$ | $$N-1$$ | $$\dfrac{\text{TSS}}{N-1}$$ |
| $R^2$ | $$\text{BSS} / \text{TSS}$$ |
The null hypothesis of this F-test is $\text{BSS}=0$, i.e. $\text{WSS}=\text{TSS}$: the case where there is no difference between groups.
In variable selection, finding a globally optimal subset is difficult.
The F value is the between-group mean square divided by the residual (within-group) mean square (shown at the bottom of the ANOVA result).
statsmodels offers relatively few tools for this, while sklearn has many; you need to know both and be able to combine them.
"""
|